This is version 3 of a patch series that introduces a faster/leaner
sysctl internal implementation.
Due to the high number of patches and low general interest, I'll just
point you to the tree/branch:
git://github.com/luciang/linux-2.6-new-sysctl.git v3-new-sysctl-alg
Patches are on top of v2.6.39. I did not pick a more recent (random)
point in Linus' tree to rebase these onto, so as not to disrupt testing.
Changes
we need to register sysctl tables with the same content affecting
different pieces of data.
- enforced sysctl checks
$ time modprobe dummy numdummies=N
NOTE: these stats are from v2. v3 should be a bit slower due to:
- the compatibility layer
- the old stats used cookies to prevent kmemdups() on ctl_table arrays
- the old patches had an optimisation for directories with many
subdirs that was replaced (in v3) with rbtrees
Without this patch series :(
- ipv4 only
- N=1000 time= 0m 06s
- N=2000 time= 0m 30s
- N=4000 time= 2m 35s
- ipv4 and ipv6
- N=1000 time= 0m 24s
- N=2000 time= 2m 14s
- N=4000 time=10m 16s
- N=5000 time=16m 3s
With this patch series :)
- ipv4 only
- N=1000 time= 0m 0.33s
- N=2000 time= 0m 1.25s
- N=4000 time= 0m 5.31s
- ipv4 and ipv6
- N=1000 time= 0m 0.41s
- N=2000 time= 0m 1.62s
- N=4000 time= 0m 7.64s
- N=5000 time= 0m 12.35s
- N=8000 time= 0m 36.95s
Patches marked with RFC: are patches where reviewers should pay more
attention as I may have missed something.
Part 1: introduce compatibility layer:
sysctl: introduce temporary sysctl wrappers
sysctl: register only tables of sysctl files
Part 2: minimal changes to sysctl users:
sysctl: call sysctl_init before the first sysctl registration
sysctl: no-child: manually register kernel/random
sysctl: no-child: manually register kernel/keys
sysctl: no-child: manually register fs/inotify
sysctl: no-child: manually register fs/epoll
sysctl: no-child: manually register root tables
Part 3:
Part 4: new algorithm/data structures:
sysctl: faster tree-based sysctl implementation
Part 5: checks/warns requested during review:
sysctl: add duplicate entry and sanity ctl_table checks
sysctl: alloc ctl_table_header with kmem_cache
RFC: sysctl: change type of ctl_procfs_refs to u8
sysctl: check netns-specific registration order respected
sysctl: warn if registration/unregistration order is not respected
RFC: sysctl: always perform sysctl checks
Part 6: fields
Part 7: Eric requested ability to register an empty dir:
sysctl: add register_sysctl_dir: register an empty sysctl directory
Part | 233 +++++---
include/linux/inotify.h | 2 -
include/linux/key.h | 3 -
include/linux/poll.h | 2 -
include/linux/sysctl.h | 221 +++++---
include/net/net_namespace.h | 4 +-
init/main.c | 1 +
kernel/Makefile | 5 +-
kernel/sysctl.c | 1161 ++++++++++++++++++++++++++++----------
kernel/sysctl_check.c | 325 +++++++----
lib/Kconfig.debug | 8 -
net/sysctl_net.c | 86 ++--
security/keys/key.c | 7 +
security/keys/sysctl.c | 18 +-
18 files changed, 1495 insertions(+), 654 deletions(-)
--
Luc
http://lwn.net/Articles/444240/
SELECT ID, FirstName, LastName, Street, City, ST, Zip FROM Students
Here’s an example script that generates two JSON files from that query. One file contains JSON row arrays, and the other JSON key-value objects. Below, we’ll walk through it step by step.
import pyodbc
import json
import collections

connstr = 'DRIVER={SQL Server};SERVER=ServerName;DATABASE=Test;'
conn = pyodbc.connect(connstr)
cursor = conn.cursor()
cursor.execute("""
SELECT ID, FirstName, LastName, Street, City, ST, Zip
FROM Students
""")
rows = cursor.fetchall()

# Convert query to row arrays

# Convert query to objects of key-value pairs

conn.close()
Let’s break this down. After our import statements, we set a connection string to the server. Then, we use pyodbc to open that connection and execute the query:
connstr = 'DRIVER={SQL Server};SERVER=ServerName;DATABASE=Test;'
conn = pyodbc.connect(connstr)
cursor = conn.cursor()
cursor.execute("""
SELECT ID, FirstName, LastName, Street, City, ST, Zip
FROM Students
""")
rows = cursor.fetchall()
Running the script produces two files. The row arrays file contains:

[
  [1, "Samantha", "Baker", "9 Main St.", "Hyde Park", "NY", "12538"],
  [2, "Mark", "Salomon", "12 Destination Blvd.", "Highland", "NY", "12528"]
]

The key-value objects file is likewise valid JSON:

[
  {
    "id": 1,
    "FirstName": "Samantha",
    "LastName": "Baker",
    "Street": "9 Main St.",
    "City": "Hyde Park",
    "ST": "NY",
    "Zip": "12538"
  },
  {
    "id": 2,
    "FirstName": "Mark",
    "LastName": "Salomon",
    "Street": "12 Destination Blvd.",
    "City": "Highland",
    "ST": "NY",
    "Zip": "12528"
  }
]
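For readers who want to fill in the two elided conversion comments, the steps can be sketched like this (a standalone sketch that substitutes plain tuples for the pyodbc rows; the column names are taken from the query above):

```python
import json
import collections

# Stand-in for cursor.fetchall(): each row is
# (ID, FirstName, LastName, Street, City, ST, Zip), as in the query.
rows = [
    (1, "Samantha", "Baker", "9 Main St.", "Hyde Park", "NY", "12538"),
    (2, "Mark", "Salomon", "12 Destination Blvd.", "Highland", "NY", "12528"),
]

# Convert query to row arrays: each row becomes a JSON array.
rowarray_list = [list(row) for row in rows]
print(json.dumps(rowarray_list))

# Convert query to objects of key-value pairs: each row becomes a
# JSON object whose keys are the column names, in query order.
keys = ("id", "FirstName", "LastName", "Street", "City", "ST", "Zip")
objects_list = [collections.OrderedDict(zip(keys, row)) for row in rows]
print(json.dumps(objects_list))
```

The OrderedDict keeps the keys in query order, so the emitted JSON matches the column order of the SELECT.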
Learn data analysis with SQL!
If you like what you read here, check out my book Practical SQL: A Beginner’s Guide to Storytelling with Data from No Starch Press. 😛.
Great stuff!!
Hi Anthony,
Great example. I am having one problem and I wonder if you can help me … When I run the code below, I keep getting ‘TypeError: ‘list’ object is not callable’. I am using Postgres. I wonder what I am doing wrong?
…. and the rest of the code.
Sumi,
On which line are you getting the error?
Thank you for your quick response. The error is happening on ‘print rowarray_list(t)’. Thank you.
Sumi,
rowarray_list is a list object. If you want to print all the values in it, you can simply do:
print rowarray_list
If you want to get the first item in the list, you would do:
rowarray_list[0]
Thank you, it fixed it 🙂
perfect, thanks!
Thanks a lot, it helps me a lot ~
Hello Anthony,
Your explanations are simple and awesome. It’s pretty cool.
I love this blog for its structure. I have a project that can be divided into following steps:-
1) Collect data from MySqlDb using python(in dict) and convert as Json array/string
2)Parse Json array/string to Google visualization api(gviz api in Javascript) and form charts(Bar/Pie)-Draw charts with keys vs values(from MySqldb)
I tried your method to get data and it throws an error saying “AttributeError: ‘tuple’ object has no attribute XYZ”, where XYZ is a column name in the table from which I am pulling data.
Can you please suggest a way to do this or an alternate way to draw graphs for the data I am collecting from MySQL dB. A demonstration would be very helpful.
Thank You,
Maya
Hi, Maya,
Glad you find this site useful. I don’t have much time to write posts these days, but the old ones still seem to help people 🙂
I have very little experience with MySQL, so I am not sure exactly what the issue is. However, which database connector are you using in Python to access MySQL?
Hi Anthony,
Great post! I am using a Netezza DB running on a different Linux server, and I am trying to build JSON from SQL on another Linux server. Basically, I want to connect to the Netezza DB from another machine, query a table and build JSON.
Any ideas on this?
thanks a lot.
Nasir,
Apologies — my only experience with Netezza is hearing about it on a job interview and learning that it’s an appliance based on PostgreSQL. Given that, I wonder whether you can use the psycopg2 Python library to connect to it, execute your queries and generate JSON per my post?
Hi Anthony,
I am getting an error AttributeError: ‘tuple’ object has no attribute ‘gid’
Can you please help me..
Prakash,
We’ll need more info to help. Version of Python, database you’re connecting to, and perhaps a code example?
Thanks Anthony,
Your code snippet helped me so much.
Thanks so much Anthony. Very useful article.
I learnt something new in very short period of time.
Sir, from your code, how can I get an output like this?
{“data”:
[{“name”:”sample”}]}
Hi Anthony,
I would like to create a JSON file like below.
If married is true, only then do I need the Wife section in the JSON file.
Also, I have 2 tables, People and Wife, both linked by people_id.
Could you please help me on this?
Thanks for the post, this is essentially finished the script I was in the middle of writing 🙂
Hey, Ryan, you’re welcome!
How would I create a JSON object for each SQL statement executed in this for loop?
Chris,
Take a closer look at the example I provide. You will need to iterate over all the rows returned when you execute each query.
Hi Anthony,
Thanks for this post. It’s an old one, but I’m glad to see that it’s still helping people.
I’m wondering why, under the “# Convert query to row arrays” section, there is iteration through the rows and creation of a tuple which is then appended to the rowarray_list.
Wouldn’t “rowarray_list.append(row)” without creating the tuple do the same thing? If so, or if not, what’s the advantage of doing it the long way?
Also, similar question regarding “# Convert query to objects of key-value pairs” section.
It might be easier and perhaps more efficient to do that in the SQL statement, like so:
“ID as ID, FirstName as FirstName” etc., from which (I assume) you will get rows in dictionary format which you can directly append to object_list.
Python is still new to me, so I’m trying to understand the steps that I should take vs steps I can skip.
Thank you! 🙂
Hi, Kapil,
It’s been a long time since I wrote this post 🙂
With the module pyodbc, a row is not a regular Python tuple but instead an object that has some tuple-like properties. See the docs at
Note that this is different than how some other Python database adapters work. Psycopg, for example, does return a tuple:
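As a rough illustration of that difference, here is a namedtuple standing in for a pyodbc Row; this is only an analogy, not pyodbc's actual implementation:

```python
from collections import namedtuple

# A pyodbc Row behaves like a tuple that additionally allows
# attribute access by column name; namedtuple mimics that here.
Row = namedtuple("Row", ["ID", "FirstName", "LastName"])
row = Row(1, "Samantha", "Baker")

print(row[1])         # tuple-style indexing
print(row.FirstName)  # attribute-style access, as with pyodbc
```

A plain psycopg tuple would support only the first style of access.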
Either way, one benefit of doing things “the long way” (as you put it) is that you can control the contents and order of your responses. You can include or exclude fields as you see fit, and for some applications that can help keep the size of your JSON files small.
Hope that helps!
Hi Anthony,
Thanks for you reply, and also for posting this article.
🙂
awesome idea for combining list with dictionary to form the formatted json data.
Mass appreciation, this makes my rest api building an enjoy.
Hi,
Does it support Many to Many and 1 to Many relationships?
I am trying to convert MySQL 2D flat data into nested documents (for MongoDB) using a JSON schema.
Hello.
Thank you for your post.
I am trying to use an IP address to access the MySQL Server, like this:
but I have got the error:
What is the right way to access SQL Server via IP?
How can I get data from a database as a dictionary using Python, in JSON format? What code and query do I need? Please help me.
If the SQL Server table I am trying to query has spaces in its field names (for example, DNS Name is the name of one field), how would I reference this correctly? row.whatever doesn’t work as row.[DNS Name], and neither does row.’DNS Name’ nor the two combined.
Hi Anthony,
Thanks for the good post. I am using an Oracle DB, where I get my data in XML format. Is it possible to directly convert that XML into JSON, instead of defining all columns explicitly? I have somewhere around 150 columns. So I am getting all the data as XML in my SQL, whereas I want to directly convert the XML into JSON in Python… Can you please help with sample code?
Hi Sajeesh,
I’m not able to help with sample code at the moment, but you may want to look into XML parsing libraries available for Python, such as lxml. You could read the XML from the database and then transform it with Python into the JSON output you want.
You may not need Python, however. Try
Thank you so much for your article, Anthony and all! This is very helpful.
import pyodbc
import json
import collections

cnxn = pyodbc.connect("Driver={SQL Server Native Client 11.0};"
                      "Server=tcp:foodtracker1.database.windows.net,1433;"
                      "Database=Database;"
                      "uid=andreea;pwd=Banana16!")
cursor = cnxn.cursor()
cursor.execute('select product_name, quantity, unit_of_measurement from dbo.PRODUCT')
rows = cursor.fetchall()

# f = open("date", 'w')
#
# for row in rows:
#     f.write(row[1])
#     f.write("\n")
#
# f.close()

rowarray_list = []
for row in rows:
    t = (row[0], row[1], row[2])
    rowarray_list.append(t)
j = json.dumps(rowarray_list)
rowarrays_file = 'product_rowarrays.js'
f = open(rowarrays_file, 'w')

objects_list = []
for row in rows:
    d = collections.OrderedDict()
    d['product_name'] = row.product_name
    d['quantity'] = row.quantity
    d['unit_of_measurement'] = row.unit_of_measurement
    objects_list.append(d)
j = json.dumps(objects_list)
objects_file = 'product_objects.js'
f = open(objects_file, 'w')

cnxn.close()
Hello. It does not write anything in js files.
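For what it's worth, the script above opens both output files but never writes `j` to them or closes them, which would explain the empty .js files. A minimal sketch of the missing step (with a plain list standing in for the fetched rows):

```python
import json

# Stand-in for the row arrays built from the database rows.
rowarray_list = [["bread", 2, "loaves"], ["milk", 1, "l"]]

j = json.dumps(rowarray_list)

# open() alone puts nothing in the file: the JSON string must be
# written out and the file closed (or managed with "with").
with open("product_rowarrays.js", "w") as f:
    f.write(j)

with open("product_rowarrays.js") as f:
    print(f.read())
```

The same fix (`f.write(j)` followed by `f.close()`, or a `with` block) applies to the objects file.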
https://anthonydebarros.com/2012/03/11/generate-json-from-sql-using-python/
NAME
socketpair - create a pair of connected sockets
SYNOPSIS
#include <sys/types.h>          /* See NOTES */
#include <sys/socket.h>

int socketpair(int d, int type, int protocol, int sv[2]);
DESCRIPTION
The socketpair() call creates an unnamed pair of connected sockets in the specified domain d, of the specified type, and using the optionally specified protocol. The descriptors used in referencing the new sockets are returned in sv[0] and sv[1]. The two sockets are indistinguishable.
RETURN VALUE
On success, zero is returned. On error, -1 is returned, and errno is set appropriately.
ERRORS
EAFNOSUPPORT
The specified address family is not supported on this machine.
EFAULT
The address sv does not specify a valid part of the process address space.
EMFILE
Too many descriptors are in use by this process.
ENFILE
The system limit on the total number of open files has been reached.
EOPNOTSUPP
The specified protocol does not support creation of socket pairs.
EPROTONOSUPPORT
The specified protocol is not supported on this machine.
CONFORMING TO
4.4BSD, POSIX.1-2001. The socketpair() function call appeared in 4.2BSD. It is generally portable to/from non-BSD systems supporting clones of the BSD socket layer (including System V variants).
NOTES
On Linux, the only supported domain for this call is AF_UNIX (or synonymously, AF_LOCAL). (Most implementations have the same restriction.) POSIX.1-2001 does not require the inclusion of <sys/types.h>, and this header file is not required on Linux. However, some historical (BSD) implementations required this header file, and portable applications are probably wise to include it.
SEE ALSO
pipe(2), read(2), socket(2), write(2), unix(7)
COLOPHON
This page is part of release 2.77 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
http://manpages.ubuntu.com/manpages/hardy/en/man2/socketpair.2.html
Examples:
In computer communication, there are many cases in which a protocol needs to cope with packet loss. In order to provide a reliable channel, lost packets must be retransmitted. This example shows a simple communication protocol that uses (the absence of) acknowledgment packets to detect a lost packet. If a lost packet is detected, the packet is retransmitted.
The example shows two protothreads, one for the sender and one for the receiver.
#include "pt.h"

PT_THREAD(sender(struct pt *pt))
{
  PT_BEGIN(pt);

  do {
    send_packet();

    /* Wait until an acknowledgement has been received, or until the
       timer expires. If the timer expires, we should send the packet
       again. */
    timer_set(&timer, TIMEOUT);
    PT_WAIT_UNTIL(pt, acknowledgment_received() ||
                      timer_expired(&timer));
  } while(timer_expired(&timer));

  PT_END(pt);
}

PT_THREAD(receiver(struct pt *pt))
{
  PT_BEGIN(pt);

  /* Wait until a packet has been received, and send an
     acknowledgment. */
  PT_WAIT_UNTIL(pt, packet_received());
  send_acknowledgement();

  PT_END(pt);
}
Protothreads can be used for introducing delays inside a function, without using a full threading model. The following example shows a function writing text to a one-line LCD panel. If the text is longer than the size of the panel, the text should be scrolling in from the right.
#include "pt.h"
#include "timer.h"

struct state {
  char *text;
  char *scrollptr;
  struct pt pt;
  struct timer timer;
};

PT_THREAD(display_text(struct state *s))
{
  PT_BEGIN(&s->pt);

  /* If the text is shorter than the display size, show it right
     away. */
  if(strlen(s->text) <= LCD_SIZE) {
    lcd_display_text(s->text);
  } else {
    /* If the text is longer than the display, we should scroll in the
       text from the right with a delay of one second per scroll step.
       We do this in a for() loop, where the loop variable is the
       pointer to the first character to be displayed. */
    for(s->scrollptr = s->text;
        strlen(s->scrollptr) > LCD_SIZE;
        ++s->scrollptr) {
      lcd_display_text(s->scrollptr);

      /* Wait for one second. */
      timer_set(&s->timer, ONE_SECOND);
      PT_WAIT_UNTIL(&s->pt, timer_expired(&s->timer));
    }
  }

  PT_END(&s->pt);
}
This example is a bit more complicated and shows how to implement a simple code lock: the kind of device that is placed next to a door and into which you have to punch a four-digit code in order to unlock the door.
The code lock waits for key presses from a numeric keyboard and if the correct code is entered, the lock is unlocked. There is a maximum time of one second between each key press, and after the correct code has been entered, no more keys must be pressed for 0.5 seconds before the lock is opened.
The code uses external functions for timers and for checking the keyboard. These functions are included in the full source code for this example, which is included in the downloadable tarball.
/*
 * This is the code that has to be entered.
 */
static const char code[4] = {'1', '4', '2', '3'};

/*
 * Declaration of the protothread function implementing the code lock
 * logic. The protothread function is declared using the PT_THREAD()
 * macro. The function is declared with the "static" keyword since it
 * is local to this file. The name of the function is codelock_thread
 * and it takes one argument, pt, of the type struct pt.
 */
static
PT_THREAD(codelock_thread(struct pt *pt))
{
  /* This is a local variable that holds the number of keys that have
   * been pressed. Note that it is declared with the "static" keyword
   * to make sure that the variable is *not* allocated on the stack. */
  static int keys;

  /*
   * Declare the beginning of the protothread.
   */
  PT_BEGIN(pt);

  /*
   * We'll let the protothread loop until the protothread is
   * explicitly exited with PT_EXIT().
   */
  while(1) {

    /*
     * We'll be reading key presses until we get the right number of
     * correct keys.
     */
    for(keys = 0; keys < sizeof(code); ++keys) {

      /*
       * If we haven't gotten any keypresses, we'll simply wait for
       * one.
       */
      if(keys == 0) {
        /*
         * The PT_WAIT_UNTIL() function will block until the condition
         * key_pressed() is true.
         */
        PT_WAIT_UNTIL(pt, key_pressed());
      } else {
        /*
         * If the "keys" variable was larger than zero, we have
         * already gotten at least one correct key press. If so, we'll
         * not only wait for the next key, but we'll also set a timer
         * that expires in one second. This gives the person pressing
         * the keys one second to press the next key in the code.
         */
        timer_set(&codelock_timer, 1000);

        /*
         * The following statement shows how complex blocking
         * conditions can be easily expressed with protothreads and
         * the PT_WAIT_UNTIL() function.
         */
        PT_WAIT_UNTIL(pt, key_pressed() || timer_expired(&codelock_timer));

        /*
         * If the timer expired, we should break out of the for() loop
         * and start reading keys from the beginning of the while(1)
         * loop instead.
         */
        if(timer_expired(&codelock_timer)) {
          printf("Code lock timer expired.\n");
          break;
        }
      }

      /*
       * Check if the pressed key was correct.
       */
      if(key != code[keys]) {
        printf("Incorrect key '%c' found\n", key);
        /*
         * Break out of the for() loop since the key was incorrect.
         */
        break;
      } else {
        printf("Correct key '%c' found\n", key);
      }
    }

    /*
     * Check if we have gotten all keys.
     */
    if(keys == sizeof(code)) {
      printf("Correct code entered, waiting for 500 ms before unlocking.\n");

      /*
       * Ok, we got the correct code. But to make sure that the code
       * was not just a fluke of luck by an intruder, but the correct
       * code entered by a person that knows the correct code, we'll
       * wait for half a second before opening the lock. If another
       * key is pressed during this time, we'll assume that it was a
       * fluke of luck that the correct code was entered the first
       * time.
       */
      timer_set(&codelock_timer, 500);
      PT_WAIT_UNTIL(pt, key_pressed() || timer_expired(&codelock_timer));

      /*
       * If we continued from the PT_WAIT_UNTIL() statement without
       * the timer having expired, we don't open the lock.
       */
      if(!timer_expired(&codelock_timer)) {
        printf("Key pressed during final wait, code lock locked again.\n");
      } else {
        /*
         * If the timer expired, we'll open the lock and exit from the
         * protothread.
         */
        printf("Code lock unlocked.\n");
        PT_EXIT(pt);
      }
    }
  }

  /*
   * Finally, we'll mark the end of the protothread.
   */
  PT_END(pt);
}
The following example shows how to implement the bounded buffer problem using the protothreads semaphore library. The example uses three protothreads: one producer() protothread that produces items, one consumer() protothread that consumes items, and one driver_thread() that schedules the producer and consumer protothreads.
Note that there is no need for a mutex to guard the add_to_buffer() and get_from_buffer() functions because of the implicit locking semantics of protothreads - a protothread will never be preempted and will never block except in an explicit PT_WAIT statement.
#include "pt-sem.h"

#define NUM_ITEMS 32
#define BUFSIZE 8

static struct pt_sem full, empty;

static
PT_THREAD(producer(struct pt *pt))
{
  static int produced;

  PT_BEGIN(pt);

  for(produced = 0; produced < NUM_ITEMS; ++produced) {
    PT_SEM_WAIT(pt, &full);
    add_to_buffer(produce_item());
    PT_SEM_SIGNAL(pt, &empty);
  }

  PT_END(pt);
}

static
PT_THREAD(consumer(struct pt *pt))
{
  static int consumed;

  PT_BEGIN(pt);

  for(consumed = 0; consumed < NUM_ITEMS; ++consumed) {
    PT_SEM_WAIT(pt, &empty);
    consume_item(get_from_buffer());
    PT_SEM_SIGNAL(pt, &full);
  }

  PT_END(pt);
}

static
PT_THREAD(driver_thread(struct pt *pt))
{
  static struct pt pt_producer, pt_consumer;

  PT_BEGIN(pt);

  PT_SEM_INIT(&empty, 0);
  PT_SEM_INIT(&full, BUFSIZE);

  PT_INIT(&pt_producer);
  PT_INIT(&pt_consumer);

  PT_WAIT_THREAD(pt, producer(&pt_producer) & consumer(&pt_consumer));

  PT_END(pt);
}
This example shows an interrupt handler in a device driver for a TR1001 radio chip. The driver receives incoming data in bytes and constructs a frame that is covered by a CRC checksum. The driver is implemented both with protothreads and with an explicit state machine. The state machine has 11 states and is implemented using the C switch() statement. In contrast, the protothreads-based implementation does not have any explicit states.
The flow of control in the state machine-based implementation is quite hard to follow from inspection of the code, whereas the flow of control is evident in the protothreads based implementation.
PT_THREAD(tr1001_rxhandler(unsigned char incoming_byte))
{
  PT_YIELDING();
  static unsigned char rxtmp, tmppos;

  PT_BEGIN(&rxhandler_pt);

  while(1) {
    /* Wait until we receive the first synchronization byte. */
    PT_WAIT_UNTIL(&rxhandler_pt, incoming_byte == SYNCH1);

    tr1001_rxstate = RXSTATE_RECEVING;

    /* Read all incoming synchronization bytes. */
    PT_WAIT_WHILE(&rxhandler_pt, incoming_byte == SYNCH1);

    /* We should receive the second synch byte by now, otherwise we'll
       restart the protothread. */
    if(incoming_byte != SYNCH2) {
      PT_RESTART(&rxhandler_pt);
    }

    /* Reset the CRC. */
    rxcrc = 0xffff;

    /* Read packet header. */
    for(tmppos = 0; tmppos < TR1001_HDRLEN; ++tmppos) {

      /* Wait for the first byte of the packet to arrive. */
      PT_YIELD(&rxhandler_pt);

      /* If the incoming byte isn't a valid Manchester encoded byte,
         we start again from the beginning. */
      if(!me_valid(incoming_byte)) {
        PT_RESTART(&rxhandler_pt);
      }

      rxtmp = me_decode8(incoming_byte);

      /* Wait for the next byte to arrive. */
      PT_YIELD(&rxhandler_pt);

      if(!me_valid(incoming_byte)) {
        PT_RESTART(&rxhandler_pt);
      }

      /* Put together the two bytes into a single Manchester decoded
         byte. */
      tr1001_rxbuf[tmppos] = (rxtmp << 4) | me_decode8(incoming_byte);

      /* Calculate the CRC. */
      rxcrc = crc16_add(tr1001_rxbuf[tmppos], rxcrc);
    }

    /* Since we've got the header, we can grab the length from it. */
    tr1001_rxlen = ((((struct tr1001_hdr *)tr1001_rxbuf)->len[0] << 8) +
                    ((struct tr1001_hdr *)tr1001_rxbuf)->len[1]);

    /* If the length is longer than we can handle, we'll start from
       the beginning. */
    if(tmppos + tr1001_rxlen > sizeof(tr1001_rxbuf)) {
      PT_RESTART(&rxhandler_pt);
    }

    /* Read packet data. */
    for(tmppos = 6; tmppos < tr1001_rxlen + TR1001_HDRLEN; ++tmppos) {
      PT_YIELD(&rxhandler_pt);
      if(!me_valid(incoming_byte)) {
        PT_RESTART(&rxhandler_pt);
      }
      rxtmp = me_decode8(incoming_byte);

      PT_YIELD(&rxhandler_pt);
      if(!me_valid(incoming_byte)) {
        PT_RESTART(&rxhandler_pt);
      }
      tr1001_rxbuf[tmppos] = (rxtmp << 4) | me_decode8(incoming_byte);
      rxcrc = crc16_add(tr1001_rxbuf[tmppos], rxcrc);
    }

    /* Read the frame CRC. */
    for(tmppos = 0; tmppos < 4; ++tmppos) {
      PT_YIELD(&rxhandler_pt);
      if(!me_valid(incoming_byte)) {
        PT_RESTART(&rxhandler_pt);
      }
      rxcrctmp = (rxcrctmp << 4) | me_decode8(incoming_byte);
    }

    if(rxcrctmp == rxcrc) {
      /* A full packet has been received and the CRC checks out. We'll
         request the driver to take care of the incoming data. */
      tr1001_drv_request_poll();

      /* We'll set the receive state flag to signal that a full frame
         is present in the buffer, and we'll wait until the buffer has
         been taken care of. */
      tr1001_rxstate = RXSTATE_FULL;
      PT_WAIT_UNTIL(&rxhandler_pt, tr1001_rxstate != RXSTATE_FULL);
    }
  }

  PT_END(&rxhandler_pt);
}
/* No bytes read, waiting for synch byte. */
#define RXSTATE_READY   0
/* Second start byte read, waiting for header. */
#define RXSTATE_START   1
/* Reading packet header, first Manchester encoded byte. */
#define RXSTATE_HEADER1 2
/* Reading packet header, second Manchester encoded byte. */
#define RXSTATE_HEADER2 3
/* Reading packet data, first Manchester encoded byte. */
#define RXSTATE_DATA1   4
/* Reading packet data, second Manchester encoded byte. */
#define RXSTATE_DATA2   5
/* Receiving CRC16. */
#define RXSTATE_CRC1    6
#define RXSTATE_CRC2    7
#define RXSTATE_CRC3    8
#define RXSTATE_CRC4    9
/* A full packet has been received. */
#define RXSTATE_FULL    10

void tr1001_rxhandler(unsigned char c)
{
  switch(tr1001_rxstate) {
  case RXSTATE_READY:
    if(c == SYNCH1) {
      tr1001_rxstate = RXSTATE_START;
      rxpos = 0;
    }
    break;
  case RXSTATE_START:
    if(c == SYNCH1) {
      tr1001_rxstate = RXSTATE_START;
    } else if(c == SYNCH2) {
      tr1001_rxstate = RXSTATE_HEADER1;
      rxcrc = 0xffff;
    } else {
      tr1001_rxstate = RXSTATE_READY;
    }
    break;
  case RXSTATE_HEADER1:
    if(me_valid(c)) {
      tr1001_rxbuf[rxpos] = me_decode8(c);
      tr1001_rxstate = RXSTATE_HEADER2;
    } else {
      tr1001_rxstate = RXSTATE_ERROR;
    }
    break;
  case RXSTATE_HEADER2:
    if(me_valid(c)) {
      tr1001_rxbuf[rxpos] = (tr1001_rxbuf[rxpos] << 4) | me_decode8(c);
      rxcrc = crc16_add(tr1001_rxbuf[rxpos], rxcrc);
      ++rxpos;
      if(rxpos == TR1001_HDRLEN) {
        tr1001_rxlen = ((((struct tr1001_hdr *)tr1001_rxbuf)->len[0] << 8) +
                        ((struct tr1001_hdr *)tr1001_rxbuf)->len[1]);
        if(rxpos + tr1001_rxlen <= sizeof(tr1001_rxbuf)) {
          tr1001_rxstate = RXSTATE_DATA1;
        } else {
          tr1001_rxstate = RXSTATE_READY;
        }
      } else {
        tr1001_rxstate = RXSTATE_HEADER1;
      }
    } else {
      tr1001_rxstate = RXSTATE_READY;
    }
    break;
  case RXSTATE_DATA1:
    if(me_valid(c)) {
      tr1001_rxbuf[rxpos] = me_decode8(c);
      tr1001_rxstate = RXSTATE_DATA2;
    } else {
      tr1001_rxstate = RXSTATE_READY;
    }
    break;
  case RXSTATE_DATA2:
    if(me_valid(c)) {
      tr1001_rxbuf[rxpos] = (tr1001_rxbuf[rxpos] << 4) | me_decode8(c);
      rxcrc = crc16_add(tr1001_rxbuf[rxpos], rxcrc);
      ++rxpos;
      if(rxpos == tr1001_rxlen + TR1001_HDRLEN) {
        tr1001_rxstate = RXSTATE_CRC1;
      } else if(rxpos > sizeof(tr1001_rxbuf)) {
        tr1001_rxstate = RXSTATE_READY;
      } else {
        tr1001_rxstate = RXSTATE_DATA1;
      }
    } else {
      tr1001_rxstate = RXSTATE_READY;
    }
    break;
  case RXSTATE_CRC1:
    if(me_valid(c)) {
      rxcrctmp = me_decode8(c);
      tr1001_rxstate = RXSTATE_CRC2;
    } else {
      tr1001_rxstate = RXSTATE_READY;
    }
    break;
  case RXSTATE_CRC2:
    if(me_valid(c)) {
      rxcrctmp = (rxcrctmp << 4) | me_decode8(c);
      tr1001_rxstate = RXSTATE_CRC3;
    } else {
      tr1001_rxstate = RXSTATE_READY;
    }
    break;
  case RXSTATE_CRC3:
    if(me_valid(c)) {
      rxcrctmp = (rxcrctmp << 4) | me_decode8(c);
      tr1001_rxstate = RXSTATE_CRC4;
    } else {
      tr1001_rxstate = RXSTATE_READY;
    }
    break;
  case RXSTATE_CRC4:
    if(me_valid(c)) {
      rxcrctmp = (rxcrctmp << 4) | me_decode8(c);
      if(rxcrctmp == rxcrc) {
        tr1001_rxstate = RXSTATE_FULL;
        tr1001_drv_request_poll();
      } else {
        tr1001_rxstate = RXSTATE_READY;
      }
    } else {
      tr1001_rxstate = RXSTATE_READY;
    }
    break;
  case RXSTATE_FULL:
    /* Just drop the incoming byte. */
    break;
  default:
    /* Just drop the incoming byte. */
    tr1001_rxstate = RXSTATE_READY;
    break;
  }
}
http://dunkels.com/adam/pt/examples.html
In future projects I can see these modules being a great way to add some intelligence to a Pi-powered robot or car.
The HC-SR04 module cost approximately £3 ($5) and is the size of a box of matches. The two transducers give it a distinctive appearance. It is designed to be powered by 5V, has 1 input pin and 1 output pin. The module works by sending an ultrasonic pulse into the air and measuring the time it takes to bounce back. This value can then be used to calculate the distance the pulse travelled.
Connecting To The Pi
Powering the module is easy. Just connect the +5V and Ground pins to Pin 2 and Pin 6 on the Pi’s GPIO header.
The input pin on the module is called the “trigger” as it is used to trigger the sending of the ultrasonic pulse. Ideally it wants a 5V signal but it works just fine with a 3.3V signal from the GPIO. So I connected the trigger directly to Pin 16 (GPIO23) on my GPIO header.
You can use any GPIO pins you like on your RPi but you will need to note the references and amend your Python script accordingly.
Ultrasonic Module Circuit
Here is a photo of my circuit. I used a small piece of breadboard and some male-to-female jumper cables.
Python Script
Now for the script to actually take some measurements. In this example I am using Python. Why Python? It’s my favourite language on the Pi so I tend to use it for all my experiments but the technique here can easily be applied to C.
You can either download the script directly using this link or via the command line on the Pi using:
wget
This can then be run using :
sudo python ultrasonic_1.py
Speed of Sound
The calculation that is used to find the distance relies on the speed of sound. This varies with temperature. The scripts calculate the correct value to use based on a pre-defined temperature. You can change this value if required or perhaps measure it dynamically using a temperature sensor.
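The temperature correction can be sketched as follows. The linear approximation for the speed of sound (331.3 m/s at 0°C plus roughly 0.606 m/s per degree Celsius) is a commonly used formula; the exact constants in the downloadable script are not shown here, so treat these as illustrative:

```python
def speed_of_sound(temp_c):
    """Approximate speed of sound in air, in metres per second."""
    return 331.3 + 0.606 * temp_c

def distance_cm(elapsed_s, temp_c=20.0):
    """Distance to the target given the echo pulse length in seconds.

    The pulse travels out and back, so the total distance is halved.
    """
    return (elapsed_s * speed_of_sound(temp_c) * 100) / 2

# An echo pulse of ~2.9 ms corresponds to roughly 50 cm at 20 C.
print(round(distance_cm(0.0029), 1))
```

Swapping the fixed `temp_c` for a reading from a temperature sensor is all that dynamic compensation would take.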
Photos
Here are some photos of my ultrasonic sensor connected to the Raspberry Pi via the GPIO header:
Accuracy
Here are some points about accuracy:
- The transducers have a wide angle of sensitivity. In a cluttered environment you may get shorter readings due to objects to the side of the module.
- Measurements work down to about 2cm. Below this limit the readings can be strange.
- If the ultrasonic transducers touch anything the results are unpredictable.
Thanks to this technology I now know that the distance from my desk to the ceiling is 155cm.
Update
If anyone wants to experiment with these devices in C then check out this page :
It also includes a comparison between Python and C implementations.
Credits
Thanks to Leroy Milamber for correcting an error in my code.
Related Articles
Here are some of my other articles you might find interesting if you enjoyed this one :
Thanks so much for the post. I received one of these sensors for Christmas!
Excellent! Definitely worth a play. Not sure what I am going to use one for but if I ever make a Pi based vehicle it will almost certainly feature a few of them.
I didn’t see a model number for the sensor mentioned here, but it looks very similar in both appearance and operation to the HY-SRF05’s I played about with.
Whilst I was experimenting with these I managed to persuade Joan on the forums to write a simple C implementation for the HY-SRF05, but it looks like it could be used with yours without modification. If you fancy some experimentation, the code is posted as part of an article here …
Mine is labelled SRF04 although my photos seem to have concealed the “4”. Had a look at your page. Great stuff. I’ll add a link from my article to your page for anyone who is interested in working with C.
Thanks for the link. I should probably dig my sensors out and make some time to have another play around with them. It sounds like some of my measurements may have been rather affected by clutter, as they would have been taken from my desk.
Interesting post. Do you really think the timings will be better if you use C? I’ve got one of these too, but was planning to use an Arduino in between the Pi and the sensor because of the accuracy of timing on the Pi.
It was more of a guess but I suspect C would be slightly faster. Although the main source of error I saw was the influence of surrounding surfaces. If the sensor had a wide, clear “tunnel” to the target the accuracy was fairly good. As soon as you get a bit of clutter in the environment the results can be unexpected.
There seems to be a discrepancy between what you describe and your code:
You mention that the module sets ECHO to HIGH (5V) for the amount of time it took the pulse to go and come back…
—quote—
The output pin is low (0V) until the module has taken its distance measurement. It then sets this pin high (+5V) for the same amount of time that it took the pulse to return. So our script needs to measure the time this pin stays high.
—quote—
Your code actually measures the time from the end of the trigger until the end of the HIGH echo…
Based on your module’s description, should it not rather be:
# Send 10us pulse to trigger
GPIO.output(GPIO_TRIGGER, True)
time.sleep(0.00001)
GPIO.output(GPIO_TRIGGER, False)

# measure the time this pin stays high
# start value is the last low for ECHO pin
while GPIO.input(GPIO_ECHO)==0:
    start = time.time()

# end value is the last high for ECHO pin
while GPIO.input(GPIO_ECHO)==1:
    stop = time.time()

# Calculate pulse length
elapsed = stop-start
and maybe you won’t need the adjustment value.
Thoughts ?
Kind regards
Leroy, you are correct. Not quite sure why I was recording the stop time when I should have been updating the start time while waiting for the pulse to start. I have updated my script and removed the adjustment value. I have quickly tested with my hardware and the adjustment value is no longer needed. Thank you for your helpful comment!
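For reference, the speed-of-sound conversion used by the corrected script can be isolated into a small helper (a sketch; `pulse_to_distance` is a name of my choosing, and 34320 cm/s is the value the article's script uses for the speed of sound):

```python
SPEED_OF_SOUND_CM_S = 34320  # value used by the article's script

def pulse_to_distance(elapsed):
    """Convert an echo pulse length in seconds to a distance in cm.

    The pulse covers the out-and-back trip, so divide by two.
    """
    return elapsed * SPEED_OF_SOUND_CM_S / 2

print(pulse_to_distance(0.01))  # roughly 171.6 cm for a 10 ms pulse
```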
Great stuff. Thanks for the excellent write-up. I was able to interface this to my Pi without any smoke. Thanks for the knowledge of the voltage divider!
How accurate is this sensor? What is the resolution? Can it distinguish between 3 inches and 4 inches, for example? What are the maximum and minimum ranges
it can be used at?
Thanks
It can tell the difference between 3 and 4 inches. The minimum range is probably about 1 inch. I haven’t tried it over longer distances so not quite sure what the maximum is.
Cheapest ultrasonic unit I can find is £14.85!? Any advice? K..
Search eBay for “ultrasonic module”. There should be plenty for £3 including UK sellers.
I had some initial problems with this on a rev2 Pi. It was sensing the initial pulse and so returning some very small figures. Looking at the spec sheet for this device (mine is a HC-SR04) it suggests a 60ms wait after the trigger to avoid detecting the trigger. That's an easy fix:
after the
GPIO.output(GPIO_TRIGGER, False) command just add
time.sleep(0.00006)
and all works fine.
The spec sheet also says it will measure up to 4m, but in reality it is a lot shorter, and using Python the distance is not very accurate (within 5cm per metre-ish). It also measures down to about 3cm in my experience.
Thanks for the code though, saved me a huge headache trying to fathom it.
Can't wait to get mine Matt, should be here soon.
Going to be ripping off this article into a video soon 🙂 Standard reference back to you though 🙂
Hey Matt,
Made this for my upcoming video. Thought you might want a copy to update your blog.
Thanks
Matt
Hey!
Thanks for this great tutorial! There is one thing I am having a few issues with: how would I get a continuous display of data, not just the one value? And also, what are the units? CM?
Thanks!
The Raspberry Pi Guy
Hi,
This is excellent article/tutorial.
I have made a 20m long UTP cable with the sensor connected on one end and the Raspberry Pi on the other (using a single wire for each connection). Everything is working perfectly (even without the delay mentioned in the comments by PJ Lucas).
What I have found is a problem in the code… There is a possibility that:
– the GPIO pin is already in use
– the program never gets a high or low signal in the while loop (this happened to me twice)
For the first issue the code needs to check whether the GPIO is already in use; for the second I would first calculate the maximum time the program should ever need to wait, then compare the start/end time against that value and exit the loop.
My application of this tutorial will be to have sensor on 40m long cable with sensor set down in water well measuring water level (1m over water level aprox, covered with warm plastic to prevent water damage), and raspi would be outside of well in dry place.
Thank you for tutorial and idea how to use raspi.
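The loop-timeout idea suggested above can be sketched as a generic polling helper (hypothetical names, not from the article; on the Pi the condition would wrap `GPIO.input`):

```python
import time

def wait_for(condition, timeout):
    """Poll condition() until it returns True, giving up after
    timeout seconds.

    Returns True if the condition was met and False on timeout,
    so a missing echo pulse cannot hang the program forever.
    """
    deadline = time.time() + timeout
    while not condition():
        if time.time() > deadline:
            return False
    return True

# On the Pi this might be used as:
#   if not wait_for(lambda: GPIO.input(GPIO_ECHO) == 1, timeout=0.1):
#       print("No echo received")
```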
Hi,
where do you get those tiny voltage dividers from? I couldn’t even find them on amazon!
thanks! great site!
LOS
The voltage divider is just two standard resistors.
This won't work for me. It does not get any echo and therefore loops in line 43/44.
However, if I directly connect the echo pin (no Rs) it gives me results.
Why is that? Can anyone tell me?
P.S:
Minimum Range is ~ 3.0 and maximum is 120
I can only suggest the module you are using isn't outputting 5V on the echo pin. Is it the same module? Are you powering it from 5V? If it is using a lower voltage the resistors would be feeding the GPIO pin with a voltage that is too low to register as a valid HIGH.
I have a query regarding this. I get a feeling that since the OS running on top of the RPi – Raspbian is not a real-time operating system, the trigger signal from the ultrasonic can be missed on reflection. This can be due to the fact that there are several other processes running in the background of the RPi, and so the echo pin’s reading command can be missed out or done later than when it should have.
I used the module to find out height of people walking under the sensor. I tried implementations on RPi (python code) as well as Arduino. While the Arduino recorded around 10 values on an average for a person walking, the RPi code was only able to record around 4-5 values. This, I presume might be because of the reason I mentioned above. Any thoughts?
hey matt
i used your ultrasonic_2.py program in my project. I want to add a beep to my program (i.e. it should generate a beep when it calculates a distance or detects an obstacle) but I am unable to do so.
Pingback: Raspberry Pi – Ultrasonic distance sensor | Paul Görgen
Hi, great tutorial!
How many watts do the resistors have?
They are standard 1/4W resistors.
hello, I want to connect 2 or possibly more ultrasonic sensors to 1 Raspberry Pi, can you walk me through what I need? I'm trying to duplicate all the code (by this I mean defining 2 new pins, setting them up, basically just duplicating everything) but it's not working for me.
Hi,
Thank you for the wonderful write up. Unfortunately, when i try to run it, i get bogus readings.
For example, 12.2 cm distance no matter what object I put in front of it. The only things I changed were the pin assignments:
import time
import RPi.GPIO as gpio

gpio.setmode( gpio.BCM )

# setup pin assignments
sonar_trigger = 27
sonar_echo = 22

print "Ultrasonic Measurement"

gpio.setup( sonar_trigger, gpio.OUT )
gpio.setup( sonar_echo, gpio.IN )

while( True ):
    gpio.output( sonar_trigger, False )
    # allow module to settle
    time.sleep( 0.5 )
    # send 10us pulse to TRIG
    gpio.output( sonar_trigger, True )
    time.sleep( 0.00001 )
    gpio.output( sonar_trigger, False )
    while gpio.input( sonar_echo ) == 0:
        start_time = time.time()
    while gpio.input( sonar_echo ) == 1:
        end_time = time.time()
    # calculate pulse length
    duration = end_time - start_time
    # convert pulse length to distance (speed of sound = 34320 cm/s)
    distance = duration * 34320
    distance = distance / 2
    print "Distance: %.1fcm" % distance
    time.sleep(1)
Does your code have the correct indentation?
Great Tutorial! Worked flawlessly!
The next step for me is to make a stepper motor stop at a certain distance. Can you help me how to do that sir?
Like when the Ultrasonic is reading 2cm, stop the motor.
I can give you the short code if you like 🙂
I’m enjoying this tutorial as well. I have a question on the electronics side. Based on your figure, I’m calculating that the voltage drop from Echo out to the Raspberry Pi pin is:
5 * (330 / (330 + 470)) = 2.06,
which I thought would be 3.3. What am I missing?
Thanks,
Ben
The equation for a voltage divider has R2 on the top not R1. So in this case it is 5 * (470 / (330 + 470)) = 2.94.
Ah, okay. The pin’s voltage drop happens over R2 because that is the path to ground. Thanks for the explanation!
For what it’s worth, my incorrect implementation still worked. I found that the threshold for high is 1.3 volts. I was at 1.7 with my resistor setup, so I guess I still made it over.
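The divider arithmetic in this exchange is easy to check with a couple of lines of Python (a sketch; R2 is the resistor between the GPIO pin and ground):

```python
def divider_out(vin, r1, r2):
    """Output of a two-resistor voltage divider: vin is applied
    across r1 + r2 and the output is taken across r2 (to ground)."""
    return vin * r2 / (r1 + r2)

print(round(divider_out(5, 330, 470), 2))  # 2.94, the correct wiring
print(round(divider_out(5, 470, 330), 2))  # 2.06, with the resistors swapped
```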
Hi, great tutorial. is there any way to connect a second sensor? if so what alteration would need to be made on the scripting?
Great instruction! Worked perfectly for me first time!
Hi!
This post is pretty old, but is nice if you want to start with the ultrasound device.
I’ve polished the code to make it pretty reliable (14 cm reads as 13.9 to 14.1 cm).
wget
Enjoy the code!
Greetings,
Maarten
Hello
Good article, thanks for this
I cannot get 5V from the echo pin. What is the reason? Any suggestions?
You can’t read the 5V output directly with a GPIO pin on the Pi because they can only accept a maximum of 3.3V. If you put 5V on a GPIO pin it may get damaged.
The best thing to do is use the resistors but try different GPIO pins. As long as you update the references in the code you can use whatever GPIO pins you want.
You'll get more stable results by clamping the measurements to a max value.
while ( GPIO.input(GPIO_ECHO)==1 and (inRange == True) ):
    stop = time.time()
    elapsed = stop - start
    if (elapsed > 0.017):
        inRange = False
Pingback: Interactive Scuplture Pi | abstractunicorn
Great tutorial. I am in the process now of doing this exact project. I was wondering how hard it would be to have the system not display a running distance but instead email an address if the distance reached a certain point? I am thinking this would work well for a water level monitor. Any thoughts? Thanks again.
It’s possible to send email using Python so this is quite possible. You would just need to read a value, check if it was above or below a certain value and then call a function to send the email. I’ve got a tutorial on sending email with Python :
I am thinking of using this thing to detect if a person is standing in front of my living room’s door. Can it detect a person who is standing 1-3 feet away from the sensor? Also can it trigger a push notification on my mobile which would be in the same private network as my Pi3?
It should work OK at that range. For push notifications on iOS and Android you can use the excellent service. You can trigger notifications using their Python API.
Thanks a lot Matt for such a detailed article !!
Related discussions:
- Use Compound Statement in JSP Code
- compound labelling - Java Beginners
- Use Break Statement in jsp code
- Break statement in java
- JSP If statement Example
- Nested If Statement
- JSP IF Statement
- How to use 'if' statement in jsp page?
- How to use switch statement in jsp code
- Insert data in mysql database through jsp using prepared statement
- JSP code
- Break Statement in JSP
- jsp code - JSP-Servlet
- The Switch statement
- Use if statement with LOOP statement
- prepared statement
- Update statement
- code for jsp - Ajax
- Closing Statement and ResultSet Objects - Development process
- ajax and jsp code - Ajax
- code - JSP-Servlet
- Continue statement in jsp
- What is the difference between an if statement and a switch statement?
- The INSERT INTO Statement, SQL Tutorial
- Switch statement in PHP
- jsp code
- ELSE STATEMENT!! - Java Beginners
SyncMutexEvent(), SyncMutexEvent_r()
Attach an event to a mutex
Synopsis:
#include <sys/neutrino.h>

int SyncMutexEvent( sync_t * sync,
                    struct sigevent * event );

int SyncMutexEvent_r( sync_t * sync,
                      struct sigevent * event );
Arguments:
- sync
- A pointer to the synchronization object for the mutex that you want to attach an event to.
- event
- A pointer to the sigevent structure that describes the event that you want to attach, or NULL if you want to detach the currently registered event.
If you call SyncMutexEvent() with a NULL event, the function deletes any existing event registration.
SyncMutexEvent() and SyncMutexEvent_r() are similar, except for the way they indicate errors. See the Returns section for details.
Returns:
The only difference between these functions is the way they indicate errors:
- SyncMutexEvent()
- If an error occurs, the function returns -1 and sets errno. Any other value returned indicates success.
- SyncMutexEvent_r()
- EOK on success. This function doesn't set errno, even on success; if an error occurs, it returns one of the codes from the Errors section.
Errors:
- EAGAIN
- All kernel synchronization event objects are in use.
- EFAULT
- A fault occurred when the kernel tried to access sync.
- EINVAL
- The synchronization object pointed to by sync doesn't exist.
- EPERM
- The calling process doesn't have the required permission; see procmgr_ability() .
RPC::ExtDirect::API - Remoting API generator for Ext.Direct
use RPC::ExtDirect::API
    namespace    => 'myApp',
    router_path  => '/router',
    poll_path    => '/events',
    remoting_var => 'Ext.app.REMOTING_API',
    polling_var  => 'Ext.app.POLLING_API',
    auto_connect => 0,
    no_polling   => 0,
    before       => \&global_before_hook,
    after        => \&global_after_hook,
;
This module provides Ext.Direct API code generation.
In order for Ext.Direct client code to know about what Actions (classes) and Methods are available on the server side, these should be defined in a chunk of JavaScript code that gets requested from the client at startup time. It is usually included in the index.html after main ExtJS code:
<script type="text/javascript" src="extjs/ext-debug.js"></script>
<script type="text/javascript" src="/extdirect_api"></script>
<script type="text/javascript" src="myapp.js"></script>
RPC::ExtDirect::API provides a way to configure the Ext.Direct definition variable(s) to accommodate specific application needs. To do so, pass configuration options to the module when you 'use' it, as shown above.
The following configuration options are supported:
namespace - Declares the namespace your Actions will live in. To call the Methods on the client side, you will have to qualify them with the namespace: namespace.Action.Method, e.g.: myApp.Foo.Bar

router_path - URI for Ext.Direct Router calls. For the CGI implementation, this should be the name of the CGI script that provides the API; for more sophisticated environments it is an anchor for the specified PATH_INFO.

poll_path - URI for Ext.Direct Event provider calls. The client side will poll this URI periodically, hence the name.

remoting_var - By default, the Ext.Direct configuration for remoting (forward) Methods is stored in the Ext.app.REMOTING_API variable. If for any reason you would like to change that, do so by setting remoting_var. Note that in a production environment you would probably want to use a compiled version of the JavaScript application that consists of one big JavaScript file. In that case, it is recommended to include the API declaration as the first script in your index.html and change the remoting API variable name to something like EXT_DIRECT_API. The default variable name depends on the Ext.app namespace being available by the time the Ext.Direct API is downloaded, which is often not the case.

polling_var - By default, Ext.Direct does not provide a standard name for Event providers to be advertised in. For similarity, the POLLING_API name is used to declare the Event provider so it can be used on the client side without having to hardcode any URIs explicitly. The POLLING_API configuration will only be advertised to the client side if any Event provider Methods are declared. Note that the same caveat applies here as with remoting_var.

no_polling - Explicitly declare that no Event providers are supported by the server side. This results in the POLLING_API configuration being suppressed even if there are Methods with a declared pollHandler ExtDirect attribute.

auto_connect - Generate the code that adds the Remoting and Polling providers on the client side without having to do this manually.

before - Global "before" hook.

instead - Global "instead" hook.

after - Global "after" hook.

For more information on hooks and their usage, see RPC::ExtDirect.
There are no methods intended for external use in this module.
Alexander Tokarev <tokarev@cpan.org>
This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See perlartistic.
Fisher is a package manager for the fish shell. It defines a common interface for package authors to build and distribute shell scripts in a portable way. You can use it to extend your shell capabilities, change the look of your prompt and create repeatable configurations across different systems effortlessly.
Here's why you'll love Fisher:
- No configuration needed.
- Oh My Fish! package support.
- Blazing fast concurrent package downloads.
- Cached downloads—if you've installed a package before, you can install it again offline!
- Add, update and remove functions, completions, key bindings, and configuration snippets from a variety of sources using the command line, editing your fishfile or both!
Looking for packages? Browse git.io/awesome-fish or search on GitHub.
Installation
Just download
fisher.fish to your functions directory (or any directory in your function path).
curl --create-dirs -sLo ~/.config/fish/functions/fisher.fish
Your shell can take a few seconds before loading newly added functions. If the
fisher command is not immediately available, launch a new session or replace the running shell with a new one.
System Requirements
Stuck in fish 2.0 and can't upgrade your shell? Check our legacy fish support guide and good luck!
Bootstrap installation
To automate the installation process in a new system, installing packages listed in your fishfile, add the following code to your fish configuration file.
if not functions -q fisher
    set -q XDG_CONFIG_HOME; or set XDG_CONFIG_HOME ~/.config
    curl --create-dirs -sLo $XDG_CONFIG_HOME/fish/functions/fisher.fish
    fish -c fisher
end
Changing the installation prefix
Use the
$fisher_path environment variable to change the location where functions, completions, and configuration snippets will be copied to when a package is installed. The default location will be your fish configuration directory.
Just one rule:
fisher owns
$XDG_CONFIG_HOME/fisher, and uses it for its own purposes. Trying to use this path for your own
fisher configs will break!
Note: Do I need this? If you want to keep your own functions, completions, and configuration snippets separate from packages installed with Fisher, customize the installation prefix. If you prefer to keep everything in the same place, you can skip this.
set -g fisher_path /path/to/another/location

set fish_function_path $fish_function_path[1] $fisher_path/functions $fish_function_path[2..-1]
set fish_complete_path $fish_complete_path[1] $fisher_path/completions $fish_complete_path[2..-1]

for file in $fisher_path/conf.d/*.fish
    builtin source $file 2> /dev/null
end
Getting started
You've found an interesting utility you'd like to try out. Or maybe you've created a package yourself. How do you install it on your system? How do update or remove it?
You can use Fisher to add, update, and remove packages interactively, taking advantage of fish tab completion and syntax highlighting. Or edit your fishfile and commit your changes. Do you prefer a CLI-centered approach, text-based approach, or a mix of both?
Adding packages
Add packages using the
add command followed by the path to the repository on GitHub.
fisher add jethrokuan/z rafaelrinaldi/pure
To add a package from a different location, use the address of the server and the path to the repository. HTTPS is always assumed, so you don't need to write the protocol.
fisher add gitlab.com/jorgebucaran/kraken
To add a package from a private repository set the
fisher_user_api_token variable to your username followed by a colon and your authorization token or password.
set -g fisher_user_api_token jorgebucaran:ce04da9bd93ddb5e729cfff4a58c226322c8d142
For a specific version of a package add an
@ symbol after the package name followed by the tag, branch or commit-ish. Only one package version can be installed at any given time.
fisher add edc/[email protected] jethrokuan/[email protected]
You can add packages from a local directory too. Local packages are installed as symbolic links so changes in the original files will be reflected in future shell sessions without having to re-run
fisher.
fisher add ~/path/to/local/pkg
Listing packages
List all the packages that are currently installed using the
ls command. This doesn't show package dependencies.
fisher ls
jethrokuan/z
rafaelrinaldi/pure
gitlab.com/jorgebucaran/kraken
edc/bass
~/path/to/myfish/pkg
You can use a regular expression after
ls to refine the output.
fisher ls "^gitlab|fish-.*"
Removing packages
Remove packages using the
rm command. If a package has dependencies, they too will be removed. If any dependencies are still shared by other packages, they will remain installed.
fisher rm rafaelrinaldi/pure
You can remove everything that is currently installed in one sweep using the following pipeline.
fisher ls | fisher rm
Updating packages
Run
fisher to update everything that is currently installed. There is no dedicated update command. Using the command line to add and remove packages is a quick way to modify and commit changes to your fishfile in a single step.
Looking for a way to update fisher itself? Use the
self-update command.
fisher self-update
Other commands
Use the
--help command to display usage help on the command line.
fisher --help
Last but not least, use the
--version command to display the current version of Fisher.
fisher --version
Using the fishfile
Whenever you add or remove a package from the command-line, Fisher writes the exact list of installed packages to
~/.config/fish/fishfile. This is your fishfile. Add this file to your dotfiles or version control in order to reproduce your configuration on a different system.
You can also edit this file and run
fisher to commit your changes. Only the packages listed in this file will be installed (or remained installed) after
fisher returns. If a package is already installed, it will be updated. Everything after a
# symbol will be ignored.
vi ~/.config/fish/fishfile
- rafaelrinaldi/pure
- jethrokuan/[email protected]
gitlab.com/jorgebucaran/kraken
edc/bass
+ FabioAntunes/fish-nvm
~/path/to/myfish/pkg
fisher
That will remove rafaelrinaldi/pure and jethrokuan/z, add FabioAntunes/fish-nvm and update the rest.
Digging deeper
What is a package?
Packages help you organize shell scripts into reusable, independent components that can be shared through a git URL or the path to a local directory. Even if your package is not meant to be shared with others, you can benefit from composition and the ability to depend on other packages.
The structure of a package can be adopted from the fictional project described below. These are the files that Fisher looks for when installing or uninstalling a package. The name of the root directory can be anything you like.
fish-kraken ├── fishfile ├── functions │ └── kraken.fish ├── completions │ └── kraken.fish └── conf.d └── kraken.fish
If your project depends on other packages, it should list them as dependencies in a fishfile. There is no need for a fishfile otherwise. The rules concerning the usage of the fishfile are the same rules we've already covered in using the fishfile.
While some packages contain every kind of file, some packages include only functions or configuration snippets. You are not limited to a single file per directory either. There can be as many files as you need or just one as in the following example.
fish-kraken └── kraken.fish
The lack of private scope in fish causes all package functions to share the same namespace. A good rule of thumb is to prefix functions intended for private use with the name of your package to prevent conflicts.
Creating your own package
The best way to show you how to create your own package is by building one together. Our first example will be a function that prints the raw non-rendered markdown source of a README file from GitHub to standard output. Its inputs will be the name of the owner, repository, and branch. If no branch is specified, we'll use the master branch.
Create the following directory structure and function file. Make sure the function name matches the file name; otherwise fish won't be able to autoload it the first time you try to use it.
fish-readme └── readme.fish
function readme -a owner repo branch
    if test -z "$branch"
        set branch master
    end
    curl -s
end
You can install it with the
add command followed by the path to the directory.
fisher add /absolute/path/to/fish-readme
To publish your package put your code on GitHub, GitLab, BitBucket, or anywhere you like. Fisher is not a package registry. Its function is to fetch fish scripts and put them in place so that your shell can find them.
Now let's install the package from the net. Open your fishfile and replace the local version of the package you added with the URL of the repository. Save your changes and run
fisher.
- /absolute/path/to/fish-readme
+ gitlab.com/jorgebucaran/fish-readme
fisher
You can leave off the
github.com part of the URL when adding or removing packages hosted on GitHub. If your package is hosted anywhere else, the address of the server is required.
Configuration snippets
Configuration snippets consist of all the fish files inside your
~/.config/fish/conf.d directory. They run on shell startup and are generally used to set environment variables, add new key bindings, etc.
Unlike functions or completions which can be erased programmatically, we can't undo a fish file that has been sourced without creating a new shell session. For this reason, packages that use configuration snippets provide custom uninstall logic through an uninstall event handler.
Let's walk through an example that uses this feature to add a new key binding for the Control-G sequence. Let's say we want to use it to open the fishfile in the
vi editor quickly. When you install the package,
fishfile_quick_edit_key_bindings.fish will be sourced, adding the specified key binding and loading the event handler function. When you uninstall it, Fisher will emit an uninstall event.
fish-fishfile-quick-edit
└── conf.d
    └── fishfile_quick_edit_key_bindings.fish
bind \cg "vi ~/.config/fish/fishfile"

set -l name (basename (status -f) .fish){_uninstall}
function $name --on-event $name
    bind --erase \cg
end
Uninstalling
This command also uninstalls all your packages and removes your fishfile.
fisher self-uninstall
https://reposhub.com/linux/shell-package-management/jorgebucaran-fisher.html
In this article, we'll see how to read/unzip file(s) from zip or tar.gz with Python. We will describe the extraction of single or multiple files from the archive.
If you are interested in parallel extraction from an archive, then you can check: Python Parallel Processing Multiple Zipped JSON Files Into Pandas DataFrame
Step 1: Get info from Zip Or Tar.gz Archive with Python
First, we can check the content of the zip file with this code snippet:

from zipfile import ZipFile

zipfile = 'file.zip'
z = ZipFile(zipfile)
z.infolist()
the result:
[<ZipInfo filename='pandas-dataframe-background-color-based-condition-value-python.png' ...>,
 <ZipInfo filename='text1.txt' filemode='-rw-rw-r--' external_attr=0x8020 file_size=0>]
From which we can find two filenames and size:
- pandas-dataframe-background-color-based-condition-value-python.png
- text1.txt
Step 2: List and Read all files from Archive with Python
Next we can list all files from the archive in a list by:
from zipfile import ZipFile

archive = 'file.zip'
zip_file = ZipFile(archive)
[text_file.filename for text_file in zip_file.infolist()]
Result:
['pandas-dataframe-background-color-based-condition-value-python.png',
'text1.txt']
If you like to filter them - for example only
.json ones - or read the files as Pandas DataFrames you can do:
import pandas as pd
from zipfile import ZipFile

archive = 'file.zip'
zip_file = ZipFile(archive)
dfs = {text_file.filename: pd.read_json(zip_file.open(text_file.filename))
       for text_file in zip_file.infolist()
       if text_file.filename.endswith('.json')}
dfs
Step 3: Extract files from zip archive With Python
Package
zipfile can be used in order to extract files from a zip archive in Python. Basic usage is shown below:
import zipfile

archive = 'file.zip'
directory_to_extract_to = '.'  # change this to your desired output directory

with zipfile.ZipFile(archive, 'r') as zip_file:
    zip_file.extractall(directory_to_extract_to)
Step 4: Extract files from Tar/Tar.gz With Python
For
Tar/Tar.gz files we can use the code below in order to extract the files. It uses module -
tarfile and distinguishes the two types in order to use the proper extraction mode:
import tarfile

archive = 'file.tar.gz'
if archive.endswith("tar.gz"):
    tar = tarfile.open(archive, "r:gz")
elif archive.endswith("tar"):
    tar = tarfile.open(archive, "r:")
tar.extractall()
tar.close()
Note: All files from the archive will be unzipped in the current working directory for the script.
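If you want the files somewhere other than the current working directory, `extractall` accepts a `path` argument. Below is a self-contained sketch that first builds a small tar.gz archive so it can run on its own; the file and directory names are illustrative:

```python
import os
import tarfile
import tempfile

workdir = tempfile.mkdtemp()

# Build a small tar.gz archive so the example is self-contained
src = os.path.join(workdir, 'text1.txt')
with open(src, 'w') as f:
    f.write('hello')
archive = os.path.join(workdir, 'file.tar.gz')
with tarfile.open(archive, 'w:gz') as tar:
    tar.add(src, arcname='text1.txt')

# Extract into a chosen output directory instead of the current one
outdir = os.path.join(workdir, 'out')
with tarfile.open(archive, 'r:gz') as tar:
    tar.extractall(path=outdir)

print(sorted(os.listdir(outdir)))  # ['text1.txt']
```

The `path` directory is created if it does not exist, which makes this a convenient way to keep extracted files out of your script's folder.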
Step 5: Extract single file from Archive
If you like to get just a single file from Archive then you can use the method:
zipObject.extract(fileName, 'temp_py'). Basic usage is shown below:
import zipfile

archive = 'file.zip'
with zipfile.ZipFile(archive, 'r') as zip_file:
    zip_file.extract('text1.txt', '.')
In this example we are going to extract the file -
'text1.txt' in the current working directory. If you like to change the output directory, then you can change the second parameter -
'.'
Conclusion
In this tutorial, we covered how to extract single or multiple files from an archive with Python. It covered two different Python packages -
zipfile and
tarfile.
You've also learned how to list and get info from archived files.
https://blog.softhints.com/read-unzip-file-from-zip-tar-gz-python/
Creative coding with Replit.
What is creative coding?
For this article, we'll consider a tool to be a creative coding one if its main purpose is to create graphics, visual models, games, or sounds. Plain HTML or JavaScript can be used for this type of thing, but we're looking for tools and languages that are a bit more specialised.
Here's a list of tools we'll be taking a look at for the more creative side of Replit:
- Python
turtle
- p5.js
- Kaboom
- Pygame
- Pyxel
- GLSL
Python
turtle
Turtle graphics is a classic of the genre. First created way back in the 1960s, the idea is that there is a small turtle robot on the screen, holding some pens. You give the turtle commands to move around and tell it when to put the pen down and what color pen to use. This way you can make line or vector drawings on the screen. The turtle idea comes from a type of actual robot used for education.
Replit has support for Python
turtle, which is the current incarnation of the turtle graphics idea. Choose the "Python (with Turtle)" template when creating a new repl to use it.
Python
turtle uses commands like
forward(10),
back(10),
left(50),
right(30)
pendown() and
penup() to control the turtle. The methods
forward and
back take the distance the turtle should move as their arguments, while
left and
right take the angle in degrees to turn the turtle on the spot (the turtle is very nimble!). You can use
pendown and
penup to tell the turtle to draw or not draw while moving.
When you create a new Python (with Turtle) template, you'll notice a small program is included as an example to show you the basics. When you run this program, it will draw a square with each side a different color.
Although
turtle has a small set of simple commands, it can be used to make some impressive-looking graphics. This is because you can use loops and calculations and all the other programming constructs available in Python to control the turtle.
Try this
turtle program for example:
import turtle

t = turtle.Turtle()
t.speed(0)

sides = 3
colors = ['red', 'yellow', 'orange']

for x in range(360):
    t.pencolor(colors[x % sides])
    t.forward(x * 3 / sides + x)
    t.left(360 / sides + 1)
    t.width(x * sides / 200)
This code generates a spiral by drawing a slightly rotated and increasingly larger triangle for each of the 360 degrees specified in the main loop. This short little script produces a cool-looking output:
Try changing up the
sides parameter to draw different shapes, and play with the color combos to come up with new artworks.
p5.js
p5.js is a JavaScript graphics and animation library developed specifically for artists and designers - and generally people who have not been exposed to programming before. It's based on the Processing software project, and brings the Processing concept to web browsers, making it easy to share your "sketches", which is the p5.js name for programs.
Replit has two templates for p5.js - one for pure JavaScript, and another that interprets Python code, but still uses the underlying p5.js JavaScript library. You can use the Python version if you are more familiar with Python syntax than JavaScript syntax.
If you create a repl using one of the templates, you'll see it includes some sample code. Running it will draw random color circles on the screen wherever the mouse pointer is.
p5.js has two main functions in every sketch:
setup(), which is run once when the sketch is executed, and
draw(), which is run every frame.
In the
setup function, you generally set up the window size and other such parameters. In the
draw function, you can use p5.js functions to draw your scene. p5.js has functions for everything from drawing a simple line to rendering 3D models.
Here is another sketch you can try (note that this is in JavaScript, so it will only work in the p5.js JavaScript template):
function setup() {
  createCanvas(500, 500);
  background('honeydew');
}

function draw() {
  noStroke();
  fill('cyan');
  circle(450, 200, 100);
  fill('pink');
  triangle(250, 75, 300, 300, 200, 275);
  fill('lavender');
  square(250, 300, 200);
}
In this sketch, we draw a few shapes in various colors on the screen, in a kind of 80s geometric art style:
The p5.js website has a guide to getting started, plus a lot of references and examples to experiment with.
Kaboom
Kaboom.js is Replit's own homegrown JavaScript game framework, launched in 2021. It's geared towards making 2D games, particularly platform games, although it has enough flexibility to create games in other formats too. Because it is a JavaScript library, it can be used to develop web games, making it easy to share and distribute your creations with the world.
Replit has two official templates for Kaboom:
- A specialized Kaboom template, with an integrated sprite editor and gallery, as well as pre-defined folders for assets. This is perfect for getting started with Kaboom and making games in general, as you don't need to worry about folder structures or sourcing graphics.
- A 'light' template that is a simple web template with just the Kaboom package referenced. This is for coders with a little more experience, as the intent is to give you more control and flexibility.
One of the great features of Kaboom is the simple way you can define level maps, drawing them with text characters, and then mapping the text characters to game elements:
const level = [
" $",
" $",
" $",
" $",
" $",
" $$ = $",
" % ==== = $",
" = $",
" = ",
" ^^ = > = @",
"===========================",
];
Another interesting aspect of Kaboom is that it makes heavy use of composition. This allows you to create characters with complex behaviour by combining multiple simple components:
"c": () => [
sprite("coin"),
area(),
solid(),
cleanup(),
lifespan(0.4, { fade: 0.01 }),
origin("bot")
]
Kaboom has a fast-growing resource and user base. The official Kaboom site documents each feature, and also has some specific examples. There is also a site with complete tutorials for building different types of games at Make JavaScript Games.
Pygame
Pygame is a well-established library (from 2000!) for making games. It has functionality to draw shapes and images to the screen, get user input, play sounds, and more. Because it has been around for so long, there are plenty of examples and tutorials for it on the web.
Replit has a specialised Python template for Pygame. Choose this template for creating Pygame games:
Try out this code in a Pygame repl:
import pygame

pygame.init()
bounds = (300, 300)
window = pygame.display.set_mode(bounds)
pygame.display.set_caption("box")

color = (0, 255, 0)
x = 100
y = 100

run = True
while run:
    pygame.time.delay(100)
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            run = False

    keys = pygame.key.get_pressed()
    if keys[pygame.K_LEFT]:
        x = x - 1
    elif keys[pygame.K_RIGHT]:
        x = x + 1
    elif keys[pygame.K_UP]:
        y = y - 1
    elif keys[pygame.K_DOWN]:
        y = y + 1

    window.fill((0, 0, 0))
    pygame.draw.rect(window, color, (x, y, 10, 10))
    pygame.display.update()
This code initializes a new
pygame instance and creates a window to display the output in. Then it has a main game loop, which listens for keyboard arrow key presses, and moves a small block around the screen based on the keys pressed.
Pyxel
Pyxel is specialised for making retro-type games, inspired by console games from the 80s and early 90s. You can only display 16 colors, and no more than 4 sound samples can be played at once, just like on the earlier Nintendo, Sega, and other classic games systems. If you're into pixel art, this is the game engine for you.
Choose the 'Pyxel' template on Replit to create a new Pyxel environment.
Try this code in a Pyxel repl to draw rectangles of random size and color, changing every two frames:
import pyxel
import random

class App:
    def __init__(self):
        pyxel.init(160, 120, caption="Pyxel Squares!")
        pyxel.run(self.update, self.draw)

    def update(self):
        if pyxel.btnp(pyxel.KEY_Q):
            pyxel.quit()

    def draw(self):
        if pyxel.frame_count % 2 == 0:
            pyxel.cls(0)
            pyxel.rect(random.randint(0, 160), random.randint(0, 120), 20, 20, random.randint(0, 15))

App()
Take a look in the examples folder on the Pyxel GitHub project to see more ways to use Pyxel.
GLSL
On the more advanced end of the spectrum, Replit supports GLSL projects. GLSL (OpenGL Shading Language) is a C-style language for creating graphics shaders. Shaders are programs that (usually) run on graphics cards as part of a graphics rendering pipeline. There are many types of shaders - the two most common are vertex shaders and fragment (or pixel) shaders. Vertex shaders compute the position of objects in the graphics world, and pixel shaders compute the color that each pixel should be. This previously required writing code for specific graphics hardware, but GLSL is a high-level language that can run on many different graphics hardware makes.
GLSL gives you control over the graphics rendering pipeline, enabling you to create very advanced graphics. GLSL has many features to handle vector and matrix manipulations, as these are core to graphics processing.
Choose the "GLSL" template to create a new GLSL repl:
The template has a sample fragment shader in the file
shader.glsl as well as some web code to set up a WebGL resource to apply the shader to. Running the sample will show some pretty gradients on the screen that vary with time and as you move the mouse over it.
Try this code out in the shader file to make a kind of moving "plaid" effect:
precision mediump float;

varying vec2 a_pos;
uniform float u_time;

void main(void) {
  gl_FragColor = vec4(
    a_pos.x * sin(u_time * a_pos.x),
    a_pos.y * sin(u_time * a_pos.y),
    a_pos.x * a_pos.y * sin(u_time),
    1.0);
}
Here we set
gl_FragColor, which is the color for a specific pixel on the screen. A pixel color in GLSL is represented using a
vec4 data type, which is a vector of four values, representing red, green, blue, and alpha. In this shader, we vary the pixel color depending on its coordinate
a_pos, and the current frame time
u_time.
If you'd like to dive deeper into the world of advanced graphics and shaders, you can visit Learn OpenGL's Getting Started: Shaders resource.
Wrap up
That wraps up this list of the official creative coding language templates on Replit. Of course, Replit is flexible enough that you can import and use whatever framework or library you want in your projects, so you are not limited to the tools we've looked at here. Replit is also adding more languages and templates every day, so be sure to watch out for new additions!
https://docs.replit.com/tutorials/creative-coding
Home Screen alias: is script already running?
I used “Add to Home Screen” to put a link to my app on my iPad. If I use the link, it starts up the app fine; if I switch to something else, and then hit the link again, it starts up a second (or third, etc.) instance of my app on top of the previous instance.
If I use the X in the upper left corner to close the latest instance, the older instance is underneath, still working.
How can I tell whether or not my app is already running, and not open a new instance but rather just let the existing instance display?
I’ve looked at the list from
globals() and don't see anything obvious there.
- mithrendal
I'd like to know that too. Whenever I launch my ui.View app, for example via URL scheme, and it was already running, then I have two views stacked on each other. I followed two approaches to solve this.

I start a thread that checks every second whether the app is in the background. If so, then I close the view. This is working, but of course with the disadvantage that the view also closes when leaving Pythonista for anything other than restarting the script a second time.

On start of my script I would try to identify all current views and close them before presenting the new view. But I did not succeed in finding the views.
You may be able to check for running instances as follows
import gc, ui

running_views = [v for v in gc.get_objects() if isinstance(v, ui.View) and v.on_screen]
though the on_screen check won't work for views presented as panels.
You could also just set some flag in a global module when your view is on-screen, and clear it when the view is dismissed. Does this make sense?
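That flag idea can be sketched with a tiny shared module. Everything below (the module name `appstate`, the attribute `view_on_screen`) is illustrative, and the sketch registers the module in `sys.modules` so it runs on its own; in Pythonista you would use a real module file, whose attributes survive between script runs in the same interpreter:

```python
import sys
import types

# In practice this would be a real shared module, e.g. appstate.py;
# here we register one in sys.modules so the sketch is self-contained.
if 'appstate' not in sys.modules:
    appstate = types.ModuleType('appstate')
    appstate.view_on_screen = False
    sys.modules['appstate'] = appstate

import appstate  # every later run imports the same module object

if appstate.view_on_screen:
    print('view already presented, reusing it')
else:
    # v = ui.load_view(); v.present('sheet') would go here
    appstate.view_on_screen = True
    print('presenting a new view')
```

The dismiss handler would then set `appstate.view_on_screen = False` so the next launch presents a fresh view.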
- mithrendal
SOLVED
I tried a mix of both suggestions, with the global variable and the instance and on_screen test. But when I launch the app a second time, the global var is empty.

Now I did the trick with the builtins module. That works fine and solves the issue nicely. Thank you so much.
Here is the code
import builtins
if __name__ == '__main__':
    try:
        v = builtins.theview
    except:
        v = None
    if isinstance(v, ui.View) and v.on_screen:
        pass  # console.hud_alert('reuse view')
    else:
        # console.hud_alert('create view')
        v = ui.load_view()
        v.present('sheet')
    builtins.theview = v
Thank you! It worked for me, too:
import builtins

try:
    bookView = builtins.navigation
except:
    bookView = None

if bookView and isinstance(bookView, ui.View) and bookView.on_screen:
    print('Reusing existing book view')
    navigation = bookView
    inventoryView = builtins.inventory
    reviewView = builtins.reviews

builtins.inventory = inventoryView
builtins.reviews = reviewView
Nice! Can you also see
if 'bookView' in locals() or 'bookView' in globals():?
If you do stick with try/except then I would encourage you to avoid a naked exception (see PEP8) because it can hide syntax and other errors that may take precious time to find. In this case
except NameError: would be safer than
except:.
Thanks! I’d already checked in locals() and globals() looking for some place where the running view was still accessible; I couldn’t find it. I double-checked again and it still isn’t there.
Capturing only the necessary exception makes sense, but for me I needed to capture
AttributeError.
Another thing I tried to do was check that builtins.navigation was an instance of StuffView, my own subclass of ui.View, so as to be even more certain that the saved view I'm finding is the view for this app. But isinstance returned False; I'm assuming this is because the StuffView class was not in locals/globals, and so I had to recreate it; but once recreated, it isn't the same StuffView that the previous run of Pythonista created navigation from. Whereas it is the same ui.View that each incarnation's StuffView inherits from.
Here’s my current code to restore my views from a previous run if they exist:
try:
    navigation = builtins.navigation
except AttributeError:
    navigation = None

if navigation and isinstance(navigation, ui.View) and navigation.on_screen:
    reviewView = navigation.subviews[0]
    inventoryView = navigation.subviews[1]
One of the things I'm assuming here is that .add_subview will always add subviews in the same order. I couldn't find any means of getting a subview back by name that wouldn't have been more work than just saving the subviews on builtins.
https://forum.omz-software.com/topic/4097/home-screen-alias-is-script-already-running
Java has many data types and operations, making it suited for various programming tasks. These are pretty helpful in all aspects of Java, whether you're writing a simple program or developing a complex application or software. In Java, the two core categories of types are primitive data types and non-primitive data types.
Java’s Data Types
Java's variables must be of a specific data type. There are two groups: primitive and non-primitive. The primitive types are:
- byte
- short
- int
- long
- float
- double
- boolean, and
- char
The items above are the primitive data types. Strings, Arrays, and Classes, on the other hand, are examples of non-primitive data types.
Types of Primitive Data
A primitive data type determines both the kind of values a variable can hold and its size, and it has no extra methods. Java has eight primitive data types:
Numbers
There are two sorts of primitive number types:
- Integer types store whole numbers, positive or negative, such as 123 or -456. The valid types are byte, short, int, and long; the numeric value determines which type you should choose.
- Floating-point types represent numbers with a fractional portion, containing one or more decimals. The two types are float and double.
Even though Java has multiple numeric types, the most commonly used for numbers are int (for whole numbers) and double for floating-point numbers. However, we’ll go through each one in detail as you read on.
Integer Types
Byte
The byte data type can hold whole numbers from -128 to 127. When you know the value will be between -128 and 127, you can use this instead of int or other integer types to conserve memory:

byte numVal = 113;
System.out.println(numVal);
Short
The full numbers -32768 to 32767 can be stored in the short data type:
short numVal = 4389;
System.out.println(numVal);
Int
Whole numbers between -2147483648 and 2147483647 can be stored in the int data type. Therefore, when creating variables with a numeric value, the int data type is the ideal data type in general.
int numVal = 100000;
System.out.println(numVal);
Long
The long data type can store whole numbers from -9223372036854775808 to 9223372036854775807. It is used when int is not large enough to store the value. It's important to note that the value should conclude with an "L":

long numVal = 15000000000L;
System.out.println(numVal);
Types of Floating Points
It would be best to use a floating-point type when you need a decimal number, such as 9.99 or 3.14515.
Float
Fractional numbers between 3.4e-038 and 3.4e+038 can be stored using the float data type. It’s important to note that the value should conclude with an “f”:
float floatVal = 6.85f;
System.out.println(floatVal);
Double
Fractional numbers between 1.7e-308 and 1.7e+308 can be stored in the double data type. It’s important to note that the value should conclude with a “d”:
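The example that belongs here mirrors the float one above (the value is illustrative):

```java
double numVal = 19.99d;
System.out.println(numVal);
```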
Is it better to use float or double?
The precision of a floating-point value is the number of digits following the decimal point that the value can have. The precision of float variables is just six or seven decimal digits, but the accuracy of double variables is around 15 digits.
As a result, it is safer to utilize double for most calculations.
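The precision difference is easy to see by printing the same fraction in both types; the digits shown come from Java's default Float.toString and Double.toString formatting:

```java
float f = 1.0f / 3.0f;   // roughly 7 significant digits survive
double d = 1.0 / 3.0;    // roughly 15-16 significant digits survive

System.out.println(f); // 0.33333334
System.out.println(d); // 0.3333333333333333
```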
Numbers in Science
A floating-point number can also be a scientific number with an "e" to indicate the power of ten:
float floatVal = 35e3f;
double doubleVal = 12E4d;
System.out.println(floatVal);
System.out.println(doubleVal);
Booleans
The boolean keyword is used to specify a boolean data type, which can only take the values true or false:
boolean isCodeUnderscoredLegit = true;
boolean isCodeEasy = false;
System.out.println(isCodeUnderscoredLegit); // Outputs true
System.out.println(isCodeEasy); // Outputs false
Conditional testing uses Boolean values extensively, which you’ll learn more about later.
Characters
A single character is stored in the char data type.
Single quotes, such as ‘Z’ or ‘b,’ must surround the character:
char studentScore = 'A';
System.out.println(studentScore);
You can also use ASCII values to display specific characters:
char myVar1 = 65, myVar2 = 66, myVar3 = 67;
System.out.println(myVar1);
System.out.println(myVar2);
System.out.println(myVar3);
The ASCII Table Reference contains a complete list of all ASCII values.
Strings
A sequence of characters is stored using the String data type (text). In addition, use double quotes to surround string values:
String helloCode = "Hello Codeunderscored";
System.out.println(helloCode);
Because the String type is so widely utilized and integrated with Java, it is sometimes referred to as “the special ninth type.”
Don't worry if you're not familiar with the term "object". A String in Java is an object, and therefore a non-primitive data type. Methods on the String object are used to execute various operations on strings.
Types of Non-Primitive Data
Because they refer to things, non-primitive data types are termed reference types. The following are the fundamental distinctions between primitive and non-primitive data types:
- In Java, primitive types are predefined (that is, they have already been declared). Non-primitive types (except for String) are not defined by Java but are constructed by the programmer.
- Non-primitive types, on the other hand, can be used to call methods that perform specific actions, whereas primitive types cannot.
- Non-primitive types can be null, whereas primitive types always have a value.
- A primitive type begins with a lowercase letter, while a non-primitive type begins with an uppercase letter.
- The size of a primitive type is determined by the data type, whereas non-primitive types are all the same size.
Strings, Arrays, Classes, Interfaces, and other non-primitive types are examples.
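Two of the distinctions above can be sketched in a few lines (the variable names are illustrative):

```java
String text = null;  // a non-primitive type can be null
int count = 0;       // a primitive type always holds a value

text = "Codeunderscored";
System.out.println(text.toUpperCase()); // methods can be called on non-primitive types
```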
Interfaces
Interfaces are another approach to implement abstraction in Java. An interface is an “abstract class” that is used to put together related functions with empty bodies:
// Interface
interface Human {
  public void humanSound(); // interface method (does not have a body)
  public void run(); // interface method (does not have a body)
}
The interface must be “implemented” (kind of like inherited) by another class with the implements keyword to access the interface functions (instead of extends). The “implement” class provides the body of the interface method:
// Interface
interface Human {
  public void humanSound(); // interface method (does not have a body)
  public void sleep(); // interface method (does not have a body)
}

// Lady "implements" the Human interface
class Lady implements Human {
  public void humanSound() {
    // The body of humanSound() is provided here
    System.out.println("The lady screams: wee wee");
  }
  public void sleep() {
    // The body of sleep() is provided here
    System.out.println("Zzz");
  }
}

class Main {
  public static void main(String[] args) {
    Lady theLady = new Lady(); // Create a Lady object
    theLady.humanSound();
    theLady.sleep();
  }
}
Interface Remarks
Interfaces, like abstract classes, cannot be used to construct objects. In the example above, for instance, it is not possible to create a "Human" object directly in the Main class.
The "implement" class provides the body for interface methods that don't have one. You must override all of an interface's methods when implementing it. By default, interface methods are abstract and public, and interface attributes are public, static, and final. An interface also cannot have a constructor, as it cannot be used to create objects.
When Should You Use Interfaces?
1) To increase security, hide certain information and only display the most critical aspects of an object (interface).
2) “Multiple inheritance” is not supported in Java (a class can only inherit from one superclass).
However, because the class can implement many interfaces, it can be done with interfaces.
Note: To use several interfaces, use a comma to separate them (see example below).
interface InterfaceOne {
  public void interfaceOneMethod(); // interface method
}

interface InterfaceTwo {
  public void interfaceTwoMethod(); // interface method
}

// InterfaceClass "implements" InterfaceOne and InterfaceTwo
class InterfaceClass implements InterfaceOne, InterfaceTwo {
  public void interfaceOneMethod() {
    System.out.println("Some text..");
  }
  public void interfaceTwoMethod() {
    System.out.println("Some other text...");
  }
}

class Main {
  public static void main(String[] args) {
    InterfaceClass theObj = new InterfaceClass();
    theObj.interfaceOneMethod();
    theObj.interfaceTwoMethod();
  }
}
Java Objects and Classes
Java’s primary focus as a computer language is on objects.
In Java, everything is linked to classes and objects and their characteristics and methods.
A computer, for example, is an object in real life. The computer has characteristics like weight and color and procedures like start and shutdown.
A class functions similarly to an object constructor or a “blueprint” for constructing things.
Creating a Class
Use the term class to create a class:
// Main.java
// Creation of a class named "Main" with a variable a:
public class Main {
  int a = 20;
}
Remember from the Java syntax concepts that a class name should always begin with an uppercase letter and that the Java file name should match the class name.
Making a new object
A class in Java is used to build an object. We've already created the Main class, so we can now create objects. To create a Main object, specify the class name, followed by the object name, and use the keyword new:
Example: Create a “theObj” object and print the value of a:
public class Main {
  int a = 20;

  public static void main(String[] args) {
    Main theObj = new Main();
    System.out.println(theObj.a);
  }
}
Objects in Multiples
You can make numerous objects of the same type:
public class Main {
  int a = 20;

  public static void main(String[] args) {
    Main theOneObj = new Main(); // Object 1
    Main theTwoObj = new Main(); // Object 2
    System.out.println(theOneObj.a);
    System.out.println(theTwoObj.a);
  }
}
Using Several Classes
You can also build a class object and use it in a different class. It is frequently used to organize classes (one class contains all properties and methods, while the other has the main() function (code to be run)).
Keep in mind that the java file should have the same name as the class. We’ve created two files in the same directory/folder in this example:
// Main.java
public class Main {
  int a = 5;
}
// Second.java
class Second {
  public static void main(String[] args) {
    Main theObj = new Main();
    System.out.println(theObj.a);
  }
}
javac Main.java
javac Second.java
When you've finished compiling both files, you'll be able to run the Second class as follows:

java Second
Arrays in Java
Arrays store many values in a single variable instead of defining distinct variables for each item. To declare an array, use square brackets to determine the variable type:
String[] computers;
We’ve now declared a variable that will hold a string array. Further, we can use an array literal to add values to it by placing the items in a comma-separated list inside curly braces:
String[] computers = {"HP", "Lenovo", "DELL", "Chrome Book"};
You may make an array of integers as follows.
int[] numVal = {30, 40, 50, 60};
Access to an Array’s Elements
The index number is used to access an array element. In the computer’s array above, this statement gets the value of the first element:
String[] computers = {"HP", "Lenovo", "DELL", "Chrome Book"};
System.out.println(computers[0]); // Outputs HP
Note that array indexes begin at 0. As a result, the first element is [0]. The second element is [1], and so on.
Make a Change to an Array Element
Refer to the index number to change the value of a certain element:
computers[0] = "IBM"; String[] computers = {"HP", "Lenovo", "DELL", "Chrome Book"}; computers[0] = "IBM"; System.out.println(computers[0]); // Now outputs IBM instead of HP
Length of Array
You can find out how many elements an array has with the length property:
String[] computers = {"HP", "Lenovo", "DELL", "Chrome Book"};
System.out.println(computers.length); // Outputs 4
Iterate Over an Array
The for loop can be used to loop through the array elements, and the length property can be used to determine how many times the loop should execute. All elements in the computer’s array are output in the following example:
String[] computers = {"HP", "Lenovo", "DELL", "Chrome Book"};
for (int i = 0; i < computers.length; i++) {
  System.out.println(computers[i]);
}
In addition, with For-Each, you may loop through an array. There’s also a “for-each” loop, which is only used to loop through array elements:
Syntax
for (type variable : arrayname) { ... }
Using a “for-each” loop, the following example prints all members in the vehicles array:
String[] computers = {"HP", "Lenovo", "DELL", "Chrome Book"};
for (String i : computers) {
  System.out.println(i);
}
You might understand the preceding example when we break it down as follows: print the value of i for each String element (called i – as in index) in computers. When comparing the for loop and the for-each loop, you’ll notice that the technique is easier to code, requires no counter (since it uses the length property), and is more readable.
Arrays with several dimensions
An array of arrays is a multidimensional array. Add each array within its own set of curly brackets to make a two-dimensional array:
int[][] numVals = { {11, 12, 13, 14}, {15, 16, 17} };
numVals is now an array that contains two arrays as items. To get to the items in the numVals array, you’ll need two indexes: one for the array and one for each element within it. This example uses the third member (2) of numVals’ second array (1):
int[][] numVals = { {11, 12, 13, 14}, {15, 16, 17} };
int a = numVals[1][2];
System.out.println(a); // Outputs 17
To acquire the items of a two-dimensional array, we can use a for loop inside another for loop though we still need to point to the two indexes:
public class Main {
  public static void main(String[] args) {
    int[][] numVals = { {11, 12, 13, 14}, {15, 16, 17} };
    for (int i = 0; i < numVals.length; ++i) {
      for (int j = 0; j < numVals[i].length; ++j) {
        System.out.println(numVals[i][j]);
      }
    }
  }
}
Strings in Java
Text is stored using strings. A String variable is made up of a group of characters encased in double-quotes:
Example: Create a String variable with the following value:
String greeting = "Codeunderscored";
Length of the String
A String in Java is an object which comprises methods that may perform specific actions on strings. For example, the length of a string may be obtained with the length() method:
String txtVal = "Codeunderscored";
System.out.println("The length of the text string is: " + txtVal.length());
Additional String Methods
There are numerous string functions, such as toUpperCase() and toLowerCase():
String txtVal = "Code underscored";
System.out.println(txtVal.toUpperCase()); // Outputs "CODE UNDERSCORED"
System.out.println(txtVal.toLowerCase()); // Outputs "code underscored"
You can also find a character or substring in a string. The indexOf() method returns the position of the first occurrence of a specified text in a string (including whitespace):
String txtVal = "Please find where 'locate' occurs!";
System.out.println(txtVal.indexOf("find")); // Outputs 7
Java starts counting at zero: 0 is the first position in a string, 1 is the second, and 2 is the third.
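Zero-based positions can be sketched with a short example (the string value here is illustrative):

```java
public class IndexPositions {
    public static void main(String[] args) {
        String txt = "Code";
        // Positions are zero-based: 'C' is at 0, 'o' at 1, 'd' at 2, 'e' at 3
        System.out.println(txt.indexOf("C")); // 0
        System.out.println(txt.indexOf("d")); // 2
        System.out.println(txt.charAt(1));    // o
    }
}
```

Note that indexOf() returns a position, while charAt() goes the other way and returns the character at a given position.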
Concatenating strings
The + operator is responsible for joining two strings together. It is referred to as concatenation:
String prefixName = "Code";
String suffixName = "underscored";
System.out.println(prefixName + " " + suffixName);
To make a space between prefixName and suffixName on print, we've placed an empty text (" ") between them. You can also concatenate two strings with the concat() method:
String prefixName = "Code";
String suffixName = "underscored";
System.out.println(prefixName.concat(suffixName));
Characters with Unique Qualities
Because strings must be enclosed in quotes, Java will misinterpret this string and generate the following error:
String txtVal = "Codeunderscored are the "Vikings" from the web.";
The backslash (\) escape character is an excellent way to avoid this problem. It turns special characters into ordinary string characters. Java supports several valid escape sequences, including the following:
In a string, the sequence \” inserts a double quote:
String txt = "We are the so-called \"Vikings\" from the north.";
In a string, the sequence \’ inserts a single quote:
String txt = "It\'s alright.";
In a string, the sequence \\ inserts a single backslash:
String txt = "The character \\ is called backslash.";
Adding Strings and Numbers
The + operator is used in Java for both addition and concatenation.
- If both operands are numbers, they are added.
- If either operand is a string, the operands are joined together.
When two numbers are added together, the result is a number:
int a = 50;
int b = 30;
int c = a + b; // c will be 80 (an integer/number)
Combining two strings leads to a string concatenation:
String a = "50";
String b = "30";
String c = a + b; // c will be 5030 (a String)
Combining a number and a string, you get a string concatenation:
String a = "50";
int b = 30;
String c = a + b; // c will be 5030 (a String)
Conclusion
Data types are among the most crucial foundations of any programming language, and one of the first concepts a newcomer must master. A data type expresses the kind, nature, and set of operations associated with the value a variable stores.
Java data types are incredibly fundamental. It is the first thing you should learn before moving to other Java concepts.
|
https://www.codeunderscored.com/java-data-types-with-examples/
|
Imagine you have a dialog where you can enter text and other data related to the text. While the text should be formatted, the normal data entries should not. So you place a toolbar into your dialog that has all the formatting buttons you need, and find out that it will dock only at the edges of your dialog. But the customer wants this thing where it belongs: right over the formattable text entry.
To achieve the customer's need, there is a little more involved than just placing a button row in the middle of a form.
Create a toolbar as usual and add a CToolBar member to your dialog, giving it any name that suits you. For a formatting toolbar a name like m_wndFormatBar sounds great.
Create a static control at the spot where you want the toolbar to reside. Give this frame an ID (i.e. IDC_STC_TOOLBARFRAME). Make it slightly bigger than the tools, so you can see the space that becomes occupied by the toolbar. Within ClassWizard, assign a member to this control. Make sure it is assigned to CStatic and not CString. Name it whatever you want (like m_Stc_ToolbarFrame).
Now comes the hard part. In the OnInitDialog method of your dialog, place the following code. You may omit the comments.
// Positioning of toolbar
CSize sizeToolbar;
CRect mrect;
WINDOWPLACEMENT wpl;
mrect.SetRectEmpty();

// attach command routing to dialog window
m_wndFormatBar.Create(this);
m_wndFormatBar.LoadToolBar(IDR_TOOLBAR_FORMAT);
m_wndFormatBar.SetBarStyle(CBRS_ALIGN_TOP | CBRS_TOOLTIPS | CBRS_FLYBY);
m_wndFormatBar.ShowWindow(SW_SHOW);

// Calculate size of toolbar and adjust size of static control to fit
sizeToolbar = m_wndFormatBar.CalcFixedLayout(FALSE, TRUE);
m_Stc_ToolbarFrame.GetWindowPlacement(&wpl);
wpl.rcNormalPosition.bottom = wpl.rcNormalPosition.top + sizeToolbar.cy + 4;
wpl.rcNormalPosition.right = wpl.rcNormalPosition.left + sizeToolbar.cx + 4;

// Position static control and toolbar
m_Stc_ToolbarFrame.SetWindowPlacement(&wpl);
m_wndFormatBar.SetWindowPlacement(&wpl);

// Adjust buttons into static control
m_Stc_ToolbarFrame.RepositionBars(AFX_IDW_CONTROLBAR_FIRST,
                                  AFX_IDW_CONTROLBAR_LAST, 0);
m_Stc_ToolbarFrame.ShowWindow(SW_HIDE);
m_wndFormatBar.Create(this) tells the toolbar to which window the commands are to be sent. Loading the images, setting the bar styles, and displaying the toolbar are quite obvious. Then you calculate the size of the toolbar and adjust the size of the static control accordingly, with some margin.
After these heavy numerical considerations we place the static control and put the toolbar in its lap. m_Stc_ToolbarFrame.RepositionBars tells the toolbar which window it shall snuggle up to. Then make the static control disappear.
Omitting lines or doing the calculations wrong will be punished with weird visual results and strange behavior in your application. Check it out.
While the command routing is very simple, the visual updating of these commands is a little bit tricky.
Add the UpdateCommandUI handlers to your dialog and edit the methods as you would in any Frame/View application. Do not worry when they do not seem to work; at this point they can't.
Add the line:
#include "afxpriv.h"
at the top of the dialog's cpp file, or put it inside stdafx.h. In your dialog's header file put the line:
afx_msg LRESULT OnKickIdle(WPARAM, LPARAM);
inside the message map declarations. The best place is the line between the //}}AFX_MSG line and the DECLARE_MESSAGE_MAP() line; if there is no empty line there, make one by pressing return. In your dialog's cpp file look for END_MESSAGE_MAP(). Before that line, enter ON_MESSAGE(WM_KICKIDLE, OnKickIdle). If you forgot to include afxpriv.h, the compiler will tell you that it does not know anything about WM_KICKIDLE. (By the way, does this message mean that someone is idly kicking?)
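Taken together, the wiring described above looks roughly like this (the class name CMyDialog is illustrative; the AFX comment markers are whatever ClassWizard generated in your project):

```
// At the top of the dialog's .cpp file (or in stdafx.h):
#include "afxpriv.h"          // defines WM_KICKIDLE

// In the dialog's header, among the message-map declarations:
//{{AFX_MSG(CMyDialog)
//}}AFX_MSG
afx_msg LRESULT OnKickIdle(WPARAM, LPARAM);
DECLARE_MESSAGE_MAP()

// In the dialog's .cpp, just before END_MESSAGE_MAP():
ON_MESSAGE(WM_KICKIDLE, OnKickIdle)
END_MESSAGE_MAP()
```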
Add the body of the OnKickIdle routine to your dialog class and call the UpdateCommandUI handlers from there. How? Have a look.
LRESULT CArbitraryToolbarDlg::OnKickIdle(WPARAM, LPARAM)
{
CCmdUI cmdUI;
cmdUI.m_nID = ID_FORMAT_BOLD; // The Command ID
// Tell the dialog to call the UpdateCommandUI-routine
cmdUI.DoUpdate(this, FALSE);
cmdUI.m_nID = ID_FORMAT_DURCHSTRICH;
cmdUI.DoUpdate(this, FALSE);
cmdUI.m_nID = ID_FORMAT_KURSIV;
cmdUI.DoUpdate(this, FALSE);
cmdUI.m_nID = ID_FORMAT_UNTERSTRICH;
cmdUI.DoUpdate(this, FALSE);
cmdUI.m_nID = ID_EDIT_CUT;
cmdUI.DoUpdate(this, FALSE);
cmdUI.m_nID = ID_EDIT_COPY;
cmdUI.DoUpdate(this, FALSE);
cmdUI.m_nID = ID_EDIT_PASTE;
cmdUI.DoUpdate(this, FALSE);
return TRUE;
}
Add the command handlers and test it. Drink a cup of coffee.
During this whole process you should compare your source code with mine. What you may adapt to your needs, what may not be altered I can't tell. This is taught only by experience, your experience.
The sample supplied has only the first seven buttons working. This is by design! After all, it is a sample not a full blown application.
The routines for squeezing the RichEditCtrl will be coming soon with its own class. So stay tuned.
After having read this far, may I ask you a favour? My little brother has been sent to war in the Middle East now; would you please pray for him, so he will return safe and sound?
|
http://www.codeproject.com/Articles/3820/A-Toolbar-in-the-middle-of-elsewhere?fid=14929&df=90&mpp=25&sort=Position&spc=Relaxed&tid=523330
|
Powerful features (or variables) are often expensive to compute. Maybe they require a lot of CPU power or human interaction to compute. As a result, the datasets for which these features are evaluated are smaller than they should be.
This post explores a possible way around this. Regression algorithms can learn complex functions and are cheap to evaluate. The idea then is:
- Train a regression algorithm on a small subset of the data for which all expensive features have been calculated
- Use this algorithm to cheaply calculate the features for a much larger number of samples
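The two steps above can be sketched on toy data (all names and the toy `expensive_feature` below are illustrative, and a plain least-squares fit stands in for a real regression model):

```python
import numpy as np

rng = np.random.RandomState(0)

def expensive_feature(X):
    # stand-in for a feature that is costly to compute (e.g. one minute per event)
    return 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5

X = rng.rand(10000, 2)           # cheap input features for all samples
n_labelled = 200                 # the small subset we can afford to label
y_small = expensive_feature(X[:n_labelled])

# Step 1: train a cheap surrogate (here: linear least squares) on the subset
A_small = np.hstack([X[:n_labelled], np.ones((n_labelled, 1))])
coef, *_ = np.linalg.lstsq(A_small, y_small, rcond=None)

# Step 2: evaluate the surrogate for every sample at negligible cost
A_all = np.hstack([X, np.ones((len(X), 1))])
y_pred = A_all @ coef

err = np.max(np.abs(y_pred - expensive_feature(X)))
print(err < 1e-8)  # exact here only because the toy target is linear
```

In a real analysis the least-squares fit would be replaced by a more flexible regressor, such as the gradient-boosted trees used below.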
Here I demonstrate this approach using a feature called the "Missing Mass Calculator". It is used in searches for the Higgs boson and takes about one minute per sample (or event) to calculate. A minute per event! That does not sound like much until you realise a typical analysis at the LHC uses millions or even tens of millions of events.
First some imports and loading the data:
%matplotlib inline
from operator import itemgetter
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn
from sklearn.cross_validation import train_test_split
from sklearn.cross_validation import StratifiedKFold, KFold
from sklearn.grid_search import RandomizedSearchCV
from scipy.stats import randint as sp_randint
from scipy.stats import uniform as sp_uniform
from sklearn.metrics import mean_squared_error
from sklearn.ensemble import GradientBoostingRegressor, AdaBoostRegressor, RandomForestRegressor
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
The HiggsML challenge dataset has been released to the public. It contains the Missing Mass Calculator (MMC) variable as a feature; let's try and see if we can learn to compute it with a regression model. This is an interesting challenge, as the process to calculate the MMC feature is quite involved (good luck implementing it from the description in the paper ...). Let's go!
The next cell downloads the full dataset from CERN's opendata portal to your /tmp directory:
%%bash
# Download the HiggsML dataset
# details here:
wget --quiet -P /tmp/
Read it all into a pandas DataFrame, directly from a compressed csv. We also print out the names of all the columns; the important part to notice is that there are several features which have nothing to do with MMC which we can use to predict it, as well as weights for each sample.
df = pd.read_csv('/tmp/atlas-higgs-challenge-2014-v2.csv.gz', compression='gzip')
# Print the name of all the columns
print ', '.join(df.columns)

..., Label, KaggleSet, KaggleWeight
An expensive feature¶
Let's take a look at the feature we want to try and learn. The idea behind the "Missing Mass Calculator" is to construct a variable that distinguishes between events containing a Higgs and ones that do not. One way is to try and reconstruct the mass of the Higgs boson. In events with a Higgs that should give you something close to the Higgs mass (125 GeV) and different values for events without a Higgs in them.
In the following plot background events (those which are not the Higgs) are shown in blue and events containing a Higgs boson are shown in red.
First thing to notice is that the red distribution is multiplied by a factor of 100! This is just to make it visible. Otherwise events containing a Higgs are so rare they are impossible to see compared to the background.
As expected the red distribution peaks around 125 GeV and the background peaks at lower values. The peak is also much broader.
hist_opts = dict(alpha=0.5, bins=50, range=(0,300))
signal = df.Label == 's'
_ = plt.hist(df.DER_mass_MMC[~signal].as_matrix(), weights=df.Weight[~signal].as_matrix(),
             label='Other stuff', **hist_opts)
_ = plt.hist(df.DER_mass_MMC[signal], weights=df.Weight[signal] * 100,
             label='The Higgs * 100', color='r', **hist_opts)
plt.legend(loc='best')
<matplotlib.legend.Legend at 0x10b0016d0>
# Selecting the columns used to predict MMC
columns = [' ]

# For some samples/events the MMC could not be calculated;
# they are marked with a value < -800.
# Split the data by whether the sample represents a Higgs (signal)
# boson event or not (background)
signal = df[(df.Label == 's') & (df.DER_mass_MMC > -800)]
X = signal[columns]
w = signal['Weight']
MMC = signal['DER_mass_MMC']

background = df[(df.Label == 'b') & (df.DER_mass_MMC > -800)]
bg_X = background[columns]
bg_w = background['Weight']
bg_MMC = background['DER_mass_MMC']
Split signal and background into development and evaluation sets as well as the development set into a training and testing one. It is important to keep some events hidden from your model evaluation and selection procedure so you can evaluate the models true performance.
# this looks more daunting than it has to be :-(
X_dev,X_eval, w_dev,w_eval, MMC_dev,MMC_eval = train_test_split(
    X.as_matrix(), w.as_matrix(), MMC.as_matrix(), random_state=9548)
bg_X_dev,bg_X_eval, bg_w_dev,bg_w_eval, bg_MMC_dev,bg_MMC_eval = train_test_split(
    bg_X.as_matrix(), bg_w.as_matrix(), bg_MMC.as_matrix(), random_state=958)
X_train,X_test, w_train,w_test, MMC_train,MMC_test = train_test_split(
    X_dev, w_dev, MMC_dev, random_state=2171)
bg_X_train,bg_X_test, bg_MMC_train,bg_MMC_test, bg_w_train,bg_w_test = train_test_split(
    bg_X_dev, bg_MMC_dev, bg_w_dev, train_size=0.666, random_state=4685)
Regression trees¶
The plan is to train a regressor which, given the input variables, can predict the MMC feature for us. After some GridSearchCV I found the hyper-parameters used below. They work quite well, but we could probably do even better if we spent more CPU time. (While the hyper-parameter optimisation is not shown here, we still have to split the data the same way, as the performance evaluated on the test set is biased optimistically due to the fairly exhaustive searching.)
clf = GradientBoostingRegressor(n_estimators=2000, learning_rate=0.05,
                                subsample=0.7296635812586656,
                                max_features=0.5779758681446561,
                                max_depth=5, loss='ls')
clf.fit(X_train, MMC_train, w_train)
GradientBoostingRegressor(alpha=0.9, init=None, learning_rate=0.05, loss='ls', max_depth=5, max_features=0.577975868145, max_leaf_nodes=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=2000, random_state=None, subsample=0.729663581259, verbose=0, warm_start=False)
Let's compare the MMC feature computed by the regression model with the true value. If we are doing a good job they should be pretty similar. Crucially we do not re-use samples used during the training to evaluate the performance.
MMC_pred_train = clf.predict(X_train)
MMC_pred_test = clf.predict(X_test)
MMC_pred_eval = clf.predict(X_eval)
bg_MMC_pred_test = clf.predict(bg_X_test)
bg_MMC_pred_eval = clf.predict(bg_X_eval)

print "MSE on training set:",
print mean_squared_error(MMC_train, MMC_pred_train, w_train)
print "MSE on testing set:",
print mean_squared_error(MMC_test, MMC_pred_test, w_test)
print "MSE on evaluation set:",
print mean_squared_error(MMC_eval, MMC_pred_eval, w_eval)
MSE on training set: 12.2873639683
MSE on testing set: 26.276874694
MSE on evaluation set: 34.3325667757
As you can see, the model performs much better on the training set than on the samples in the testing set. The most important point to note, though, is that the performance estimate you obtain from the test set is also optimistic. This is the result of both the training and testing sets being used during the optimisation of the hyper-parameters. This is why it is crucial to reserve a third set of samples which is only looked at after you fix all parameters. In our case it is MMC_eval.
Learned versus Original¶
So, after all this, how well does our learned MMC regressor do? Let's compare the distribution of the MMC feature as computed the slow way with the learned MMC feature. By eye they look extremely similar, which is nice. This means you could use this approach of learning an expensive feature (or variable) and use the learned version as a proxy which is much faster to evaluate.
opts = dict(alpha=0.6, bins=50, range=(0,300))
plt.hist(MMC_eval, weights=w_eval, label="Original MMC", **opts)
plt.hist(MMC_pred_eval, weights=w_eval, label="Learned MMC", **opts)
plt.xlabel("Missing Mass Calculator variable")
plt.legend(loc='best')
plt.title("Signal events")
<matplotlib.text.Text at 0x10f736dd0>
Raising the bar¶
Did our regression model learn anything useful? Maybe it just memorised how the input features are related to MMC but otherwise gained no insight. Did you notice that we only used samples containing a Higgs boson so far? That was on purpose. It means we not only have a large set of samples which were never used during model evaluation and selection; in addition, these are quite different types of events.
The next figure compares the original MMC feature with the regression model's calculation. The agreement is not as good as for the Higgs boson class but it does quite well.
plt.hist(bg_MMC_eval, weights=bg_w_eval, label="Original MMC", **opts)
plt.hist(bg_MMC_pred_eval, weights=bg_w_eval, label="Learned MMC", **opts)
plt.xlabel("Missing Mass Calculator variable")
plt.legend(loc='best')
plt.title("Background events")
<matplotlib.text.Text at 0x112947650>
While it is not absolute proof that the regression model actually learnt something about physics, the fact that it can compute the MMC feature even for a totally different class of samples does suggest that it learnt something.
Another way of evaluating the performance is to plot the difference between the original MMC feature's value and the MMC value calculated by the regression model for each sample.
def mean_and_std(values, weights):
    average = np.average(values, weights=weights)
    variance = np.average((values-average)**2, weights=weights)
    return (average, np.sqrt(variance))

delta = MMC_eval - MMC_pred_eval
bg_delta = bg_MMC_eval - bg_MMC_pred_eval

_ = plt.hist(delta, weights=w_eval, alpha=0.6, bins=40, range=(-30,30),
             label='$\Delta$MMC for Signal', normed=True)
_ = plt.hist(bg_delta, weights=bg_w_eval, alpha=0.6, bins=40, range=(-30,30),
             label='$\Delta$MMC for Background', normed=True)
plt.legend(loc='best')
print mean_and_std(delta, weights=w_eval)
print mean_and_std(bg_delta, weights=bg_w_eval)
(-0.038939916353662653, 5.770798070300847) (-1.7394594313468719, 14.574445902019193)
As you can see, the spread between the original MMC feature and the learned one is quite small, around 6. Importantly, the average difference is basically zero. This means the regression model does not favour one value over another.
Unsurprisingly the spread is larger for background events which are a different kind of beast all together compared to the ones we used during training.
In a real world application you would use some background events and some signal events to train the regression model. This will improve the performance of the model on this class of events. Here I choose not to include them as I wanted to see how well the model does on events which are not only new but also quite different from the ones used during training.
The end¶
Often the most powerful features in a dataset (for example for classifying Higgs events) are also the most expensive to compute in terms of CPU time or actual human time. This frequently limits the number of samples in your dataset. Here I showed that you can get away with computing an expensive feature for a small subset of your events and then use a regression model to calculate it for a much larger number of samples.
You can also apply this idea to the large, from-first-principles simulations of the LHCb, ATLAS and CMS detectors. These simulations use over half of the total CPU time of the LHC computing grid, so speeding them up has huge potential!
As far as I know this has not really been used in particle physics so far, let me know if you know of other examples where it has been used!
If you find a mistake or want to tell me something else get in touch on twitter @betatim
This post started life as a ipython notebook, download it or view it online.
|
http://betatim.github.io/posts/learning-expensive-functions/
|
Board index » Microsoft Visual C++/VC++
All times are UTC
Using VC++ 6.0, SP5, I get hundreds of long warnings when compiling std::vector<std::string>. Changing the warning level in Project Settings/C/C++/General has no effect. Is there any way to turn off these warnings?
Multi-line warning always ends with : identifier was truncated to '255' characters in the debug information
Thank you
Henri
>Multi-line warning always ends with : >identifier was truncated to '255' characters in the debug information
This occurs at warning level 4. Switching to level 3 should resolve it. Alternatively, you can sometimes eliminate that warning using:
#pragma warning( disable: 4786 )
Dave -- MVP VC++ FAQ: My address is altered to discourage junk mail. Please post responses to the newsgroup thread, there's no need for follow-up email copies.
does it. I wonder why changing the warning level in the IDE, in Project Settings/C/C++/General, has absolutely no effect; even if I choose <none> I still get all the warnings.
>does it. I wonder why changing the warning level in the IDE, in >ProjectSettings/CC++/General has absolutely no effect, even if I choose ><none> I still get all the warnings
1. typedef std::vector<std::string> Vector_String
2. C4251 warning using std::vector<std::string>
3. vector<std::string>
4. std::vector<user-defined class>
5. std::vector<int[2]> intializing trouble
6. std::vector<CClass*> / destructors
7. Warning: std::numeric_limits<char>::digits == 7
8. std::list<string>::sort (Cmp)
9. std::list<string>::sort (Cmp)
10. bug: VS7.0 (6.0) C++ std::auto_ptr conflict with std::vector
11. convert non std::string to std::string
12. export classes using std namespace (ex std::vector) in a DLL
|
http://computer-programming-forum.com/80-microsoft-visual-c-vc/e09ef7a9a08d352e.htm
|
gensym.org - Home (Atom feed, published with Mephisto)

2 Problems (2008-03-26, david)

1 - How can you verify that the destination of a hyperlink is what the description claims?

2 - How can Alice give data to Bob and ensure that Bob does not give that data to Mallory?

Why not make your business cards useful? (2008-02-19, david)

Last weekend, as I was cleaning out some old papers, I came across a stack of business cards I had collected at a conference, and it gave me an idea. Typically, when I get a business card, I enter the address into my address book or Highrise and then throw it away. It seems like a waste - it would be more efficient to just enter that information directly, but that would take too much time in any sort of social context.

...tape the card to his monitor, so that when he found himself in need of your product or service, it would be staring him in the face. That seems unlikely to happen, but here is a way to make it happen in at least a few cases: put some immediately useful information on the card.

As an example, the following snippet will find all of the files that have not been added to (or ignored in) a subversion repository and add them:

svn st | grep ? | awk '{print $2}' | xargs svn add

There are many ways to do the same thing, but a surprising number of programmers seem to use subversion and know only how to manually add new files.

As another example, I can never remember the Mac startup key combinations since I use them infrequently, but when I need them, it's really inconvenient to look them up (since my Mac is probably powered down at that point).

Even if you don't succeed in getting developers to tape your card to their monitors, you should succeed in getting their attention, which is all that most business cards seem to strive for now anyway.

Learning Haskell (2008-01-07, david)

I have a confession to make.

Why Haskell?

I'm not deluded enough to think that I'm going to be using it any time soon (if at all) in my day-to-day work. In fact, I'm skeptical that it will be useful even for my after-hours projects. However, I don't think that any other language can currently be more effective at teaching me general programming concepts.

Here are some of the concepts I'm hoping to grok by learning Haskell:

- lazy evaluation
- monads (and more general category theory)
- software transactional memory

My approach to learning Haskell

My main resources for learning Haskell are guides I've found on the web. "Haskell for C Programmers" looks like it will give me a decent overview, and from there, I expect to jump to "A Gentle Introduction to Haskell". About a year ago, I read a little gem of a book called "Purely Functional Data Structures". While that book uses Standard ML for its examples, it provides Haskell examples in the back. I also intend to work through the tutorial "Write Yourself a Scheme in 48 hours", which shows you how to implement a subset of Scheme in Haskell.

As a starting point, I'm lucky that Paul Brown has just done the same thing (rewrite his blog in Haskell) and has some useful writings about the process. My first steps have been to get FastCGI working with Haskell, and that was pretty straightforward, thanks to Paul's post on the subject.

You ain't got no soul power: Good-bad v. Bad-bad (2007-12-12, david)

Disclaimer: This post is only tangentially related to software development. I've had it written for a while, but I can't quite get it to say what I want. Then I realized that, given the subject matter, a crappy post is kind of appropriate.

I recently saw the film Troll 2 for the first time. If you haven't seen this film, stop reading this right now, and add it to your Netflix queue, order it from Amazon, do whatever you need to do to watch this. Troll 2 is easily the most enjoyably awful film I've ever seen. For the uninitiated, here are some highlights:

- Despite the title, this film has nothing to do with the first Troll movie and is, in fact, about goblins, rather than trolls.
- The monsters in the film are very obviously little people wearing potato sacks and rubber masks. With the exception of a couple close-up shots, their lips don't move at all.
- The premise is that goblins feed humans green-colored food that causes the humans to morph into a half-human, half-plant being that the (vegetarian) goblins can then eat.
- Dialogue:
  - "Do you see this writing? Do you know what it means? Hospitality. And you can't piss on hospitality! I won't allow it!"
  - Sister: "How do we get him to come? By having a seance maybe?" Brother: "You're a genius, big sister!"
  - "Your grandfather's death was very hard on all of us. It was hard on your sister, your father, and on me, his daughter."

In case I've not made it clear: this is not a good movie. The plot is ludicrous, the acting would be considered bad in a community theater, and the production qualities are abysmal. Troll 2 may be the worst movie ever made, surpassing such disasters as Manos, Hands of Fate and Plan 9 from Outer Space. However, I'd like to contrast it with another film I frequently call the worst film ever made: Batman and Robin. I saw Batman and Robin in 1997, when it was in theaters. I haven't seen it since, and have no intention of doing so. Roger Ebert is fond of saying, "Every bad movie is depressing. No good movie is depressing." When I hear this quote, I think of Batman and Robin, which depresses me in a way no other movie does.

To recap so far: in my mind, Batman and Robin and Troll 2 are each candidates for the worst film ever made, but one depresses me and the other brings me a lot of joy. How can this be?

The most obvious difference between the two films is the size of the budgets. Watching Batman and Robin, it's clear that it was an expensive film to make, and that every dollar spent on its production could have been better spent on just about anything else. I don't think the reason why one film is so much fun while the other is just depressing is as simple as budget size, however.

The sins of Batman and Robin have been well-documented, so I won't repeat them here. If you were to survey the highest-grossing films circa 1996 (Independence Day, The Rock, Mission: Impossible), take the surface elements of those films, and cynically combine them in the hopes of creating a blockbuster, I think you'd get a result similar to Batman and Robin. Every element of that film feels as though all artistic decisions were made by a group of suits trying to guess what would be most marketable. I have vivid memories of watching late-night talk shows in the months leading up to Batman and Robin.

RSpec on Rails tip - Weaning myself off of fixtures (2007-10-18, david)

If you drop this into your spec_helper.rb, then, in your specs, you can do the following:

require File.dirname(__FILE__) + '/../spec_helper'

describe BookOrders, "A newly created BookOrder" do
  reset_tables :book_orders, :books, :customer

  before(:each) do
    # setup some stuff
  end

  it "should require a customer" do
    # blah blah
  end

  # and so on
end

Custom XPath Matcher for RSpec (2007-07-15, david)

response.should have_tag('ul') do
  with_tag('li')
end

Fortunately, RSpec makes adding your own custom matchers really easy. Thanks to a couple existing tutorials, such as this one, I was able to whip up a custom XPath matcher:

module Spec
  module Rails
    module Matchers
      class MatchXpath #:nodoc:

        def initialize(xpath)
          @xpath = xpath
        end

        def matches?(response)
          @response_text = response.body
          doc = REXML::Document.new @response_text
          match = REXML::XPath.match(doc, @xpath)
          not match.empty?
        end

        def failure_message
          "Did not find expected xpath #{@xpath}\n" +
            "Response text was #{@response_text}"
        end

        def description
          "match the xpath expression #{@xpath}"
        end
      end

      def match_xpath(xpath)
        MatchXpath.new(xpath)
      end
    end
  end
end

Where to Define Matchers

matchers_path = File.dirname(__FILE__) + "/matchers"
matchers_files = Dir.entries(matchers_path).select { |x| /\.rb\z/ =~ x }
matchers_files.each do |path|
  require File.join(matchers_path, path)
end

Now, any matcher I define in the matchers directory will get picked up by the spec_helper file.
</p> david tag:gensym.org,2007-05-30:10 2007-05-30T03:09:00Z 2007-05-30T03:09:57Z Using RSpec with BackgrounDRb Workers <p>A Rails app I'm working on performs some expensive operations that should be offloaded to another process, so I'm using this as an opportunity to try out <a href="">BackgrounDRb</a>..</p> <p>First, I created a new directory under my spec directory:</p> <pre><code>svn mkdir spec/workers </code></pre> <p>Then, I wrote the following in a file called collecting_worker_spec.rb in the newly-created workers directory (my worker is called CollectingWorker)<></pre></td> <td class="code"><pre> <tt> </tt>require <span class="co">File</span>.dirname(<span class="pc">__FILE__</span>) + <span class="s"><span class="dl">'</span><span class="k">/../spec_helper</span><span class="dl">'</span></span><tt> </tt><tt> </tt>describe <span class="co">CollectingWorker</span>, <span class="s"><span class="dl">"</span><span class="k">with feeds needing collection</span><span class="dl">"</span></span> <span class="r">do</span><tt> </tt> <tt> </tt> before(<span class="sy">:each</span>) <span class="r">do</span><tt> </tt> <span class="iv">@worker</span> = <span class="co">CollectingWorker</span>.new<tt> </tt> <span class="r">end</span> <tt> </tt> <tt> </tt> it <span class="s"><span class="dl">"</span><span class="k">should pull a single feed</span><span class="dl">"</span></span> <span class="r">do</span> <tt> </tt> mock_collector = returning mock(<span class="s"><span class="dl">'</span><span class="k">collector</span><span class="dl">'</span></span>) <span class="r">do</span> |m|<tt> </tt> m.should_receive(<span class="sy">:collect</span>)<tt> </tt> <span class="r">end</span><tt> </tt> <span class="co">Collector</span>.should_receive(<span class="sy">:pop</span>).and_return(mock_collector)<tt> </tt> <span class="iv">@worker</span>.do_work(<span class="pc">true</span>)<tt> </tt> <span class="r">end</span><tt> </tt><span class="r">end</span><tt> </tt></pre></td> 
</tr></table> <p>For this spec to run correctly, though, I needed to add some code to my spec_helper.rb. This isn't pretty, but it is working for></pre></td> <td class="code"><pre><tt> </tt><span class="r">module</span> <span class="cl">BackgrounDRb</span><tt> </tt> <span class="r">module</span> <span class="cl">Worker</span><tt> </tt> <span class="r">class</span> <span class="cl">RailsBase</span> <tt> </tt> <span class="r">def</span> <span class="pc">self</span>.register; <span class="r">end</span><tt> </tt> <span class="r">end</span><tt> </tt> <span class="r">end</span><tt> </tt><span class="r">end</span> <tt> </tt><tt> </tt>worker_path = <span class="co">File</span>.dirname(<span class="pc">__FILE__</span>) + <span class="s"><span class="dl">"</span><span class="k">/../lib/workers</span><span class="dl">"</span></span><tt> </tt>spec_files = <span class="co">Dir</span>.entries(worker_path).select {|x| <span class="rx"><span class="dl">/</span><span class="ch">\.</span><span class="k">rb</span><span class="ch">\z</span><span class="dl">/</span></span> =~ x}<tt> </tt>spec_files -= [ <span class="co">File</span>.basename(<span class="pc">__FILE__</span>) ]<tt> </tt>spec_files.each <span class="r">do</span> |path|<tt> </tt> require(<span class="co">File</span>.join(worker_path, path))<tt> </tt><span class="r">end</span><tt> </tt></pre></td> </tr></table> .</p> david tag:gensym.org,2007-05-23:9 2007-05-23T05:01:00Z 2007-05-23T12:51:25Z Development Database Maintenance <p:</p> <ol> <li>Rebuild the database from scratch</li> <li>Distribute changes to the schema</li> </ol> <p>.</p> <p>. </p> <p. </p> <p).</p> .</p> <p. 
</p> <p> namespace <span class="sy">:gold</span> <span class="r">do</span><tt> </tt><tt> </tt> task <span class="sy">:export</span> <span class="r">do</span><tt> </tt> require <span class="co">RAILS_ROOT</span> + <span class="s"><span class="dl">'</span><span class="k">/config/environment</span><span class="dl">'</span></span><tt> </tt> conn = <span class="co">ActiveRecord</span>::<span class="co">Base</span>.connection<tt> </tt> tables = conn.tables.reject {|i| i == <span class="s"><span class="dl">'</span><span class="k">schema_info</span><span class="dl">'</span></span>}<tt> </tt> tables.each <span class="r">do</span> |table|<tt> </tt> filename = <span class="co">RAILS_ROOT</span> + <span class="s"><span class="dl">'</span><span class="k">/gold/data/</span><span class="dl">'</span></span> + table.pluralize + <span class="s"><span class="dl">'</span><span class="k">.yml</span><span class="dl">'</span></span><tt> </tt> open(filename, <span class="s"><span class="dl">'</span><span class="k">w</span><span class="dl">'</span></span>) <span class="r">do</span> |f|<tt> </tt> rows = conn.select_all(<span class="s"><span class="dl">"</span><span class="k">SELECT * FROM </span><span class="il"><span class="dl">#{</span>table<span class="dl">}</span></span><span class="dl">"</span></span>)<tt> </tt> rows.each <span class="r">do</span> |row|<tt> </tt> f.puts(<span class="s"><span class="dl">"</span><span class="k">gold_</span><span class="il"><span class="dl">#{</span>row[<span class="s"><span class="dl">"</span><span class="k">id</span><span class="dl">"</span></span>]<span class="dl">}</span></span><span class="k">:</span><span class="dl">"</span></span>)<tt> </tt> row.keys.each <span class="r">do</span> |key|<tt> </tt> f.puts <span class="s"><span class="dl">"</span><span class="k"> </span><span class="il"><span class="dl">#{</span>key<span class="dl">}</span></span><span class="k">: </span><span class="il"><span class="dl">#{</span>row[key]<span 
class="dl">}</span></span><span class="dl">"</span></span><tt> </tt> <span class="r">end</span><tt> </tt> <span class="r">end</span><tt> </tt> <span class="r">end</span><tt> </tt> <span class="r">end</span><tt> </tt><tt> </tt> <span class="r">end</span><tt> </tt><tt> </tt> task <span class="sy">:import</span> <span class="r">do</span><tt> </tt> require <span class="co">RAILS_ROOT</span> + <span class="s"><span class="dl">'</span><span class="k">/config/environment</span><span class="dl">'</span></span><tt> </tt> require <span class="s"><span class="dl">'</span><span class="k">test_help</span><span class="dl">'</span></span><tt> </tt> <span class="co">Dir</span>.glob(<span class="co">RAILS_ROOT</span> + <span class="s"><span class="dl">'</span><span class="k">/gold/data/*.yml</span><span class="dl">'</span></span>).each <span class="r">do</span> |file|<tt> </tt> <span class="co">Fixtures</span>.create_fixtures(<tt> </tt> <span class="co">RAILS_ROOT</span> + <span class="s"><span class="dl">'</span><span class="k">/gold/data</span><span class="dl">'</span></span>, <tt> </tt> <span class="co">File</span>.basename(file, <span class="s"><span class="dl">'</span><span class="k">.yml</span><span class="dl">'</span></span>))<tt> </tt> <span class="r">end</span><tt> </tt> <span class="r">end</span><tt> </tt><tt> </tt> <span class="r">end</span><tt> </tt></pre></td> </tr></table> david tag:gensym.org,2007-05-02:8 2007-05-02T05:26:00Z 2007-05-02T12:48:16Z Vacation Reading <p. </p> <p. </p> <p>On my last vacation (which happened to be my honeymoon), I read the following:</p> <ul> <li><p><a href=";s=books&qid=1178081720&sr=8-2">Beautiful Evidence</a>, Edward Tufte</p> <p>I'm a huge Edward Tufte fan, and this is his latest book. It was difficult to wait until vacation to start reading this. 
While it's not my favorite of his books (<a href=";qid=1178081720&sr=8-2">The Visual Display of Quantitative Information</a> remains that), it was still a pleasure to read, and it made the flight over the Atlantic go by quickly.</p></li> <li><p><a href="">A Madman Dreams of Turing Machines</a>, Janna Levin </p> <p.</p></li> <li><p><a href="">Special Topics in Calamity Physics</a>, Marisha Pessl</p> <p>This novel drew me in early. While it doesn't have the depth that the reviews of it suggest, Marish Pessl's use of language is entertaining, and the story moves quickly enough that the book seemed much shorter than its 500 pages.</p></li> </ul> <p>After an embarassingly large amount of deliberating, I've decided to bring the following on my upcoming vacation:</p> <ul> <li><p><a href="">Against the Day</a> -- Thomas Pynchon's latest.</p></li> <li><p><a href=";s=books&qid=1178082754&sr=1-1">Compilers: Principles, Techniques, and Tools</a> -- You know, the dragon book. I'm embarassed to have not yet read this and am looking forward to finally doing so. This may break my rule about bringing books that make me want to code, but that's a risk I'm willing to take.</p></li> <li><p><a href=";s=books&qid=1178082886&sr=1-1">Dreaming in Code</a> -- I'm a sucker for stories about software projects.</p></li> </ul> <p>After I get back from my vacation, I'm in Chicago just long enough to shower, catch a nap, and grab my laptop before heading to Portland for RailsConf. I can't wait.</p> david tag:gensym.org,2007-04-07:6 2007-04-07T21:40:00Z 2007-04-07T21:40:17Z Enumerate, Map, Filter, Accumulate <p>Chapter 2 of <a href="">The Structure and Interpretation of Computer Programs</a>. </p> <p>The example that the book gives is that of finding the salary of the highest paid programmer, given a collection of employee records. 
Here's how a Java programmer might implement><tt> </tt>int maxSalary = 0;<tt> </tt>for (Employee employee : employees) {<tt> </tt> if (Role.PROGRAMMER.equals(employee.getRole())) {<tt> </tt> int salary = employee.getSalary();<tt> </tt> if (salary > maxSalary) {<tt> </tt> maxSalary = salary;<tt> </tt> }<tt> </tt> }<tt> </tt>}<tt> </tt></pre></td> </tr></table> <p>I don't think this is bad at all. It's more or less obvious at a glance what the code is doing. Let's take a look at how it can be implemented in Scheme using the enumerate-map-filter-accumulate style:<>(define (salary-of-highest-paid-programmer records)<tt> </tt> (accumulate <tt> </tt> max 0 <tt> </tt> (map salary<tt> </tt> (filter programmer? records))))<tt> </tt></pre></td> </tr></table> <p. </p> <p>Here's how I would do it in Scheme:<><tt> </tt>(define (salary-report records)<tt> </tt> (map <tt> </tt> (lambda (want-role) <tt> </tt> (let ((salaries <tt> </tt> (map <tt> </tt> salary <tt> </tt> (filter <tt> </tt> (lambda (employee) (has-role? 
want-role employee)) <tt> </tt> records))))<tt> </tt> (list want-role (accumulate max 0 salaries) salaries)))<tt> </tt> (uniq (map role records))))<tt> </tt></pre></td> </tr></table> <p>And in Java:<></pre></td> <td class="code"><pre><tt> </tt>Map<Role, List<Employee>> employeesByRole = new HashMap<Role, List<Employee>>();<tt> </tt><span class="r">for</span> (Employee employee : employees) {<tt> </tt> Role role = employee.getRole();<tt> </tt> <span class="r">if</span> (!employeesByRole.containsKey(role)) {<tt> </tt> employeesByRole.put(role, new ArrayList<Employee>());<tt> </tt> }<tt> </tt> employeesByRole.get(role).add(employee);<tt> </tt>}<tt> </tt><tt> </tt>List<List> report = new ArrayList<List>();<tt> </tt><tt> </tt><span class="r">for</span> (Role role : employeesByRole.keySet()) {<tt> </tt> Integer maxSalary = <span class="i">0</span>;<tt> </tt> List<Integer> allSalaries = new ArrayList<Integer>();<tt> </tt> <span class="r">for</span> (Employee employee : employeesByRole.get(role)) {<tt> </tt> Integer salary = employee.getSalary();<tt> </tt> allSalaries.add(salary);<tt> </tt> maxSalary = salary > maxSalary ? salary : maxSalary;<tt> </tt> }<tt> </tt> List reportEntry = new ArrayList();<tt> </tt> reportEntry.add(role);<tt> </tt> reportEntry.add(maxSalary);<tt> </tt> reportEntry.add(allSalaries);<tt> </tt> report.add(reportEntry);<tt> </tt>}<tt> </tt></pre></td> </tr></table> <p. </p> <p.</p> <p>Let's return to the original example: finding the salary of the highest paid programmer. Here's how I would do it in Ruby:<>create_employees.<tt> </tt> select {|emp| <span class="sy">:programmer</span> == emp.role }.<tt> </tt> map {|emp| emp.salary }.<tt> </tt> inject {|m, v| m > v ? m : v}<tt> </tt></pre></td> </tr></table> <p>Is it possible to do this kind of thing in Java? 
This is the closest I could come up></pre></td> <td class="code"><pre><tt> </tt>Integer maxSalary = accumulate(new Accumulation<Integer, Integer>() {<tt> </tt> protected Integer function(Integer a, Integer b) {<tt> </tt> return a > b ? a : b;<tt> </tt> }}, <tt> </tt> map(new Mapper<Employee, Integer>() {<tt> </tt> protected Integer function(Employee emp) {<tt> </tt> return emp.getSalary();<tt> </tt> }}, <tt> </tt> filter(new Filter<Employee>() {<tt> </tt> protected boolean function(Employee emp) {<tt> </tt> return Role.PROGRAMMER.equals(emp.getRole());<tt> </tt> }}, <tt> </tt> enumerate(new Enumeration<Employee>() {<tt> </tt> protected Collection<Employee> function() {<tt> </tt> return Arrays.asList(employees);<tt> </tt> }}))), <tt> </tt> new Integer(0));<tt> </tt></pre></td> </tr></table> <a href="">BGGA proposal</a> were adopted, I think it would look something like><tt> </tt>Integer maxSalary = <tt> </tt> accumulate(<tt> </tt> { Integer a, Integer b => a > b ? a : b }, 0,<tt> </tt> map(<tt> </tt> { Employee emp => emp.getSalary() }, <tt> </tt> filter(<tt> </tt> { Employee emp => Role.PROGRAMMER.equals(emp.getRole()) },<tt> </tt> Arrays.asList(employees) )));<tt> </tt></pre></td> </tr></table> <p>That's a real improvement.</p> david tag:gensym.org,2007-03-15:4 2007-03-15T01:51:00Z 2007-03-15T01:51:51Z Learning Tools Versus Learning Concepts <p.</p> <p>My reason for writing about this course is Professor D'Angelo and his lack of patience for a certain type of student. You see, to do well on the exams in math courses at Illinois, you had one of three options:</p> <ol> <li>To learn the concepts</li> <li>To learn how to solve the types of problems that would be on the exam</li> <li>To cheat</li> </ol> <p.</p> <p.</p> <p. </p> <p. </p> <p.</p> .</p> <p? </p> <p. </p> <p.</p> <p.</p>
Setting up your Git (in Python)
A step-by-step guide to setting up Git repositories, and deploying to Heroku, using Python and Visual Studio Code! It was really painful for me the first few times I did it (with the help of my expert friend, no less), so I really hope the below will help you guys simplify it.
For clarification on the instructions below: when I say "Input xyz", I mean type xyz into the terminal and hit return/enter.
In Visual Studio Code:
- Open the terminal. Drag up the bottom console if you can’t see it, alternatively click ‘View’, click ‘Integrated Terminal’.
- Input cd - this brings you back to your Home folder
- Input ls - this shows you all the files in the current directory you are in
- Input mkdir filename - this creates a folder with the filename, which is going to be where all your repository files will be. This filename should be similar to the name of your repository.
- Input cd filename - replace filename with the name of your folder. This brings you into the folder.
In Github:
- Click ‘New repository’
- Fill in the repository name, select ‘add README.md’
- Click ‘Create repository’
In Visual Studio Code:
- Input git init - initialises a git repository locally on your computer
- Input git add README.md - adds the readme file to the repository index. *If you didn't select 'add README.md' previously, it will return an error. You will need to create a README file manually by inputting touch README.md, and then repeat step 2.
- Input git commit -m "first commit" - basically this commits (aka marks as confirmed) your action of adding the readme file. The text between the quotes "" is your comment tagged to this action.
- Input git remote add origin - this tells git which address to push to. Replace username with your username and reponame with the name of your repository. Alternatively, this link should have been provided to you earlier when you created your repository in Github. *If you get an error (or you made a typo like me) and redoing the step gives you fatal: remote origin already exists., you can remove the origin and put in a new one by inputting git remote rm origin. Repeat step 4 after that.
- Input git push -u origin master - this pushes your commit online, to Github.
- If you have been successful, you can refresh your Github page online and you should be able to see your README.md file in the repository.
- Tell Visual Studio Code which language you are using to code in - click ‘View’, ‘Command Palette’, ‘Select Workspace Interpreter’, ‘Python 3.6’
At this stage you have successfully set up your Github repository! You can now work locally on your computer, and push the code online to Github.
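The local half of the steps above can be sketched as a single shell session. This is a minimal sketch: the repository name, user name, and email are placeholders, it works in a throwaway directory rather than your Home folder, and the push steps are left commented out because they need your real GitHub URL.

```shell
# Sketch of the local Git setup above; all names are placeholders.
set -e
cd "$(mktemp -d)"                 # throwaway directory instead of $HOME
mkdir myproject && cd myproject
git init
echo "# myproject" > README.md
git add README.md
git -c user.name="Example" -c user.email="example@example.com" \
    commit -m "first commit"
# git remote add origin https://github.com/username/reponame.git  # your URL here
# git push -u origin master       # needs the remote above
git log --oneline                 # shows the single "first commit"
```

Running it end to end leaves one commit in the local history, which is exactly the state you want before adding the remote and pushing.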
Next, we will host your work on the internet (via Heroku) so everyone can see it!
Setting up Heroku (in Python)

In Heroku:
- Click 'Create' on the top right hand corner, and then 'Create new app'
- Input app name; I kept the region as US. Click ‘Create app’
- Deployment method - click ’Connect to github’
- Input repository name, press search. Click ‘Connect’.
In Visual Studio Code:
- Input pip3 install gunicorn to install gunicorn.
- Create the files required for Heroku deployment. Input touch requirements.txt - this is a file that is required for all Python apps. It details all the versions of libraries used for the app, which Heroku will load so that the app can run properly on the internet.
- Input pip3 freeze and search for the libraries you need (i.e. the stuff you import in your app), as well as gunicorn and Flask. For example, my app needs to import pandas and numpy to run. I will copy and paste them into requirements.txt by inputting the below:

echo "numpy==1.13.1
pandas==0.20.3
gunicorn==19.7.1
Flask==0.12.2
Flask-Compress==1.4.0" >> requirements.txt
What this echo command is doing is basically telling the program to take all the text in between the quotation marks, and put it into the file, requirements.txt. Aka the lazy way to paste something into a file without opening the file.
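To see the echo trick in isolation, here's a tiny sketch. The directory is a throwaway one and the version pins are just examples, not a claim about what your app needs.

```shell
# Append pinned requirements to a file without opening an editor.
cd "$(mktemp -d)"
echo "numpy==1.13.1
pandas==0.20.3
gunicorn==19.7.1" >> requirements.txt
cat requirements.txt              # three pinned lines, one per library
```

Because the quoted string spans several lines, the shell passes the embedded newlines straight through to the file.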
Input touch runtime.txt - this is a file only required by Heroku. It tells Heroku which version of Python to use. Input python -V (that's a capital V! I typo-ed and had to use exit()) to see what version of Python you are on. Input echo "python-3.6.2" >> runtime.txt to write it into the runtime.txt file. *If you are using Python 3 but the version on the computer shows up as 2, don't worry - just type in the Python 3 version instead.

Input touch app.py to create your app file. Input the below, which is a very basic Python program to instruct the server to go to an index.html file with Flask. You can just copy and paste the text within the quotation marks into the app.py file as well (I'm just lazy to open the file).

echo "from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def main():
    return render_template('index.html')

if __name__ == '__main__':
    app.run(debug=True)" >> app.py
Alternatively, if you already have your app written, just copy and paste your py file into the folder.
- Input mkdir templates - this creates a folder called templates.
- Input cd templates to go into the templates folder. Input touch index.html - this creates an index.html file. Input the below:

echo "<!DOCTYPE html>
<html>
<head>
<title>My Homepage</title>
</head>
<body>
What's up world!!!
</body>
</html>" >> index.html
Alternatively copy and paste all the text within quotation marks directly into index.html.
Input cd .. - this goes up one folder in the directory, bringing you back to your main folder.

Input touch Procfile (that's a capital P!) which creates a Procfile. Input echo "web: gunicorn app:app" >> Procfile. Basically this file tells Heroku that this is a web app, using gunicorn (which runs Python), and the last bit is equivalent to from app import app - meaning, from app.py, import the variable app (defined as app = Flask(__name__)). If you saved your app.py in another folder, you can replace the app.py bit with your folder location, like bot/app.py.
Your app should be able to run locally now. Input python3 app.py to try running it. It should give you the link; press command+click to open the link in a new page.
If everything works, go to the left side panel, there should be a blue bubble on a node icon detailing the number of changes you have made. Next to ‘Changes’, click the ‘+’ sign to stage all changes. Click the tick on top, input the message ‘added files for heroku deployment’. Press the three dots, click ‘Push to…’, click ‘origin’
In Heroku:
Press ‘Deploy Branch’ under Manual deploy. This is only required for the first time. *If you have troubles building it or running the application, click ‘More’ on the top right hand side, next to ‘Open app’, and click ‘View logs’. From there you can try to troubleshoot a little.
- Once your app is successfully built, at the bottom it should provide the link to your website, saying deployed to Heroku. Go to that link to access your website! Alternatively, scroll up and click open app.
- Once done, click ‘Enable automatic deploys’. This automatically updates your app every time you push to Github.
CONGRATULATIONS!!! You are officially DONE. With the deployment haha - the real work is yet to come ;)
*Optional: Create a .gitignore file (this tells git which files to ignore so that they are not uploaded to Github)
- Go to
- Search what programs you use to code with - for me, it’s ‘Python’, ‘MacOS’, ‘VisualStudioCode’ - so they can customise the code for you.
- Copy all the text generated.

In Visual Studio Code:
- Input touch .gitignore to create the file
- You can open the .gitignore file manually to paste it in. To do this, click 'File', 'Open', and open the entire folder. Do not open just the individual file. You want to see the entire folder content on the left of the screen. Double-click on .gitignore, paste all the text in, and save it. Alternatively, you can input echo "text" >> .gitignore - replace text with the text you copied. This writes all the text into the file.
- If there is a particular file in the directory you want to ignore (e.g. testing.txt), type the filename in right at the very top. You should see that even if you make changes to this file, VSC will not ask you to commit changes to it.
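A quick way to check that an ignore rule is working. This is a sketch with made-up filenames, run in a throwaway directory, and it assumes git is installed.

```shell
# Files listed in .gitignore disappear from `git status`.
cd "$(mktemp -d)" && git init -q .
echo "testing.txt" >> .gitignore
touch testing.txt keepme.txt
git status --porcelain            # lists .gitignore and keepme.txt, not testing.txt
```

If testing.txt still shows up, the rule in .gitignore does not match the file's path.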
Today I ran into an interesting pitfall with MyBatis; even after a day of searching I couldn't find the cause. Let me tell you about the conventional solutions first.
The usual suspect is the namespace of the mapping file.
But I checked it n times! N times! There was no problem - it was perfect, and yet the error persisted!
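For reference, the namespace check means the mapper XML's namespace attribute must exactly match the fully-qualified name of the mapper interface. The names below are illustrative, not taken from the original project:

```xml
<!-- src/main/java/com/example/dao/UserMapper.xml -->
<mapper namespace="com.example.dao.UserMapper">
    <select id="selectAll" resultType="com.example.pojo.User">
        SELECT * FROM user
    </select>
</mapper>
```

Here com.example.dao.UserMapper is the Java mapper interface, and selectAll must match a method name declared on it.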
The result is that the XML file is not compiled!
If you look in the build output folder after compiling, you can see that the XML file is not there - it was never copied.
So in pom.xml we add the following code:
<build>
    <resources>
        <resource>
            <directory>src/main/java</directory>
            <includes>
                <include>**/*.xml</include>
            </includes>
        </resource>
        <resource>
            <directory>${basedir}/src/main/resources</directory>
        </resource>
    </resources>
</build>
Recompile
The XML file is now included in the build output, which solves the problem.
On Tue, 24 Jun 2003, Lachlan Andrew wrote:
> I don't think it is an issue of "tweaking". As long as the
> environment is not the *same* environment as the rest of the
> database, it will not share the cache. We could have another
> environment with all the same parameters. (However we would probably
> not want a cache, since the file shouldn't be used often.)
Do you know for certain that the environment is the same? It would be
nice if I could duplicate this bug, but I've never been able to.
This smells like something that should be handled in the WordDB class at
the DB API level.
1) I notice that in WordDB there is a dbenv that is used with a BDB
create function.
There is also a set_cachesize() function in the C API.
I wonder if just setting this variable to zero would have the desired effect.
2) I previously devised a 'cache' that works above the BDB API level for
the WordDB class. This scheme would probably compensate for the loss of
performance from eliminating the internal cache.
Neal Richter
Knowledgebase Developer
RightNow Technologies, Inc.
Customer Service for Every Web Site
Office: 406-522-1485
> Odd... It dumps the output in a file /tmp/t_htsearchxxxxx.
> Could you please re-run it and mail me that file?
Contents of t_htsearch file appears after my .sig.
It's possible to duplicate this running htsearch from the command line; entering a query and a result format produces the same output as in the test file: the output stops at '<form action="'. There's no core dump, nonzero exit or other indication that anything failed. The output just stops.

--
Neil Kohl
Manager, ACP Online
nkohl@...

t_htsearch10604:

Content-type: text/html

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html><head><title>Search results for 'also'</title></head>
<body bgcolor="#eef7ff">
<h2><img src="@IMAGEDIR@/htdig.gif" alt="ht://Dig">
Search results for 'also'</h2>
<hr noshade size="4">
<form method="get" action="

Neil Kohl
Manager, ACP Online
American College of Physicians
nkohl@... 215.351.2638, 800.523.1546 x2638
Greetings,
Sorry for the delay in replying -- busy at work...
Yes, I didn't think of the database location at all :(
I don't think it is an issue of "tweaking". As long as the environment is not the *same* environment as the rest of the database, it will not share the cache. We could have another environment with all the same parameters. (However we would probably not want a cache, since the file shouldn't be used often.)

Yes, we should eventually update the underlying BDB code, but perhaps after 3.2.0b5 is out :)

Cheers,
Lachlan
On Tue, 24 Jun 2003 09:06, Neal Richter wrote:
> Here's Lachlan's diff to db/mp_cmpr.c
>
> < if(CDB_db_create(&dbp, dbenv, 0) != 0)
> ---
>
> > /* Use *standalone* database, to prevent recursion when writing
> >    pages */
> > /* from the cache, shared with other members of the
> >    environment */
> > if(CDB_db_create(&dbp, NULL, 0) != 0)
>
> My hunch is that this is a rather 'blunt' fix. It seems likely
> that there is a slight problem with the DB_ENV we use... maybe it
> needs to be tweaked before the db_create call if the compression is
> enabled??
>
> The __db_env struct is fairly large, but most of it seems to be
> function-pointers. There are a number of variables for Locking,
> Logging, Transactions, Memory-pool, and some other flags.
>
> I'm looking to see what effect this will have. There are a number
> of important looking fields in __db_env... some having to do with
> db-filename semantics and location.
>
> I can't find much yet on what everything defaults to (if that is
> the right term) when standalone is used.
>
> I also think we need to look at how we can merge our changes with
> at least the next version 'up' of our BDB version at some point
> this year.
--
lha@...
ht://Dig developer DownUnder ()
On Tue, 24 Jun 2003 21:06, Gabriele Bartolini wrote:
> > do a lot of conditional compilation or clean up the code and get
> > people to use newer compilers.
>
> I agree with Geoff. I don't know though if at this time it is
> better to wait for 3.2.0b5 to be out.
Hear, hear! :)
Lachlan
--
lha@...
ht://Dig developer DownUnder ()
On Tue, 24 Jun 2003 05:06, Neil Kohl wrote:
> Success!
Excellent -- well done!
> Note that one test -- t_htsearch -- failed
> Output doesn't match "4 matches"
> ../htsearch/htsearch -c
> /home/neilk/src/htdig-3.2.0b4-20030615/test/conf/htdig.conf
> 'words=also' >> /tmp/t_htsearch25459 --
> Simple search for 'also'
Odd... It dumps the output in a file /tmp/t_htsearchxxxxx.
Could you please re-run it and mail me that file?
Thanks (yet again :)
Lachlan
--
lha@...
ht://Dig developer DownUnder ()
Hi friends!
> But IMHO, we should be pushing people towards newer compilers with
> newer releases. It's pretty clear that new GCC releases will stop
> compiling non-ISO C++, so we'll either need to do a lot of conditional
> compilation or clean up the code and get people to use newer compilers.
I agree with Geoff. Also, in case you are interested: thanks to my
friend Marco, ht://Check has since last Sunday had automatic checks for
the C++ standard library and conditional compilation for fstream,
iomanip, etc., and namespaces.
If you are interested in this, I volunteer to port the code to
this, maybe asking my friend Marco (mnencia@...) to join me.
I don't know though if at this time it is better to wait for 3.2.0b5 to
be out.
Let me know
Ciao
-Gabriele
--
Gabriele Bartolini - Web Programmer
Comune di Prato - Prato - Tuscany - Italy
g.bartol@... |
> find bin/laden -name osama -exec rm {} ;
Source: http://sourceforge.net/p/htdig/mailman/htdig-dev/?viewmonth=200306&viewday=24
Hi, I was wondering if you could help me with a question on image
processing. I am using C++ to process the image. I have a raw file, which I have read by creating an fstream, and I have saved the length of
the file in the variable end.
#include <iostream>
#include <fstream>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
using namespace std;

/* returns the length of a file (in bytes) */
int file_length(ifstream *is)
{
    int end;
    // get length of file:
    is->seekg(0, ios::end);
    end = is->tellg();
    is->seekg(0, ios::beg);
    return end;
}

int main()
{
    char *buffer;
    //unsigned char buffer[128][128];
    // binary mode so raw bytes are not translated
    ifstream myfile("Baboon.raw", ifstream::in | ios::binary); //the file size 4x9x54
    int end;
    end = file_length(&myfile);
    cout << end << "\n";
    buffer = (char*) malloc(end);
    return 0;
}
I now have to store the image into a matrix and create code to
process the image into a bmp file.
Any suggestions?
Thanks
Source: https://www.daniweb.com/programming/software-development/threads/239085/image-processing-with-c
Hi David,
afaik there is no bu.js - however these errors might have a different
cause: You most probably forgot to use the browser-update transformer
in your display pipeline, which causes <bu:*> elements to go through to the
browser.
When dojo parses the page "bu" is interpreted as a dojo namespace
which causes the browser to query for bu/manifest.js etc… (Actually
this caused me a lot of headache some time ago).
Solution is to make sure you use the browser-update transformer
correctly. Let me know if I am right.
Best regards
Benjamin
Am 14.01.2009 um 14:16 schrieb David Legg:
> I'm getting myself into a terrible mess trying to make a simple
> suggestion list using CFORMS and Ajax.
>
> I've no problem with the cocoon-webapp samples... they run fine.
> The problem arises in knowing which bits of the samples to extract
> to make a single stand-alone app. Perhaps I approached this process
> the wrong way around but I thought I'd start with a brand new Cocoon
> block fresh from running maven and build it up from that.
>
> I'm now getting a version of my form appearing in a browser but
> instead of a list of autocompleting names I get a plain textbox with
> an index number in it and a long exception list gets generated. The
> upshot of the exception list is a couple of missing pipeline
> matchers: -
>
> ..
> Caused by: org.apache.cocoon.ResourceNotFoundException: No pipeline
> matched request: resource/external/bu/manifest.js
> ..
> javax.servlet.ServletException:
> org.apache.cocoon.ResourceNotFoundException: No pipeline matched
> request: resource/external/bu.js
> ..
>
> I've searched the entire cocoon trunk but can't find bu.js
> anywhere... I presume it is short for 'browser update' but where is
> this file located?
>
> Regards,
> David Legg
Source: http://mail-archives.apache.org/mod_mbox/cocoon-users/200901.mbox/%3C8D973815-449C-4968-939C-37CD33339A9B@boksa.de%3E
I have a table with no rows in it. I would like to get the count for that table. I would expect to see a count of zero returned instead of an error.
final Query q = em.createNativeQuery(
        "select count(*) " +
        "from MyTable " +
        "where myColumn = :col ", MyClass.class);
q.setParameter("col", myColumnValue);
return (Integer) q.getSingleResult();
The error I receive is:
WARN [JDBCExceptionReporter] SQL Error: 0, SQLState: S0022 ERROR [JDBCExceptionReporter] Column 'myPrimaryKey' not found.
In case you're wondering why I added the MyClass.class part....
If I take the MyClass.class part out I get a NOT YET IMPLEMENTED error.
I got it w/o using the native query.
final Query q = em.createQuery(
        "select count(o) " +
        "from MyTable o " +
        "where o.myColumn = :col ");
Source: https://developer.jboss.org/thread/106531
We now have the third version of Team Services, and it is called Windows SharePoint Services V3. It is now possible to have better version control, meaning Major and Minor versions in a document library. Why not have this information also in the document and be able to print it out?
The property differences:
Here we see the standard property window; please refer to the Revision number. You could call this number Version, but it is only an internal number, which is 8 bits long and not accessible from outside. This means we have to find a different way.
We have to use custom properties. If you open an existing Word document in a doclib you will see the property ContentType. Why not also put the version number in a similar property?
The goal:
We start with a new document by a click on New in the doclib. I changed template.doc to have my property “WSSVersion” configured with the first value “0.1”. To see the value of a DocProperty you must insert a “Field” and choose the appropriate property. These fields must also be recalculated to show the right value! The standard is that those fields are recalculated when you print the document.
The next question is how to put the right value into this property (also called a doclib column) at the right time.
Feature?
We will use a new mechanism that comes with WSS V3 and .NET Framework 2.0; it is also called a feature in WSS.
Refer to the path on your server: C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\FEATURES and you will see a lot of other folders.
At this place we can also install our “Version Feature”. For this sample I am using the folder C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\FEATURES\DocVersion
1. We need a feature.xml in folder DocVersion:
<?xml version="1.0" encoding="ISO-8859-1" ?>
<Feature Scope="Web"
Title="Get the version into the document"
Description="Have also the SharePoint version number of a document into the document."
Id="00000000-0000-0000-0000-000000000000"
xmlns="">
<ElementManifests>
<ElementManifest Location="elements.xml" />
</ElementManifests>
</Feature>
For the "00000000-0000-0000-0000-000000000000" you need your own GUID and create it by GUIDGEN
2. We need an elements.xml in folder DocVersion:
<?xml version="1.0" encoding="ISO-8859-1" ?>
<Elements xmlns="">
<Receivers ListTemplateOwner="00BFEA71-E717-4E80-AA17-D0C71B360101" ListTemplateId="101">
<Receiver>
<Name>ItemUpdated</Name>
<Type>ItemUpdated</Type>
<SequenceNumber>10010</SequenceNumber>
<Assembly>FeatureTest, Version=1.0.0.0, Culture=neutral, PublicKeyToken=36d210293b917bd0</Assembly>
<Class>test.TestEventReceiver</Class>
<Data />
<Filter />
</Receiver>
<Receiver>
<Name>ItemAdded</Name>
<Type>ItemAdded</Type>
<SequenceNumber>10010</SequenceNumber>
<Assembly>FeatureTest, Version=1.0.0.0, Culture=neutral, PublicKeyToken=36d210293b917bd0</Assembly>
<Class>test.TestEventReceiver</Class>
<Data />
<Filter />
</Receiver>
</Receivers>
</Elements>
3. We need the code for this feature, here in C#. In this case, name the DLL you will create FeatureTest.dll in Visual Studio 2005 SP1:
using Microsoft.SharePoint;
using System;
namespace test
{
class TestEventReceiver : SPItemEventReceiver
{
public override void ItemUpdated(SPItemEventProperties properties)
{
DisableEventFiring(); // in ValidateItem it will fire an event, so disable here
ValidateItem(properties);
}
public override void ItemAdded(SPItemEventProperties properties)
{
DisableEventFiring(); // in ValidateItem it will fire an event, so disable here
ValidateItem(properties);
}
protected bool ValidateItem(SPItemEventProperties properties)
{
SPSite siteV = null;
SPWeb webV = null;
if (properties.ListItemId > 0 && properties.ListId != Guid.Empty)
{
try
{
siteV = new SPSite(properties.WebUrl);
webV = siteV.OpenWeb();
SPList spList = webV.Lists.GetList(properties.ListId, false);
SPListItem Item = spList.GetItemById(properties.ListItemId);
// The internal number is stored in _UIVersion
// 1 for V0.1
// 512 for V1.0
// 513 for V1.1
// 1024 for V2.0
//Item["WSSVNumber"] = Item["_UIVersion"];
// I'm using the string value
Item["WSSVersion"] = Item["_UIVersionString"];
Item.SystemUpdate();
}
catch // You will run into an exception in case the Column does not exist
{
// Put your exception code here
}
}
return true;
}
}
}
4. Compile your DLL
5. Put the DLL into the GAC by GACUTIL
6. Install the feature by STSADM -o installfeature -filename DocVersion\feature.xml
7. Activate the feature by the GUI or with STSADM
8. Try it out and see how it works
Reference section:
SDK for Microsoft Office SharePoint Server 2007
SDK for Windows SharePoint Services V3
Document Property Promotion and Demotion
Introduction to Columns
Event Fundamentals
Hi there,
Thank you for the helpful post. However — I am wondering if you (or anyone else out there) could help me
with an issue I’ve been having lately with WSS 3.0. Here’s a summary of my environment:
DEV ENVIRONMENT SUMMARY:
===================================================
"bare" WSS 3.0 RTM (RTW) running on W2k3 SP1
MOSS (ie. SPS) 2007 is NOT installed.
MAIN GOAL: Leverage WSS 3.0’s versioning functionality for a network communications management application.
Installation of Event handlers is via Object model, NOT by features.
PROBLEM SUMMARY:
===================================================
I have written classes to override the "ItemUpdated", and "ItemAdded" methods as shown in this article, as
well as trying to tap into other methods such as "ItemCheckedIn" "ItemDeleted" etc. I have successfully been
able to get the events to fire for simple tests.
Now, as I prepare to deploy my event handlers in a more commercial environment, I have been running
more rigorous tests, and have been getting very inconsistent behaviour with respect to the events
firing on document libraries:
The following WORKS:
— a) If I trigger events via the SP Web UI
— b) If I "SLOWLY" trigger events via the SP Object Model (ie. using SPFolder.Add(..) and SPFolder.Delete(..) methods)
By "SLOWLY" I mean that I artifically "sleep" for a few seconds between consecutive calls.
The following DOES NOT WORK (or works at best very inconsistently)
— c) If I "QUICKLY" trigger events via the SP Object Model (ie. using SPFolder.Add(..) and SPFolder.Delete(..))
By "QUICKY" I mean that I remove all of my artificial sleeps and let the code in b) run normally.
The events sometime fire, sometimes not. On a document library of ten items, I may have ZERO events fire, or
I may get the events to fire only on the 1st, 3rd, and 8th item for example. It is, as far as I can tell,
rather random.
Has anyone else noticed such strange behaviour? I have searched, and every example
I’ve come across usually focuses on very simple docLib events (triggered via the SP UI) and often does not
consider what would happen if MANY MANY events happen at once.
Though I feel the doclib events in WSS 3.0 / SPS 2007 are a great enhancement over WSS 2.0,
I am apprehensive about fully committing if the event model doesn’t stand up well under high event loads.
Thanks in advance for any help you can provide.
The best way to get and give community support could be one of our newsgroups. Start here
Great post and this is exactly what I have been trying to achieve!
My big question is: why the hell wasn’t this feature included in the WSS/Word integration options by default?! After all, once a document is in a library you can connect any custom metadata using the Quick Parts as well as some defaults. So why not Version? It’s one of those items often used in Word headers.
Serious lack of foresight on the part of the MOSS development team? Or just too complicated to get into this release? (doubtful)
Finally my first blog post – I’ve been struggling to find something worthwhile to submit and I suppose…
The SharePoint product group knows this customer need now.
I’m curious as to how steps 4 and 5 need to be performed.
[On a remote machine] I did create a .cs file with the code above and "compiled" it to a DLL using csc.exe. I copied the DLL to the Sharepoint server but gacutil wouldn’t add it to the GAC, stating an "unknown error".
Are there any further steps that need to be taken, or did I miss something essential? Is there a more detailed step-by-step howto for these two steps (compiling and adding to the GAC)?
I created the DLL directly on the SharePoint Server due to some relationships (Use solutions in VS2005, *.SLN). The DLL should be in the GAC and not somewhere else.
For those questions like “how to” please refer to the MSDN and community support.
How to: Create a Simple Feature
How to: Create an Event Handler Feature
Last but not least refer also to the Visual Studio 2005 help/support/community.
Community support:
Thanks very much for that post.
Is that feature required because of Office 2003 usage, or is it required with Office 2007 too?
Many thanks for the answer
The easiest way is to create a new column called WSSVersion in the document library. Set it up as a calculated column and add [Version] as formula. No coding needed.
The feature should work with DOCs and is not fully tested to work with fully featured DOCXs.
The reason for the feature is because the calculated column works one time.
I am also trying to use the [Version] field. How can I use feature to create a customized starting value for [Version]? This seems to be the closest I have come to finding an answer. Online for days with no such luck. Thank you in advance for any help you may provide.
Hi
Hopefully someone still watches at this post.
We need this feature ’cause we want to use MOSS to control our Doc-Version. I (hopefully) successfully implemented all the steps above, created the XMLs, the DLL, signed it on the Server and put it in the GAC and activated the feature (Took me a lot of time, cause I’m doing this the first time ever). First I made some mistakes, but I think I cleared them all out, but the feature still won’t work (Already uninstalled the old versions and reinstalled the error-free-version).
I put a Field control in my template Word document, name it "WSSVersion" and assign a value of 0.1, but when the version in the WSS library is changed, the "0.1" in the document stays and won’t update to the newer version number.
My suspicion is that it has something to do with the elements.xml. Do I have to change the following line from the example above with my Template-Owner GUID and ID, or can I leave it just like in the example?
<Receivers ListTemplateOwner="00BFEA71-E717-4E80-AA17-D0C71B360101" ListTemplateId="101">
If I have to change it, how can I find out my GUID and ID?
I would be very relieved if someone can help me. I put a lot of time into this, and I’m starting to go crazy 😉
Hi there!
Never mind, I got it! Although in my eyes something very important is missing in the article above: you have to create a column named "WSSVersion" in the doclib!
Correct me if I’m wrong, but this was the magic missing step.
This works great with my Microsoft Office Word files, thanks!
I wonder whether this is also possible with Visio *.vsd files and Excel XLS files.
With Visio I cannot add the WSSVersion property – it’s simply not available for selection. Is there a workaround?
It’s good to know that this works for you in WinWord.
For any other Office document, XLS, VSD,… it could be straightforward. The properties we are talking about here are also available via the Explorer, the operating system explorer. That means: navigate to the folder where the file exists and right-mouse-click Properties. There you can see/edit/add the custom properties for a lot of known Office file types.
Sample:
– Create Word (DOC), Excel (XLS) and Viso (VSD) file, save it into a doclib where the column WSSVersion exists.
– Double check that the WSSVersion column has a value for all three documents
– Download the three files to your Desktop
– Check each file with right-mouse-click properties
You should see in all these three documents that there is a new property called WSSVersion and the value should be also there.
The next step will be how to get this value of the file property WSSVersion into the sheet, drawing page… For that task please check with specialists in the particular area. At a brief look it seems that you need VBA code for Excel and Visio.
This one has been resurrected from the collection of post-crash blogs: How is it possible to connect
How would I implement to multiple specific lists?
The suggested solution triggers a "Save Conflict" Error when attempting to upload files. Any suggestions how to remediate this problem?
Dirk
I have developed a small sharepoint feature that will do this for 5 previous versions so the details can be put into a table inside the document.
You’ll need to compile it yourself and do some prep work to make it work though. A macro can be used to automatically update document fields whenever the document is opened.
Here ‘Enable Label’ Check box
8. DO NOT Check the other two boxes in the Labels section.
9. In the Label Format field, enter the metadata fields in the following format…
Reference : {Reference} n Version : {Version} n Classification : {Classification} n Custodian : {Custodian}
10. Set the label appearance and click on preview.
11. Click ok at the bottom of the page.
The next steps relate to Word 2007….
12. Go back to the library and create a new document using the content type you have modifed.!
Finished
My company is fairly new to WSS and we’re just learning what it can do. We’re also struggling towards ISO 9001:2008 accreditation and document versioning is an important factor. The problem is we don’t have the programming knowhow or the time to implement our own solution or one that is too code heavy. I really like Bome’s solution and got this to work partially (in that the bit in sharepoint worked). The problem comes when trying to insert the property in a Word document, where it doesn’t exist. Is there a way to add it as a field code?
I seem to have got this working for me.
I’m not a SharePoint expert by any means but have had this dumped on me at work.
I needed a solution which works with Word 2003 and requires no coding except for creating a small refresh macro in Word.
Basically, I found that if you
1. Turn on versioning
2. Create a ‘Document’ Content type, set its group as ‘Policy’
3. Upload your word document to SharePoint as a Content type
4. Add the Content type to your library
5. Define a Policy for your Content type and enable labels
6. Create a new Word document in SharePoint from that Content type
7. Edit the Word document and add the field “DLCPolicyLabelValue” created by SharePoint as a property for your document
8. Add a macro to the Word document so it refreshes the document version set by SharePoint every time you open it
9. Upload this as your new document template
Every time you create a new document from within SharePoint and save new versions etc. it updates the document version number within the Word document.
I’ve done a step by step guide of what I did in order to get it working if you need it. It assumes that you have admin access to your SharePoint site. There are also settings which you may want to skip, sorry about this but I didn’t want to leave anything out.
In SharePoint
1) Turn on Versioning:
Go to your Document Library
Settings > Document Library Settings > Versioning Settings
Require Content Approval for Submitting Items: Yes
Select: Create major and minor (draft) versions
Select: Only users who can approve items (and the author of the item)
Select: Require documents to be checked out before they can be edited?
Click OK
2) Set up a New Content Type:
Site Actions > Site Settings > Site Content Type Gallery > New Site Content Type
Name: Operating Procedure
Description: Template for new operating procedure
Select parent content type from: Document Content Types
Parent Content Type: Document
Existing group: Policies
Click OK
Outside of SharePoint:
3) Create a blank Word 2003 document:
Save it somewhere you can find it later, let’s call it "Operating Procedure.doc"
In SharePoint, upload your blank Word document:
Site Settings > Site Content Type Gallery > Site Content Type > Advanced Settings
Upload a new document template: browse for Operating Procedure.doc
Should this content type be read only: No
Update all content types inheriting from this type: Yes
Click OK
4) Add your word document content type to your library:
Go to your Document Library
Settings > Document Library Settings > Add from existing site content types
In Available Site Content Types: select Operating Procedure and click Add
Click OK
5) Create a policy for your Content Type:
Scroll down to Content Types and click: Operating Procedure
Under Settings, Click: Information management policy settings
Specify the Policy: Define a policy > Click OK
Administrative Description: Operating Procedure Type
Policy Statement: This is a policy statement
Click tick box: Enable Labels
Label Format: Document Version: {Version}
Font: Arial
Size: 10
Style: Bold
Justification: Center
Label Size:
Height: 0.5
Width: 5.0
Click ‘Refresh’ button under preview
This should display Document Version: {_UIVersionString}
Click Enable Auditing and tick all 5: Specify the events to audit
Click OK
6) Create a new Word document within SharePoint:
Go to your document library
Click New > Operating Procedure
This should open a blank word document
Save it as "Test Operating Document.doc"
Click OK
Close Word
In SharePoint you should have a document to check in, check it in and select:
What Kind of version would you like to check in: 0.1 Minor Version (Draft)
Keep the document checked out after checking in this version? No
Click OK
7) Edit the document and add the Document Version field:
Go to your Document Library and select the drop down next to Test Operating Document
Select “Edit in Microsoft Word”
In Word
Go to: Insert > Field
Categories: (All)
Field Names: Doc Properties
Field Properties: DLCPolicyLabelValue
Click OK
"Document Version: 0.1" should appear
8) Create a macro to refresh the document version field every time it is opened:
Go to Tools > Macros > Macros…
Macro Name: AutoOpen
Click Create
Paste this:
With Options
.UpdateFieldsAtPrint = True
.UpdateLinksAtPrint = True
End With
ActiveDocument.Fields.Update
Close the VB Editor
Save this document over Operating Procedure.doc that you saved somewhere at the beginning of this process
Close Word
9) This is now your new template for this Content type:
In SharePoint
Go to your Document Library
Settings > Document Library Settings
Click Operating Procedure
Click Advanced Settings
Upload a new document template: browse for Operating Procedure.doc that you have just resaved
Click OK
Hope this helps…
Forgot to add – We are using MOSS 2007 (not enterprise I think??) and Office 2003
Hi, thanks for that!
I am currently using this solution and I find it very useful. I am using Word 2003 and 2007, Excel 2003, and PowerPoint 2003 files. And I have created several columns with additional information besides the version. But I'm trying to use it with Project (.mpp) and AutoCAD (.dwg) files, and it does not work.
Do you know of any way to make it work with these files?
Thanks again!
It's nice to see that this blog post is still a reference for having the version number available inside a document stored in a DocLib. It sounds like a good idea to create an update for our new SharePoint 2010 products as well.
Short info when asking about other file types.
– First Step:
"Copy" the SharePoint internal version number into a DocLib Column.
– Second Step:
Think about promotion and demotion: how SharePoint is able to put the metadata (the columns) into a file.
How the second step works in WSSV3/MOSS2007 ?
———————————————————————
SharePoint uses Promotion and Demotion to “exchange” property/column values from/to documents. To fulfill these needs we will talk about Document Parsers. For MOSS 2007 and WSS V3 the best documentation you can find here:
What does it mean Document Property Promotion and Demotion?
msdn.microsoft.com/…/aa543341.aspx
Document Parsers in SharePoint (1 of 4): Overview / WSS V3
blogs.msdn.com/…/sharepointbeta2documentparseroverview1.aspx
Here we have the most important point: the built-in parsers are not changeable.
WSS includes built-in document parsers for the following file types:
• OLE: includes DOC, XLS, PPT, MSG, and PUB file formats
• Office 2007 XML formats: includes DOCX, DOCM, PPTX, PPTM, XLSX and XLSM file formats
• XML
• HTM: includes HTM, HTML, MHT, MHTM, and ASPX file formats
How about SharePoint 2010?
—————————————-
In the new version of SharePoint Foundation 2010 and also Microsoft SharePoint Server 2010 it should be possible to write your own parser, also for the built-in file types.
SharePoint Document Management SPF2010
msdn.microsoft.com/…/ms431814.aspx
Custom Document Parsers SPF 2010
msdn.microsoft.com/…/aa544149.aspx
This might be the solution with our new SharePoint Foundation 2010.
Caution
SharePoint Foundation 2010 includes a number of built-in document parsers. You can replace a built-in document parser with a custom parser, but you should do so only after careful consideration, particularly if you are thinking of replacing a parser for an HTML-based file type. The built-in parsers sometimes do more than the pluggable parser interface allows.
#######################
Summary for the Second Step:
You need your own Document Parser to have metadata "published" into your MPP and DWG files as well.
Thanks for help with getting the version to show in Word 2003 – the step by step guide was great help! Can anyone help me get the Sharepoint 2007 version number to populate and update in Excel 2003.
Do it for Excel in the same way. You might need a formula in a cell to show the DOC-Property value.
We are running WSS 3.0 and we are not experienced in C# programming… but we really need this function to print the "WSSVersion" in our «Quality System Documents». Would it be possible to get the DLL as a download?
Hope someone will have a heart…
Source: https://blogs.msdn.microsoft.com/joerg_sinemus/2007/01/26/wss-version-number-in-the-word-2003-document/
Since scanf() is really ugly when dealing with input, I decided to write a function that will hopefully be perfect for asking the user for numbers.
The function is called getl() (get long); it has 3 arguments and needs no extra header files to be included. The first and second arguments let you specify the max/min range of integers allowed as input. The third takes a customized message that will be displayed in these cases:
- When the user immediately hits <Enter> without entering anything.
- When the input doesn't belong to the range you specified.
- When the user enters an invalid input i.e : a mix of numbers and letters.
- When the user enters a real number (i.e: 4.5)
The function will keep displaying your custom message and asking for another input until you enter a valid one.
here is the code :
Let me know what you guys think about it. And if it has any weak points, feel free to modify or add to it to make it even better. Code:
#include <stdio.h>
#include <stdlib.h> /* for strtol() and system() */
long getl(long, long, char *);
int main(void)
{
long number;
printf("Please enter a number :\n");
number = getl(0,500,"Invalid Input.\n\nPlease enter a number :");
printf("You've entered %d\n\n", number);
system("PAUSE");
return 0;
}
long getl(long min_int, long max_int, char *message)
{
char input[BUFSIZ];
char *p;
long result;
for(;;)
{
if( fgets(input, sizeof(input), stdin) == NULL )
{
printf("\nCould not read from stream\n");
continue; /* don't parse a buffer fgets never filled */
}
result = strtol(input, &p, 10);
if(result <= max_int && result >= min_int && input[0]!='\n' && ( *p == '\0' || *p == '\n' ))
return result;
else
puts(message);
}
}
P.S.: I know nothing is perfect; I use perfect as in "as good as possible".
Source: http://cboard.cprogramming.com/c-programming/57740-perfect-input-function-printable-thread.html
ASP.NET MVC - Some Frequently Asked Questions
Posted by: Suprotim Agarwal, on 8/17/2009, in Category ASP.NET MVC
Abstract:
This article introduces ASP.NET MVC and answers some frequently asked questions about ASP.NET WebForms vs ASP.NET MVC.
What is MVC?
MVC or the Model-View-Controller is an architectural pattern used in software engineering for separating the components of a Web application. The MVC pattern helps decouple the business logic from the presentation layer, which in turn gives you the flexibility to make changes to one layer without affecting the other. This also leads to effective testing and maintainability.
The implementation of this pattern is divided into three parts:
- Model – Represents the domain-specific data
- View – UI components responsible for displaying the Model data
- Controller – Handles user interactions/events, manipulates and updates the Model to reflect a change in the state of an application.
What is ASP.NET MVC?
ASP.NET MVC is a web development framework that embraces the MVC architecture. It is a part of the ASP.NET framework and provides an alternative way to develop ASP.NET Web applications.
Microsoft started working on the ASP.NET MVC framework in October 2007, and after a series of Previews and Beta releases, ASP.NET MVC 1.0 was released on 17th March 2009. As of this writing, ASP.NET MVC 2 Preview 1 has been released.
How is ASP.NET MVC different from ASP.NET WebForms (ASP.NET WebForm VS ASP.NET MVC)? Is ASP.NET MVC a replacement for WebForms?
No. ASP.NET MVC is not a replacement for WebForms. Both ASP.NET MVC and ASP.NET WebForms are built on top of the core ASP.NET framework. In fact, a lot of features we use in ASP.NET, such as Roles, Membership and Authentication, and a lot of namespaces, classes and interfaces, can be used in an ASP.NET MVC application.
Here are some points that differentiate ASP.NET WebForms from ASP.NET MVC:
WebForms: Uses the 'Page Controller' pattern. Each page has a code-behind class that acts as a controller and is responsible for rendering the layout.
MVC: Uses the 'Front Controller' pattern. There is a single central controller for all pages that processes web application requests and facilitates a rich routing architecture.

WebForms: Uses an architecture that combines the Controller (code-behind) and the View (.aspx). Thus the Controller has a dependency on the View, which makes testing and maintainability an issue.
MVC: Enforces a "separation of concerns". The Model does not know anything about the View, and the View does not know there is a Controller. This makes MVC applications easier to test and maintain.

WebForms: The View is called before the Controller.
MVC: The Controller renders the View based on actions resulting from user interactions on the UI.

WebForms: At its core, you 'cannot' test your controller without instantiating a View. There are ways to get around this using tools.
MVC: At its core, ASP.NET MVC was designed to make test-driven development easier. You 'can' test your Controller without instantiating a View, and carry out unit tests without having to run the controllers in an ASP.NET process.

WebForms: Manages state by using view state and server-based controls.
MVC: Does not maintain state information by using view state.

WebForms: Supports rapid development with data controls like GridView and Repeater that you can drag and drop onto a page.
MVC: You lose the 'drag and drop' style of development. Since the application tasks are separated into different components, the amount of code required is greater, and since ASP.NET MVC does not use ViewState, you cannot use data controls like GridView and Repeater.

WebForms: Works very well for small teams where the focus is on rapid application development.
MVC: Works well for large projects where the focus is on testability and maintainability.
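The 'Front Controller' idea in the comparison above can be sketched concisely: a single dispatcher maps every incoming URL to a registered handler, instead of each page owning its own code-behind. This is an illustrative Python sketch, not ASP.NET routing; the route paths and function names are hypothetical.

```python
# A single routing table owned by the one central (front) controller.
routes = {}

def route(path):
    """Register a handler function for a URL path."""
    def register(handler):
        routes[path] = handler
        return handler
    return register

@route("/products/list")
def list_products():
    # In a real MVC framework this would pick a view and render model data.
    return "rendering product list view"

def front_controller(url):
    """The one entry point: every request is dispatched from here."""
    handler = routes.get(url)
    if handler is None:
        return "404 Not Found"
    return handler()

print(front_controller("/products/list"))  # -> rendering product list view
print(front_controller("/missing"))        # -> 404 Not Found
```

Because all requests flow through one dispatcher, cross-cutting concerns (routing rules, authentication checks, logging) live in one place rather than being repeated in every page.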
Remember there is always a trade-off in adopting a certain technology. We also cannot conclude that one model is better than the other. You need to decide upon various factors and do a thorough feasibility study before you go in for a certain methodology of developing your applications. Use the tool best suited for the job. You do not have to do something just because others are doing it!
I would like to quote Rex Morgan, as he put this point brilliantly in the Stack Overflow forums:
“It's important to keep in mind that MVC and WebForms are not competing, and one is not better than the other. They are simply different tools. Most people seem to approach MVC vs WebForms as "one must be a better hammer than the other". That is wrong. One is a hammer, the other is a screwdriver. Both are used in the process of putting things together, but have different strengths and weaknesses.
If one left you with a bad taste, you were probably trying to use a screwdriver to pound a nail. Certain problems are cumbersome with WebForms that become elegant and simple with MVC, and vice-versa”
I find ASP.NET WebForms easier. When and Why should I Care about ASP.NET MVC?
Again, ASP.NET MVC was not introduced to replace WebForms. WebForms has been amazing in its own arena, but depending on your experience with it and how far you have exploited its usage in your applications, there are differences of opinion as far as its advantages and disadvantages are concerned.
Here are some points that will help you understand and embrace ASP.NET MVC.
ASP.NET MVC is for you if you –
- care to build applications that are maintainable, testable and will still be abreast of other development methodologies a couple of years from now.
- want to reduce complexity by enforcing a 'separation of concerns' and introducing loose coupling, with tasks separated and handled by separate components.
- are tired of dealing with postback and viewstate issues.
- choose testability (test-driven development) over rapid application development (RAD), where a single class is responsible for both displaying output and responding to events – behavior that couples the layers tightly and makes them difficult to test.
- have a large team of developers and want to promote parallel development, with separate teams working on the View, Controller and Model.
- want to provide your application with multiple user interfaces. Since there is little or no dependency between the components, you can adopt a pluggable UI model while keeping the same business components.
- are worried that your smart-client look-alike, tightly coupled and stateful abstracted WebForm model is difficult to test and breaks frequently during maintenance.
- want a simple, seamless and maintainable AJAX experience like other platforms provide.
- want meaningful, RESTful URLs.
- may need to work on multiple platforms later. A shift from ASP.NET MVC to Ruby on Rails and other similar platforms is easier. It's also a good option to consider for your career.
My advice to all you developers out there would be to go ahead and pick up this new technology: develop a sample MVC project and you will soon realize that you are following architectural best practices, are closer to the way the web works, and are using AJAX and jQuery efficiently to deliver jaw-dropping UI experiences. A lot of our development time has gone into measuring the height and width of our apps; it's now time to dive into the depth of it and see how things are done the right way – the web-oriented way. After all, we will be trying a framework based on a pattern that has been in use for over 30 years now! So the 'it will work' tests have already been done for us.
I am still doubtful. How will companies adopt ASP.NET MVC ?
Now that's a tricky question. There is an investment required here in terms of resources and adopting a new technology. With the economy on a downtrend, adoption will initially be slower than Microsoft would have expected. Since both frameworks, ASP.NET WebForms and ASP.NET MVC, are products of the same company, it will be the client's needs and the project's interests that drive the show. As I said, it's a tricky question!
Where can I download ASP.NET MVC from? Is it Free?
Yes, ASP.NET MVC is absolutely free. You can download ASP.NET MVC 1.0 from the official ASP.NET site. As of this writing, ASP.NET MVC 2 Preview 1 has been released and can be downloaded from there as well. The latest version is ASP.NET MVC 4, which can also be downloaded from the official site.
Does Microsoft have a Roadmap for ASP.NET MVC? Is Microsoft keen on taking this ahead?
There is an ASP.NET MVC Roadmap. The ASP.NET MVC framework is well maintained and well documented, and there are plenty of tutorials, videos and hands-on labs available. There is a clear roadmap too. I feel this framework is here to stay, and it will be fun watching how this architecture evolves with ASP.NET over time.
What about Migration from one to the other? Can we also create an Application using ASP.NET WebForms and ASP.NET MVC?
In some cases, yes. I would say that the two technologies complement each other, but migrating from WebForms to ASP.NET MVC will not be a piece of cake, depending on the size of the project. People who are keen to stick with WebForms and focus only on separating the logic from the UI can adopt the MVP pattern; however, the MVP pattern is not as effective as MVC.
If you are planning on developing an ASP.NET WebForms application and intend to move it to ASP.NET MVC in the future, try to build a loosely coupled architecture; that way, the pains of migration will be fewer. Moreover, since ASP.NET MVC focuses on pluggability and can be extended, functionality like Routing can be used in ASP.NET WebForms now and reused in ASP.NET MVC later.
Honestly, although I have not yet built an application that involves both ASP.NET WebForms and ASP.NET MVC, it does look possible to build one, since both the technologies are built on top of the core ASP.NET framework.
What is the best way to learn ASP.NET MVC?
Start using it! Here are some additional resources, learning tutorials and articles that make the learning curve smoother –
- Official ASP.NET MVC Site
- Learn ASP.NET MVC – ASP.NET MVC written tutorials, video tutorials and sample applications
- ASP.NET MVC Training Kit – hands-on labs, demos, decks, FAQs, etc.
- Rob Conery's MVC Storefront Series – in Rob's own words, the goal of the series is to explore ASP.NET MVC and the various disciplines that complement it. Rob covers all kinds of developer goodness, including test-driven development, dependency injection and (lightly) domain-driven design, in this series.
- ASP.NET MVC documentation – documentation of the public namespaces, classes and interfaces that support ASP.NET MVC
- Professional ASP.NET MVC 1.0 – a great book by Rob Conery, Scott Hanselman, Phil Haack and Scott Guthrie that begins with a complete ASP.NET MVC reference application and then takes you through the basic and advanced features of ASP.NET MVC.
Further Reading:
ASP.NET MVC Overview
Compatibility of ASP.NET Web Forms and ASP.NET MVC
Should I Migrate To ASP.NET MVC
In the next article, we will see how to build a sample ASP.NET MVC application.
I hope you liked the article and I thank you for viewing it.
If you liked the article, subscribe via email for updates.
Suprotim Agarwal is an ASP.NET Architecture MVP and the founder of this website.
Comment posted by Vikram Pendse on Monday, August 17, 2009 7:52 AM
Simply great article! It clears out all the confusion one can have on MVC.
Comment posted by Paul Noakes on Monday, August 17, 2009 8:00 AM
I agree with Vikram. This is the best article I have read so far introducing MVC. I will have my eyes glued to this blog to see some more updates on ASP.NET MVC. Thank you!
Comment posted by Srinivas Nilagiri on Tuesday, January 19, 2010 2:50 AM
Simple and clear, thanks.
Comment posted by venkatesh on Saturday, December 1, 2012 7:47 AM
Great article about MVC.
Comment posted by Subhashis on Tuesday, February 5, 2013 3:20 AM
Just a superb article. One can get a full overview of MVC from it. We are expecting more good articles like this.
Comment posted by Dharmendra on Saturday, April 19, 2014 7:30 AM
I'm using: WebClient client = new WebClient(); string downloadString = client.DownloadString("");
When this code runs locally, it shows that my YouTube is blocked, but when we upload it to the server, it does not show YouTube as blocked there. Is it possible to check for a blocked URL using jQuery or JavaScript?
Source: http://www.dotnetcurry.com/showarticle.aspx?ID=370
Odoo Help
import openerp.netsvc as netsvc
from openerp.service.web_services import db

def create_db(self, cr, uid, ids, context=None):
    db = db(netsvc.ExportService)
    # exp_create_database(self, db_name, demo, lang, user_password='admin')
    db.exp_create_database('test', True, 'en_US', user_password='admin')
    return True
Please accept the answer if you find it helpful.
Please can you post the full code for the create db function, with the imports? Do help me.
I have updated the answer. You need to import: import openerp.netsvc as netsvc, and from openerp.service.web_services import db.
I used your code as it is, but it shows the error: UnboundLocalError: local variable 'db' referenced before assignment
I have tested it and it works. You are doing something wrong; share a screenshot here.
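The UnboundLocalError reported above comes from Python's scoping rules rather than from OpenERP itself: because the function assigns to the name `db`, Python treats `db` as local to the whole function, so the call `db(...)` on the right-hand side refers to the not-yet-assigned local instead of the imported class. A self-contained sketch (standard library only, no OpenERP; the `Db` class here is a hypothetical stand-in) reproduces the pitfall and shows the fix:

```python
class Db:
    """Stand-in for openerp.service.web_services.db (hypothetical)."""
    def __init__(self, service):
        self.service = service

db = Db  # stands in for: from openerp.service.web_services import db

def broken():
    # 'db = db(...)' makes 'db' local to this function, so the
    # right-hand side refers to an unassigned local name and raises
    # UnboundLocalError at runtime -- exactly the error in the thread.
    db = db("export_service")
    return db

def fixed():
    # Renaming the local variable avoids shadowing the imported name.
    db_service = db("export_service")
    return db_service

try:
    broken()
except UnboundLocalError as e:
    print("broken():", e)

print("fixed():", fixed().service)  # -> fixed(): export_service
```

So in the posted answer, changing the assignment to something like `db_service = db(netsvc.ExportService)` (and calling `db_service.exp_create_database(...)`) would avoid the shadowing.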
Source: https://www.odoo.com/forum/help-1/question/how-to-create-a-database-from-openerp-form-93716
|
Xerces C++: suppression of validation and loading DTDs over the net.
Discussion in 'XML' started by Kza, Sep 27, 2006.
Source: http://www.thecodingforums.com/threads/xerces-c-supression-of-validation-and-loading-dtds-over-the-net.373193/
|
Source: https://www.sencha.com/forum/tags.php?tag=sencha+cmd+3.0.0.250
|
Implementing Digits restriction Using JQuery
Implementing Digits restriction Using JQuery Hi Sir
I have..." id="quantity" /></div>
I want to implement two things :
1...(" Digits Only").show();
return false
SEND + MORE = MONEY
of magic formula is no more than 3, the number of digits of every number is no more...SEND + MORE = MONEY Problem Description
Write a program to solve... by setting 1 to Z, 2 to A, 0 to D, 4 to R, 5 to E, 6 to C, and 7 to F.
Now given
mapping between java class and more than 2 tables
mapping between java class and more than 2 tables Association mapping of table in hibernate
Hi Friend,
Please visit the following link:
Hibernate Tutorial
Thanks
Generating password and id and triggering mail.
Generating password and id and triggering mail. I want a code... creates a new record in the system and a
unique id and a MD5 hash password... by the system on id mentioning password. ???
Thanks in Advance
Hibernate Criteria Greater Than
Hibernate Criteria Greater Than
The Hibernate Criteria Greater Than is used the fetch the value from the
database which is Greater than the given value. gt... = criteria.list();
An Example of greater than is given below please consider
Writing more than one cards in a WML deck.
Writing more than one
cards in a deck.In this lesson...
<onevent type="ontimer">
Use of "|" separator for selecting more than one path
Use of "|" separator for selecting
more than one path.... It allows you
more flexibility by allowing you to execute multiple expressions...;information>
<person id="1">
More than one Faces Configuration file
More than one Faces Configuration file Is it possible to have more than one Faces Configuration file
Can a Class extend more than one Class?
Can a Class extend more than one Class? Hi,
Can a Class extend more than one Class?
thanks
Apply more than one xsl for single XML
Apply more than one xsl for single XML How to apply more than one xsl for same xml file
more than one struts-config.xml file
more than one struts-config.xml file Can we have more than one struts-config.xml file for a single Struts application
HIBERNATE COMPOSITE ID - Hibernate
HIBERNATE COMPOSITE ID Hi,
I have a database table structure as
CREATE TABLE `attendance` (
`sno` int(11) NOT NULL auto_increment,
`empRef` int(8) default NULL,
`dat` date default NULL,
`timeIn` time
Hibernate criteria by id.
Hibernate criteria by id. How to display record by using criteria by id in Hibernate
Tomcat 6.
Tomcat 6. hi......I have problem like
windows could not start the Apache Tomcat 6 on Local Computer.
for more information ,review theSystem Event Log.
If this is a non-Microsoft service,contact the service vendor,
and refer
Hibernate
Hibernate Can we write more than one hibernate.cfg.xml file... ? if so how can we call and use it.?
can we connect to more than one DataBase from a single Hibernate program
how can retrieve more than one values in text field using ajax?
how can retrieve more than one values in text field using ajax? im...;
<br />
</div>
<div id="seatdiv">...;input
</div>
<
More than 1 preparedStatement object - Java Beginners
More than 1 preparedStatement object Hey but I want to use more than one prepared Statement object using batch update..
Explain with a code using java...
Thanks Hi Friend,
You can use more than one prepared
What is LTO 6?
; inch width magnetic tape was in use for more than 50 years by many of the world's... than two decades made it possible for the LTO 6 to arrive with highest ever data... drive. LTO 6 cartridges have a data storage capacity which is roughly double than
Hibernate criteria by id.
Hibernate criteria by id. How to display record by using criteria by id in Hibernate?
Here is an example -
package...();
}
}
Output:
Hibernate: select this_.emp_id as emp1_0_0_, this_.date_of_join
Java SE 6
Java SE 6
... MicroSystems has released the Java SE 6 on Monday December 11.
So go... of each and every new features added
in jdk 6 with running examples.
numbers divisible by 5 and 6
numbers divisible by 5 and 6 Find the first ten numbers which are greater than Long.MAX_VALUE divisible by 5 and 6
for writting java program why we are taking more than one class
for writting java program why we are taking more than one class for writting java program only one class is enough but why we are taking more than one class
Implementing more than one Job Details and Triggers
Implementing more than one Job Details and Triggers... will learn how to implement
more than one triggers and jobs with a quartz... of more than one
job details and triggers.
Description of program:
Here, we
hibernate - Hibernate
performance
For more information,Tutorial and Examples on Hibernate Visit...hibernate what is hibernate and how to make a pc hibernating? Hi friend,
Hibernate is based on object oriented concept like java
total id
total id how to print and total my id number ..for example if my id is 0123456789
the output will look like this :
0123456789
0+1+2+3+4+5+6+7+8+9=45
help me plz
Hibernate - Hibernate
Hibernate SessionFactory Can anyone please give me an example of Hibernate SessionFactory? Hi friend,package roseindia;import...(); System.out.println("Id:"+procedure.getId
Difference Between Java 5 and Java 6
, Java 6 also adds more useful and significant features for Java languages...Difference Between Java 5 and Java 6
Today Java has become one of the most... journey to reach as java 6, and hence it is important to know the difference
hibernate - Hibernate
Hibernate;
import org.hibernate.Session;
import org.hibernate.*;
import... Branches.branch_id from Branches branches";
Query query...();
System.out.println("ID: " + row[0]);
System.out.println("Name: " + row[1
Hibernate - Hibernate
(int id) { this.id = id; }}--------------------------------------read for more...Hibernate pojo example I need a simple Hibernate Pojo example ...){ System.out.println(e.getMessage()); } finally{ } }}hibernate mapping <class name
JSF validateLength Tag
of more
than 6 characters then this tag can be used to validate. This is one... the value less than 6 characters or more than 15
characters then validation
error..., the
user has entered less than 6 characters in the password field so error message
Techniques used for Generating Dynamic Content Using Java Servlets.
Techniques used for Generating Dynamic Content
Common Gateway Interface...
make it less than the optimal solution. CGI runs in a separate
process separated from the web server and it requires more
Difference between Java 6 and Java 7
to reach as java 6, and hence it is important to know the difference between Java 6 and Java 7.
To know the basic difference between Java 6 and Java7, let us first know in brief about these.
Java Version 6: First of all
generating random numbers - Java Beginners
generating random numbers We would like to be able to predict tomorrow's price of a share of stock. We have data on past daily prices. Based... between 6 and 50 elements inclusive.
- Each element of data will be between 10.0
What is the main advantage of using the hibernate than using the SQL
What is the main advantage of using the hibernate than using the SQL Hi,
What is the main advantage of using the hibernate than using the SQL.
thanks
Struts PDF Generating Example
Struts PDF Generating Example
To generate a PDF in struts you need to use...;/param>
</result>
An example of PDF Generating is given below...;head>
<title>PDF Generating Example</title>
</head>
hibernate best practices - Hibernate
for more information. best practices
Hi all,
I am working on hibernate. can any one please send me the best practices in hibernate. And please send
multiply of 100 digits numbers
multiply of 100 digits numbers multiplying 100 digits numbers to eachother
LTO 6: Your Next Big Boost to Data Storage
storage technology it has set a record of storing more than 80,000 PB of data. With more than 4 million drives and 200 million cartridges already shipped its quick... technologies with steady development on multiple aspects and LTO 6 or the sixth
5 Important Things to know About LTO 6
transfer speed, LTO 6 offers far advanced and enhanced features than LTO 5... user benefits are more important to have a clear understanding rather than so... friendly as LTO 6 consumes even less power than its predecessor versions
Server side validation on dynamically generated fields from more than one table on spring framework.
Server side validation on dynamically generated fields from more than one table on spring framework. Server side validation on dynamically generated fields from more than one table in spring mvc framework
Connection pool in Tomcat 6 - JDBC
Connection pool in Tomcat 6 Hello Everybody,
I am trying to implement connection pooling in Tomcat 6 and MySQL 5.0.41 with mysql-connector...(BasicDataSource.java:1143)
... 19 more Hi Friend,
If you are using
J query event for selecting the more than one div in tablet browser(Select and drag)?
J query event for selecting the more than one div in tablet browser(Select and drag)? Web application five div is created. For selecting the more... to select the more than one tag using the touch and drag event
jsp - excel generating problem - JSP-Servlet
jsp - excel generating problem Hi,
I worked with the creating excel through jsp, which is the first example in this tutorial... is result excel file.
If you have more problem then give details
Hibernate-HQL subquery - Hibernate
for more details subquery Hi,
I need to fetch latest 5 records from...
where customer_id=xxxx
order by shipdate desc
Hibernate Criteria Queries - Hibernate
.
Read for more information. Criteria Queries Can I use the Hibernate Criteria Query... Configuration();
// configuring hibernate
SessionFactory
Hibernate id annotation
In this section, you will learn how to do mapping of id through annotation
hibernate code problem - Hibernate
= request.getParameter("name");
For more information on hibernate visit to :
http...hibernate code problem String SQL_QUERY =" from Insurance...();){
Insurance insurance=(Insurance)it
System.out.println("ID
Error in connecting to the mySQL database in TOMCAT using more than one PC (database connection pooling)
Error in connecting to the mySQL database in TOMCAT using more than one PC (database connection pooling) how do i implement connection pooling... me?
I tried using DriverManager.getConnection, and I can work with two or more
need to generate ID
need to generate ID hai,
i need to generate ID i.e when i select addemploye option and submit, it should generate a emp ID which is 1 greater than the previous highest ID value stored in database.
eg: if the higest value
Hi.. how to write more than one sheets in a excel file... pls anybody help me....
Hi.. how to write more than one sheets in a excel file... pls anybody help me... more than one sheets.. For example: first sheet have complete mean automatically... "+tableName+" having more than 65536 rows");
System.out.println
JQuery-Selecting class or ID
am getting little trouble in selecting an item using class or ID in jQuery?
Given below code selects an element of id name "element1" :
$('#element1')
Since IDs are unique, so it will select only the element of particular id
Struts2.2.1 Hibernate Example
be
more than
6 character.
requiredinteger = ${getText...Struts2.2.1 Hibernate Example
In this example, We will discuss about the Hibernate Integration using
struts2.2.1. In this example,we take the input from
java(Hibernate) - Hibernate
for more information.
Thanks
Amardeep...java(Hibernate) Hai Amardeep
This is jagadhish.Iam giving full code...;
public class PhoneNumber
{
private long id;
private String numberType
validating email id
validating email id how to validate the email id ?
<...="";
return false;
}
}
</script>
<form id="form...;tr><td>Email </td><td><input type="text" id="email" ><
Convert Number Format
; so if we pass less
than 3 digits in fractional part then it takes it upto 3 digits and if
we pass more than 5 digits then it takes the number....
If we put more than 5 digits :
Hibernate Criteria Expression (ge)
:00:00.0
Id: 6
Insurance Name: Life Insurance
Insurance Amount...
Hibernate Criteria Expression (ge)
... a "greater
than or equal" constraint to the named property.
Expressions
Session ID - Java Beginners
Session ID Do we get new session id for a new domain after clicking..." +
" Info TypeValue\n" +
"\n" +
" ID\n...
---------------------------------------------
Read for more information.
http
id
id how to find date of birth and gender in id in html with the help of javascript
Hibernate Criteria Expression (le)
: 2004-01-01 00:00:00.0
Insurance Id: 6
Insurance Name: Life Insurance...
Hibernate Criteria Expression (le)
... a
"less than or equal" constraint to the named property.
Expressions
hibernate
hibernate Is there any other way to call procedure in hibernate other than named query????? if we are using session object to get the connection then why hibernate we can directly call by using a simple java class??????? please
updating rows which contains same id, different value for each row
basing on one condition. The condition is if more than 2 rows contains same sid...updating rows which contains same id, different value for each row Student table:
sid name age
3 Criteria Expression (gt)
.
Hibernate: select this_.ID as ID0_0_,
this_.insurance_name...:00:00.0
Insurance Id: 6
Insurance Name: Life Insurance...
Hibernate Criteria Expression (gt)
generating itext pdf from java application - Java Beginners
generating itext pdf from java application hi,
Is there any method in page events of itext to remove page numbers from pdf generated frm java application. Hi friend,
Read for more information.
Joins
you required the data from more than one table. When you select the
data from more than one table this is known as Joining. A join is a SQL query that is used to select the data from more than one table or views. When you define multiple
More About the CronTrigger
More About the CronTrigger
The CronTriggers are more useful than the
SimpleTrigger, if we... for minutes
and seconds, 0 to 31 for Day-of-Month but here, we should
more
load more with jquery
load more with jquery i am using jquery to loadmore "posts" from my... box its is going to display php posts and after that when i click on load more...({
url: "loadmore.php?lastid=" + $(".postitem:last").attr("id
Chapter 6. Assemble enterprise applications and deploy them in IBM WebSphere Application Server
Chapter 6. Assemble enterprise applications and deploy them... Objectives Next
Chapter 6. Assemble enterprise....
One or more application module class loaders that load elements
JavaScript toPrecision method
of digits for the number value. If
the digits is greater than the number...
then it would convert that number like 123.4560. If
the digits is less than....
Now if we insert number of digits less than the digits present before
Hibernate Criteria Expression (lt)
properly.
Hibernate: select this_.ID as ID0_0_,
this_.insurance_name...
Hibernate Criteria Expression (lt)
...: The
Hibernate Criteria API supports a rich set of comparison operators. Some
standard SQL
java program_big digits multiplication..
java program_big digits multiplication.. i want program about big digits multiplication program using java..plz tel me answer
Writing more than one cards in a WML deck.
in connectivity - Hibernate
the hibernate and postgresql
that progrram is running while showing no error... insertted Hi friend,
This is connectivity and hibernate configuration... hibernate for use
SessionFactory sessionFactory = new Configuration
hibernate code - Hibernate
hibernate code while generating the hibernate code i got the error like org.hibernate.MappingException
Searching - Hibernate
Searching How we can search the record through Hibernate. As we do rs.next() to
get the records in jdbc .Then how we can do this using Hibernate... qualification;
private String engineer;
private String docter;
private int id
Passing td cell id to servlet
;/td>
<td id="5" onClick="editTableCell(5)"></td>
<td id="6" onClick="editTableCell(6)"> </td>
</tr>
<tr>
<td id...Passing td cell id to servlet Hi all,
I am new to JSP/Servlet
Java - Hibernate
for more information .
Thanks..., this type of output.
----------------------------
Inserting Record
Done
Hibernate: insert into CONTACT (FIRSTNAME, LASTNAME, EMAIL, ID) values (?, ?, ?, ?)
Could
hibernate...............
Good evening. I am using Hibernate in Eclipse; while connecting to an Oracle 10g database I am getting a driver error:
WARNING ...
at org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:423)
... 7 more
Please suggest a fix.
hibernate annotations
I am facing the following problem: I have created ... tables and am using Hibernate annotations to insert records into them, but it is trying ... serialVersionUID = 8876327743522818732L;
@Id
private long sid;
http://www.roseindia.net/tutorialhelp/comment/97115
This document is maintained by Ka-Ping Yee.
He can be reached at ping (at) zesty.ca.
Many questions can be answered by looking at the PFIF specification and the example PFIF document.
What are the xmlns:xsi and xsi:schemaLocation attributes for?
Here is a diagram describing the flow: an original non-PFIF record (2b) lives in the record's home repository; it is then converted to PFIF and copied into clone repositories.
A PFIF repository can contain original records and clone records. Original records are records residing in their home repository; clone records belong to other repositories.
Each record belongs to a home repository, which is the repository where the record was first created (stage 2 in the above diagram). Though the record can be distributed and copied into other databases, the home repository remains the authority on the record.
The person_record_id and note_record_id fields begin with a domain name and a slash. That domain name identifies the record's home repository.
The ONLY field that changes is the entry_date field, which indicates when a record entered the receiving application. No other fields change. And after a record has been stored in a repository, nothing in the record changes, not even the entry_date.
PFIF is based on a "post-only" philosophy. After a record has been stored for the first time in PFIF, it is only copied from place to place, not changed.
The source_date is the "real" date of the record: the date that the original record was created.
The entry_date is the date that this particular copy of the record was stored. The entry_date should be automatically filled in by the receiving repository; there is no need for anyone ever to manually enter an entry_date when entering data.
All the clones of a record have the same source_date as the original record. All the clones of a record will probably have different values of entry_date.
These two fields apply to both person and note records.
(The date fields labelled "Entry Date" and "Note Entry Date" on the Katrina People Finder Project entry form correspond to the source_date field in PFIF, not the entry_date field. The user should never need to enter the entry_date field.)
The purpose of the entry_date field is to enable incremental updates. If you want to mirror all the records from a remote PFIF repository into your own database on a daily basis, then you don't have to ask for a dump of the entire remote repository every day. You can just ask for all the records with an entry_date beyond the highest entry_date that you received last time.
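The incremental-update idea can be sketched in a few lines. The record type and field names here are hypothetical, chosen only to mirror the PFIF fields being discussed:

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

public class IncrementalSync {
    // Hypothetical minimal record: only the fields needed for the example.
    static final class PersonRecord {
        final String personRecordId;
        final Instant entryDate;
        PersonRecord(String personRecordId, Instant entryDate) {
            this.personRecordId = personRecordId;
            this.entryDate = entryDate;
        }
    }

    // Return only records that arrived after the last sync, so a daily
    // mirror never has to re-fetch the entire remote repository.
    static List<PersonRecord> newSince(List<PersonRecord> remote, Instant lastSeen) {
        List<PersonRecord> fresh = new ArrayList<>();
        for (PersonRecord r : remote) {
            if (r.entryDate.isAfter(lastSeen)) {
                fresh.add(r);
            }
        }
        return fresh;
    }
}
```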
Historical note: yes, these are somewhat confusing field names. If you are wondering why they were chosen, their origins have to do with the flow from 2b to 3 to 4 in the diagram above, which is where the Katrina People Finder Project originally focused its attention. In the case of humans reading data out of a non-PFIF repository and entering the data into a PFIF repository, these names make sense: the source_date is the date of the record in the non-PFIF source repository, and the entry_date is the date that the human enters the record into the PFIF repository. We retained these names for compatibility, though they make less sense when applied more generally. Just remember that source_date is the fixed creation date and entry_date is the automatically-set arrival date and you'll be fine.
These fields identify the (PFIF or non-PFIF) record in its home repository. source_name is the name of the home repository; source_date is the date that the record was created in the home repository; source_url is the URL to the record in the home repository.
These fields are set the first time the record is converted to PFIF, and never changed after that.
Always start with the PFIF XML document format. Your application will need to support this format in any case. If you are writing a program to format the PFIF XML directly, keep in mind that you will need to replace "&" with "&amp;" and "<" with "&lt;" in field values.
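A minimal escaping helper for the two required replacements (a sketch; note that the ampersand must be escaped first, or already-escaped text would be double-escaped):

```java
public class PfifEscape {
    // Escape the two characters that must not appear raw in field values.
    static String escapeXml(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;");
    }

    public static void main(String[] args) {
        System.out.println(escapeXml("Smith & Sons <unconfirmed>"));
        // Smith &amp; Sons &lt;unconfirmed>
    }
}
```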
Embed that PFIF in an Atom feed only if you need to be compatible with an Atom feed reader. Or embed the PFIF in an RSS feed only if you need to be compatible with an RSS feed reader. Support for the Atom and RSS feed formats exists only for compatibility with other syndication software, so that PFIF data can flow through existing syndication channels.
For all other purposes, stay with PFIF XML. Unless you are depending on other Atom or RSS software to transmit your PFIF, there is no reason to do the extra work of embedding in Atom or RSS.
PFIF-aware applications should scan the input document for all pfif:person elements and ignore everything else. This will work for all three forms of input. If you are using an XML parsing library, ask it to retrieve all the pfif:person elements in the document. If you are using regular expressions or string matching, search for the string "<pfif:person" to find the start of each person element, then search from the start of each person element for the string "</pfif:person" to find the end of the element.
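The string-matching approach can be sketched with a regular expression (an illustration; an XML parsing library should still be preferred when one is available):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PfifScan {
    // Extract every <pfif:person>...</pfif:person> element and ignore
    // everything around it, as the FAQ recommends for all three input forms.
    static List<String> personElements(String doc) {
        Pattern p = Pattern.compile("<pfif:person[\\s\\S]*?</pfif:person>");
        Matcher m = p.matcher(doc);
        List<String> out = new ArrayList<>();
        while (m.find()) {
            out.add(m.group());
        }
        return out;
    }
}
```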
If the field is for non-changing information about the missing person, put it into the other field of the person record. The specification says:
Short fields should be on a single line with the field name, a colon, and the field value. Long fields can be given as a line with the field name and a colon, then text indented on the following lines.
When a record is converted from some other form to PFIF by a machine process, the field "automated-pfif-author" should be present and should name the program that produced the PFIF. The "automated-pfif-author" field is not added when records are exported from a PFIF repository.
A description of the person in free-form text can also go here, with the field name "description". For example, a program that scrapes a record from a non-PFIF format that includes a free-form text field might produce an other field like this:

description: Dark hair, in her late thirties. Also goes by the names "Kate" or "Katie".
automated-pfif-author: ScrapeMatic 0.5

Field names for data fields imported from other applications should begin with the domain name and a slash. For example, if a birthdate is imported from an ICRC record, it might look like this:

icrc.org/birthdate: 1976-02-26
If the field is for data that changes over time, add it as a note. There is no particular format specified for notes at the moment. Use your best judgement to format it as text; you can use the same format as the other field if you want.
Even when an application decides that multiple person records refer to the same person, it should not attempt to merge the records in place. Instead, the application should retain all the received records and present a merged display of them. Keeping the original records maintains accountability and makes it possible for the application to handle future imports of the same records from their original sources.
The Database Schema section of the specification suggests a possible way that an application based on a relational database can keep track of multiple records referring to the same person.
What are the xmlns:xsi and xsi:schemaLocation attributes for?
These attributes are not required by the PFIF specification, but can help validation tools validate XML documents. For example, Altova's XML Spy, an XML editor and validator, recognizes these attributes and can use them to validate a PFIF document against the PFIF XML Schema.
The example PFIF document shows how to use these attributes to tell readers of a PFIF document where to find the XML Schema for the document. The xmlns:xsi attribute identifies the namespace for an XML Schema Instance, and the xsi:schemaLocation attribute maps the PFIF namespace URI to the URL for the XML Schema document.
The specification document includes attributes from RDDL 2.0, a proposed format for referring to an XML Schema. These attributes are present so that a program that reads a PFIF document can follow the namespace URL, retrieve the document, and find the link to the XML Schema document. If an XML processor supports RDDL, only the namespace has to be given, and no other attributes are needed to help locate the schema.
The RDDL attributes are properly qualified in an "rddl" namespace. The W3C validator does not know how to handle namespaces, but otherwise the specification document is valid XHTML 1.0 Strict.
http://zesty.ca/pfif/faq.html
list functions issue with python
- Richard Weth
This is a minimal reproduction:
import re
class foo (object):
    def __init__(self):
        self.zing = "joemama"
    def bar(self):
        print "bar"
When I list the functions of this class it does not work. However, if I add a space to the beginning of the class definition, listing functions performs as desired. Sadly, adding a space before class breaks the code, so there is no harmony here.
Why would this happen? My EOL is LF (Unix), my encoding is UTF, and this operates on a Windows system with Python 2.7.9.
Kinda mystified here. Esp since the only workaround is this (this works)
#<begin code>
import re
class foo (object):
    def __init__(self):
        self.zing = "joemama"
    def bar(self):
        print "bar"
#<end code>
https://community.notepad-plus-plus.org/topic/11936/list-functions-issue-with-python
Java’s generics have undoubtedly made life easier in some cases by enforcing type safety on collections. One collection that frequently does not benefit from them is maps used in a heterogeneous way – that is, a java.util.Map (or equivalent) that contains several non-compatible values. Assuming the keys are of the same type, it’s still not possible to type the value of the map as anything other than a java.lang.Object – exactly what it would have been prior to generics.
Looking through the Jetbrains documentation on its API for IntelliJ plugin development, it’s interesting to note their workaround for enforcing type safety in a map containing heterogeneous values:
- DataKey<T> – the access point to the map
- DataContext – an interface that can be used to wrap a map
This approach externalises the type-safety of the map’s content by shifting the casts to DataKey.
The interesting part of DataKey, from this viewpoint, is
@Nullable public T getData(@NotNull DataContext dataContext) { return (T) dataContext.getData(myName); }
Read access is via the key, and not directly on the map itself, e.g. for a key
DataKey<Foo> FOO = DataKey.create("foo")
a type-safe call without a cast can be made on the wrapped map via:
Foo foo = MyKeys.FOO.getData(dataContext);
A map-based implementation of DataContext just does a regular lookup with the key provided by DataKey:
public class MyDataContext implements DataContext { private final Map<String, Object> data = new HashMap<String, Object>(); @Nullable public Object getData(@NonNls String dataId) { return data.get(dataId); } }
and the returned object is cast to the correct type on the way out of DataKey. Simple and elegant.
It’s worth noting that both DataKey and DataContext are read-only in function – there’s a getData(), but no setData(). It’s easy enough to extend the concept to make sure that what goes into the map is of the same type as what comes out by adding a couple of extra methods, one on DataKey:
public void setData(DataContext dataContext, T value) { dataContext.setData(getName(), value); }
and another on DataContext:
public void setData(String key, Object value) { data.put(key, value); }
I’ve seen a lot of code where keys to access values from maps have been declared as static final Strings. Using DataKey, you could replace these declarations with DataKeys that specify the key to access the value and the type of the value itself, so
public static final String USER = "user"; ... User user = (User)map.get(USER);
would become
public static final DataKey<User> USER = DataKey.create("user"); ... User user = USER.getData(dataContext);
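Putting the pieces together, here is a self-contained sketch of the pattern. The names follow the article, but this is an illustration, not the actual IntelliJ API: DataContext is simplified to a concrete class, and annotations are omitted.

```java
import java.util.HashMap;
import java.util.Map;

public class TypedKeys {
    // A typed key: carries the value type T alongside the string name,
    // so the unchecked cast lives in exactly one place.
    static final class DataKey<T> {
        private final String name;
        private DataKey(String name) { this.name = name; }
        static <T> DataKey<T> create(String name) { return new DataKey<>(name); }

        @SuppressWarnings("unchecked")
        T getData(DataContext ctx) { return (T) ctx.getData(name); }
        void setData(DataContext ctx, T value) { ctx.setData(name, value); }
    }

    // The wrapped map: stores plain Objects, but is only touched through keys,
    // so what goes in is guaranteed to match what comes out.
    static final class DataContext {
        private final Map<String, Object> data = new HashMap<>();
        Object getData(String id) { return data.get(id); }
        void setData(String id, Object value) { data.put(id, value); }
    }

    static final DataKey<String> USER = DataKey.create("user");

    public static void main(String[] args) {
        DataContext ctx = new DataContext();
        USER.setData(ctx, "alice");
        String user = USER.getData(ctx); // no cast at the call site
        System.out.println(user);
    }
}
```

Passing a non-String to `USER.setData` is now a compile error, which is the whole point of externalising the type into the key.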
2 thoughts on “Storing multiple object types in Java maps with type safety”
This could be especially useful in Android where you can’t use generics.
Although, this would also mean that, instead of defining a DataKey using generics, you’d have to create a different subclass of DataKey for each type. I thought about passing the type into the DataKey’s constructor, but that wouldn’t help with the return type of getData. It’ll also require a little bit of reworking for the setter, but could totally be worth it.
Sorry, I was a little off in my last comment. It seems you CAN use generics in Android. My confusion was from Bundle and a few other things in the Android SDK that force you to do the casting. It’s been a while since I’ve worked with Android, and I hadn’t needed to use generics while I was working with it.
So yeah, you can pretty much use your design as-is.
https://www.objectify.be/wordpress/2010/04/21/storing-multiple-object-types-in-java-maps-with-type-safety/
I have a query, as follows:
Background: There is a detailed validated model in hoc with several mod files for ion channels. These ion channels access FUNCTION_TABLE (Tautables) from a hoc file. I am rebuilding this existing model from scratch in Python. However, I am stuck with a fatal error.
Issue: To figure out the error, I have made a simplified dummy model in Python with a tautable for the NaF channel in tautable.hoc file. The NaF channel is getting inserted and the gui is opening but when I initialize and/or run, the entire NEURON crashes. The validated hoc model works perfectly but the Python model doesn’t.
Additionally, I am able to print values (in Spyder IPython shell) from the hoc vectors present in ‘tautable.hoc’ indicating h.load_file has worked.
Python code for dummy model:
Code: Select all
from neuron import h, gui h.load_file("tautable.hoc") soma = h.Section(name='cell_soma') #GEOMETRY soma.insert('NaF') soma.L = soma.diam = 12.6157 #Biophysical mechanisms soma.Ra = 100 soma.cm = 1 h.psection()
tautable.hoc
Code: Select all
objref vecv_NaF, vecmtau_NaF, vechtau_NaF vecv_NaF = new Vector() vecv_NaF.indgen(-90, 30, 10) vecmtau_NaF = new Vector() vecmtau_NaF.append(0.02,0.023,0.03,0.037,0.043,0.067,0.107,0.053,0.05,0.04,0.027,0.02,0.02) table_tabmtau_NaF(&vecmtau_NaF.x[0], vecv_NaF.size, &vecv_NaF.x[0]) vechtau_NaF = new Vector() vechtau_NaF.append(0.433,0.433,0.433,0.433,0.433,0.433,0.433,0.283,0.167,0.15,0.107,0.1,0.093) table_tabhtau_NaF(&vechtau_NaF.x[0], vecv_NaF.size, &vecv_NaF.x[0])
How do I make my model work with these tautables? I don't want to manually rewrite the entire tautables for all channels from hoc to Python as there are many in the original model.
chan_NaF.mod
Code: Select all
TITLE Fast Sodium current : Unit check passed INDEPENDENT { t FROM 0 TO 1 WITH 1 (ms) } UNITS { (mV) = (millivolt) (mA) = (milliamp) } NEURON { SUFFIX NaF USEION na READ ena WRITE ina RANGE g, gmax, ina GLOBAL minf, mtau, hinf, htau POINTER mu } PARAMETER { : PARAMETERS ARE BY DEFAULT GLOBAL VARIABLES gmax = 0.0195 (mho/cm2) :gmax = 0 ena (mV) m_vh = -23.9 (mV) : half activation m_ve = -11.8 (mV) : slope h_vh = -62.9 (mV) : half activation h_ve = 10.7 (mV) : slope } ASSIGNED { v (mV) g (mho/cm2) ina (mA/cm2) minf (1) mtau (ms) hinf (1) htau (ms) mu (1) } STATE { m h } BREAKPOINT { SOLVE states METHOD cnexp g = gmax * m*m*m*h * (1-(mu-1)*0.05) ina = g * (v - ena) } INITIAL { rates(v) m = minf h = hinf } DERIVATIVE states { rates(v) m' = ( minf - m ) / mtau h' = ( hinf - h ) / htau } FUNCTION_TABLE tabmtau(v(mV)) (ms) FUNCTION_TABLE tabhtau(v(mV)) (ms) PROCEDURE rates(v(mV)) { TABLE mtau, htau, minf, hinf DEPEND h_vh FROM -120 TO 40 WITH 160 : TABLE mtau, htau, minf, hinf DEPEND h_vh FROM -120 TO 40 WITH 320 mtau = tabmtau(v) htau = tabhtau(v) minf = 1/(1 + exp((v - m_vh)/m_ve)) hinf = 1/(1 + exp((v - h_vh)/h_ve)) }
Thanks in advance!
https://www.neuron.yale.edu/phpBB/viewtopic.php?p=17650
So you want to write a package manager
You woke up this morning, rolled out of bed, and thought, “Y’know what? I don’t have enough misery and suffering in my life. I know what to do — I’ll write a language package manager!”
Totally. Right there with you. I take my misery and suffering in moderation, though, and since I think you might too, this design guide could help you keep most of your hair. You, or people hoping to improve an existing package manager, or curious about how they work, or who’ve realized that their spiffy new language is pretty much consigned to irrelevance without one. Whatever.
Now, I also have an ulterior motive: right now, the Go community actually DOES need proper package management, and I’m contributing to an approach. As such, I’ll be returning often to Go as the main reference case, and there’s a dedicated Go section at the end. But the real focus here is general package management design principles and domain constraints, and how they may apply in different languages.
Package management is awful, you should quit right now
Package management is a nasty domain. Really nasty. On the surface, it seems like a purely technical problem, amenable to purely technical solutions. And so, quite reasonably, people approach it that way. Over time, these folks move inexorably towards the conclusion that:
- software is terrible
- people are terrible
- there are too many different scenarios
- nothing will really work for sure
- it’s provable that nothing will really work for sure
- our lives are meaningless perturbations in a swirling vortex of chaos and entropy
If you’ve ever created…well, basically any software, then this epiphanic progression probably feels familiar. It’s the design death spiral — one that, experience tells us, is a signal to go back and reevaluate expectations and assumptions. Often enough, it turns out that you just hadn’t framed the problem properly in the first place. And — happy day! — that’s the case here: those who see package management as a purely technical problem are inevitably, even if subconsciously, looking for something that can completely and correctly automate upgrades.
Stop that. Please. You will have a bad time.
Package management is at least as much about people, what they know, what they do, and what they can reasonably be responsible for as it is about source code and other things a computer can figure out. Both sides are necessary, but independently insufficient.
Oh but wait! I’m already ahead of myself.
LOLWUT is “Package Manager”
You know how the internet works, so you probably already read the first two sentences of what Wikipedia has to say. Great, you’re an expert now. So you know that first we’ve gotta decide what kind of package manager to write, because otherwise it’s like “Hey pass me that bow?” and you say “Sure” then hand me an arrow-shooter, but I wanted a ribbon, and Sonja went to get her cello. Here’s the menu:
- OS/system package manager (SPM): this is not why we are here today
- Language package manager (LPM): an interactive tool (e.g., `go get`) that can retrieve and build specified packages of source code for a particular language. Bad ones dump the fetched source code into a global, unversioned pool (GOPATH), then cackle maniacally at your fervent hope that the cumulative state of that pool makes coherent sense.
- Project/application dependency manager (PDM): an interactive system for managing the source code dependencies of a single project in a particular language. That means specifying, retrieving, updating, arranging on disk, and removing sets of dependent source code, in such a way that collective coherency is maintained beyond the termination of any single command. Its output — which is precisely reproducible — is a self-contained source tree that acts as the input to a compiler or interpreter. You might think of it as “compiler, phase zero.”
The main distinction here is between systems that help developers to create new software, versus systems for users to install an instance of some software. SPMs are systems for users, PDMs are systems for developers, and LPMs are often sorta both. But when LPMs aren’t backed by a PDM, the happy hybrid devolves into a cantankerous chimaera.
PDMs, on the other hand, are quite content without LPMs, though in practice it typically makes sense to bundle the two together. That bundling, however, often confuses onlookers into conflating LPMs and PDMs, or worse, neglecting the latter entirely.
Don’t do that. Please. You will have a bad time.
PDMs are pretty much at the bottom of this stack. Because they compose with the higher parts (and because it’s what Go desperately needs right now), we’re going to focus on them. Fortunately, describing their responsibilities is pretty easy. The tool must move correctly, easily, and quickly through these states.
Remember that thing about how package management is just as much about people as computers? It’s reflected in “intend” and “reproduce,” respectively. There is a natural tension between the need for absolute algorithmic certainty of outputs, and the fluidity inherent in development done by humans. That tension, being intrinsic and unavoidable, demands resolution. Providing that resolution is fundamentally what PDMs do.
We Have Met The Enemy, And They Are Us
It’s not the algorithmic side that makes PDMs hard.
Now, we’re talking about PDMs, which means we’re talking about interacting with the world of FLOSS— pulling code from it, and possibly publishing code back to it. (In truth, not just FLOSS — code ecosystems within companies are often a microcosm of the same. But let’s call it all FLOSS, as shorthand.) I’d say the basic shape of that FLOSS-entwined mental model goes something like this:
- I have some unit of software I’m creating or updating — a “project.” While working on that project, it is my center, my home in the FLOSS world, and anchors all other considerations that follow.
- I have a sense of what needs to be done on my project, but must assume that my understanding is, at best, incomplete.
- I know that I/my team bear the final responsibility to ensure the project we create works as intended, regardless of the insanity that inevitably occurs upstream.
- I know that if I try to write all the code myself, it will take more time and likely be less reliable than using battle-hardened libraries.
- I know that relying on other peoples’ code means hitching my project to theirs, entailing at least some logistical and cognitive overhead.
- I know that there is a limit beyond which I cannot possibly grok all code I pull in.
- I don’t know all the software that’s out there, but I do know there’s a lot, lot more than what I know about. Some of it could be relevant, almost all of it won’t be, but searching and assessing will take time.
- I have to prepare myself for the likelihood that most of what’s out there may be crap — or at least, I’ll experience it that way. Sorting wheat from chaff, and my own feelings from fact, will also take time. In fact, some of these uncertainties are even necessary for the functioning of open software ecosystems, so much so that we codify them:
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE…
- The MIT License (emphasis mine)
And you know something’s for serious when it’s in boilerplate legalese that everyone uses, but no one reads.
OK, OK, I get it: creating software is full of potentially dangerous unknowns. We’re not gonna stop doing it, though, so the question is: how can we be out there, writing software/cavorting in the jungle of uncertainty, but simultaneously insulate our projects (and sanity) from all these risks?
I like to think of the solution through an analogy from public health: harm reduction. A little while back, Julia Evans wrote a lovely piece called Harm Reduction for Developers. It’s short and worth the read, but I’ll paraphrase the crux:
People are going to do risky activities. Instead of saying YOU’RE WRONG TO DO THAT JUST DON’T DO THAT, we can choose to help make those activities less risky.
This is the mentality we must adopt when building a practical tool for developers. Not because we’re cynically catering to some lowest-common-denominator caricature of the cowboy (cowperson? cowfolk?) developer, but because…well, look at that tower of uncertainty! Taking risks is a necessary part of development. The PDM use case is for a tool that encourages developers to reduce uncertainty through wild experimentation, while simultaneously keeping the project as stable and sane as possible. Like a rocket with an anchor! Or something.
Under different circumstances, this might be where I start laying out use cases. But…let’s not. That is, let’s not devolve into enumerating specific cases, inevitably followed by groping around in the dark for some depressing, box-ticking, design-by-committee middle ground. For PDMs, anyway, I think there’s a better way. Instead of approaching the solution in terms of use cases, I’m going to take a crude note from distributed systems and frame it in terms of states and protocols.
States and Protocols
There’s good reason to focus on state. Understanding what state we have, as well as when, why, and how it changes is a known-good way of stabilizing software against that swirling vortex of chaos and entropy. If we can lay down even some general rules about what the states ought to represent and the protocols by which they interact, it brings order and direction to the rat’s nest. (Then, bring on the use cases.) Hell, even this article went through several revisions before I realized it, too, should be structured according to the states and protocols.
Meet the Cast
PDMs are constantly choreographing a dance between four separate states on-disk. These are, variously, the inputs to and outputs from different PDM commands:
- Project code: the source code that’s being actively developed, for which we want the PDM to manage its dependencies. Being that it’s not currently 1967, all project code is under version control. For most PDMs, project code is all the code in the repository, though it could be just a subdirectory.
- Manifest file: a human-written file — though typically with machine help— that lists the depended-upon packages to be managed. It’s common for a manifest to also hold other project metadata, or instructions for an LPM that the PDM may be bundled with. (This must be committed, or else nothing works.)
- Lock file: a machine-written file with all the information necessary to [re]produce the full dependency source tree. Created by transitively resolving all the dependencies from the manifest into concrete, immutable versions. (This should always get committed. Probably. Details later.)
- Dependency code: all of the source code named in the lock file, arranged on disk such that the compiler/interpreter will find and use it as intended, but isolated so that nothing else would have a reason to mutate it. Also includes any supplemental logic that some environments may need, e.g., autoloaders. (This needn’t be committed.)
Let’s be clear — there are PDMs that don’t have all of these. Some may have them, but represent them differently. Most newer ones now do have everything here, but this is not an area with total consensus. My contention, however, is that every PDM needs some version of these four states to achieve its full scope of responsibilities, and that this separation of responsibilities is optimal.
Pipelines within Pipelines
States are only half of the picture. The other half is the protocols — the procedures for movement between the states. Happily, these are pretty straightforward.
The states form, roughly, a pipeline, where the inputs to each stage are the state of its predecessor. (Mostly — that first jump is messy.) It’s hard to overstate the positive impact this linearity has on implementation sanity: among other things, it means there’s always a clear “forward” direction for PDM logic, which removes considerable ambiguity. Less ambiguity, in turn, means less guidance needed from the user for correct operation, which makes using a tool both easier and safer.
PDMs also exist within some larger pipelines. The first is the compiler (or interpreter), which is why I’ve previously referred to them as “compiler phase zero”:
For ahead-of-time compiled languages, PDMs are sort of a pre-preprocessor: their aggregate result is the main project code, plus dependency code, arranged in such a way that when the preprocessor (or equivalent) sees “include code X” in the source, X will resolve to what the project’s author intended.
The ‘phase zero’ idea still holds in a dynamic or JIT-compiled language, though the mechanics are a bit different. In addition to laying out the code on disk, the PDM typically needs to override or intercept the interpreter’s code loading mechanism in order to resolve includes correctly. In this sense, the PDM is producing a filesystem layout for itself to consume, which a nitpicker could argue means that arrow becomes self-referential. What really matters, though, is that it’s expected for the PDM to lay out the filesystem before starting the interpreter. So, still ‘phase zero.’
The other pipeline in which PDMs exist is version control. This is entirely orthogonal to the compiler pipeline. Like, actually orthogonal:
See? The PDM feeds the compiler code along the X axis, while version control provides chronology along the Y axis. Orthogonal!
…wait. Stuff on the X axis, time on the Y axis…this sounds weirdly similar to the Euclidean space we use to describe spacetime. Does that mean PDMs are some kind of bizarro topological manifold function? And compiler outputs are literally a spacetime continuum!? Maybe! I don’t know. I’m bad at math. Either way, though, we’ve got space and time, so I’m callin it: each repository is its own little universe.
Silly though this metaphor may be, it remains oddly useful. Every project repo-verse is chock full of its own logical rules, all evolving along their own independent timelines. And while I don’t know how hard it would be to align actual universes in a sane, symbiotic way, “rather difficult” seems like a safe assumption. So, when I say that PDMs are tools for aligning code-universes, I’m suggesting that it’s fucking challenging. Just lining up the code once isn’t enough; you have to keep the timelines aligned as well. Only then can the universes grow and change together.
“Aligning universes” will come in handy later. But, there’s one other thing to immediately note back down in the PDM pipeline itself. While PDMs do deal with four physically discrete states, there’s really only two concepts at work:
Taken together, some combination of the manifest and the project code are an expression of user intent. That’s useful shorthand in part because of what it says about what NOT to do: you don’t manually mess with the lock file or dependencies, for the same reason you don’t change generated code. And, conversely, a PDM oughtn’t change anything on the left when producing the lock file or the dependency tree. Humans to the left, machines to the right.
That’s suspiciously nice and tidy, though. Let’s inject a little honesty:
For a lot of software, “hot mess” might even be charitable. And that can be frustrating, leading to a desire to remove/not create capabilities in tooling that lead to, or at least encourage, the mess-making. Remember the goal, though: harm reduction. A PDM’s job is not to prevent developers from taking risks, but to make that risky behavior as safe and structured as possible. Risk often leads to a hot mess, at least momentarily. Such is the nature of our enterprise. Embrace it.
Probably the easiest way to get PDMs wrong is focusing too much on one side to the detriment of the other. Balance must be maintained. Too much on the right, and you end up with a clunky, uptight mess like git submodules that developers simply won’t use. Too much on the left, and the productivity gains from making things “just so easy!” will sooner or later (mostly sooner) be siphoned off by the crushing technical debt incurred from building on an unstable foundation.
Don’t do either. Please. Everyone will have a bad time.
To Manifest a Manifest
Manifests often hold a lot of information that’s language-specific, and the transition from code to manifest itself tends to be quite language-specific as well. It might be more accurate to call this “World -> Manifest”, given how many possible inputs there are. To make this concrete, let’s have a look at an actual manifest:
As there’s no existing Go standard for this, I’m using a Cargo (Rust) manifest. Cargo, like a lot of software in this class, is considerably more than just a PDM; it’s also the user’s typical entry point for compiling Rust programs. The manifest holds metadata about the project itself under package, parameterization options under features (and other sections), and dependency information under assorted dependencies sections. We’ll explore each.
Central Package Registry
The first group is mostly for Rust’s central package registry, crates.io. Whether or not to use a registry might be the single most important question to answer. Most languages with any sort of PDM have one: Ruby, Clojure, Dart, js/node, js/browser, PHP, Python, Erlang, Haskell, etc. Of course, most languages — Go being an exception — have no choice, as there isn’t enough information directly in the source code for package retrieval. Some PDMs rely on the central system exclusively, while others also allow direct interaction with source repositories (e.g. on GitHub, Bitbucket, etc.). Creating, hosting, and maintaining a registry is an enormous undertaking. And, also it’s…not my problem! This is not an article about building highly available, high-traffic data storage services. So I’m skippin’ ‘em. Have fun!
…ahem.
Well, I’m skipping how to build a registry, but not their functional role. By acting as package metadata caches, they can offer significant performance benefits. Since package metadata is generally held in the manifest, and the manifest is necessarily in the source repository, inspecting the metadata would ordinarily require cloning a repository. A registry, however, can extract and cache that metadata, and make it available via simple HTTP API calls. Much, much faster.
Registries can also be used to enforce constraints. For example, the above Cargo manifest has a ‘package.version’ field:
[package]
version = "0.2.6"
Think about this a little. Yep: it’s nuts! Versions must refer to a single revision to be of use. But by writing it into the manifest, that version number just slides along with every commit made, and thereby ends up applying to multiple revisions.
Cargo addresses this by imposing constraints in the registry itself: publishing a version to crates.io is absolutely permanent, so it doesn’t matter what other commits that version might apply to. From crates’ perspective, only the first revision actually gets it.
Other sorts of constraint enforcement might include validation of a well-formed source package. If the language does not dictate a particular source structure, then the PDM might impose one, and a registry could enforce it: npm could look for a well-formed module object, or composer could try to validate conformance to an autoloader PSR. (AFAIK, neither do, nor am I even sure either is possible). In a language like Go, where code is largely self-describing, this sort of thing is mostly unnecessary.
Parameterization
The second group is all about parameterization. The particulars here are Rust-specific, but the general idea is not: parameters are well-defined ways in which the form of the project’s output, or its actual logical behavior, can be made to vary. While some aspects can intersect with PDM responsibilities, this is often out of the PDM’s scope. For example, Rust’s target profiles allow control over the optimization level passed to its compiler. And, compiler args? PDM don’t care.
However, some types of options — Rust’s features, Go’s build tags, any notion of build profiles (e.g. test vs. dev vs. prod) — can change what paths in the project’s logic are utilized. Such shifts may, in turn, make some new dependencies required, or obviate the need for others. That is a PDM problem.
If this type of configurability is important in your language context, then your ‘dependency’ concept may need to admit conditionality. On the other hand, ensuring a minimal dependency set is (mostly) just about reducing network bandwidth. That makes it a performance optimization, and therefore skippable. We’ll revisit this further in the lock files section.
If you’re not a wizened Rustacean or package management geek, Cargo’s ‘features’ may be a bit puzzling: just who is making choices about feature use, and when? Answer: the top-level project, in their own manifest. This highlights a major bit I haven’t touched yet: projects as “apps,” vs. projects as “libraries.” Conventionally, apps are the project at the top of the dependency chain — the top-level project — whereas libraries are necessarily somewhere down-chain.
The app/lib distinction, however, is not so hard-and-fast. In fact, it’s mostly situational. While libraries might usually be somewhere down the dependency chain, when running tests or benchmarks, they’re the top-level project. Conversely, while apps are usually on top, they may have subcomponents that can be used as libraries, and thus may appear down-chain. A better way to think of this distinction is “a project produces 0..N executables.”
Apps and libs really being mostly the same thing is good, because it suggests it’s appropriate to use the same manifest structure for both apps and libs. It also emphasizes that manifests are necessarily both “downward”-facing (specifying dependencies) and “upward”-facing (offering information and choices to dependees).
Dependencies
The PDM’s proper domain!
Each dependency consists of, at least, an identifier and a version specifier. Parameterization or source types (e.g. raw VCS vs. registry) may also be present. Changes to this part of a manifest are necessarily one of the following:
- Adding or removing a dependency
- Changing the desired version of an existing dependency
- Changing the parameters or source types of a dependency
These are the tasks that devs actually need to do. Removal is so trivial that many PDMs don’t provide a command, instead just expecting you’ll delete the appropriate line from the manifest. (I think an rm command is generally worth it, for reasons I’ll get into later.) Adding generally oughtn’t be more difficult than:
<tool> add <identifier>@<version>
or just
<tool> add <identifier>
to implicitly ask for the most recent version.
The only hard requirements for identifiers are that they all exist in the same namespace, and that the PDM can glean enough information from parsing them (possibly with some help from a ‘type’ field — e.g. ‘git’ or ‘bzr’, vs. ‘pkg’ if it’s in a central registry) to determine how to fetch the resource. Basically, they’re a domain-specific URI. Just make sure you avoid ambiguity in the naming scheme, and you’re mostly good.
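The domain-specific-URI idea can be sketched concretely. This is a hypothetical scheme — the `type+` prefix and `@version` suffix are illustrative inventions, not any particular PDM’s syntax:

```python
def parse_dep_spec(spec):
    """Split a dependency identifier into (source_type, identifier, version).

    Hypothetical forms: "somepkg@1.0.3" (registry), "somepkg" (registry,
    most recent version), "git+https://host/repo@v1.2.0" (raw VCS).
    """
    # An explicit "type+" prefix selects the source; default is the registry.
    if "+" in spec.split("@")[0]:
        source_type, rest = spec.split("+", 1)
    else:
        source_type, rest = "pkg", spec
    # A trailing "@version" is optional; absence means "most recent".
    if "@" in rest:
        identifier, version = rest.rsplit("@", 1)
    else:
        identifier, version = rest, None
    return source_type, identifier, version
```

The point isn’t this exact grammar, just that a single unambiguous parse yields everything the PDM needs to go fetch the resource.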
If possible, you should try to ensure package identifiers are the same as the names used to reference them in code (i.e. via include statements), as that’s one less mental map for users to hold, and one less memory map for static analyzers to need. At the same time, one useful thing that PDMs often do, language constraints permitting, is the aliasing of one package as another. In a world of waiting for patches to be accepted upstream, this can be a handy temporary hack to swap in a patched fork.
Of course, you could also just swap in the fork for real. So why alias at all?
It goes back to the spirit of the manifest: they’re a place to hold user intent. By using an alias, an author can signal to other people — project collaborators especially, but users as well — that it is a temporary hack, and they should set their expectations appropriately. And if that implicit signal isn’t clear enough, they can always put in a comment explaining why!
Outside of aliases, though, identifiers are pretty humdrum. Versions, however, are anything but.
Hang on, we need to talk about versions
Versions are hard. Maybe this is obvious, maybe not, but recognizing what makes them hard (and the problem they solve) is crucial.
Versions have exactly, unambiguously, and unequivocally fuck all to do with the code they emblazon. There are literally zero questions about actual logic a version number can definitively answer. On that very-much-not-answerable list is one of the working stiff developer’s most important questions: “will my code still work if I change to a different version of yours?”
Despite these limitations, we use versions. Widely. The simple reason is due to one of those pieces of the FLOSS worldview:
I know there is a limit beyond which I cannot possibly grok all code I pull in.
If we had to completely grok code before using or updating it, we’d never get anything done. Sure, it might be A Software Engineering Best Practice™, but enforcing it would grind the software industry to a halt. That’s a far greater risk than having some software not work some times because some people used versions wrong.
There’s a less obvious reason we rely on versions, though: there is no mechanical alternative. That is, according to our current knowledge of how computation works, a generic tool (even language-specific) capable of figuring out whether or not combinations of code will work as intended cannot exist. Sure, you can write tests, but let’s not forget Dijkstra:
“Empirical testing can only prove the presence of bugs, never their absence.”
Someone better than me at math, computer science and type theory could probably explain this properly. But you’re stuck with me for now, so here’s my glib summary: type systems and design by contract can go a long way towards determining compatibility, but if the language is Turing complete, they will never be sufficient. At most, they can prove code is not incompatible. Try for more, and you’ll end up in a recursive descent through new edge cases that just keep on popping out. A friend once referred to such endeavors as playing Gödelian whack-a-mole. (Pro tip: don’t play. Gödel’s winning streak runs 85 years.)
This is not (just) abstruse theory. It confirms the simple intuition that, in the “does my code work correctly with yours?” decision, humans must be involved. Machines can help, potentially quite a lot, by doing parts of the work and reporting results, but they can’t make a precise final decision. Which is exactly why versions need to exist, and why systems around them work the way they do: to help humans make these decisions.
Versions’ sole raison d’etre is as a crude signaling system from code’s authors to its users. You can also think of them as a curation system. When adhering to a system like SemVer, versions can suggest:
- Evolutions in the software, via a well-defined ordering relationship between any two versions
- The general readiness of a given version for public use (i.e., <1.0.0, pre-releases, alpha/beta/rc)
- The likelihood of different classes of incompatibilities between any given pair of versions
- Implicitly, that if there are versions, but you use a revision without a version, you may have a bad time
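Two of those signals — the ordering relationship and the incompatibility classes — fall straight out of SemVer’s structure. A minimal sketch, assuming plain `MAJOR.MINOR.PATCH` strings (pre-release tags omitted for brevity):

```python
def parse_semver(v):
    """Parse "MAJOR.MINOR.PATCH" into a tuple; tuples compare correctly,
    giving the well-defined ordering relationship for free."""
    return tuple(int(part) for part in v.split("."))

def bump_class(old, new):
    """Classify what an upgrade signals under SemVer."""
    o, n = parse_semver(old), parse_semver(new)
    if n[0] != o[0]:
        return "major"  # incompatible changes likely
    if n[1] != o[1]:
        return "minor"  # backwards-compatible additions
    if n[2] != o[2]:
        return "patch"  # backwards-compatible fixes
    return "none"
```

Note that numeric parsing matters: comparing the raw strings would put "1.0.10" before "1.0.3".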
For a system like semver to be effective in your language, it’s important to set down some language-specific guidelines around what kinds of logic changes correspond to major, minor, and patch-level changes. Rust got out ahead of this one. Go needs to, and still can. Without them, not only does everyone just go by ‘feel’ — AKA, “let’s have everyone come up with their own probably-garbage approach, then get territorial and defensive” — but there are no shared values to call on in an issue queue when an author increments the wrong version level. And those conversations are crucial. Not only do they fix the immediate mistake, but they’re how we collectively improve at using versions to communicate.
Despite the lack of necessary relationship between versions and code, there is at least one way in which versions help quite directly to ensure software works: they focus public attention on a few particular revisions of code. With attention focused thusly, Linus’ Law suggests that bugs will be rooted out and fixed (and released in subsequent patch versions). In this way, the curatorial effect of focusing attention on particular versions reduces systemic risk around those versions. This helps with another of FLOSS’ uncertainties:
I have to prepare myself for the likelihood that most of what’s out there will probably be crap. Sorting wheat from chaff will also take time.
Having versions at least ensures that, if the software is crap, it’s because it’s actually crap, not because you grabbed a random revision that happened to be crap. That saves time. Saved time can save projects.
Non-Version Versions
Many PDMs also allow other, non-semver version specifiers. This isn’t strictly necessary, but it has important uses. Two other types of version specifiers are notable, both of which more or less necessitate that the underlying source type is a VCS repository: specifying a branch to ‘follow’, or specifying an immutable revision.
The type of version specifier used is really a decision about how you want to relate to that upstream library. That is: what’s your strategy for aligning their universe with yours? Personally, I see it a bit like this:
Versions provide me a nice, ordered package environment. Branches hitch me to someone else’s ride, where that “someone” may or may not be hopped up on cough syrup and blow. Revisions are useful when the authors of the project you need to pull in have provided so little guidance that you basically just have to spin the wheel and pick a revision. Once you find one that works, you write that revision to the manifest as a signal to your team that you never want to change again.
Now, any of these could go right or wrong. Maybe those pleasant-seeming packages are brimming with crypto backdoored by NSA. Maybe that dude pulling me on a rope tow is actually a trained car safety instructor. Maybe it’s a, uh, friendly demon running the roulette table?
Regardless, what’s important about these different specifiers is how each defines the update process. Revisions have no update process; branches are constantly chasing upstream; and versions, especially via version range specifiers, put you in control of the kind of ride you want…as long as the upstream author knows how to apply semver correctly.
Really, this is all just an expansion and exploration of yet another aspect of the FLOSS worldview:
I know that relying on other peoples’ code means hitching my project to theirs, entailing at least some logistical and cognitive overhead.
We know there’s always going to be some risk, and some overhead, to pulling in other peoples’ code. Having a few well-defined patterns at least makes the alignment strategy immediately evident. And we care about that because, once again, manifests are an expression of user intent: simply looking at the manifest’s version specifier clearly reveals how the author wants to relate to third-party code. It can’t make the upstream code any better, but following a pattern reduces cognitive load. If your PDM works well, it will ease the logistical challenges, too.
The Unit of Exchange
I have hitherto been blithely ignoring something big: just what is a project, or a package, or a dependency? Sure, it’s a bunch of source code, and yes, it emanates, somehow, from source control. But is the entire repository the project, or just some subset of it? The real question here is, “what’s the unit of source code exchange?”
For languages without a particularly meaningful source-level relationship to the filesystem, the repository can be a blessing, as it provides a natural boundary for code that the language does not. In such cases, the repository is the unit of exchange, so it’s only natural that the manifest sits at the repository root:
$ ls -a
.
..
.git
MANIFEST
<ur source code heeere>
However, for languages that do have a well-defined relationship to the filesystem, the repository isn’t providing value as a boundary. (Go is the strongest example of this that I know, and I deal with it in the Go section.) In fact, if the language makes it sane to independently import different subsets of the repository’s code, then using the repository as the unit of exchange can actually get in the way.
It can be inconvenient for consumers that want only some subset of the repository, or different subsets at different revisions. Or, it can create pain for the author, who feels she must break down a repository into atomic units for import. (Maintaining many repositories sucks; we only do it because somehow, the last generation of DVCS convinced us all it was a good idea.) Either way, for such languages, it may be preferable to define a unit of exchange other than the repository. If you go down that path, here’s what you need to keep in mind:
- The manifest (and the lock file) take on a particularly meaningful relationship to their neighboring code. Generally, the manifest then defines a single ‘unit.’
- It is still ABSOLUTELY NECESSARY that your unit of exchange be situated on its own timeline — and you can’t rely on the VCS anymore to provide it. No timeline, no universes; no universes, no PDM; no PDM, no sanity.
- And remember: software is hard enough without adding a time dimension. Timeline information shouldn’t be in the source itself. Nobody wants to write real code inside a tesseract.
Between versions in the manifest file and path dependencies, it would appear that Cargo has figured this one out, too.
Other Thoughts
Mostly these are bite-sized new ideas, but also some review.
- Choose a format primarily for humans, secondarily for machines: TOML or YAML, else (ugh) JSON. Such formats are declarative and stateless, which makes things simpler. Proper comments are a big plus — manifests are the home of experiments, and leaving notes for your collaborators about the what and why of said experiments can be very helpful!
- TIMTOWTDI, at least at the PDM level, is your arch-nemesis. Automate housekeeping completely. If PDM commands that change the manifest go beyond add/remove and upgrade commands, it’s probably accidental, not essential. See if it can be expressed in terms of these commands.
- Decide whether to have a central package registry (almost certainly yes). If so, jam as much info for the registry into the manifest as needed, as long as it in no way impedes or muddles the dependency information needed by the PDM.
- Avoid having information in the manifest that can be unambiguously inferred from static analysis. High on the list of headaches you do not want is unresolvable disagreement between manifest and codebase. Writing the appropriate static analyzer is hard? Tough tiddlywinks. Figure it out so your users won’t have to.
- Decide what versioning scheme to use (probably semver, or something like it, perhaps enhanced with a total order). It’s probably also wise to allow things outside the base scheme: maybe branch names, maybe immutable commit IDs.
- Decide if your software will combine PDM behavior with other functionality like an LPM (probably yes). Keep any instructions necessary for that purpose cleanly separated from what the PDM needs.
- There are other types of constraints — e.g., required minimum compiler or interpreter version — that may make sense to put in the manifest. That’s fine. Just remember, they’re secondary to the PDM’s main responsibility (though it may end up interleaving with it).
- Decide on your unit of exchange. Make a choice appropriate for your language’s semantics, but absolutely ensure your units all have their own timelines.
The Lockdown
Transforming a manifest into a lock file is the process by which the fuzz and flubber of development are hardened into reliable, reproducible build instructions. Whatever crazypants stuff a developer does with dependencies, the lock file ensures another user can replicate it — zero thinking required. When I reflect on this apropos of the roiling, seething mass that is software, it’s pretty amazing.
This transformation is the main process by which we mitigate harm arising from the inherent risks of development. It’s also how we address one particular issue in the FLOSS worldview:
I know that I/my team bear the final responsibility to ensure the project we create works as intended, regardless of the insanity that inevitably occurs upstream.
Now, some folks don’t see the value in precise reproducibility. “Manifests are good enough!”, “It doesn’t matter until the project gets serious” and “npm became popular long before shrinkwrap (npm’s lock file) was around!” are some arguments I’ve seen. But these arguments strike me as wrongheaded. Rather than asking, “Do I need reproducible builds?” ask “Do I lose anything from reproducible builds?” Literally everyone benefits from them, eventually. (Unless emailing around tarballs and SSH’ing to prod to bang out changes in nano is your idea of fun). The only question is if you need reproducibility a) now, b) soon, or else c) can we be friends? because I think maybe you’re not actually involved in shipping software, yet you’re still reading this, which makes you a weird person, and I like weird people.
The only real potential downside of reproducible builds is the tool becoming costly (slow) or complicated (extra housekeeping commands or arcane parameters), thus impeding the flow of development. These are real concerns, but they’re also arguments against poor implementations, not reproducibility itself. In fact, they’re really UX guidelines that suggest what ‘svelte’ looks like on a PDM: fast, implicit, and as automated as possible.
The algorithm
Well, those guidelines just scream “algorithm!” And indeed, lock file generation must be fully automated. The algorithm itself can become rather complicated, but the basic steps are easily outlined:
- Build a dependency graph (so: directed, acyclic, and variously labeled) by recursively following dependencies, starting from those listed in the project’s manifest
- Select a revision that meets the constraints given in the manifest
- If any shared dependencies are found, reconcile them with <strategy>
- Serialize the final graph (with whatever extra per-package metadata is needed), and write it to disk. Ding ding, you have a lock file!
(Author’s note: The general problem here is boolean satisfiability, which is NP-complete. This breakdown is still roughly helpful, but trivializes the algorithm.)
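The four steps above can be sketched as a greedy, first-fit resolver. Per the author’s note, a real resolver must backtrack (the general problem is SAT-hard); this sketch deliberately omits that, and the callback names are assumptions:

```python
def resolve(manifest_deps, get_manifest, available_versions, satisfies):
    """Greedy, first-fit sketch of lock file resolution.

    - manifest_deps: {name: constraint} from the top-level manifest
    - get_manifest(name, version): that revision's own {name: constraint}
    - available_versions(name): candidate versions, preferred order first
    - satisfies(version, constraint): the version-matching predicate
    """
    locked = {}
    queue = list(manifest_deps.items())
    while queue:
        name, constraint = queue.pop(0)
        if name in locked:
            # Shared dependency: the naive strategy just demands agreement.
            if not satisfies(locked[name], constraint):
                raise Exception("conflict on %s" % name)
            continue
        # Step 2: select a revision meeting the constraint.
        choice = next(v for v in available_versions(name) if satisfies(v, constraint))
        locked[name] = choice
        # Step 1, continued: recurse into the chosen revision's own deps.
        queue.extend(get_manifest(name, choice).items())
    return locked  # step 4: serialize this mapping (plus metadata) to disk
```

A real implementation also has to handle the "no admissible version" case gracefully, rather than letting `next()` blow up.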
This provides us a lock file containing a complete list of all dependencies (i.e., all reachable deps in the computed, fully resolved dep graph). “All reachable” means, if our project has three direct dependencies, like this:
We still directly include all the transitively reachable dependencies in the lock file:
Exactly how much metadata is needed depends on language specifics, but the basics are the package identifier, an address for retrieval, and the closest thing to an immutable revision (i.e. a commit hash) that the source type allows. If you add anything else — e.g., a target location on disk — it should only be to ensure that there is absolutely zero ambiguity in how to dump out the dependency code.
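A sketch of what one lock file record might carry, and why deterministic serialization matters. The field names and JSON format are illustrative choices, not a standard:

```python
import json

def lock_entry(identifier, source_address, revision, extra=None):
    """One unambiguous lock record: the package identifier, where to fetch
    it, and the most immutable revision the source type allows."""
    entry = {
        "name": identifier,
        "source": source_address,
        "revision": revision,  # e.g. a commit hash, never a floating branch
    }
    entry.update(extra or {})
    return entry

def write_lock(entries, path):
    # Sort for deterministic output: lock files live in version control,
    # so spurious diffs are noise worth eliminating.
    with open(path, "w") as f:
        json.dump(sorted(entries, key=lambda e: e["name"]), f, indent=2)
```

Anything beyond these fields should exist only to remove ambiguity about how to dump out the dependency code.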
Of the four basic steps in the algorithm, the first and last are more or less straightforward if you have a working familiarity with graphs. Sadly, graph geekery is beyond my ability to bequeath in an article; please feel free to reach out to me if that’s where you’re stuck.
The middle two steps (which are really just “choose a revision” split in two), on the other hand, have hiccups that we can and should confront. The second is mostly easy. If a lock file already exists, keep the locked revisions indicated there unless:
- The user expressly indicated to ignore the lock file
- A floating version, like a branch, is the version specifier
- The user is requesting an upgrade of one or more dependencies
- The manifest changed and no longer admits them
- Resolving a shared dependency will not allow it
This helps avoid unnecessary change: if the manifest would admit 1.0.7, 1.0.8, and 1.0.9, but you’d previously locked to 1.0.8, then subsequent resolutions should notice that and re-use 1.0.8. If this seems obvious, good! It’s a simple example that’s illustrative of the fundamental relationship between manifest and lock file.
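The keep-the-locked-revision rule reduces to one small decision function. Names and signature here are hypothetical:

```python
def pick_version(name, candidates, satisfies, constraint, previous_lock, upgrading):
    """Prefer the previously locked revision whenever the manifest still
    admits it and the user didn't ask for an upgrade — least change wins."""
    locked = previous_lock.get(name)
    if locked is not None and not upgrading and satisfies(locked, constraint):
        return locked  # no unnecessary change
    # Otherwise fall through to normal selection, newest-admissible first.
    return next(v for v in candidates if satisfies(v, constraint))
```

The branch/floating-version and shared-dependency cases from the list above would add more conditions, but the shape stays the same: check the lock first, resolve fresh only when forced.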
This basic approach is well-established — Bundler calls it “conservative updating.” But it can be extended further. Some PDMs recommend against, or at least are indifferent to, lock files committed in libraries, but that’s a missed opportunity. For one, it makes things simpler for users by removing conditionality — commit the lock file always, no matter what. But also, when computing the top-level project’s depgraph, it’s easy enough to make the PDM interpret a dependency’s lock file as being revision preferences, rather than requirements. Preferences expressed in dependencies are, of course, given less priority than those expressed in the top-level project’s lock file (if any). As we’ll see next, when shared dependencies exist, such ‘preferences’ can promote even greater stability in the build.
Diamonds, SemVer and Bears, Oh My!
The third issue is harder. We have to select a strategy for picking a version when two projects share a dependency, and the right choice depends heavily on language characteristics. This is also known as the “diamond dependency problem,” and it starts with a subset of our depgraph:
With no versions specified here, there’s no problem. However, if A and B require different versions of C, then we have a conflict, and the diamond splits:
There are two classes of solution here: allow multiple C’s (duplication), or try to resolve the conflict (reconciliation). Some languages, like Go, don’t allow the former. Others do, but with varying levels of risky side effects. Neither approach is intrinsically superior for correctness. However, user intervention is never needed with multiple C’s, making that approach far easier for users. Let’s tackle that first.
If the language allows multiple package instances, the next question is state: if there’s global state that dependencies can and do typically manipulate, multiple package instances can get clobbery in a hurry. Thus, in node.js, where there isn’t a ton of shared state, npm has gotten away with avoiding all possibility of conflicts by just intentionally checking out everything in ‘broken diamond’ tree form. (Though it can achieve the happy diamond via “deduping,” which is the default in npm v3).
Frontend javascript, on the other hand, has the DOM — the granddaddy of global shared state — making that approach much riskier. This makes it a much better idea for bower to reconcile (“flatten”, as they call it) all deps, shared or not. (Of course, frontend javascript also has the intense need to minimize the size of the payload sent to the client.)
If your language permits it, and the type system won’t choke on it, and the global state risks are negligible, and you’re cool with some binary/process footprint bloating and (probably negligible) runtime performance costs, AND bogging down static analysis and transpilation tooling is OK, then duplication is the shared deps solution you’re looking for. That’s a lot of conditions, but it may still be preferable to reconciliation strategies, as most require user intervention — colloquially known as DEPENDENCY HELL — and all involve potentially uncomfortable compromises in the logic itself.
If we assume that the A -> C and B -> C relationships are both specified using versions, rather than branches or revisions, then reconciliation strategies include:
- Highlander: Analyze the A->C relationship and the B->C relationship to determine if A can be safely switched to use C-1.1.1, or B can be safely switched to use C-1.0.3. If not, fall back to realpolitik.
- Realpolitik: Analyze other tagged/released versions of C to see if they can satisfy both A and B’s requirements. If not, fall back to elbow grease.
- Elbow grease: Fork/patch C and create a custom version that meets both A and B’s needs. At least, you THINK it does. It’s probably fine. Right?
Oh, but wait! I left out the one where semver can save the day:
- Phone a friend: ask the authors of A and B if they can both agree on a version of C to use. (If not, fall back to Highlander.)
The last is, by far, the best initial approach. Rather than me spending time grokking A, B and C well enough to resolve the conflict myself, I can rely on signals from A and B’s authors — the people with the least uncertainty about their projects’ relationship to C — to find a compromise:
A’s manifest says it can use any patch version of v1.0 newer than 2, and B’s manifest says it can use any minor and patch version of v1. This could potentially resolve to many versions, but if A’s lock file pointed at 1.0.3, then the algorithm can choose that, as it results in the least change.
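That least-change selection amounts to a range intersection with the lock file’s choice tried first. In this sketch, ranges are hypothetical (inclusive-low, exclusive-high) tuples standing in for specifiers like `~1.0.2` and `^1.0`:

```python
def vtuple(v):
    return tuple(int(x) for x in v.split("."))

def admitted(version, ranges):
    """True if the version satisfies every manifest's range."""
    return all(lo <= vtuple(version) < hi for lo, hi in ranges)

def reconcile(candidates, ranges, locked=None):
    """Pick a shared-dep version both A and B admit, preferring the one
    already locked — the least-change choice."""
    if locked is not None and admitted(locked, ranges):
        return locked
    return next((v for v in candidates if admitted(v, ranges)), None)
```

Returning `None` is the fall-back-to-Highlander signal: the ranges simply don’t intersect, and a human has to get involved.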
Now, that resolution may not actually work. Versions are, after all, just a crude signaling system. 1.x is a bit of a broad range, and it’s possible that B’s author was lax in choosing it. Nevertheless, it’s still a good place to start, because:
- Just because the semver ranges suggest solution[s], doesn’t mean I have to accept them.
- A PDM tool can always further refine semver matches with static analysis (if the static analyses feasible for the language have anything useful to offer).
- No matter which of the compromise solutions is used, I still have to do integration testing to ensure everything fits for my project’s specific needs.
- The goal of all the compromise approaches is to pick an acceptable solution from a potentially large search space (as large as all available revisions of C). Reducing the size of that space for zero effort is beneficial, even if occasional false positives are frustrating.
Most important of all, though, is that if I do the work and discover that B actually was too lax and included versions for C that do not work (or excludes versions that do work), I can file patches against B’s manifest to change the range appropriately. Such patches record the work you’ve done, publicly and for posterity, in a way that helps others avoid the same pothole. A decidedly FLOSSy solution, to a distinctly FLOSSy problem.
Dependency Parameterization
When discussing the manifest, I touched briefly on the possibility of allowing parameterization that would necessitate variations in the dependency graphs. If this is something your language would benefit from, it can add some wrinkles to the lock file.
Because the goal of a lock file is to completely and unambiguously describe the dependency graph, parameterizing things can get expensive quickly. The naive approach would construct a full graph in memory for each unique parameter combination; assuming each parameter is a binary on/off, the number of graphs required grows exponentially (2^N) in the number of parameters. Yikes.
However, that approach is less “naive” than it is “braindead.” A better solution might enumerate all the combinations of parameters, divide them into sets based on which combinations have the same input set of dependencies, and generate one graph per set. And an even better solution might handle all the combinations by finding the smallest input dependency set, then layering all the other combinations on top in a single, multivariate graph. (Then again, I’m an amateur algorithmicist on my best day, so I’ve likely missed a big boat here.)
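The middle, bucketing solution might look like this sketch, assuming binary on/off parameters and a hypothetical `deps_for` callback that maps enabled parameters to an input dependency set:

```python
from itertools import product

def group_by_depset(params, deps_for):
    """Enumerate all on/off parameter combinations, then bucket together
    the combinations yielding the same input dependency set — one graph
    per bucket instead of one per combination (2^N)."""
    groups = {}
    for combo in product([False, True], repeat=len(params)):
        enabled = frozenset(p for p, on in zip(params, combo) if on)
        depset = frozenset(deps_for(enabled))
        groups.setdefault(depset, []).append(enabled)
    return groups
```

In practice most parameters don’t touch the dependency set at all, so the number of buckets tends to be far smaller than the number of combinations.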
Maybe this sounds like fun. Or maybe it’s vertigo-inducing gibberish. Either way, skipping it is an option. Even if your manifest does parameterize dependencies, you can always just get everything, and the compiler will happily ignore what it doesn’t need. And, once your PDM inevitably becomes wildly popular and you are showered with conference keynote speaker invitations, someone will show up and take care of this hard part, Because Open Source.
That’s pretty much it for lock files. Onwards, to the final protocol!
Compiler, phase zero: Lock to Deps
If your PDM is rigorous in generating the lock file, this final step may amount to blissfully simple code generation: read through the lock file, fetch the required resources over the network (intermediated through a cache, of course), then drop them into their nicely encapsulated, nothing-will-mess-with-them place on disk.
There are two basic approaches to encapsulating code: either place it under the source tree, or dump it in some central, out-of-the-way location where both the package identifier and version are represented in the path. If the latter is feasible, it’s a great option, because it hides the actual dependee packages from the user, who really shouldn’t need to look at them anyway. Even if it’s not feasible to use the central location directly as compiler/interpreter inputs — probably the case for most languages — then do it anyway, and use it as a cache. Disk is cheap, and maybe you’ll find a way to use it directly later.
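The central-location scheme amounts to nothing more than a path convention. A minimal sketch (the layout and names here are illustrative, not any real tool’s):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// cachePath lays a fetched package out under a central store where both
// the package identifier and its version appear in the path, so distinct
// versions of the same package can never collide on disk.
func cachePath(root, id, version string) string {
	return filepath.Join(root, id, version)
}

func main() {
	fmt.Println(cachePath("/var/cache/pdm", "github.com/example/foo", "v1.2.3"))
}
```

Because identifier and version are both in the path, the store doubles as an immutable cache: a (package, version) pair either exists there already or it doesn’t, and nothing ever needs to be overwritten.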
If your PDM falls into the latter, can’t-store-centrally camp, you’ll have to encapsulate the dependency code somewhere else. Pretty much the only “somewhere else” that you can even hope to guarantee won’t be mucked with is under the project’s source tree. (Go’s new vendor directories satisfy this requirement nicely.) That means placing it in the scope of what’s managed by the project’s version control system, which immediately raises the question: should dependency sources be committed?
…probably not. There’s a persnickety technical argument against committing: if your PDM allows for conditional dependencies, then which conditional branch should be committed? “Uh, Sam, obv: just commit all of them and let the compiler use what it wants.” Well then, why’d you bother constructing parameterized dep graphs for the lock file in the first place? And, what if the different conditional branches need different versions of the same package? And…and…
See what I mean? Obnoxious. Someone could probably construct a comprehensive argument for never committing dep sources, but who cares? People will do it anyway. So, being that PDMs are an exercise in harm reduction, the right approach ensures that committing dep sources is a safe choice/mistake to make.
For PDMs that directly control source loading logic — generally, interpreted or JIT-compiled languages — pulling in a dependency with its deps committed isn’t a big deal: you can just write a loader that ignores those deps in favor of the ones your top-level project pulls together. However, if you’ve got a language, such as Go, where filesystem layout is the entire game, deps that commit their deps are a problem: they’ll override whatever reality you’re trying to create at the top-level.
For wisdom on this, let’s briefly turn to distributed systems:
A distributed system is one in which the failure of a computer you didn’t even know existed can render your own computer unusable.
- Leslie Lamport
If that sounds like hell — you’re right! Anything that doesn’t have to behave like a distributed system, shouldn’t. You can’t allow person A’s poor use of your PDM to prevent person B’s build from working. So, if language constraints leave you no other choice, the only recourse is to blow away the committed deps-of-deps when putting them into place on disk.
That’s all for this bit. Pretty simple, as promised.
The Dance of the Four States
OK, we’ve been through the details on each of the states and protocols. Now we can step back and assemble the big picture.
There’s a basic set of commands that most PDMs provide to users. Using the common/intuitive names, that list looks something like this:
- init: Create a manifest file, possibly populating it based on static analysis of the existing code.
- add: Add the named package[s] to the manifest.
- rm: Remove the named package[s] from the manifest. (Often omitted, because text editors exist).
- update: Update the pinned version of package[s] in the lock file to the latest available version allowed by the manifest.
- install: Fetch and place all dep sources listed in the lock file, first generating a lock file from the manifest if it does not exist.
Our four-states concept makes it easy to visualize how each of these commands interacts with the system. (For brevity, let’s also abbreviate our states to P[roject code], M[anifest], L[ock file], and D[ependency code]):
Cool. But there are obvious holes — do I really have to run two or three separate commands (add, update, install) to pull in packages? That’d be annoying. And indeed, composer’s require (their ‘add’ equivalent) also pushes through the manifest to update the lock, then fetches the source package and dumps it to disk.
npm, on the other hand, subsumes the ‘add’ behavior into their install command, requiring additional parameters to change either the manifest or the lock. (As I pointed out earlier, npm historically opted for duplicated deps/trees over graphs; this made it reasonable to focus primarily on output deps, rather than following the state flows I describe here.) So, npm requires extra user interaction to update all the states. Composer does it automatically, but the command’s help text still describes them as discrete steps.
I think there’s a better way.
A New-ish Idea: Map, Sync, Memo
Note: this is off the beaten path. As far as I know, no PDM is as aggressive as this approach. It’s something we may experiment with in Glide.
Perhaps it was clear from the outset, but part of my motivation for casting the PDM problem in terms of states and protocols was to create a sufficiently clean model that it would be possible to define the protocols as one-way transformation functions:
- f : P → M: To whatever extent static analysis can infer dependency identifiers or parameterization options from the project’s source code, this maps that information into the manifest. If no such static analysis is feasible, then this ‘function’ is really just manual work.
- f : M → L: Transforms the immediate, possibly-loosely-versioned dependencies listed in a project’s manifest into the complete reachable set of packages in the dependency graph, with each package pinned to a single, ideally immutable version.
- f : L → D: Transforms the lock file’s list of pinned packages into source code, arranged on disk in a way the compiler/interpreter expects.
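Sketching the M → L protocol as an actual function makes the one-way-transformation framing concrete. The types and the trivial resolver below are illustrative stand-ins, not any real PDM’s API; a real solver would walk the whole transitive graph rather than pinning only the immediate deps.

```go
package main

import "fmt"

// Hypothetical state types for two of the four states.
type Manifest struct{ Deps map[string]string } // name -> loose version constraint
type Lock struct{ Pinned map[string]string }   // name -> immutable revision

// manifestToLock stands in for the M -> L protocol: resolve each loose
// constraint to one pinned revision via an injected resolver.
func manifestToLock(m Manifest, resolve func(name, constraint string) string) Lock {
	l := Lock{Pinned: make(map[string]string)}
	for name, c := range m.Deps {
		l.Pinned[name] = resolve(name, c)
	}
	return l
}

func main() {
	m := Manifest{Deps: map[string]string{"github.com/example/foo": "^1.2.0"}}
	// A toy resolver that always picks the same revision.
	l := manifestToLock(m, func(name, c string) string { return "v1.2.3" })
	fmt.Println(l.Pinned["github.com/example/foo"])
}
```

The important property is directionality: the lock is purely a function of the manifest (plus the resolver’s view of the registry), never the other way around.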
If all we’re really doing is mapping inputs to outputs, then we can define each pair of states as being in sync if, given the existing input state, the mapping function produces the existing output state. Let’s visually represent a fully synchronized system like this:
Now, say the user manually edited the manifest, but hasn’t yet run a command to update the lock file. Until she does, M and L will be out of sync:
Similarly, if the user has just cloned a project with a committed manifest and lock file, then we know L and D will be out of sync simply because the latter doesn’t exist:
I could list more, but I think you get the idea. At any given time that any PDM command is executed, each of the manifest, lock, and deps can be in one of three states:
- does not exist
- exists, but desynced from predecessor
- exists and in sync with predecessor
Laying it all out like this brings an otherwise-subtle design principle to the fore: if these states and protocols can be so well-defined, then isn’t it kind of dumb for a PDM to ever leave the states desynced?
Why, yes. Yes it is.
No matter what command is executed, PDMs should strive to leave all the states — or at least the latter three — fully synchronized. Whatever the user tells the PDM to do, it does, then takes whatever steps are necessary to ensure the system is still aligned. Taking this ‘sync-based’ approach, our command diagram now looks like this:
Viewed in this way, the dominant PDM behavior becomes a sort of move/counter-move dance, where the main action of a command mutates one of the states, then the system reacts to stabilize itself.
Here’s a simple example: a user manually edits the manifest to remove a dependency, then `add`s a different package. In a PDM guided narrowly by user commands, it’s easy to imagine the manually-removed package not being reflected in the updated lock. In a sync-based PDM, the command just adds the new entry into the manifest, then relinquishes control to the sync system.
There are obvious user benefits here: no extra commands or parameters are needed to keep everything shipshape. The benefits to the PDM itself, though, may be less obvious: if some PDM commands intentionally end in a desync state, then the next command to run will encounter that desync and has to decide why it exists: did the previous command leave it that way, or does the user want it that way? It’s a surprisingly difficult question to answer — and often undecidable. But it never comes up in a sync-based PDM, which always attempts to move towards a sane final state.
Of course, there’s a (potential) drawback: it assumes the only valuable state is full synchronization. If the user actually does want a desync…well, that’s a problem. My sense, though, is that the desyncs users may think they want today are more about overcoming issues in their current PDM than solving a real domain problem. Or, to be blunt: sorry, but your PDM’s been gaslighting you. Kinda like your very own NoSQL Bane. In essence, my guess here is that a sync-based approach obviates the practical need for giving users more granular control.
There is (at least) one major practical challenge for sync-based PDMs, though: performance. If your lock file references thirty gzipped tarballs, ensuring full sync by re-dumping them all to disk on every command gets prohibitively slow. For a sync-based PDM to be feasible, it must have cheap routines to determine whether each state pair is in sync. Fortunately, since we’re taking care to define each sync process as a function with clear inputs, there’s a nice solution: memoization. In the sync process, the PDM should hash the inputs to each function and include that hash as part of the output. On subsequent runs, compare the new input hash to the old recorded one; if they’re the same, then assume the states are synced.
Now really, it’s just M → L and L → D that we care about here. The former is pretty easy: the inputs are the manifest’s dep list, plus parameters, and the hash can be tucked into the generated lock file.
The latter is a little uglier. Input calculation is still easy: hash the serialized graph contained in the lock file — or, if your system has parameterized deps, just the subgraph you’re actually using. The problem is, there’s no elegant place to store that hash. So, don’t bother trying: just write it to a specially-named file at the root of the deps subdirectory. (If you’re the lucky duck writing to a central location with identifier+version namespacing, you can skip all of this.)
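The memoization scheme is straightforward to sketch. The key detail is canonicalizing the inputs (here, by sorting keys) so that the same logical manifest always produces the same digest regardless of map iteration order; the function and field names are hypothetical.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
	"strings"
)

// inputHash memoizes a sync step: hash the step's inputs (here, the
// manifest's dep list) in a canonical order. Store the digest alongside
// the generated output; if it matches on the next run, skip the work.
func inputHash(deps map[string]string) string {
	keys := make([]string, 0, len(deps))
	for k := range deps {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	var b strings.Builder
	for _, k := range keys {
		fmt.Fprintf(&b, "%s@%s\n", k, deps[k])
	}
	sum := sha256.Sum256([]byte(b.String()))
	return hex.EncodeToString(sum[:])
}

func main() {
	h1 := inputHash(map[string]string{"foo": "^1.0", "bar": "~2.3"})
	h2 := inputHash(map[string]string{"bar": "~2.3", "foo": "^1.0"})
	fmt.Println(h1 == h2) // canonical ordering makes the hash stable
}
```

The same routine works for L → D: feed it the serialized lock graph instead of the dep list, and write the digest to that specially-named file under the deps directory.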
Neither of these approaches is foolproof, but I’m guessing they’ll hold up fine in practice.
Dénouement
Writing a general guide to all the considerations for a PDM is a daunting task. I suspect I’ve missed at least one major piece, and I’m sure I’ve neglected plenty of potentially important details. Comments and suggestions are welcome. I’d like this guide to be reasonably complete, as package management is both a de facto requirement for widespread language use today, and too wide-ranging of a problem to be adequately captured in organic, piecemeal mailing list discussions. I’ve seen a number of communities struggle to come to grips with the issue for exactly this reason.
That said, I do think I’ve brought the substantial aspects of the problem into view. In the final counting, though, the technical details may be less important than the broad organizing concepts: language package management is an exercise in harm reduction, performed by maintaining a balance between humans and machines/chaos and order/dev and ops/risk and safety/experimentation and reliability. Most importantly, it is about maintaining this balance over time.
Because management over time is a requirement, but it’s a categorically terrible idea for languages to try to manage their own source chronology as part of the source itself, it suggests that pretty much all languages could benefit from a PDM. Or rather, one PDM. A language community divided by mutually incompatible packaging tools is not a situation I’d walk into lightly.
OK general audience — shoo! Or not. Either way, it’s Go time!
A PDM for Go
Thus far, this article has conspicuously lacked any tl;dr. This part being more concrete, it gets one:
- A PDM for Go is doable, needn’t be that hard, and could even integrate nicely with `go get` in the near term
- A central Go package registry could provide lots of wins, but any immediate plan must work without one
- Monorepos can be great for internal use, and PDMs should work within them, but monorepos without a registry are harmful for open code sharing
- I have an action plan that will indubitably result in astounding victory and great rejoicing, but to learn about it you will need to read the bullet points at the end
Go has struggled with package management for years. We’ve had partial solutions, and no end of discussion, but nothing’s really nailed it. I believe that’s been the case because the whole problem has never been in view; certainly, that’s been a major challenge in at least the last two major discussions. Maybe, with the big picture laid out, things already seem a bit less murky.
First, let’s fully shift from the general case to Go specifically. These are the three main constraints that govern the choices we have available:
- GOPATH manipulation is a horrible, unsustainable strategy. We all know this. Fortunately, Go 1.6 adds support for the vendor directory, which opens the door to encapsulated builds and a properly project-oriented PDM.
- Go’s linker will allow only one package per unique import path. Import path rewriting can circumvent this, but Go also has package-level variables and init functions (aka, global state and potentially non-idempotent mutation of it). These two facts make npm-style, “broken diamond” package duplication an unsound strategy for handling shared deps.
- Without a central registry, repositories must continue acting as the unit of exchange. This intersects quite awkwardly with Go’s semantics around the directory/package relationship.
Also, in reading through many of the community’s discussions that have happened over the years, there are two ideas that seem to be frequently missed:
- Approaches to package management must be dual-natured: both inward-facing (when your project is at the ‘top level’ and consuming other deps), and outward-facing (when your project is a dep being consumed by another project).
- While there’s some appreciation of the need for harm reduction, too much focus has been on reducing harm through reproducible builds, and not enough on mitigating the risks and uncertainties developers grapple with in day-to-day work.
That said, I don’t think the situation is dire. Not at all, really. There are some issues to contend with, but there’s good news, too. Good stuff first!
Upgrading `go get`
As with so many other things, Go’s simple, orthogonal design makes the technical task of creating a PDM relatively easy (once the requirements are clear!). Even better, though, is that a transparent, backwards-compatible, incremental path to inclusion in the core toolchain exists. It’s not even hard to describe:
We can define a format for a lock file that conforms to the constraints outlined here, and teach `go get` to interact with it. The most important aspect of “conformance” is that the lock file be a transitively complete list of dependencies for both the package it cohabits a directory with, and all subpackages — main and not, test and not, as well as encompassing any build tag parameterizations. In other words, imagine the following repository layout:
.git            # so, this is the repo root
main.go         # a main package
cmd/
  bar/
    main.go     # another main package
foo/
  foo.go        # a non-main package
  foo_amd64.go  # arch-specific, may have extra deps
  foo_test.go   # tests may have extra deps
LOCKFILE        # the lockfile, derp
vendor/         # `go get` puts all (LOCKFILE) deps here
That LOCKFILE is correct iff it contains immutable revisions for all the transitive dependencies of all three (<root>, foo, bar) packages. The algorithm `go get` follows, then, is:
- Walk back up from the specified subpath (e.g. `go get <repoaddr>/cmd/bar`) until a LOCKFILE is found or repository root is reached
- If a LOCKFILE is found, dump deps into adjacent vendor dir
- If no LOCKFILE is found, fall back to the historical GOPATH/default branch tip-based behavior
This works because it parallels how the vendor directory itself works: for anything in the directory tree rooted at vendor’s parent (here, the repo root), the compiler will search that vendor directory before searching GOPATH.
I put the lock at the root of the repository in this example, but it doesn’t necessarily have to be there. The same algorithm would work just fine within a subdirectory, e.g.:
.git
README.md
src/
  LOCKFILE
  main.go
  foo/
    foo.go
Not exactly best practices, but a `go get <repoaddr>/src` will still work fine with the above algorithm.
This approach can even work in some hinky, why-would-you-do-that situations:
.git
foo.go
bar/
  bar.go
cmd/
  LOCKFILE
  main.go
If the main package imports both foo and bar, then putting the vendor dir adjacent to the LOCKFILE under cmd/ means foo and bar won’t be encapsulated by it, and the compiler will fall back to importing from the GOPATH. I’ve found a little trick, though, and it seems like this problem can be addressed by putting a copy of the repository under its own vendor directory:
$GOPATH/github.com/sdboyer/example/
  .git
  foo.go
  bar/
    bar.go
  cmd/
    LOCKFILE
    main.go
    vendor/github.com/sdboyer/example/
      foo.go
      bar/
        bar.go
      cmd/        # The PDM could omit from here on down
        LOCKFILE
        main.go
A bit weird, and fugly-painful for development, but it solves the problem in a fully automatable fashion.
Of course, just because we can accommodate this situation, doesn’t mean we should. And that, truly, is the big question: where should lock files, and their co-located vendor directories, go?
If you’re just thinking about satisfying particular use cases, this is a spot where it’s easy to get stuck in the mud. But if we go back to design concept that PDMs must settle on a fundamental “unit of exchange,” the answer is clearer. The pair of manifest and lock file represent the boundaries of a project ‘unit.’ While they have no necessary relationship with the project source code, given that we’re dealing with filesystems, things tend to work best when they’re at the root of the source tree their ‘unit’ encompasses.
This doesn’t require that they’re at the root of a repository — a subdirectory is fine, so long as they’re parent to all the Go packages they describe. The real problem arises when there’s a desire to have multiple trees, or subtrees, under a different lock (and manifest). I’ve pointed out a couple times how having multiple package/units share the same repository isn’t feasible if the repository is relied on to provide temporal information. It’s absolutely true, but in a Go context, it’s not helpful just to assert “don’t do it.”
Instead, we need to draw a line between best practices for publishing a project, versus what’s workable when you’re building out, say, a company monorepo. Given the pervasive monorepo at Go’s Googlian birthing grounds, and the gravitational pull towards monorepos on how code has been published over the last several years, drawing these lines is particularly important.
Sharing, Monorepos, and The Fewest “Don’t Do It”s I can manage
A lot of folks who’ve come up through a FLOSS environment look at monorepos and say, “eeeww, why would you ever do that?” But, as with many choices made by Billion Dollar Tech Companies, there are significant benefits. Among other things, they’re a mitigation strategy against some of those uncertainties in the FLOSS ecosystem. For one, with a monorepo, it’s possible to know what “all the code” is. As such, with help from comprehensive build systems, it’s possible…theoretically…for individual authors to take systemic responsibility for the downstream effects of their changes.
Of course, in a monorepo, all the code is in one repository. By collapsing all code into a monorepo, subcomponents are subsumed into the timeline of the monorepo’s universe, and thus lack their own temporal information — aka, meaningful versioning. And this is fine…as long as you’re inside the same universe, on the same timeline. But, as I hope is clear by now, sane versioning is essential for those outside the monorepo — those of us who are stitching together a universe on the fly. Interestingly, that also includes folks who are stitching together other monorepos.
Now, if Go had a central registry, we could borrow from Rust’s ‘path dependencies’ and work out a way of providing an independently-versioned unit of exchange composed from a repository’s subpackages/subtrees. If you can get past the git/hg/bzr induced Stockholm-thinking that “small repos are a Good Thing™ because Unix philosophy”, then you’ll see there’s a lot that’s good about this. You show me a person who likes maintaining a ton of semi-related repositories, and I’ll show you a masochist, a robot, or a liar.
But Go doesn’t have a central registry. Maybe we can do that later.. Let’s call the former “open,” and the latter “closed.”
- Open repositories are safe and sane to pull in as a dependency; closed repositories are not. (Just reiterating.)
- Open repositories should have one manifest, one lock file, and one vendor directory, ideally at the repository root.
- Open repositories should always commit their manifest and lock, but not their vendor directory.
- Closed repositories can have multiple manifest/lock pairs, but it’ll be on you to arrange them sanely.
- Closed repositories can safely commit their vendor directories.
- Open or closed, you should never directly modify upstream code in a vendor directory. If you need to, fork it, edit, and pull in (or alias) your fork.
While I think this is a good goal, a PDM will also need to be pragmatic: given the prevalence of ‘closed’-type monorepos in the public Go ecosystem (if nothing else, golang.org/x/*), there needs to be some support for pulling in packages from the same repository at different versions. And that’s doable. But for a sane, version-based ecosystem to evolve, the community must either move away from sharing closed/monorepos, or build a registry. All the possible compromises I see there throw versioning under the bus, and that’s a non-starter.
Semantic Versioning, and an Action Plan
Last fall, Dave Cheney made a valiant effort towards achieving some traction on semantic versioning adoption. In my estimation, the discussion made important inroads toward establishing, within the community, the importance of using a versioning system. Unfortunately (though probably not incorrectly, as it lacked concrete outcomes) the formal proposal failed.
Fortunately, I don’t think formal adoption is really necessary for progress. Adopting semantic versioning is just one of the fronts on which the Go community must advance in order to address these issues. Here’s a rough action plan:
- Interested folks should come together to create and publish a general recommendation on what types of changes necessitate what level of semver version bumps.
- These same interested folks could come together to write a tool that does basic static analysis of a Go project repository to surface relevant facts that might help in deciding on what semver number to initially apply. (And then…post an issue automatically with that information!)
- If we’re feeling really frisky, we could also throw together a website to track the progress towards the semver-ification of Go-dom, as facilitated by the analyze-and-post tool. Scoreboards, while dumb, help spur collective action!
- At the same time, we can pursue a simplest-possible case — defining a lock file, for the repository root only, that `go get` can read and transparently use, if it’s available.
- As semver’s adoption spreads, the various community tools can experiment with different workflows and approaches to the monorepo/versioning problems, but all still conform to the same lock file.
Package management in Go doesn’t have to be an intractable problem. It just tends to seem that way if approached too narrowly. While there are things I haven’t touched on — versioning of compiled objects, static analysis to reduce depgraph bloat, local path overriding for developer convenience — I think I’ve generally covered the field. If we can make active progress towards these goals, then we could be looking at a dramatically different Go ecosystem by the end of the year — or even by GopherCon!
Go is so ripe for this, it’s almost painful. The simplicity, the amenability to fast static analysis…a Go PDM could easily be a shining example among its peers. I have no difficulty picturing gofmt/goimports additions that write directly to the relevant manifest, or even extend its local code-search capabilities to incorporate a central registry (should we build one) that can automagically search for and grab non-local packages. These things, and many more, are all so very close. All that’s lacking is the will.
If you’re interested in working on any of the above, please feel free to reach out to me. I’m sure we can also discuss on the Go PM mailing list.
Thanks to Matt Butcher, Sam Richard, Howard Tyson, Kris Brandow, and Matt Farina for comments, discussion, and suggestions on this article, and Cayte Boyer for her patience throughout.
https://medium.com/@sdboyer/so-you-want-to-write-a-package-manager-4ae9c17d9527?utm_source=golangweekly&utm_medium=email
Walter Hafner (hafner@informatik.tu-muenchen.de)
Fri, 12 Feb 1999 16:50:18 +0100 (MET)
Hi!
First of all: I just installed 3.1.0 (final version) on my box (FreeBSD
2.2.8 STABLE, P-II/300, 128 MB) and it seems to run flawlessly. It's
still in the indexing stage, but this works ok.
However, I'm still struggling with my ht://Dig setup ...
- more than 300 machines with WWW servers
- more than 200.000 physical pages
- The majority of the servers have one or more aliases.
Several of the machines listen to as many as 4 names and the WWW server
responds to each of them! Even worse, absolute URLs in these servers
point from one alias to a different one _and_ each server contains
documents with relative links. So I end up, indexing servers with
several hundred documents up to four times.
This kind of environment brings ht://Dig to its knees. Disabling virtual
hosts would be a solution, unfortunately I have lots of _real_ virtual
hosts to index. So I just _have_ to "allow_virtual_hosts" which in turn
results in lots of unnecessary queries.
The "server_aliases" option is _not_ what I want. It is impossible to
maintain such a big namespace manually.
Last time I counted, ht://Dig reported 450 servers, despite a _long_
list of mappings in the "server_aliases" section of my configuration.
Some time ago I wrote about Netscape Compass Server and the way it deals
with server aliases: If two pages with different URLs share the same IP
number, it checks the contents of the pages. If they are the same,
Compass assumes it to be a server alias.
Is such a behaviour in the queue for ht://Dig? If yes, when will it be
available (I know, I know ...)? Unfortunately I don't know C++ at all...
Today the following idea occured to me: What if I'd install a Squid
proxy and configure ht://Dig for proxy usage. Since squid is written in
C (hehe), I could patch it to return "301 Moved Permanently" for certain
URLs. This way I could do pretty much anything I want. Well, I have to
code it of course, but I see no problem there.
Question: How does ht://Dig react to a 301? Does it discard the URL? Does
it follow the new URL?
Thanks,
-Walter
--
http://www.htdig.org/mail/1999/02/0120.html
DeltaTree - a multiway search tree (BTree) structure with some fancy features. More...
#include "clang/Rewrite/Core/DeltaTree.h"
DeltaTree - a multiway search tree (BTree) structure with some fancy features.
B-Trees are generally more memory and cache efficient than binary trees, because they store multiple keys/values in each node. This implements a key/value mapping from index to delta, and allows fast lookup on index. However, an added (important) bonus is that it can also efficiently tell us the full accumulated delta for a specific file offset as well, without traversing the whole tree.
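As a sketch of the same semantics (ignoring the B-tree machinery with per-node running totals that makes clang’s version efficient), the offset → delta mapping plus the accumulated-delta query might look like the toy version below. It uses a sorted slice and a linear prefix sum, so insert and query are O(n), but it shows what AddDelta and getDeltaAt compute.

```go
package main

import (
	"fmt"
	"sort"
)

// deltaMap is a toy stand-in for clang's DeltaTree: a mapping from file
// offset to delta, plus an accumulated-delta query for any offset.
type deltaMap struct {
	offsets []int
	deltas  []int
}

// addDelta records that 'delta' bytes were inserted (positive) or
// deleted (negative) at 'offset', merging with an existing entry.
func (d *deltaMap) addDelta(offset, delta int) {
	i := sort.SearchInts(d.offsets, offset)
	if i < len(d.offsets) && d.offsets[i] == offset {
		d.deltas[i] += delta
		return
	}
	d.offsets = append(d.offsets, 0)
	copy(d.offsets[i+1:], d.offsets[i:])
	d.offsets[i] = offset
	d.deltas = append(d.deltas, 0)
	copy(d.deltas[i+1:], d.deltas[i:])
	d.deltas[i] = delta
}

// deltaAt sums all deltas recorded at offsets up to fileIndex.
func (d *deltaMap) deltaAt(fileIndex int) int {
	sum := 0
	for i, off := range d.offsets {
		if off > fileIndex {
			break
		}
		sum += d.deltas[i]
	}
	return sum
}

func main() {
	var d deltaMap
	d.addDelta(10, 5)  // inserted 5 bytes at offset 10
	d.addDelta(30, -2) // deleted 2 bytes at offset 30
	fmt.Println(d.deltaAt(20), d.deltaAt(40))
}
```

The B-tree structure replaces the linear scan with a logarithmic walk, which is the whole point of DeltaTree when files accumulate many edits.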
Definition at line 25 of file DeltaTree.h.
Definition at line 390 of file DeltaTree.cpp.
Definition at line 394 of file DeltaTree.cpp.
Definition at line 401 of file DeltaTree.cpp.
AddDelta - When a change is made that shifts around the text buffer, this method is used to record that info.
It inserts a delta of 'Delta' into the current DeltaTree at offset FileIndex.
Definition at line 455 of file DeltaTree.cpp.
getDeltaAt - Return the accumulated delta at the specified file offset.
This includes all insertions or deletions that occurred before the specified file index.
Definition at line 408 of file DeltaTree.cpp.
References getRoot(), and Node.
https://clang.llvm.org/doxygen/classclang_1_1DeltaTree.html
Introduction:
Here I will explain how to group columns in asp.net gridview header row in C# and VB.NET or show multiple columns under single column in asp.net gridview.
Description:
In previous articles I explained Show Gridview row details in tooltip, Show tooltip for gridview header columns, jQuery change tooltip style in asp.net, Add tooltip to dropdownlist items in asp.net, Create simple tooltip with jQuery UI plugin, Asp.net Interview questions, Highlight Gridview records based on search and many articles relating to Gridview, SQL, jQuery, asp.net, C#, VB.NET. Now I will explain how to group columns in gridview in asp.net with C# and VB.NET.
Now in code behind add the following namespaces
C# Code
After that add below code in code behind
VB.NET Code
Demo
8 comments :
nice code, i have an issue. when i export grid to excel the headers not displayed like age group and no.of employees. how to do, give the suggestions,
Thanx. Easy and simple code.
Suresh I have a girdview from the database FORM. It have 8 columns such as 1.year 2.dept 3.name 4.come_time 5.explain 6.encourage 7.discipline 8.care
The 4 to 8 columns have some integer values. So now I need to show the percentage of the 4 to 8 columns in the footer of gridview.Please help me @@@@@
Thank you very much!! Simple and straight forward.
Please help i am getting compilation errors :(
how to use loop for loop if we group-columns-in-gridview-header-row
how to use for loop if we group-columns-in-gridview-header-row
how to use for loop if we Group Columns in Asp.net Gridview Header Row & Display Multiple Columns Under Single Column
To save grid data in db
http://www.aspdotnet-suresh.com/2013/09/group-columns-in-gridview-header-row-aspnet.html
Read the next entry from the user-information file
#include <utmp.h>

struct utmp * getutent( void );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The getutent() function reads in the next entry from a user-information file. If the file isn't already open, getutent() opens it. If the function reaches the end of the file, it fails.
A pointer to a utmp structure for the next entry, or NULL if the file couldn't be read or the end of file was reached.
See also:
getutid(), getutline(), pututline(), setutent(), utmp, utmpname()
login in the Utilities Reference
http://www.qnx.com/developers/docs/6.3.0SP3/neutrino/lib_ref/g/getutent.html
/* "screen.h" */

#ifdef NETHACK
extern int nethackflag;
#endif

struct nlstrans {
    char *from;
    char *to;
};

#ifdef NETHACK
static struct nlstrans nethacktrans[] = {
    {"Cannot lock terminal - fork failed",
     "Cannot fork terminal - lock failed"},
    {"Got only %d bytes from %s",
     "You choke on your food: %d bytes from %s"},
    {"Copy mode - Column %d Line %d(+%d) (%d,%d)",
     "Welcome to hacker's treasure zoo - Column %d Line %d(+%d) (%d,%d)"},
    {"First mark set - Column %d Line %d",
     "You drop a magic marker - Column %d Line %d"},
    {"Copy mode aborted",
     "You escaped the dungeon."},
    {"Filter removed.",
     "You have a sad feeling for a moment..."},
    {"Window %d (%s) killed.",
     "You destroy poor window %d (%s)."},
    {"Window %d (%s) is now being monitored for all activity.",
     "You feel like someone is watching you..."},
    {"Window %d (%s) is no longer being monitored for activity.",
     "You no longer sense the watcher's presence."},
    {"empty buffer",
     "Nothing happens."},
    {"switched to audible bell.",
     "Suddenly you can't see your bell!"},
    {"switched to visual bell.",
     "Your bell is no longer invisible."},
    {"The window is now being monitored for %d sec. silence.",
     "You feel like someone is waiting for %d sec. silence..."},
    {"The window is no longer being monitored for silence.",
     "You no longer sense the watcher's silence."},
    {"No other window.",
     "You cannot escape from window %d!"},
    {"Logfile \"%s\" closed.",
     "You put away your scroll of logging named \"%s\"."},
    {"Error opening logfile \"%s\"",
     "You don't seem to have a scroll of logging named \"%s\"."},
    {"Creating logfile \"%s\".",
     "You start writing on your scroll of logging named \"%s\"."},
    {"Appending to logfile \"%s\".",
     "You add to your scroll of logging named \"%s\"."},
    {"Detach aborted.",
     "The blast of disintegration whizzes by you!"},
    {"Empty register.",
     "Nothing happens."},
    {"[ Passwords don't match - checking turned off ]",
     "[ Passwords don't match - your armor crumbles away ]"},
    {"Aborted because of window size change.",
     "KAABLAMM!!!\nYou triggered a land mine!"},
    {"Out of memory.",
     "Who was that Maude person anyway?"},
    {"getpwuid() can't identify your account!",
     "An alarm sounds through the dungeon...\nThe Keystone Kops are after you!"},
    {"Must be connected to a terminal.",
     "You must play from a terminal."},
    {"No Sockets found in %s.\n",
     "This room is empty (%s).\n"},
    {"New screen...",
     "Be careful! New screen tonight."},
    {"Child has been stopped, restarting.",
     "You regain consciousness."},
    {"There are screens on:",
     "Your inventory:"},
    {"There is a screen on:",
     "Your inventory:"},
    {"There are several screens on:",
     "Prove thyself worthy or perish:"},
    {"There is a suitable screen on:",
     "You see here a good looking screen:"},
    {"There are several suitable screens on:",
     "You may wish for a screen, what do you want?"},
    {"%d socket%s wiped out.",
     "You hear %d distant explosion%s."},
    {"Remove dead screens with 'screen -wipe'.",
     "The dead screen%s touch%s you. Try 'screen -wipe'."},
    {"Illegal reattach attempt from terminal %s.",
     "'%s' tries to touch your session, but fails."},
    {"Could not write %s",
     "%s is too hard to dig in"},
    {0, 0}
};
#endif

char *
DoNLS(from)
char *from;
{
#ifdef NETHACK
    struct nlstrans *t;

    if (nethackflag) {
        for (t = nethacktrans; t->from; t++)
            if (strcmp(from, t->from) == 0)
                return t->to;
    }
#endif
    return from;
}
http://opensource.apple.com/source/screen/screen-19/screen/nethack.c
xcb_create_pixmap man page
xcb_create_pixmap — Creates a pixmap
Synopsis
#include <xcb/xproto.h>
Request function
xcb_void_cookie_t xcb_create_pixmap(xcb_connection_t *conn,
                                    uint8_t depth,
                                    xcb_pixmap_t pid,
                                    xcb_drawable_t drawable,
                                    uint16_t width,
                                    uint16_t height);
Request Arguments
- conn
The XCB connection to X11.
- depth
TODO
- pid
The ID with which you will refer to the new pixmap, created by xcb_generate_id.
- drawable
Drawable to get the screen from.
- width
The width of the new pixmap.
- height
The height of the new pixmap.
Description
Creates a pixmap. The pixmap can only be used on the same screen as drawable is on and only with drawables of the same depth.
Return Value
Returns an xcb_void_cookie_t. Errors (if any) have to be handled in the event loop.
If you want to handle errors directly with xcb_request_check instead, use xcb_create_pixmap_checked. See xcb-requests(3) for details.
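For illustration (not part of the original page), a minimal caller might look like the sketch below. It assumes a running X server and libxcb (link with -lxcb), creates a pixmap at the root window's depth, and uses the _checked variant so the error can be inspected with xcb_request_check:

```
/* Hypothetical sketch: create a 100x100 pixmap on the first screen. */
#include <stdio.h>
#include <stdlib.h>
#include <xcb/xcb.h>

int main(void) {
    xcb_connection_t *conn = xcb_connect(NULL, NULL);
    if (xcb_connection_has_error(conn)) {
        fprintf(stderr, "cannot connect to X server\n");
        return 1;
    }
    const xcb_setup_t *setup = xcb_get_setup(conn);
    xcb_screen_t *screen = xcb_setup_roots_iterator(setup).data;

    xcb_pixmap_t pid = xcb_generate_id(conn);
    xcb_void_cookie_t cookie = xcb_create_pixmap_checked(
        conn, screen->root_depth, pid, screen->root, 100, 100);

    xcb_generic_error_t *err = xcb_request_check(conn, cookie);
    if (err) {
        fprintf(stderr, "create_pixmap failed: error %d\n", err->error_code);
        free(err);
    } else {
        puts("pixmap created");
        xcb_free_pixmap(conn, pid);
    }
    xcb_disconnect(conn);
    return 0;
}
```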
Errors
- xcb_alloc_error_t
The X server could not allocate the requested resources (no memory?).
- xcb_drawable_error_t
The specified drawable (Window or Pixmap) does not exist.
- xcb_value_error_t
TODO: reasons?
See Also
xcb-requests(3), xcb_generate_id(3)
Author
Generated from xproto.xml. Contact xcb@lists.freedesktop.org for corrections and improvements.
Referenced By
The man page xcb_create_pixmap_checked(3) is an alias of xcb_create_pixmap(3).
https://www.mankier.com/3/xcb_create_pixmap
Nuxt.js builds on Vue.js and Node.js, and creates an excellent developer experience. I think you'll find that the real value is in all the time you can save during front-end implementation. The documentation is excellent, making it easy to understand and maintain a modular implementation. Interested in learning about Nuxt.js? In this Nuxt.js tutorial, we'll provide a very basic introduction to what it can do, through a project that will consume a blog post API endpoint and then display it.
Step 1: Create the project with CLI
To get started, you'll need Node.js version 10 or later installed on your machine. Using the Vue.js devtools for Chrome will also help. Create a directory where the project is going to live and then, using the CLI, add:
$sudo npm i -g create-nuxt-app@2.1.1
(to avoid any conflict use 2.1.1 for now)
$create-nuxt-app gorilla-news
(you can choose the project name you want)
For options:
• no server and UI framework are required
• rendering mode is Single Page App, yes for axios and prettier, no for eslint, and npm for package manager
From inside your project directory, run:
$npm run dev
You will see an initial page with the Nuxt.js logo.
Step 2: Add Vue-Material
Vue Material helps us to make our app look good and save time when we add styles, so let’s install it:
$npm i vue-material
Next, go to the plugins folder, create a new file vue-material.js, and add the following lines to it:
import Vue from 'vue';
import VueMaterial from 'vue-material';
Vue.use(VueMaterial); // to register globally
We need to let Nuxt.js know about this plugin, so go to the nuxt.config.js file and add "vue-material" inside the plugins array, like this:
{ src: '~/plugins/vue-material'}
We also need to add this inside the css array:
{ src: 'vue-material/dist/vue-material.min.css', lang:'css' }
For our project example, we’ll use icons, so we’ll add the Material Icons library (same nuxt.config.js file) inside the link array, in the head section (remember to add commas to separate array elements):
{ rel: 'stylesheet', href: '//fonts.googleapis.com/css?family=Roboto:400,500,700,400italic|Material+Icons'}
Now, what do we need to generate a theme? Sass should be a good start, so let’s install it:
$npm i -D node-sass sass-loader
Next, we need to create a file called theme.scss inside the assets directory. This will import the material engine and also include the register for our new theme.
There are some configuration options for the theme, and you can choose the ones that you like. Go to Vue Material's Themes > Configuration page and copy the first piece of code inside the theme.scss file. Remember to add that Sass file as a new CSS reference in the css array, like this:
{ src: '~/assets/theme.scss', lang:'scss' }
Step 3: Consume the API
Now we're ready to consume any blog post API to display it. In this example, we'll consume the official blog post feed from Gorilla Logic: (but of course you can use any API you want; the only requirement for our purposes is that it contains image URLs). Another thing to consider is an API key/credentials; in our example, we don't need one. To consume the API, we are going to use one of the most common plugins in Vue.js: Axios. Please create an axios.js file inside the plugins folder. This is an important step of this Nuxt.js tutorial!
In order to consume the blog post from the API, you must use the asyncData function, return an array with the data, and display each element using a v-for loop. The code looks something like:
(Reference:)
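The snippet referenced above was lost from the page. Under stated assumptions (a posts feed returning id/title/author/image fields, and Nuxt's injected $axios module), the asyncData pattern can be sketched as follows, with the HTTP call stubbed so the shape is visible without a live API:

```javascript
// Hypothetical sketch of the asyncData pattern described above.
// `$axios` is stubbed so the returned shape can be shown without a
// live API; in a real page Nuxt injects the axios module itself.
const mock$axios = {
  async $get(url) {
    // stand-in for the blog-post feed; a real call would hit `/api/...`
    return [
      { id: 1, title: 'First post', author: 'Ana', image: '/img/1.jpg' },
      { id: 2, title: 'Second post', author: 'Luis', image: '/img/2.jpg' },
    ];
  },
};

// Shape of the page component's asyncData hook (pages/index.vue):
// whatever object it returns is merged into the component's data and
// can be iterated with v-for="post in posts".
async function asyncData({ $axios }) {
  const posts = await $axios.$get('/api/posts');
  return { posts };
}

asyncData({ $axios: mock$axios }).then(({ posts }) => {
  console.log(posts.length); // prints 2 with the stub above
});
```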
Now in the CLI run:
$npm run dev
(in case it wasn’t up)
And the returned data might look like:
That’s the basic idea: consume and display data using axios inside the main index.vue page.
Let's add the proxy module, because it will help us manage the API endpoints in a more elegant way. For example, instead of always copying the entire URL, we would just add "/api/".
In the CLI, run:
$npm i @nuxtjs/proxy
Add a reference from this newly installed module in the main config file (nuxt.config.js), inside the modules array.
Also add a “proxy:true, credentials: false” inside the axios config array.
Next, create a new separate “proxy” config array. It should look something like this:
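The tutorial's own snippet was lost here, so the pieces of nuxt.config.js described in the last three steps might look like the sketch below. The target URL and path prefix are illustrative placeholders, not the tutorial's exact values; substitute your real feed.

```javascript
// nuxt.config.js (fragment) -- module list and proxy target are
// illustrative, not the tutorial's exact values
export default {
  modules: ['@nuxtjs/axios', '@nuxtjs/proxy'],

  axios: {
    proxy: true,       // route axios calls through the proxy module
    credentials: false,
  },

  proxy: {
    // any request beginning with /api/ is forwarded to the base API URL
    '/api/': {
      target: 'https://example.com/blog-feed/', // hypothetical target
      pathRewrite: { '^/api/': '' },
    },
  },
};
```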
The idea is to manage the “Base API URL” as a target; then, at the time that we call “api” in the pages, Nuxt will know that this is a direct reference to our main API endpoint.
Step 4: Adding layout
You have made it to the final step of this Nuxt.js tutorial! Inside the main index.vue page, we need to add a container for every post. To do that, we can use the material layouts styles. Keep the original v-for statement and add “md-” styles (check Vue material) so that we can make sure that the app displays well on mobile.
For our example:
• Each element must display image, title, and some icons
• Everything should be inside the md-card tag, and inside of that, the tags: md-card-media, md-card-header, md-card-content will create the cards for each blog post
Basic HTML is added internally with the title, author name, and a few icons.
Important: Keep in mind that you should use the new “post” object correctly, calling the data fields such as title, author, and image source property. You can add id if your API endpoint has it. The card code looks something like this:
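Since the card listing itself was also lost, a minimal sketch using the md-card tags named above might look like this (field names such as title, author, and image depend on your API's response shape and are assumptions here):

```
<!-- Hypothetical card sketch; adjust bindings to your API fields -->
<md-card v-for="post in posts" :key="post.id" md-with-hover>
  <md-card-media>
    <img :src="post.image" :alt="post.title" />
  </md-card-media>
  <md-card-header>
    <div class="md-title">{{ post.title }}</div>
    <div class="md-subhead">{{ post.author }}</div>
  </md-card-header>
  <md-card-content>
    <md-icon>favorite</md-icon>
    <md-icon>share</md-icon>
  </md-card-content>
</md-card>
```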
To recap, we now have an index page where we can see the list of blog posts. We’re getting the information from the API using only the basic setup of axios.
After adding some material tags and classes, we should be able to see the cards like this:
Wrapping up the first part of our Nuxt.js tutorial
And there you have it–we’ve learned from our simple Nuxt.js tutorial how to:
• Create a basic setup from Nuxt.js using CLI
• Create a theme
• Configure a layout setup using Vue-Material
• Understand how Nuxt.js manages routes
• Manage an API endpoint call using Axios
If you spend a few hours exploring, you’ll quickly get an idea of how this framework works and how it can help in your upcoming projects.
In the second part of this Nuxt.js tutorial, we’ll create basic register functionality and connect it to Firebase, including how to:
• Create a main nav for our app
• Create a new “register” page
• Use the email/password API functionality from Firebase and connect our register with it
• Display an avatar as soon as any user is registered by checking the store values
https://gorillalogic.com/blog/how-to-build-a-real-world-app-a-nuxt-js-tutorial-part-1/
I need some vars to be included from the server. How can I get external contents into the entire programming area?
For example, I want to import an updated var name from the server; can I import it with an AJAX-like command?
How can I use AJAX?
For tips, questions and general information about writing Add-ins, how to package them, and how to submit them to EViews for publication.
Moderators: EViews Gareth, EViews Moderator
2 posts • Page 1 of 1
- EViews Developer
- Posts: 530
- Joined: Tue Sep 16, 2008 3:00 pm
- Location: Irvine, CA
Re: How can I use AJAX?
We currently don't have any way for you to programmatically access a web service from EViews code. However, if the data from your server is available as a simple HTML table format or as a file, you can use HTTP URLs in both WFOPEN and IMPORT commands to get the data into EViews.
Steve
http://forums.eviews.com/viewtopic.php?f=26&t=12830&p=44701
40. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - chris.campbell
May 25, 2011 1:24 PM (in response to canaca)
For those running into this error, could you please add a new bug to bugbase.adobe.com and post back with a link so that others can cast their votes? In the bug, please include sample source code and a full description, including a link to this thread.
Thanks,
Chris
41. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - penguan, May 26, 2011 5:44 AM (in response to chris.campbell)
Bug created:
Please vote on this bug if you experience the same!
42. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - jamesfin, May 27, 2011 8:00 AM (in response to ShinyaArao)
I had a similar issue on iOS and determined that our web server certificate wasn't valid. After updating the certificate, the 2032 error no longer happens.
43. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - ertr2, May 30, 2011 6:26 AM (in response to ShinyaArao)
Um....The point of My Question is 'Not working URLLoader in Local Network'...
For example, when the url is , URLLoader works fine in AIR 2.5,
but when the url is , URLLoader does not work on this platform...
The important point is that this problem appears in iOS and Android development.
Finally I solved this problem by not using the local network (......) and using an external network (existing server) instead.
URLLoader does not work with localhost or an internal network!!
44. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - chris_emerson, Nov 3, 2011 10:32 AM (in response to ShinyaArao)
I have a traditional URLLoader/URLRequest setup that loads an external XML file.
It works flawlessly in the AIR Launcher ... but when I try on my device it fails... giving me this message over the remote debug session:
"...text=Error #2032: Stream Error. URL: app:/assets/xml/projects.xml"
I've even tried changing my code to use the File.applicationDirectory approach. But once again,... it only works in the AIR Launcher... not on the device!
What am I missing? Anyone have a clue? Thanks in advance for anyone who can help or hint!
45. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - chris.campbell
Dec 1, 2011 11:31 AM (in response to chris_emerson)
Hi Chris,
If you're still having an issue with this, could you post your code so I can take a look and try it out?
Thanks,
Chris
46. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - chrispeck303, Dec 6, 2011 3:48 PM (in response to ShinyaArao)
Hi
Another Chris to add to the mix.
I'm getting the same error. I'm trying to convert a rather large web project to an AIR for Mobile project in Flash Builder 4.5.
I have AIR SDK 3.0 and Flex 4.5.1, and whilst my URL requests work fine running via the desktop, it encounters IOError 2032 when running on the device.
Does anyone have a fix?
All the best
ChrisP.
47. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - chris.campbell
Dec 7, 2011 12:54 PM (in response to chrispeck303)
Chris,
Is this happening on iOS or Android?
Chris
48. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - chrispeck303, Dec 7, 2011 2:38 PM (in response to chris.campbell)
Hi Chris
I'm using a Galaxy S2 running Android 2.3.3 for device debugging.
As far as the code is concerned there isn't much to it when it finally sends the request.
public function send (params:URLVariables, method:String="GET") : void
{
var req:URLRequest = new URLRequest(url);
req.method = method;
req.data = params;
addToQueue(req);
}
then when dequeued
urlLoader.load(req);
My current theory is that it encounters an invalid SSL Certificate, though I would have thought android would spit out a warning. My system's guys are investigating that avenue. Otherwise, I'm a bit stuck.
Any advice would be appreciated.
All the best
Chris.
49. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - chrispeck303, Dec 7, 2011 3:23 PM (in response to chrispeck303)
Hi Chris
Don't worry about this one. My bad. There was a firewall issue blocking requests to our dev servers from the phone.
Thanks for your time all the same.
ChrisP.
50. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - chris.campbell
Dec 7, 2011 3:27 PM (in response to chrispeck303)
Hi Chris,
Np, and thanks for the update. Glad you got it working.
Chris
51. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - tolga_erdogus, Jan 5, 2012 4:54 PM (in response to chris.campbell)
Chris Campbell,
from my post at, here is the code that reproduces this bug with Flash Builder 4.6/AIR 3.1 (essentially, if you load any valid https url into URLLoader in Flash Builder it works, however in the Android emulator (I don't have a physical device) it gives an IOError 2032). I see a certificate error in adb logcat but am not sure if it is the cause:
<?xml version="1.0" encoding="utf-8"?>
<s:View xmlns:fx=""
xmlns:s="library://ns.adobe.com/flex/spark"
xmlns:mx="library://ns.adobe.com/flex/mx"
xmlns:ns1="*"
xmlns:
<fx:Script>
<![CDATA[
import mx.events.FlexEvent;
protected var requestTokenUrl:String = "";
protected function windowedapplication1_creationCompleteHandler(event:FlexEvent):void
{
var loader:URLLoader = new URLLoader();
loader.addEventListener(ErrorEvent.ERROR, onError);
loader.addEventListener(AsyncErrorEvent.ASYNC_ERROR, onAsyncError);
loader.addEventListener(SecurityErrorEvent.SECURITY_ERROR, securityErrorHandler);
loader.addEventListener(HTTPStatusEvent.HTTP_RESPONSE_STATUS, httpResponseStatusHandler);
loader.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler);
var urlRequest:URLRequest = new URLRequest(requestTokenUrl);
loader.load(urlRequest);
}
protected function requestTokenHandler(event:Event):void
{
}
protected function httpResponse(event:HTTPStatusEvent):void
{
label.text += event.status;
// TODO Auto-generated method stub
}
private function completeHandler(event:Event):void {
label.text += event.toString();
trace("completeHandler data: " + event.currentTarget.data);
}
private function openHandler(event:Event):void {
label.text += event.toString();
trace("openHandler: " + event);
}
private function onError(event:ErrorEvent):void {
label.text += event.toString();
trace("onError: " + event.type);
}
private function onAsyncError(event:AsyncErrorEvent):void {
label.text += event.toString();
trace("onAsyncError: " + event);
}
private function onNetStatus(event:NetStatusEvent):void {
label.text += event.toString();
trace("onNetStatus: " + event);
}
private function progressHandler(event:ProgressEvent):void {
label.text += event.toString();
trace("progressHandler loaded:" + event.bytesLoaded + " total: " + event.bytesTotal);
}
private function securityErrorHandler(event:SecurityErrorEvent):void {
label.text += event.toString();
trace("securityErrorHandler: " + event);
}
private function httpStatusHandler(event:HTTPStatusEvent):void {
label.text += event.toString();
//label.text += event.responseHeaders.toString();
trace("httpStatusHandler: " + event);
}
private function httpResponseStatusHandler(event:HTTPStatusEvent):void {
label.text += event.toString();
trace("httpStatusHandler: " + event);
}
private function ioErrorHandler(event:IOErrorEvent):void {
label.text += event.toString();
label.text += event.text;
trace("ioErrorHandler: " + event);
}
]]>
</fx:Script>
<fx:Declarations>
<!-- Place non-visual elements (e.g., services, value objects) here -->
</fx:Declarations>
<s:Label
</s:View>
52. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - Murtaza_Ghodawala, Jan 9, 2012 1:02 AM (in response to tolga_erdogus)
Hi Chris Campbell,
I am using Flash Builder 4.6/AIR 3.1.0. I am using a RESTful web service to get XML results and display them on my mobile application. I am getting the same below error when accessing the web service from the mobile app (Android - Galaxy Tab 7 inch).
Error: [IOErrorEvent type="ioError" bubbles=false cancelable=false eventPhase=2 text="Error # 2032"] URL: stlabEmployeeDetails-context-root/jersey/restlab
The same code is working in the Flash Builder 4.6 Android emulator. I have set Network Monitor to "Disabled" before deploying to mobile. What am I doing wrong here? I am pasting my code below-
<?xml version="1.0" encoding="utf-8"?>
<s:View xmlns:fx=""
xmlns:s="library://ns.adobe.com/flex/spark" title="HomeView" xmlns:dao="dao.*"
xmlns:
<fx:Script>
<![CDATA[
import mx.collections.ArrayCollection;
import mx.collections.IList;
import mx.collections.XMLListCollection;
import mx.events.FlexEvent;
import mx.rpc.events.FaultEvent;
import mx.rpc.events.ResultEvent;
import mx.rpc.xml.SimpleXMLDecoder;
import mx.utils.ArrayUtil;
import valueObjects.EmployeeDetail;
[Bindable]
private var myXml:XML;
[Bindable]
public var resultCollection:IList;
public function handleXml(event:ResultEvent):void
{
var xmlListCollection:XMLListCollection = new XMLListCollection(event.result.children());
var xmlListCollectionValues:XMLListCollection = new XMLListCollection(event.result.emp.children());
var resultArray:Array = xmlListCollection.toArray();
var resultArrayValues:Array = xmlListCollectionValues.toArray();
var objEmployeeDetails:EmployeeDetail;
var resultCollection:ArrayCollection = new ArrayCollection();
var j:int = 0;
for(var i:int=0;i<resultArray.length;i++){
objEmployeeDetails = new EmployeeDetail();
objEmployeeDetails.brand = resultArrayValues[j];
objEmployeeDetails.division = resultArrayValues[j+1];
objEmployeeDetails.email = resultArrayValues[j+2];
objEmployeeDetails.employee_name = resultArrayValues[j+3];
objEmployeeDetails.employee_number = resultArrayValues[j+4];
objEmployeeDetails.grade = resultArrayValues[j+5];
objEmployeeDetails.mobile = resultArrayValues[j+6];
objEmployeeDetails.position = resultArrayValues[j+7];
j = j + 8;
resultCollection.addItem(objEmployeeDetails);
}
list.dataProvider = resultCollection;
//return resultCollection;
}
public function handleFault(event:FaultEvent):void
{
//Alert.show(event.fault.faultDetail, "Error");
}
protected function sesrchEmployee():void
{
xmlRpc.send();
}
]]>
</fx:Script>
<fx:Declarations>
<dao:EmployeeDAO
<mx:HTTPService
<mx:request
<data>{key.text}</data>
<data>{key1.text}</data>
</mx:request>
</mx:HTTPService>
</fx:Declarations>
<s:navigationContent/>
<s:titleContent>
<s:VGroup
<s:HGroup
<s:Label
<s:TextInput
</s:HGroup>
<s:HGroup
<s:Label
<s:TextInput
</s:HGroup>
</s:VGroup>
</s:titleContent>
<s:actionContent>
<s:Button
</s:actionContent>
<s:List
<s:itemRenderer>
<fx:Component>
<s:IconItemRenderer
</s:IconItemRenderer>
</fx:Component>
</s:itemRenderer>
</s:List>
</s:View>
Appreciate your quick response in this regard.
Thanks,
Murtaza Ghodawala
Mobile: +965 97180549
murtaza.ghodawala@alshaya.com
53. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - chris.campbell
Jan 9, 2012 5:25 PM (in response to Murtaza_Ghodawala)
Hi Murtaza and tolga,
Could you both create new bugs (and please include the sample code you've added here) over at bugbase.adobe.com? Please post back with the bug URL's once they've been created. I encourage anyone else running into these problems to please visit the bug and vote/comment as appropriate.
Thanks,
Chris
54. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - Murtaza_Ghodawala, Jan 9, 2012 11:27 PM (in response to chris.campbell)
Hi Chris,
Please find the below bug URL -
Appreciate your quick response in fixing this issue as soon as possible. Thanks.
Thanks,
Murtaza Ghodawala
Mobile: +965 97180549
murtaza_ghoda82@hotmail.com
55. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - tolga_erdogus, Jan 10, 2012 6:52 AM (in response to chris.campbell)
Chris - I had already created one:
Though I must say that this is a type of error that seems to be happening to a lot of people (on and off, with some potential fixes and regressions since 2.X) and is in the URLLoader network stack. It basically is in the critical path of any internet-based AIR app.
Call me crazy, but I would say this bug is critical enough to just skip a democratized vote process and fix ASAP. And in all honesty, there isn't enough Adobe activity in the forums for people to keep the faith and come back to vote for bugs. From the outside, just by looking at the Adobe employee activity in forums, there seems to be a "pause" of Adobe Air right now. It is sad because it is by far the coolest and most productive cross platform/device development environment. I am barely hanging on to the idea of building an Air app even though it is so powerful. I made a tremendous amount of progress in the first 3 days of starting to build my app and it has been on hold for 2 weeks after that.
56. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - blogtom, Apr 7, 2012 2:34 PM (in response to ShinyaArao)
You may need to add an "anti cache" var like this:
requestServer = new URLRequest("" + new Date().getTime());
Hope this helps.
57. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - banzai76_, May 23, 2012 3:42 PM (in response to blogtom)
I'd been getting similar https and urlloader-related #2032 errors, but they seem fixed when I compile with AIR 3.3 from the Adobe Labs site (in conjunction with Flex 4.6).
58. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - Suat Korkmaz, Nov 8, 2012 4:39 AM (in response to banzai76_)
I too get this error on FB 4.6 + AIR 3.5. Trying to get this working on iPad 5.1. Also tested on iOS 6.0 device. No luck.
My code is simple:
var loader:URLLoader = new URLLoader();
loader.addEventListener(HTTPStatusEvent.HTTP_RESPONSE_STATUS, httpResponseStatusHandler);
loader.addEventListener(SecurityErrorEvent.SECURITY_ERROR, securityErrorHandler);
loader.addEventListener(IOErrorEvent.IO_ERROR, IOErrorHandler, false, 0, true);
var request:URLRequest = new URLRequest("");
request.manageCookies = true;
var vars:URLVariables = new URLVariables("userName=" + usn + "&pword=" + pwd + "&LOGIN=LOGIN&operation=CPLOGIN");
request.data = vars;
loader.load(request);
This code works on iPad emulator but not on iPad itself.
Any help will be appreciated.
My Best,
Suat
EDIT: I am also getting:
IDS_CONSOLE_SANDBOX
IDS_CONSOLE_NOT_PERMITTED
59. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - banzai76_, Nov 8, 2012 6:50 AM (in response to Suat Korkmaz)
Do you have a valid security certificate for the domain of the URL you are calling? If you don't, it will fail. This is a known AIR for iOS bug.
If you can use the same code to successfully download a file from that domain using http (not https), then this would likely be the explanation.
60. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - Suat Korkmaz, Nov 8, 2012 9:21 AM (in response to banzai76_)
Thank you for your reply.
I added an image file to the root of the domain and downloaded it. No problems.
We debugged the backend side. The loginUser.jsp file clearly returns a 302 status with a redirection url but I receive this as an ioError. This is totally strange.
AFAIK the cert I use is valid. A few developers are using it with no problems but of course this does not mean that it is valid. I'll check it.
I don't know what to do next.
Any further help will be appreciated too.
My Best,
Suat
61. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - banzai76_, Nov 9, 2012 2:50 AM (in response to Suat Korkmaz)
I would suggest as next steps:
1. Download that image file again, this time using https. If that works then you know that there are no security problems.
2. If possible, submit the login details directly to whatever URL you are trying to redirect to, and see if that works?
62. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - Suat Korkmaz, Nov 9, 2012 3:35 AM (in response to banzai76_)
1. I downloaded the file with https with success. No problems at all.
2. I tried this. Result is the same. ioError.
Thanks again.
63. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - Suat Korkmaz, Nov 9, 2012 4:03 AM (in response to Suat Korkmaz)
The last post says that https calls are not allowed in Adobe Air. Tell me that it is not true.
64. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - banzai76_, Nov 9, 2012 4:19 AM (in response to Suat Korkmaz)
That isn't true.
HTTPS is supported in AIR, including on iOS. I'm using it in development right now with exactly the same code as you. There are live apps in the iTunes app store that use it, but there are also a lot of posts around like this one where people are having trouble.
In the test where you submitted data directly to the redirected url, can you see what the server log says it is returning? i.e. does it receive the request and respond correctly?
I'm out of ideas really. Maybe you could try it on an Android phone, or from a swf hosted on the website? Just to see if the code works in those situations?
65. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - Suat Korkmaz, Nov 9, 2012 6:01 AM (in response to banzai76_)
I didn't like it but I solved the issue another way. I created a StageWebView and moved it outside the stage. Made the https call with it and that's all. Cookie created. All other services are now accessible.
Thanks for your help banzai. I appreciate it.
66. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - Fathah Noor, Nov 13, 2012 1:13 AM (in response to chris_emerson)
My previous problem was similar like what you've had, and finally today the problem is solved!
I simply replaced File.documentsDirectory.resolvePath("blablabla").nativePath with File.documentsDirectory.resolvePath("blablabla").url
67. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - Suat Korkmaz, Nov 13, 2012 3:07 AM (in response to Fathah Noor)
Can you please post the whole code?
68. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - Fathah Noor, Nov 13, 2012 4:00 AM (in response to Suat Korkmaz)
I think our problem is somewhat different? CMIIW
I was responding to comments from chris_emerson particularly on this section:
"I've even tried changing my code to use the File.applicationDirectory approach. But once again,... it only works in the AIR Launcher... not on the device!"
69. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - Fatche, Sep 18, 2013 3:03 AM (in response to Murtaza_Ghodawala)
I know it's quite a late reply but I'll just add the following for future reference.
I was having a very similar problem but with a desktop app, one specific request I'd used for a long time stopped working and I would always get the generic 2032 error.
This was with a standalone Air app, using 3.1. After going through tons of posts and trying to figure out what was going on I finally solved it by simply cleaning all caches from my browsers (chrome, safari and firefox). It might seem silly since it's a standalone app but it did work.
Hope it may help someone with this problem.
70. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - 72Pantera, Dec 26, 2013 8:26 PM (in response to chris.campbell)
Vote? for a bug fix? How about just fix it? Still present in 3.9.
Air is dead. The pace of bug fixes and features has slowed to a crawl.
71. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - sherif.elshazli, Nov 11, 2014 3:25 AM (in response to ShinyaArao)
We're still experiencing this bug occasionally.
We are using URLLoader to communicate with a php server. This happens under AIR 15 beta downloaded from adobe labs, on iOS, during the login call.
We can see in the server's logs that the response has been dispatched OK, but it just doesn't reach the app. After a while, around 1 minute, the IOError StreamError arrives with status 0.
This never happened on the first app install. It reproduces more often when closing the app during the login call, and restarting.
The php server is hosted with Amazon and we'd never experienced any problems with them.
The call is http not https.
72. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - chris.campbell, Nov 19, 2014 3:45 PM (in response to sherif.elshazli)
Sherif.elshazli,
Can we start a new thread? This one is 4 years old and has conflicting information. When adding the new forum post, please feel free to reference this thread and please include a link to your bug report.
Thanks,
Chris
73. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - Yozef0, Oct 5, 2015 4:19 PM (in response to chris.campbell)
Sounds like this thread is still open. I wonder how active the community still is..
I will narrow down the Issue. AIR 3.4 Flex 3.6.
var loader:URLLoader = new URLLoader();
loader.addEventListener(Event.COMPLETE, resultUpdatedNotificationsEvent);
loader.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler); // <-- This gets fired with a IOError id 2032
loader.addEventListener(HTTPStatusEvent.HTTP_STATUS, httpStatusHandler); // <-- then this gets status : 0
loader.addEventListener(SecurityErrorEvent.SECURITY_ERROR, securityErrorHandler);
var request:URLRequest = new URLRequest('https://
loader.load(request);
On Mac - All Ok... runs well.
On Window - I get the IOErrorEvent.IO_ERROR event. How come?
74. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - chris.campbell, Oct 5, 2015 5:58 PM (in response to Yozef0)
Can you try this out with AIR 19? If it still happens, please open a bug report at and attach a sample project. Once added, please let me know the bug number and I'll have someone take a look.
Thanks,
Chris
75. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - Yozef0, Oct 6, 2015 5:02 AM (in response to chris.campbell)
Good day Chris, I've updated Flash Builder and installed AIR 19.0 SDK.
The issue still persists. I have attached an .fxp project which describes the issue with Bug #: 4069486
The API calls work on Macintosh, however not on Windows machines.
76. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - ivanp2689695, Oct 6, 2015 10:02 PM (in response to Yozef0)
Also doesn't work on iPad 2,1 ios 9.0.2 with Air SDK 19.0.0.190
_request = new URLRequest( url );
_request.method = URLRequestMethod.GET;
var variables:URLVariables = new URLVariables(); // declaration missing from the original post
variables.receipt = InAppBilling.service.applicationReceipt.getAppReceipt();
_request.data = variables;
_loader.addEventListener( HTTPStatusEvent.HTTP_STATUS , httpStatusHandler );
_loader.addEventListener(SecurityErrorEvent.SECURITY_ERROR, securityErrorHandler);
_loader.addEventListener( IOErrorEvent.IO_ERROR , onLoader_IOError );
_loader.dataFormat = URLLoaderDataFormat.TEXT;
_loader.load( _request );
77. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - ivanp2689695, Oct 9, 2015 7:00 AM (in response to ivanp2689695)
The problem is with apps compiled for iOS 9 making requests over insecure (plain HTTP) links.
Fix
More here
spank/ — IOError #2032, iOS9, Adobe Air and ATS
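For future readers: the "ATS" referenced above is Apple's App Transport Security, which iOS 9 introduced to block plain-HTTP connections by default. A common workaround for AIR apps (not taken from this thread, so treat it as an illustrative sketch) is to declare an exception in the iOS Info.plist additions of the application descriptor:

```xml
<!-- Sketch: goes inside the AIR application descriptor.
     NSAllowsArbitraryLoads disables ATS entirely; prefer
     per-domain exceptions in production. -->
<iPhone>
  <InfoAdditions><![CDATA[
    <key>NSAppTransportSecurity</key>
    <dict>
      <key>NSAllowsArbitraryLoads</key>
      <true/>
    </dict>
  ]]></InfoAdditions>
</iPhone>
```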
78. Re: URLLoader doesn't work (IOError #2032) in AIR SDK version 2.5 - Samita1990, Nov 4, 2015 10:23 PM (in response to Samita1990)
Please let me know as soon as possible, because I am way behind on my timeline.
Regards,
Sam
https://forums.adobe.com/message/4127753
Dragging an object from a treeview
I implemented a treeview and it is working great.
I like to know whether it is possible to Drag&Drop an object (in this case a link to a image file) from the treeview to (for example) the color channel of a material in the Material Editor?
Or an easier request.
I know I can drag objects to the treeview, but is it also possible to drag object from a treeview to (for example) a link field?
Is there somewhere an example available?
Hi @pim, unfortunately, dragging from a TreeView to the outside is only possible with c4d.C4DAtom and derived classes.
While GetDragType supports multiple types, this is not the case for GenerateDragArray, which can only accept a list of C4DAtoms (in C++ it is an AtomArray, and that is where the limitation comes from). Of course, it can be your own BaseList2D registered with RegisterNodePlugin, but since other parts of Cinema 4D have no idea what this kind of BaseList2D is, it will be accepted nowhere.
So to drag to the outside, you need GetDragType to return c4d.DRAGTYPE_ATOMARRAY,
DragStart to let the TreeView object be dragged by returning c4d.TREEVIEW_DRAGSTART_ALLOW,
and finally GenerateDragArray to return the list of C4DAtoms to be dragged.
def GetDragType(self, root, userdata, obj):
    return c4d.DRAGTYPE_ATOMARRAY

def DragStart(self, root, userdata, obj):
    return c4d.TREEVIEW_DRAGSTART_ALLOW | c4d.TREEVIEW_DRAGSTART_SELECT

def GenerateDragArray(self, root, userdata, obj):
    bs = c4d.BaseShader(c4d.Xbitmap)
    # Here I assume the custom TreeView item has a name parameter
    bs[c4d.BITMAPSHADER_FILENAME] = obj.name
    return [bs]
Note that the limitation only exists in Python as GenerateDragData is not available.
Cheers,
Maxime.
Thanks, sounds a bit complicated, so I have to give it some thought.
For now, it is enough.
-Pim
https://plugincafe.maxon.net/topic/12149/dragging-an-object-from-a-treeview
to realize such a setup.
Installing and setting up the L3 agent
OpenStack offers different models to operate a virtual router. The model that we discuss in this post is sometimes called a “legacy router”, and is realized by a router running on one of the controller hosts, which implies that the routing functionality is no longer available when this host goes down. In addition, Neutron offers more advanced models like a distributed virtual router (DVR), but this is beyond the scope of todays post.
To make the routing functionality available, we have to install respectively enable two additional pieces of software:
- The Routing API is provided by an extension which needs to be loaded by the Neutron server upon startup. To achieve this, this extension needs to be added to the service_plugins list in the Neutron configuration file neutron.conf
- The routing functionality itself is provided by an agent, the L3 agent, which needs to be installed on the controller node
In addition to installing these two components, there are a few changes to the configuration we need to make. First, of course, the L3 agent comes with its own configuration file that we need to adapt. Specifically, there are two changes that we make for this lab. First, we set the interface driver to openvswitch, and second, we ask the L3 agent not to provide a route to the metadata proxy by setting enable_metadata_proxy to false, as we use the mechanism provided by the DHCP agent.
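In a typical installation these two settings would end up in the L3 agent's configuration file (usually /etc/neutron/l3_agent.ini; the exact path and defaults can differ between distributions), roughly as follows:

```ini
[DEFAULT]
# Plug router ports into Open vSwitch
interface_driver = openvswitch
# The DHCP agent already serves the route to the metadata proxy,
# so the L3 agent does not need to
enable_metadata_proxy = false
```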
In addition, we change the configuration of the Horizon dashboard to make the L3 functionality available in the GUI as well (this is done by setting the flag horizon_enable_router in the configuration to “True”).
All this can again be done in our lab environment by running the scripts for Lab 8. In addition, we run the demo playbook which will set up a VLAN network with VLAN ID 100 and two instances attached to it (demo-instance-1 and demo-instance-2) and a flat, external network with one instance (demo-instance-3).
git clone
cd Lab8
vagrant up
ansible-playbook -i hosts.ini site.yaml
ansible-playbook -i hosts.ini demo.yaml
Setting up our first router
Before setting up our first router, let us inspect the network topology that we have created. The Horizon dashboard has a nice graphical representation of the network topology.
We see that we have two virtual Ethernet networks, carrying one IP network each. The network on the left – marked as external – is the flat network, with an IP subnet with CIDR 172.16.0.0/24. The network on the right is the VLAN network, with IP range 172.18.0.0/24.
Now let us create and set up the router. This happens in several steps. First, we create the router itself. This is done using the credentials of the demo user, so that the router will be part of the demo project.
vagrant ssh controller
source demo-openrc
openstack router create demo-router
At this point, the router exists as an object, and there is an entry for it in the Neutron database (table routers). However, the router is not yet connected to any network. Let us do this next.
It is important to understand that, similar to a physical router, a Neutron virtual router has two interfaces connected to two different networks, and, again as in the physical network world, the setup is not symmetric. Instead, there is one network which is considered external and one internal network. By default, the router will allow for traffic from the internal network to the external network, but will not allow any incoming connections from the external network into the internal networks, very similar to the cable modem that you might have at home to connect to this WordPress site.
Correspondingly, the ways in which the external network and the internal network are attached to the router differ. Let us start with the external network. The connection to the external network is called the external gateway of the router and can be assigned using the set command on the router.
openstack router set \
  --external-gateway=flat-network \
  demo-router
When you run this command and inspect the database once more, you will see that the column gw_port_id has been populated. In addition, listing the ports will demonstrate that OpenStack has created a port which is attached to the router (this port is visible in the database but not via the CLI as the demo user, as the port is not owned by this user) and has received an IP address on the external network.
To complete the setup of the router, we now have to connect the router to an internal network. Note that this needs to be done by the administrator, so we first have to source the credentials of the admin user.
source admin-openrc
openstack router add subnet demo-router vlan-subnet
When we now log into the Horizon GUI as the demo user and ask Horizon to display the network topology, we get the following result.
We can reach the flat network (and the lab host) from the internal network, but not the other way around. You can verify this by logging into demo-instance-1 via the Horizon VNC console and trying to ping demo-instance-3.
Now let us try to understand how the router actually works. Of course, somewhere behind the scenes, Linux routing mechanisms and iptables are used. One could try to implement a router by manipulating the network stack on the controller node, but this would be difficult as the configuration for different routers might conflict. To avoid this, Neutron creates a dedicated network namespace for each router on the node on which the L3 agent is running.
The name of this namespace is qrouter-, followed by the ID of the virtual router (here “q” stands for “Quantum” which was the name of what is now known as Neutron some years ago). To analyze the network stack within this namespace, let us retrieve its ID and spawn a shell inside the namespace.
netns=$(ip netns list \
  | grep "qrouter" \
  | awk '{print $1}')
sudo ip netns exec $netns /bin/bash
Running ifconfig -a and route -n shows that, as expected, the router has two virtual interfaces (both created by OVS). One interface starting with "qg" is the external gateway, the second one starting with "qr" is connected to the internal network. There are two routes defined, corresponding to the two subnets to which the respective interfaces are assigned.
Let us now inspect the iptables configuration. Running iptables -S -t nat reveals that Neutron has added an SNAT (source network address translation) rule that applies to traffic coming from the internal interface. This rule will replace the source IP address of the outgoing traffic by the IP address of the router on the external network.
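The rule in question is printed in iptables-save notation and, for a legacy router, looks roughly like the following (chain name, interface suffix and address are illustrative and will differ in your deployment):

```
-A neutron-l3-agent-snat -o qg-a1b2c3d4-e5 -j SNAT --to-source 172.16.0.100
```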
To understand how the router is attached to the virtual network infrastructure, leave the namespace again and display the bridge configuration using sudo ovs-vsctl show. This will show you that the two router interfaces are both attached to the integration bridge.
Let us now see how traffic from a VM on the internal network flows through the stack. Suppose an application inside the VM tries to reach the external network. As inside the VM, the default route goes to 172.18.0.1, the routing mechanism inside the VM targets the packet towards the qr-interface of the router. The packet leaves the VM through the tap interface (1). The packet enters the bridge via the access port and receives a local VLAN tag (2), then travels across the bridge to the port to which the qr-interface is attached. This port is an access port with the same local VLAN tag as the virtual machine, so it leaves the bridge as untagged traffic and enters the router (3).
Within the router, SNAT takes place (4) and the packet is forwarded to the qg-interface. This interface is attached to the integration bridge as access port with local VLAN ID 2. The packet then travels to the physical bridge (5), where the VLAN tag is stripped off and the packet hits the physical networks as part of the native network corresponding to the flat network.
As the IP source address is the address of the router on the external network, the response will be directed towards the qg-interface. It will enter the integration bridge coming from the physical bridge as untagged traffic, receive local VLAN ID 2 and end up at the qg-access port. The packet then flows back through the router, is leaving it again at the qr interface, appears with local VLAN tag 1 on the integration bridge and eventually reaches the VM.
There is one more detail that deserves being mentioned. When you inspect the iptables rules in the mangle table of the router namespace, you will see some rules that add marks to incoming packets, which are later evaluated in the nat and filter tables. These marks are used to implement a feature called address scopes. Essentially, address scopes are reflecting routing domains in OpenStack, the idea being that two networks that belong to the same address scope are supposed to have compatible, non-overlapping IP address ranges so that no NATing is needed when crossing the boundary between these two networks, while a direct connection between two different address scopes should not be possible.
Floating IPs
So far, we have set up a router which performs a classical SNAT to allow traffic from the internal network to appear on the external network as if it came from the router. To be able to establish a connection from the external network into the internal network, however, we need more.
In a physical infrastructure, you would use DNAT (destination NAT) to achieve this. In OpenStack, this is realized via a floating IP. This is an IP address on the external network for which DNAT will be performed to pass traffic targeted to this IP address to a VM on the internal network.
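Conceptually, a floating IP is nothing but a one-to-one NAT mapping between an address on the external network and a fixed address on the internal network, applied in both directions. The following sketch (plain Python with made-up example addresses, not Neutron code) illustrates the rewriting the router performs:

```python
# Conceptual sketch: a floating IP is a 1:1 mapping between an external
# address and a fixed internal address. The addresses below are examples.
FLOATING_TO_FIXED = {"172.16.0.228": "172.18.0.3"}
FIXED_TO_FLOATING = {v: k for k, v in FLOATING_TO_FIXED.items()}

def dnat_inbound(packet):
    """Rewrite the destination of traffic arriving for a floating IP."""
    dst = packet["dst"]
    if dst in FLOATING_TO_FIXED:
        packet = dict(packet, dst=FLOATING_TO_FIXED[dst])
    return packet

def snat_outbound(packet):
    """Rewrite the source of traffic leaving a VM that owns a floating IP."""
    src = packet["src"]
    if src in FIXED_TO_FLOATING:
        packet = dict(packet, src=FIXED_TO_FLOATING[src])
    return packet

# A ping from the lab host to the floating IP is delivered to the VM's
# fixed address; the reply appears to come from the floating IP.
request = {"src": "172.16.0.1", "dst": "172.16.0.228"}
reply = {"src": "172.18.0.3", "dst": "172.16.0.1"}
print(dnat_inbound(request))
print(snat_outbound(reply))
```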
To see how this works, let us first create a floating IP, store the ID of the floating IP that we create in a variable and display the details of the floating IP.
source demo-openrc
out=$(openstack floating ip create \
  -f shell \
  --subnet flat-subnet \
  flat-network)
floatingIP=$(eval $out; echo $id)
openstack floating ip show $floatingIP
When you display the details of the floating IP, you will see that Neutron has assigned an IP from the external network (the flat network), more precisely from the network and subnet that we have specified during creation.
This floating IP is still fully “floating”, i.e. not yet attached to any actual instance. Let us now retrieve the port of the server demo-instance-1 and attach the floating IP to this port.
port=$(openstack port list \
  --server demo-instance-1 \
  -f value \
  | awk '{print $1}')
openstack floating ip set --port $port $floatingIP
When we now display the floating IP again, we see that floating IP is now associated with the fixed IP address of the instance demo-instance-1.
Now leave the controller node again. Back on the lab host, you should now be able to ping the floating IP (using the IP on the external network, i.e. from the 172.16.0.0/24 network) and to use it to SSH into the instance.
Let us now try to understand how the configuration of the router has changed. For that purpose, enter the namespace again as above and run ip addr. This will show you that the external gateway interface (the qg interface) now has two IP addresses on the external network – the IP address of the router and the floating IP. Thus, this interface will respond to ARP requests for the floating IP with its MAC address. When we now inspect the NAT tables again, we see that there are two new rules. First, there is an additional source NAT rule which replaces the source IP address by the floating IP for traffic coming from the VM. Second, there is now – as expected – a destination NAT rule. This rule applies to traffic directed to the floating IP and replaces the target address with the VM IP address, i.e. with the corresponding fixed IP on the internal network.
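In iptables-save notation, the two new rules correspond to something like the following (chain names and addresses are illustrative and will differ in your deployment):

```
-A neutron-l3-agent-float-snat -s 172.18.0.3/32 -j SNAT --to-source 172.16.0.228
-A neutron-l3-agent-PREROUTING -d 172.16.0.228/32 -j DNAT --to-destination 172.18.0.3
```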
We can now understand how a ping from the lab host to the floating IP flows through the stack. On the lab host, the packet is routed to the vboxnet1 interface and shows up at enp0s9 on the controller node. From there, it travels through the physical bridge up to the integration bridge and into the router. There, the DNAT processing takes place, and the target IP address is replaced by that of the VM. The packet leaves the router at the internal qr-interface, travels across the integration bridge and eventually reaches the VM.
Direct access to the internal network
We have seen that in order to connect to our VMs using SSH, we first need to build a router to establish connectivity and assign a floating IP address. Things can go wrong, and if that operation fails for whatever reason or the machines are still not reachable, you might want to find a different way to get access to the instances. Of course there is the noVNC client built into Horizon, but it is more convenient to get a direct SSH connection without relying on the router. Here is an approach how this can be done.
Recall that on the physical bridge on the controller node, the internal network has the VLAN segmentation ID 100. Thus to access the VM (or any other port on the internal network), we need to tag our traffic with the VLAN tag 100 and direct it towards the bridge.
The easiest way to do this is to add another access port to the physical bridge, to assign an IP address to it which is part of the subnet on the internal network and to establish a route to the internal network from this device.
vagrant ssh controller
sudo ovs-vsctl add-port br-phys vlan100 tag=100 \
  -- set interface vlan100 type=internal
sudo ip addr add 172.18.0.100/24 dev vlan100
sudo ip link set vlan100 up
Now you should be able to ping any instance on the internal VLAN network and SSH into it as usual from the controller node.
Why does this work? The upshot of our discussion above is that the interaction of local VLAN tagging, global VLAN tagging and the integration bridge flow rules effectively attach all virtual machines in our internal network via access ports with tagging 100 to the physical network infrastructure, so that they all communicate via VLAN 100. What we have done is to simply create another network device called vlan100 which is also connected to this VLAN. Therefore, it is effectively on one Ethernet segment with our first two demo instances. We can therefore assign an IP address to it and then use it to reach these instances. Essentially, this adds an interface to the controller which is connected to the virtual VLAN network so that we can reach each port on this network from the controller node (be it on the controller node or a compute node).
There is much more we could say about routers in OpenStack, but we leave that topic for the time being and move on to the next post, in which we will discuss overlay networks using VXLAN.
https://leftasexercise.com/2020/03/16/building-virtual-routers-with-openstack/
Introduction
This package allows Go processes to publish multicast DNS style records onto their local network segment. For more information about mDNS, and its closely related cousin, Zeroconf, please visit.
Acknowledgements
Thanks to Brian Ketelsen and Miek Gieben for their feedback and suggestions. This package builds on Miek's fantastic godns library and would not have been possible without it.
Installation
This package is goinstall'able.
goinstall github.com/davecheney/mdns
Usage
Publishing mDNS records is as simple as importing the mdns package
import ( "net" // needed for net.IP "github.com/davecheney/mdns" )
Then calling one of the publish functions
mdns.PublishA("yourhost.local", 3600, net.IPv4(192, 168, 1, 100))
This places an A record into the internal zone file. Broadcast mDNS queries that match records in the internal zone file are responded to automatically. Other records types are supported, check the godoc for more information.
godoc github.com/davecheney/mdns
Tested Platforms
This package has been tested on the following platforms
- linux/arm
- linux/386
- darwin/386
Changelog
14/10/2011 Initial Release
https://bitbucket.org/davecheney/mdns/src
7 minutes to complete
Datadog APM allows you to customize your traces to include any additional information you might need to maintain observability into your business. You can use this to identify a spike in the throughput of a certain enterprise customer, or the user suffering the highest latency, or to pinpoint the database shard generating the most errors.
In this example, a customer ID is added to traces allowing the customers that have the slowest performance to be identified. Customization of traces is based on tags that seamlessly integrate APM with the rest of Datadog and come in the form of key:value pairs of metadata added to spans.
1) Follow the example to get your code instrumented.
Depending on the programming language you are using, you'll need to set the tags to add to your spans differently.
Note: take note of the service and resource names you are working on; these will come in handy later. In this example, the service is the Ruby server web-store and the resource (endpoint) is ShoppingCartController#checkout.
The Datadog UI uses tags to set span level metadata. Custom tags may be set for auto-instrumentation by grabbing the active span from the global tracer and setting a tag with the setTag method.
import io.opentracing.Span;
import io.opentracing.util.GlobalTracer;

@WebServlet
class ShoppingCartServlet extends AbstractHttpServlet {
    @Override
    void doGet(HttpServletRequest req, HttpServletResponse resp) {
        // Get the active span
        final Span span = GlobalTracer.get().activeSpan();
        if (span != null) {
            // customer_id -> 254889
            span.setTag("customer.id", customer_id);
        }
        // [...]
    }
}
The Datadog UI uses tags to set span level metadata. Custom tags may be set for auto-instrumentation by grabbing the active span from the global tracer and setting a tag with the set_tag method.
from ddtrace import tracer

@app.route('/shopping_cart/<int:customer_id>')
@login_required
def shopping_cart(customer_id):
    # Get the active span
    current_span = tracer.current_span()
    if current_span:
        # customer_id -> 254889
        current_span.set_tag('customer.id', customer_id)
    # [...]
The Datadog UI uses tags to set span level metadata. Custom tags may be set for auto-instrumentation by grabbing the active span from the global tracer and setting a tag with the SetTag method.
package main

import (
	"net/http"

	"github.com/gorilla/mux" // needed for mux.Vars
	muxtrace "gopkg.in/DataDog/dd-trace-go.v1/contrib/gorilla/mux"
	"gopkg.in/DataDog/dd-trace-go.v1/ddtrace/tracer"
)

func handler(w http.ResponseWriter, r *http.Request) {
	vars := mux.Vars(r)
	// Get the active span from a Go Context
	if span, ok := tracer.SpanFromContext(r.Context()); ok {
		// customer_id -> 254889
		span.SetTag("customer.id", vars["customerID"])
	}
	// [...]
}

func main() {
	tracer.Start(tracer.WithServiceName("web-store"))
	defer tracer.Stop()
	// Use auto-instrumentation
	mux := muxtrace.NewRouter()
	mux.HandleFunc("/shopping_cart/{customerID}", handler)
	http.ListenAndServe(":8080", mux)
}
The Datadog UI uses tags to set span level metadata. Custom tags may be set for auto-instrumentation by grabbing the active span from the global tracer and setting a tag with the setTag method.
app.get('/shopping_cart/:customer_id', (req, res) => {
  // Get the active span
  const span = tracer.scope().active()
  if (span !== null) {
    // customer_id -> 254889
    span.setTag('customer.id', req.params.customer_id)
  }
  // [...]
})
The Datadog UI uses tags to set span level metadata. Custom tags may be set for auto-instrumentation by grabbing the active span from the global tracer and setting a tag with the setTag method.
<?php
namespace App\Http\Controllers;

use DDTrace\GlobalTracer;

class ShoppingCartController extends Controller
{
    public function shoppingCartAction(Request $request)
    {
        // Get the currently active span
        $span = GlobalTracer::get()->getActiveSpan();
        if (null !== $span) {
            // customer_id -> 254889
            $span->setTag('customer_id', $request->get('customer_id'));
        }
        // [...]
    }
}
?>
2) Go to the Services page and click on the service that you added tags to. Scroll down and, in the Resource table, click on the specific resource where the tag was added. Scroll down to the Traces table.
The Trace table shows you both the overall latency distribution of all traces in the current scope (service, resource and timeframe) and links to individual traces. You can sort this table by duration or error code to easily identify erroneous operation or opportunities for optimization.
3) Click into one of your traces
In this view you can see the flamegraph on top and the additional information windows beneath it. The Datadog flamegraph allows you to have an at a glance view of the duration and status of every logical unit (span) that impacts a request. The flamegraph is fully interactive and you can pan it (by dragging) or zoom in and out (by scrolling). Clicking on any span provides more information about that span in particular in the bottom part of the view.
The bottom part of the view includes additional information about the trace or any selected span. Here you can see all default tags as well as the ones you manually include. In addition to these, you can also switch to view associated Host and Log information.
4) Navigate to the Trace Search page.
The Trace Search page allows you to identify specific Traces and Analyzed Spans you are interested in. Here you can filter by time and by a set of default tags (such as Env, Service, Resource and many more).
5) Find a trace that has the new tag. To do this use the facet explorer on the left to find the Resource name you set at the beginning of this guide and click into one of the rows you see there.
6) Find the new tag that you added to the trace. Click on it and select Create facet for @[your facet name] (remember, this is customer_id in our example).
You can now determine the displayed name of your facet and where to place it in the facet explorer.
You should now be able to see the facet you created in the Facet Explorer. The fastest way to find it is by using the Search facets box.
6) Navigate to the App Analytics page
The App Analytics page is a visual query building tool that allows you to conduct an investigation into your traces with infinite cardinality. It relies on facets to filter and scope the query, read more in the App Analytics overview.
7) Choose the service you've been working on from the service facet list, choose Error from the status facet and select customer_id (or any other tags you added to your spans) from the group by field.
8) Remove Error from the query, change the count * measure to Duration and change the graph type to Top List.
You can now see the customers that have the slowest average requests. Note: If you’d like to make sure your customers never pass a certain threshold of performance, you can export this query to a monitor, alternatively, you can save this visualization to a dashboard and keep an eye over it over time.
Finally, you can also see all the traces relevant to your query by clicking the visualization and selecting View traces.
Additional helpful documentation, links, and articles:
https://docs.datadoghq.com/tracing/guide/add_span_md_and_graph_it/
If animated cartoon. Usually it was played at a slow pace in an exaggerated manner to accompany a worn-out character trudging along, apparently bogged down by some trouble or other. (If you listened to different melodies as a kid, check it out here) Because of this cartoon-ization I attributed negative connotations to the melody. Then I found the ‘real' story. According to Wikipedia, "Some authors have said that the song originated based upon the extraordinary performance of Lady Suffolk, who was the first horse to do the mile in less than two and a half minutes. It occurred on July 4, 1843, at the Beacon Course racetrack in Hoboken, New Jersey, when she was over 10 years old." Wow, a ten-year-old horse breaking a global speed record. Not bad! What's this got to do with PowerBuilder? Plenty! As you'll soon see!
I just finished writing a rich media eTutorial on WPF Layout using PowerBuilder 12 .NET. The tutorial guides Classic PowerBuilder developers in the nuances of building fluid WPF layouts using declarative features of XAML. WPF has awesome capabilities that you can leverage to transform your user interface. It's a real powerful racehorse. But (and it's a big but) WinForm and WPF are different technologies and have different presentation strategies. WinForm is coordinate based. The control location is specified by x and y coordinates. Controls are sized absolutely with a given height and width. On the performance side, the entire screen is redrawn every time a control moves. Taking a page from the publishing playbook, WPF is layout-based; the emphasis is on creating flexible, fluid, adaptable layouts that are adaptive as well as screen resolution independent. In WPF, flow-based layout is standard; classic coordinate-based layout only has rudimentary support in the form of a Canvas layout that approximates absolute positioning. Screen rendering is fast, because WPF sits on top of DirectX technology. Only the portion of the screen affected by the move or change is redrawn.
When you migrate your Classic application to .NET WPF, PowerBuilder renders your windows and CVUOs in a rigid Canvas layout. If your application has a home-brewed resize service or it relies on the industry standard PFC service, your windows will continue to resize in a fluid manner, albeit in a code-based rather than a native declarative manner. But if your Classic windows are resize challenged, you will find yourself tempted to move to fluid, flexible display containers. You might say to yourself, "Hey all I have to do is change a couple of XAML tags and I'm good to go." The bad news is that there are two inherent flaws in this approach. First, there are visual design considerations that must precede any radical GUI modification. You'll need to figure out how your GUI elements should respond to size and resolution changes and set up your elements so they respond logically. You can learn about this non-language / platform-specific design issue from my eCourse or from any good WPF book. Second, there's a code problem that is PowerBuilder specific: the visual inheritance hierarchy and its XAML implementation, which are going to fight you tooth and nail not to be changed. I'm going to focus on this issue in the rest of this article.
XAML was not designed to support inheritance. XAML is a declarative markup style language. It is predicated on the separation of data and business logic from presentation. It is not an object-oriented code-based approach (although XAML does compile into code object runtime calls). The way to reformulate XAML is through resource dictionaries, template bindings and the like. You can apply resource dictionaries both dynamically and statically. (I'll be building eTutorials to show you how to do these.)
Because Classic PowerBuilder fully supports visual inheritance and most applications rely heavily on it, PowerBuilder .NET added development and compile-time support for visual XAML ‘inheritance.' However, as you'll soon see, because it depends on Painter dynamic code generation, and some proprietary internal formatting, it's resistant to change. At compile time and when a descendent is opened in a painter, descendent XAML is dynamically composed by consolidating all the XAML up the hierarchy. Let's take a look at how PowerBuilder does it.
I created a small application in Classic and then migrated it to .NET to illustrate the issues. Figure 1 shows the application architecture. Figure 2 shows the target in the Solution Explorer. It has a CVUO (u_bar), an ancestor window composed of a button bar, a DataWindow control, a picture and a couple of command buttons. Last, there is a descendent window that adds another DataWindow control and a couple more buttons.
Starting at the top, let's take a look at the XAML that is generated. Listing 1 shows the XAML generated for the CVUO u_bar.sru.xaml (namespace declarations removed for brevity's sake).
Figure 4 shows the same XAML inside the user object painter. It's plain to see that there are many more attributes in the painter than in the XAML file. How come?
Now let's go down a level to the ancestor window. Listing 2 shows the generated XAML file w_anc.srw.xaml. I bolded the XAML for the user object. Notice that it's just a placeholder. Most of the XAML for the user object is not here!
Figure 5 shows the same XAML in the WPF window painter. You can plainly see that the XAML for the user object has been merged into the window. Holy cow, painter magic!
Now let's go down to the next level, the descendent window. Listing 3 shows the XAML. Once again notice that the ancestor XAML (bolded) is merely a placeholder. This time the placeholders go up two ancestor levels: the base window and the user object.
Figure 6 shows the same XAML in the WPF window painter. Once again it's fully expanded.
Note several key issues. (1) You might think you can change any XAML in a window. But in reality, you can't. If you use the WPF editor on the descendent XAML and make changes to ancestor XAML, you might think you have accomplished something, but when you save, all your changes will be lost (with no warning!). (2) Worse yet (at least in build 5530) because of the inheritance issue, some of your well-intentioned descendant changes will confuse the XAML writing process and cause your XAML to render in a corrupt manner. Net result - next time you open your window, the layout editor will tell you it can't load the XAML. Then you'll be forced to figure out what it did on the save and fix it manually.
In a conversation with some of the product engineers, I raised the issue of changing XAML (specifically from a <Canvas> to a <Grid> but it can apply to any fundamental change). Here's what was suggested,
"If inheritance is involved, the best recommendation would be to change the root layout panel in the XAML files of both descendant and all its ancestors *outside the IDE* so they are consistent before reopening the descendant inside the IDE. The intent is to cause the least impact on the underlying Merge/Diff algorithms by maintaining consistency."
"However, Canvas is very different from Grid since you are going from an absolute positioning system to a Margin-based positioning system. You would subsequently have to change the Canvas.Left and Canvas.Top properties to Margin properties for all constituent controls in the XAML, preferably starting with the ancestor and descending."
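To make the suggestion concrete, here is a hand-written sketch of the kind of change being discussed. The control names and values are invented for illustration; this is not PowerBuilder-generated XAML:

```xml
<!-- Before: migrated, Canvas-based absolute positioning -->
<Canvas>
    <Button Canvas.Left="16" Canvas.Top="16" Width="75" Height="23" Content="OK"/>
</Canvas>

<!-- After: flow-based Grid layout; position comes from rows/columns and Margin -->
<Grid>
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="*"/>
        <ColumnDefinition Width="Auto"/>
    </Grid.ColumnDefinitions>
    <Button Grid.Column="1" Margin="16" Content="OK"/>
</Grid>
```

Notice that every Canvas.Left/Canvas.Top pair has to be rethought as a row/column assignment plus a Margin, which is exactly why the engineers recommend making the change consistently up and down the hierarchy.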
My reaction, "Ugh, work outside the painter, without syntax and tool support; without a clear understanding of the partial ancestor XAML that is written into descendents. What a time-consuming, error-prone approach! No way!"
After careful consideration, I came to the conclusion that it might be better (as in much less painful) to start with a clean slate. Create a new base window hierarchy. Build (or wire) into it all generic, core window functionality. This base window should have no controls. Give it a <Grid> layout container. Then for each implementation window, inherit from this base window and style each user interface within a properly designed <Grid> at the concrete level.
The old gray mare is really a powerful race horse. There's a powerful beast waiting to be released. But when working with a legacy system, the mare's carrying a heavy load. It's going to take careful consideration and a thoughtful approach to merge newer technology into an existing code line.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
http://www.codeproject.com/Articles/271623/Merging-Newer-Technology-into-an-Existing-Code-Lin
Pytest: run a function at the end of the tests
I would like to run a function at the end of all the tests.
A kind of global teardown function.
I found an example here and some clues here but it doesn't match my need. It runs the function at the beginning of the tests. I also saw the function pytest_runtest_teardown(), but it is called after each test.
Plus: if the function could be called only if all the tests passed, it would be great.
I found:
def pytest_sessionfinish(session, exitstatus):
    """Whole test run finishes."""
exitstatus can be used to define which action to run. pytest docs about this
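As a sketch (the hook name and signature are real pytest API; the cleanup function itself is a placeholder), a conftest.py that runs a global teardown only when every test passed could look like this:

```python
# conftest.py (sketch)

def global_teardown():
    # Placeholder: put the real cleanup here (remove temp dirs,
    # close connections, publish a report, ...).
    print("all tests passed - running global teardown")

def pytest_sessionfinish(session, exitstatus):
    """Hook called once, after the whole test run finishes."""
    # exitstatus == 0 means pytest considers the run fully successful.
    if exitstatus == 0:
        global_teardown()
```

Because the hook receives the session exit status, the "only if all the tests passed" requirement falls out naturally from the `exitstatus == 0` check.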
To run a function at the end of all the tests, use a pytest fixture with a "session" scope. Here is an example:
@pytest.fixture(scope="session", autouse=True)
def cleanup(request):
    """Cleanup a testing directory once we are finished."""
    def remove_test_dir():
        shutil.rmtree(TESTING_DIR)
    request.addfinalizer(remove_test_dir)
The @pytest.fixture(scope="session", autouse=True) bit adds a pytest fixture which will run once every test session (which gets run every time you use pytest). The autouse=True tells pytest to run this fixture automatically (without being called anywhere else).

Within the cleanup function, we define the remove_test_dir and use the request.addfinalizer(remove_test_dir) line to tell pytest to run the remove_test_dir function once it is done (because we set the scope to "session", this will run once the entire testing session is done).
You can use the "atexit" module.
For instance, if you want to report something at the end of all the tests, you need to add a report function like this:
def report(report_dict=report_dict):
    print("THIS IS AFTER TEST...")
    for k, v in report_dict.items():
        print(f"item for report: {k, v}")
then at the end of the module, you call atexit like this:
atexit.register(report)
Hope this helps!
pytest_unconfigure seems to do the job, but maybe someone will come up with a better idea.

- To make your fixture run some function at the end, you should use request.addfinalizer(tear_down_function). To make it run at the end of the whole session, but not after every test case, specify the scope explicitly with @pytest.fixture(scope="session").
http://thetopsites.net/article/52873379.shtml
Monthly Archives: February 2012
Bret Victor’s Code / Drawing IDE
I’ll have more to say and think about this.
Wat
A survey of bizarre behaviours of non-things in Javascript.
Rails Off The Rails
Giles does a pretty good analysis. The key point is that as frameworks mature they start supporting legacy users and applications who, in turn, have different requirements and values from those looking for a quick way to build new applications.
Permutations with Python Generators
Here’s something neat.
I wanted to experiment creating different permutations of a collection of items. (In fact I’m working on some code for laying out shapes on a surface.)
Prototyping in Python to get my ideas straight I came up with this neat generator solution.
def perm(xs):
    if xs == []:
        yield []
    for x in xs:
        ys = [y for y in xs if not y==x]
        for p in perm(ys):
            yield ([x] + p)
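A quick run of the generator (reproduced here so the snippet is self-contained) shows the permutations coming out in order of the leading element:

```python
def perm(xs):
    # recursive generator: pick each x, permute the rest, prepend x
    if xs == []:
        yield []
    for x in xs:
        ys = [y for y in xs if not y == x]
        for p in perm(ys):
            yield [x] + p

for p in perm(["a", "b", "c"]):
    print(p)
# ['a', 'b', 'c']
# ['a', 'c', 'b']
# ... through to ['c', 'b', 'a']
```

One caveat worth noting: because `ys` filters out every element equal to `x`, the generator assumes the items are distinct; duplicates would be dropped from the sub-permutations.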
SpimeScript
These.
http://sdi.thoughtstorms.info/?m=201202
Remove the android_webview::crash_reporter namespace

There is already a ::crash_reporter namespace, and having a sub-namespace with the same name as a top-level namespace caused lookup complications (it is also banned by the code style guide). Since there was only a single function in the namespace, after deleting the unused function, let's just remove the whole namespace.

This resolves a jumbo compilation problem with symbols being looked up in the wrong crash_reporter namespace.

Change-Id: Iaf15dbbe6186d479c3768ee88ce7b84442014183
Reviewed-on:
Commit-Queue: Richard Coles <torne@chromium.org>
Auto-Submit: Daniel Bratell <bratell@opera.com>
Reviewed-by: Richard Coles <torne@chromium.org>
Cr-Commit-Position: refs/heads/master@{#633 .
https://chromium.googlesource.com/chromium/src/+/e959d2e894889668b780cbfda27bc9fe090f8ad9
Last week I was creating a demo application to show some ASP.NET features when I kept getting the following error:
“The type or namespace name 'Linq' does not exist in the namespace 'System.Data'”
I had the System.Data.Linq DLL referenced in my application and verified through the Object Browser that this namespace was indeed part of the assembly. Although it was a trivial error, it took me some time to figure out why it wasn't working.
I was finally able to solve the issue by adding the following line in my web.config:
<add assembly="System.Data.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
http://bartwullems.blogspot.com/2011/09/type-or-namespace-name-does-not-exist.html
libPaths
Search Paths for Packages
.libPaths gets/sets the library trees within which packages are looked for.
Usage
.libPaths(new)
.Library
.Library.site
Arguments
- new
- a character vector with the locations of R library trees. Tilde expansion (path.expand) is done, and if any element contains one of *?[, globbing is done where supported by the platform: see Sys.glob.
Details
.Library is a character string giving the location of the default library, the library subdirectory of R_HOME.

.Library.site is a (possibly empty) character vector giving the locations of the site libraries, by default the site-library subdirectory of R_HOME (which may not exist).
How paths in new with a trailing slash are treated is OS-dependent. On a POSIX filesystem existing directories can usually be specified with a trailing slash: on Windows filepaths with a trailing slash (or backslash) are invalid and so will never be added to the library search path.
On Unix-alikes, R_LIBS_USER defaults to the directory R/R.version$platform-library/x.y of the home directory (or Library/R/x.y/library for CRAN macOS builds), for R x.y.z.
.Library.site can be set via the environment variable R_LIBS_SITE (as a non-empty colon-separated list of library trees). The library search path is initialized at startup from the environment variable R_LIBS (which should be a semicolon-separated list of directories at which R library trees are rooted) followed by those in environment variable R_LIBS_USER. Only directories which exist at the time will be included.
By default R_LIBS is unset, and on Windows R_LIBS_USER is set to the subdirectory R/win-library/x.y of the home directory, for R x.y.z.
Paths given in R_LIBS_SITE and R_LIBS_USER may contain the following conversion specifications:
- %V
R version number including the patchlevel (e.g., 2.5.0).
- %v
R version number excluding the patchlevel (e.g., 2.5).
- %p
the platform for which R was built, the value of
R.version$platform.
- %o
the underlying operating system, the value of
R.version$os.
- %a
the architecture (CPU) R was built on/for, the value of
R.version$arch.
(See version for details on R version information.)

Function .libPaths always uses the values of .Library and .Library.site in the base namespace. .Library.site can be set by the site in Rprofile.site, which should be followed by a call to .libPaths(.libPaths()) to make use of the updated value.

For consistency, the paths are always normalized by normalizePath(winslash = "/").
Value
A character vector of file paths.
References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole.
See Also
Aliases
- .Library
- .Library.site
- .libPaths
- R_LIBS
- R_LIBS_SITE
- R_LIBS_USER
- .expand_R_libs_env_var
Examples
library(base)
# NOT RUN { .libPaths() # all library trees R knows about # }
https://www.rdocumentation.org/packages/base/versions/3.5.3/topics/libPaths
Basics: Classes / Content-Types
How to generate content types, tools and interfaces.
Overview
By default, when you create a class in your class diagram, it represents an Archetypes content type. You can add operations in your model to generate methods on the class, and attributes to generate fields in the schema. The quick reference at the end of this tutorial will tell you which field types you can use. You should also browse the Archetypes quick reference documentation to see what properties are available for each field and widget type. You may set these using tagged values (see below).
There are three basic ways in which you can alter the way your content types are generated:
- You may set one or more stereotypes on your class, which alters the "type" of the class. A stereotype <<portal_tool>>, for example, means you are generating a portal tool rather than just a simple content type.
- You may use tagged values in your model to configure many aspects of your classes, their attributes and their methods. A list of recognised tagged values acting on classes, fields and methods are found in the quick reference at the end of this tutorial.
When reading tagged values, ArchGenXML will generally treat them as strings, with a few exceptions where only non-string values are permitted, such as the required tagged value. If you do not wish your value to be quoted as a string, prefix it with python:. For example, if you set the tagged value default to python:["high", "low"] on a lines attribute, you will get default=["high", "low"] in a LinesField in your schema.
- ArchGenXML is clever about aggregation and composition. If your class aggregates other classes, it will be automatically made into a folder with those classes as the allowed content types. If you use composition (signified by a filled diamond in the diagram) rather than aggregation, the contained class will only be addable inside the container, otherwise it will be addable globally in your portal by default.
Variants of Content Types
Simple Classes
A simple class is what we had in MyFirstAGXContent in the previous chapter. A simple class is based on BaseContent. This is the default if no other options override.
Folderish Classes
The easiest way to make a content type folderish is to introduce composition or aggregation in your model - the parent class will become folderish and will be permitted to hold objects of the child classes. You can also make a class folderish just by giving it the <<folder>> stereotype. Both of these approaches will result in an object derived from BaseFolder.
You can also give a class the <<ordered>> stereotype (possibly in addition to <<folder>>) in order to make it derive from OrderedBaseFolder and thus have ordering support. Alternatively, you can set the base_class tagged value on the class to OrderedBaseFolder. This is a general technique which you can use to override the base folder should you need to. As an aside, the additional_parents tagged value permits you to derive from multiple parents.
Other tagged values which may be useful when generating folders are:
- filter_content_types
- Set this to 0 or 1 to turn on/off filtering of content types. If content types are not filtered, the class will act as a general folder for all globally addable content.
- allowed_content_types
- To explicitly set the allowable content types, for example to only allow images and documents, set this to: Image, Document. Note that if you use aggregation or composition to create folderish types as described above, setting the allowed content types manually is not necessary.
Portal tools
A portal tool is a unique singleton which other objects may find via getToolByName and utilise. There are many tools which ship with Plone, such as portal_actions or portal_skins. To create a portal tool instead of a regular content type, give your class the <<portal_tool>> stereotype. Tools can hold attributes and provide methods just like a regular content type. Typically, these hold configuration data and utility methods for the rest of your product to use. Tools may also have configlets - configuration pages in the Plone control panel. See the quick reference at the end of this document for details on the tagged values you must set to generate configlets.
Abstract mixin classes
By marking your class as abstract in your model (usually a separate tick-box), you are signifying that it will not be added as a content type. Such classes are useful as mixin parents and as abstract base classes for more complex content types, and will not have the standard Archetypes registration machinery, factory type information or derive from BaseClass.
Stub classes
By giving your class the <<stub>> stereotype, you can prevent it from being generated at all. This is useful if you wish to show content types which are logically part of your model, but which do not belong to your product. For instance, you could create a stub for Plone's standard Image type if you wish to include this as an aggregated object inside your content type - that is, your content type will become folderish, with Image as an allowable contained type.
Deriving/Subclassing Classes
Deriving or subclassing a class is used to extend existing classes, or change their behavior. Using generalisation arrows in your model, you can inherit the methods and schema from another content type or mixin class in your class.
Simple Derivation
All content types in Archetypes are derived from one of the base classes - BaseContent, BaseFolder, OrderedBaseFolder and so on. If you wish to turn this off, for example because the base class is being inherited from a parent class, you can set the base_class tagged value to 0.
Multiple Derivation
You can of course use multiple inheritance via multiple generalisation arrows in your model. However, if you need to use a base class that is not on your model, you can set the additional_parents tagged value on your class to a comma-separated list of parent classes.
Deriving from other Products
If you want to derive from a class of another product, create a stub class with a tagged value import_from: this will generate an import line from VALUE import CLASSNAME in classes derived from this class.
Interfaces
Interfaces are a way of formally documenting the public interface to your code. By convention, they are usually in the interfaces package (see below). Use your UML modeller's interface tool to create new interfaces.

Interfaces do not have most of the added fluff that content types do - they do not even have method bodies. They do, however, have extensive documentation. A class is said to "realise" an interface when it provides implementations for the methods defined in the interface. The UML realisation arrow (a dotted line with an empty arrowhead) will ensure that your content types are linked to the correct interfaces by way of the __implements__ class attribute.
Packages - bring order to your code
Packages are both a UML concept and a Python concept. In Python, packages are directories under your product containing a set of modules (.py files). In UML, a package is a logical grouping of classes, drawn as a large "folder" with classes inside it. To modularise complex products, you should use packages to group classes together.
http://plone.org/documentation/tutorial/archgenxml-getting-started/classes
Effective Go
Introduction
Go is a new language. Although it borrows ideas from existing languages, it has unusual properties that make effective Go programs different in character from programs written in its relatives. A straightforward translation of a C++ or Java program into Go is unlikely to produce a satisfactory result—Java programs are written in Java, not Go. On the other hand, thinking about the problem from a Go perspective could produce a successful but quite different program. In other words, to write Go well, it's important to understand its properties and idioms. It's also important to know the established conventions for programming in Go, such as naming, formatting, program construction, and so on, so that programs you write will be easy for other Go programmers to understand.
This document gives tips for writing clear, idiomatic Go code. It augments the language specification, the Tour of Go, and How to Write Go Code, all of which you should read first.
Examples
The Go package sources are intended to serve not only as the core library but also as examples of how to use the language. Moreover, many of the packages contain working, self-contained executable examples you can run directly from the golang.org web site, such as this one (if necessary, click on the word "Example" to open it up). If you have a question about how to approach a problem or how something might be implemented, the documentation, code and examples in the library can provide answers, ideas and background.
Formatting

Formatting issues are the most contentious but the least consequential. People can adapt to different formatting styles, but it's better if they don't have to, and less time is devoted to the topic if everyone adheres to the same style. With Go we take an unusual approach and let the machine take care of most formatting issues.
The gofmt program (also available as go fmt, which operates at the package level rather than source file level) reads a Go program and emits the source in a standard style of indentation and vertical alignment, retaining and if necessary reformatting comments. If you want to know how to handle some new layout situation, run gofmt; if the answer doesn't seem right, rearrange your program (or file a bug about gofmt), don't work around it.
All Go code in the standard packages has been formatted with gofmt.
Some formatting details remain. Very briefly:
- Indentation
- We use tabs for indentation and gofmt emits them by default. Use spaces only if you must.
- Line length
- Go has no line length limit. Don't worry about overflowing a punched card. If a line feels too long, wrap it and indent with an extra tab.
- Parentheses
- Go needs fewer parentheses than C and Java: control structures (if, for, switch) do not have parentheses in their syntax. Also, the operator precedence hierarchy is shorter and clearer, so x<<8 + y<<16 means what the spacing implies, unlike in the other languages.
Commentary
Go provides C-style
/* */ block comments
and C++-style
// line comments.
Line comments are the norm;
block comments appear mostly as package comments, but
are useful within an expression or to disable large swaths of code.
The program—and web server—
godoc processes
Go source files to extract documentation about the contents of the
package.
are extracted along with the declaration to serve as explanatory text for the item.
The nature and style of these comments determines the
quality of the documentation
godoc produces.
Every package should have a package comment, a block comment preceding the package clause. For multi-file packages, the package comment only needs to be present in one file, and any one will do. The package comment should introduce the package and provide information relevant to the package as a whole. It will appear first on the godoc page and should set up the detailed documentation that follows.
/* Package regexp implements a simple library for regular expressions. The syntax of the regular expressions accepted is: regexp: concatenation { '|' concatenation } concatenation: { closure } closure: term [ '*' | '+' | '?' ] term: '^' '$' '.' character '[' [ '^' ] character-ranges ']' '(' regexp ')' */ package regexp
If the package is simple, the package comment can be brief.
// Package path implements utility routines for // manipulating slash-separated filename paths.
Comments do not need extra formatting such as banners of stars. The generated output may not even be presented in a fixed-width font, so don't depend on spacing for alignment—godoc, like gofmt, takes care of that. The comments are uninterpreted plain text, so HTML and other annotations such as _this_ will reproduce verbatim and should not be used. One adjustment godoc does do is to display indented text in a fixed-width font, suitable for program snippets. The package comment for the fmt package uses this to good effect.

Depending on the context, godoc might not even reformat comments, so make sure they look good straight up: use correct spelling, punctuation, and sentence structure, fold long lines, and so on.
Inside a package, any comment immediately preceding a top-level declaration serves as a doc comment for that declaration. Every exported (capitalized) name in a program should have a doc comment.
Doc comments work best as complete sentences, which allow a wide variety of automated presentations. The first sentence should be a one-sentence summary that starts with the name being declared.
// Compile parses a regular expression and returns, if successful,
// a Regexp that can be used to match against text.
func Compile(str string) (*Regexp, error) {
If every doc comment begins with the name of the item it describes, the output of godoc can usefully be run through grep. Imagine you couldn't remember the name "Compile" but were looking for the parsing function for regular expressions, so you ran the command,

$ godoc regexp | grep -i parse
Go's declaration syntax allows grouping of declarations. A single doc comment can introduce a group of related constants or variables. Since the whole declaration is presented, such a comment can often be perfunctory.
// Error codes returned by failures to parse an expression.
var (
    ErrInternal      = errors.New("regexp: internal error")
    ErrUnmatchedLpar = errors.New("regexp: unmatched '('")
    ErrUnmatchedRpar = errors.New("regexp: unmatched ')'")
    ...
)
Grouping can also indicate relationships between items, such as the fact that a set of variables is protected by a mutex.
var (
    countLock   sync.Mutex
    inputCount  uint32
    outputCount uint32
    errorCount  uint32
)
Names
Names are as important in Go as in any other language. They even have semantic effect: the visibility of a name outside a package is determined by whether its first character is upper case. It's therefore worth spending a little time talking about naming conventions in Go programs.
Package names
When a package is imported, the package name becomes an accessor for the contents. After

import "bytes"

the importing package can talk about bytes.Buffer. It's helpful if everyone using the package can use the same name to refer to its contents, which implies that the package name should be good: short, concise, evocative. By convention, packages are given lower case, single-word names; there should be no need for underscores or mixedCaps. Err on the side of brevity, since everyone using your package will be typing that name. And don't worry about collisions a priori. The package name is only the default name for imports; it need not be unique across all source code, and in the rare case of a collision the importing package can choose a different name to use locally. In any case, confusion is rare because the file name in the import determines just which package is being used.
Another convention is that the package name is the base name of its source directory; the package in src/encoding/base64 is imported as "encoding/base64" but has name base64, not encoding_base64 and not encodingBase64.

The importer of a package will use the name to refer to its contents, so exported names in the package can use that fact to avoid stutter. (Don't use the import . notation, which can simplify tests that must run outside the package they are testing, but should otherwise be avoided.)
For instance, the buffered reader type in the bufio package is called Reader, not BufReader, because users see it as bufio.Reader, which is a clear, concise name. Moreover, because imported entities are always addressed with their package name, bufio.Reader does not conflict with io.Reader. Similarly, the function to make new instances of ring.Ring—which is the definition of a constructor in Go—would normally be called NewRing, but since Ring is the only type exported by the package, and since the package is called ring, it's called just New, which clients of the package see as ring.New. Use the package structure to help you choose good names.

Another short example is once.Do; once.Do(setup) reads well and would not be improved by writing once.DoOrWaitUntilDone(setup). Long names don't automatically make things more readable. A helpful doc comment can often be more valuable than an extra long name.
Getters
Go doesn't provide automatic support for getters and setters.
There's nothing wrong with providing getters and setters yourself,
and it's often appropriate to do so, but it's neither idiomatic nor necessary
to put Get into the getter's name.
If you have a field called owner (lower case, unexported),
the getter method should be called Owner (upper case, exported),
not GetOwner.
The use of upper-case names for export provides the hook to discriminate
the field from the method.
A setter function, if needed, will likely be called SetOwner.
Both names read well in practice:
owner := obj.Owner() if owner != user { obj.SetOwner(user) }
Interface names
By convention, one-method interfaces are named by
the method name plus an -er suffix or similar modification
to construct an agent noun:
Reader,
Writer,
Formatter,
CloseNotifier.
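For instance, a hypothetical one-method interface following this convention (Greeter and English are illustrative names, not standard library types):

```go
package main

import "fmt"

// Greeter is a hypothetical one-method interface: the method name Greet
// plus the -er suffix yields the agent noun Greeter.
type Greeter interface {
	Greet() string
}

// English satisfies Greeter implicitly; no declaration of intent is needed.
type English struct{}

func (English) Greet() string { return "hello" }

func main() {
	var g Greeter = English{}
	fmt.Println(g.Greet())
}
```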
MixedCaps
Finally, the convention in Go is to use
MixedCaps
or
mixedCaps rather than underscores to write
multiword names.
Semicolons
Like C, Go's formal grammar uses semicolons to terminate statements, but
unlike in C, those semicolons do not appear in the source.
Instead the lexer uses a simple rule to insert semicolons automatically
as it scans, so the input text is mostly free of them: if the last token
before a newline could end a statement, a semicolon is inserted.
A semicolon can also be omitted immediately before a closing brace,
so a statement such as
go func() { for { dst <- <-src } }()
needs no semicolons.
Idiomatic Go programs have semicolons only in places such as
for loop clauses, to separate the initializer, condition, and
continuation elements. They are also necessary to separate multiple
statements on a line, should you write code that way.
One consequence of the semicolon insertion rules
is that you cannot put the opening brace of a
control structure (
if,
for,
switch,
or
select) on the next line. If you do, a semicolon
will be inserted before the brace, which could cause unwanted
effects. Write them like this
if i < f() { g() }
not like this
if i < f() // wrong! { // wrong! g() }
Control structures
The control structures of Go are related to those of C but differ
in important ways.
There is no
do or
while loop, only a
slightly generalized
for;
switch is more flexible;
if and
switch accept an optional
initialization statement like that of
for;
break and
continue statements
take an optional label to identify what to break or continue;
and there are new control structures including a type switch and a
multiway communications multiplexer,
select.
The syntax is also slightly different:
there are no parentheses
and the bodies must always be brace-delimited.
If
In Go a simple
if looks like this:
if x > 0 { return y }
Mandatory braces encourage writing simple
if statements
on multiple lines. It's good style to do so anyway,
especially when the body contains a control statement such as a
return or
break.
Since
if and
switch accept an initialization
statement, it's common to see one used to set up a local variable.
if err := file.Chmod(0664); err != nil { log.Print(err) return err }
In the Go libraries, you'll find that
when an
if statement doesn't flow into the next statement—that is,
the body ends in
break,
continue,
goto, or
return—the unnecessary
else is omitted.
f, err := os.Open(name) if err != nil { return err } codeUsing(f)
This is an example of a common situation where code must guard against a
sequence of error conditions. The code reads well if the
successful flow of control runs down the page, eliminating error cases
as they arise. Since error cases tend to end in
return
statements, the resulting code needs no
else statements.
f, err := os.Open(name) if err != nil { return err } d, err := f.Stat() if err != nil { f.Close() return err } codeUsing(f, d)
Redeclaration and reassignment
An aside: The last example in the previous section demonstrates a detail of how the
:= short declaration form works.
The declaration that calls
os.Open reads,
f, err := os.Open(name)
This statement declares two variables,
f and
err.
A few lines later, the call to
f.Stat reads,
d, err := f.Stat()
which looks as if it declares
d and
err.
Notice, though, that
err appears in both statements.
This duplication is legal:
err is declared by the first statement,
but only re-assigned in the second.
This means that the call to
f.Stat uses the existing
err variable declared above, and just gives it a new value.
In a
:= declaration a variable
v may appear even
if it has already been declared, provided:
- this declaration is in the same scope as the existing declaration of
v (if
v is already declared in an outer scope, the declaration will create a new variable §),
- the corresponding value in the initialization is assignable to
v, and
- there is at least one other variable in the declaration that is being declared anew.
This unusual property is pure pragmatism,
making it easy to use a single
err value, for example,
in a long
if-else chain.
You'll see it used often.
§ It's worth noting here that in Go the scope of function parameters and return values is the same as the function body, even though they appear lexically outside the braces that enclose the body.
For
The Go
for loop is similar to—but not the same as—C's.
It unifies
for
and
while and there is no
do-while.
There are three forms, only one of which has semicolons.
// Like a C for for init; condition; post { } // Like a C while for condition { } // Like a C for(;;) for { }
Short declarations make it easy to declare the index variable right in the loop.
sum := 0 for i := 0; i < 10; i++ { sum += i }
If you're looping over an array, slice, string, or map,
or reading from a channel, a
range clause can
manage the loop.
for key, value := range oldMap { newMap[key] = value }
If you only need the first item in the range (the key or index), drop the second:
for key := range m { if key.expired() { delete(m, key) } }
If you only need the second item in the range (the value), use the blank identifier, an underscore, to discard the first:
sum := 0 for _, value := range array { sum += value }
The blank identifier has many uses, as described in a later section.
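The channel case mentioned above works the same way: range receives values until the channel is closed. A minimal sketch:

```go
package main

import "fmt"

func main() {
	ch := make(chan int)
	go func() {
		for i := 1; i <= 3; i++ {
			ch <- i
		}
		close(ch) // without the close, the range below would block forever
	}()
	sum := 0
	for v := range ch { // receives 1, 2, 3, then the loop ends
		sum += v
	}
	fmt.Println(sum)
}
```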
For strings, the
range does more work for you, breaking out individual
Unicode code points by parsing the UTF-8.
Erroneous encodings consume one byte and produce the
replacement rune U+FFFD.
(The name (with associated builtin type)
rune is Go terminology for a
single Unicode code point.
See the language specification
for details.)
The loop
for pos, char := range "日本\x80語" { // \x80 is an illegal UTF-8 encoding fmt.Printf("character %#U starts at byte position %d\n", char, pos) }
prints
character U+65E5 '日' starts at byte position 0 character U+672C '本' starts at byte position 3 character U+FFFD '�' starts at byte position 6 character U+8A9E '語' starts at byte position 7
Finally, Go has no comma operator and
++ and
--
are statements not expressions.
Thus if you want to run multiple variables in a
for
you should use parallel assignment (although that precludes
++ and
--).
// Reverse a for i, j := 0, len(a)-1; i < j; i, j = i+1, j-1 { a[i], a[j] = a[j], a[i] }
Switch
Go's
switch is more general than C's.
The expressions need not be constants or even integers,
the cases are evaluated top to bottom until a match is found,
and if the
switch has no expression it switches on
true.
It's therefore possible—and idiomatic—to write an
if-
else-
if-
else
chain as a
switch.
func unhex(c byte) byte { switch { case '0' <= c && c <= '9': return c - '0' case 'a' <= c && c <= 'f': return c - 'a' + 10 case 'A' <= c && c <= 'F': return c - 'A' + 10 } return 0 }
There is no automatic fall through, but cases can be presented in comma-separated lists.
func shouldEscape(c byte) bool { switch c { case ' ', '?', '&', '=', '#', '+', '%': return true } return false }
Although they are not nearly as common in Go as some other C-like
languages,
break statements can be used to terminate
a
switch early.
Sometimes, though, it's necessary to break out of a surrounding loop,
not the switch, and in Go that can be accomplished by putting a label
on the loop and "breaking" to that label.
This example shows both uses.
Loop: for n := 0; n < len(src); n += size { switch { case src[n] < sizeOne: if validateOnly { break } size = 1 update(src[n]) case src[n] < sizeTwo: if n+1 >= len(src) { err = errShortInput break Loop } if validateOnly { break } size = 2 update(src[n] + src[n+1]<<shift) } }
Of course, the
continue statement also accepts an optional label
but it applies only to loops.
To close this section, here's a comparison routine for byte slices that uses two
switch statements:
// Compare returns an integer comparing the two byte slices, // lexicographically. // The result will be 0 if a == b, -1 if a < b, and +1 if a > b func Compare(a, b []byte) int { for i := 0; i < len(a) && i < len(b); i++ { switch { case a[i] > b[i]: return 1 case a[i] < b[i]: return -1 } } switch { case len(a) > len(b): return 1 case len(a) < len(b): return -1 } return 0 }
Type switch
A switch can also be used to discover the dynamic type of an interface
variable. Such a type switch uses the syntax of a type
assertion with the keyword
type inside the parentheses.
If the switch declares a variable in the expression, the variable will
have the corresponding type in each clause.
It's also idiomatic to reuse the name in such cases, in effect declaring
a new variable with the same name but a different type in each case.
var t interface{} t = functionOfSomeType() switch t := t.(type) { default: fmt.Printf("unexpected type %T\n", t) // %T prints whatever type t has case bool: fmt.Printf("boolean %t\n", t) // t has type bool case int: fmt.Printf("integer %d\n", t) // t has type int case *bool: fmt.Printf("pointer to boolean %t\n", *t) // t has type *bool case *int: fmt.Printf("pointer to integer %d\n", *t) // t has type *int }
Functions
Multiple return values
One of Go's unusual features is that functions and methods
can return multiple values. This form can be used to
improve on a couple of clumsy idioms in C programs: in-band
error returns such as
-1 for
EOF
and modifying an argument passed by address.
In C, a write error is signaled by a negative count with the
error code secreted away in a volatile location.
In Go,
Write
can return a count and an error: “Yes, you wrote some
bytes but not all of them because you filled the device”.
The signature of the
Write method on files from
package
os is:
func (file *File) Write(b []byte) (n int, err error)
and as the documentation says, it returns the number of bytes
written and a non-nil
error when
n
!=
len(b).
This is a common style; see the section on error handling for more examples.
A similar approach obviates the need to pass a pointer to a return value to simulate a reference parameter. Here's a simple-minded function to grab a number from a position in a byte slice, returning the number and the next position.
func nextInt(b []byte, i int) (int, int) { for ; i < len(b) && !isDigit(b[i]); i++ { } x := 0 for ; i < len(b) && isDigit(b[i]); i++ { x = x*10 + int(b[i]) - '0' } return x, i }
You could use it to scan the numbers in an input slice
b like this:
for i := 0; i < len(b); { x, i = nextInt(b, i) fmt.Println(x) }
Named result parameters
The return or result "parameters" of a Go function can be given names and
used as regular variables, just like the incoming parameters.
When named, they are initialized to the zero values for their types when
the function begins; if the function executes a
return statement with no arguments, the current values of the
result parameters are
used as the returned values.
The names are not mandatory but they can make code shorter and clearer:
they're documentation.
If we name the results of
nextInt it becomes
obvious which returned
int
is which.
func nextInt(b []byte, pos int) (value, nextPos int) {
Because named results are initialized and tied to an unadorned return, they can simplify
as well as clarify. Here's a version
of
io.ReadFull that uses them well:
func ReadFull(r Reader, buf []byte) (n int, err error) { for len(buf) > 0 && err == nil { var nr int nr, err = r.Read(buf) n += nr buf = buf[nr:] } return }
Defer
Go's
defer statement schedules a function call (the
deferred function) to be run immediately before the function
executing the
defer returns. It's an unusual but
effective way to deal with situations such as resources that must be
released regardless of which path a function takes to return. The
canonical examples are unlocking a mutex or closing a file.
// Contents returns the file's contents as a string. func Contents(filename string) (string, error) { f, err := os.Open(filename) if err != nil { return "", err } defer f.Close() // f.Close will run when we're finished. var result []byte buf := make([]byte, 100) for { n, err := f.Read(buf[0:]) result = append(result, buf[0:n]...) // append is discussed later. if err != nil { if err == io.EOF { break } return "", err // f will be closed if we return here. } } return string(result), nil // f will be closed if we return here. }
Deferring a call to a function such as
Close has two advantages. First, it
guarantees that you will never forget to close the file, a mistake
that's easy to make if you later edit the function to add a new return
path. Second, it means that the close sits near the open,
which is much clearer than placing it at the end of the function.
The arguments to the deferred function (which include the receiver if the function is a method) are evaluated when the defer executes, not when the call executes. Besides avoiding worries about variables changing values as the function executes, this means that a single deferred call site can defer multiple function executions. Here's a silly example.
for i := 0; i < 5; i++ { defer fmt.Printf("%d ", i) }
Deferred functions are executed in LIFO order, so this code will cause
4 3 2 1 0 to be printed when the function returns. A
more plausible example is a simple way to trace function execution
through the program. We could write a couple of simple tracing
routines like this:
func trace(s string) { fmt.Println("entering:", s) } func untrace(s string) { fmt.Println("leaving:", s) } // Use them like this: func a() { trace("a") defer untrace("a") // do something.... }
We can do better by exploiting the fact that arguments to deferred
functions are evaluated when the
defer executes. The
tracing routine can set up the argument to the untracing routine.
This example:
func trace(s string) string { fmt.Println("entering:", s) return s } func un(s string) { fmt.Println("leaving:", s) } func a() { defer un(trace("a")) fmt.Println("in a") } func b() { defer un(trace("b")) fmt.Println("in b") a() } func main() { b() }
prints
entering: b in b entering: a in a leaving: a leaving: b
For programmers accustomed to block-level resource management from
other languages,
defer may seem peculiar, but its most
interesting and powerful applications come precisely from the fact
that it's not block-based but function-based. In the section on
panic and
recover we'll see another
example of its possibilities.
Data
Allocation with
new
Go has two allocation primitives, the built-in functions
new and
make.
They do different things and apply to different types, which can be confusing,
but the rules are simple.
Let's talk about
new first.
It's a built-in function that allocates memory, but unlike its namesakes
in some other languages it does not initialize the memory,
it only zeros it.
That is,
new(T) allocates zeroed storage for a new item of type
T and returns its address, a value of type
*T.
In Go terminology, it returns a pointer to a newly allocated zero value of type
T.
Since the memory returned by
new is zeroed, it's helpful to arrange
when designing your data structures that the
zero value of each type can be used without further initialization. This means a user of
the data structure can create one with
new and get right to
work.
For example, the documentation for
bytes.Buffer states that "the zero value for
Buffer is an empty buffer ready to use."
Similarly,
sync.Mutex does not
have an explicit constructor or
Init method.
Instead, the zero value for a
sync.Mutex
is defined to be an unlocked mutex.
The zero-value-is-useful property works transitively. Consider this type declaration.
type SyncedBuffer struct { lock sync.Mutex buffer bytes.Buffer }
Values of type
SyncedBuffer are also ready to use immediately upon allocation
or just declaration. In the next snippet, both
p and
v will work correctly without further arrangement.
p := new(SyncedBuffer) // type *SyncedBuffer var v SyncedBuffer // type SyncedBuffer
Constructors and composite literals
Sometimes the zero value isn't good enough and an initializing
constructor is necessary, as in this example derived from
package
os.
func NewFile(fd int, name string) *File { if fd < 0 { return nil } f := new(File) f.fd = fd f.name = name f.dirinfo = nil f.nepipe = 0 return f }
There's a lot of boilerplate in there. We can simplify it
using a composite literal, which is
an expression that creates a
new instance each time it is evaluated.
func NewFile(fd int, name string) *File { if fd < 0 { return nil } f := File{fd, name, nil, 0} return &f }
Note that, unlike in C, it's perfectly OK to return the address of a local variable; the storage associated with the variable survives after the function returns. In fact, taking the address of a composite literal allocates a fresh instance each time it is evaluated, so we can combine these last two lines.
return &File{fd, name, nil, 0}
The fields of a composite literal are laid out in order and must all be present.
However, by labeling the elements explicitly as
field:value pairs, the initializers can appear in any
order, with the missing ones left as their respective zero values. Thus we could say
return &File{fd: fd, name: name}
As a limiting case, if a composite literal contains no fields at all, it creates
a zero value for the type. The expressions
new(File) and
&File{} are equivalent.
Composite literals can also be created for arrays, slices, and maps,
with the field labels being indices or map keys as appropriate.
In these examples, the initializations work regardless of the values of
Enone,
Eio, and
Einval, as long as they are distinct.
Back to allocation.
The built-in function
make(T, args) serves
a purpose different from
new(T).
It creates slices, maps, and channels only, and it returns an initialized
(not zeroed)
value of type
T (not
*T).
The reason for the distinction
is that these three types represent, under the covers, references to data structures that
must be initialized before use.
A slice, for example, is a three-item descriptor
containing a pointer to the data (inside an array), the length, and the
capacity, and until those items are initialized, the slice is
nil.
(When making a slice, the capacity can be omitted; see the section on slices
for more information.)
In contrast,
new([]int) returns a pointer to a newly allocated, zeroed slice
structure, that is, a pointer to a
nil slice value.
These examples illustrate the difference between
new and
make.
var p *[]int = new([]int) // allocates slice structure; *p == nil; rarely useful var v []int = make([]int, 100) // the slice v now refers to a new array of 100 ints // Unnecessarily complex: var p *[]int = new([]int) *p = make([]int, 100, 100) // Idiomatic: v := make([]int, 100)
Remember that
make applies only to maps, slices and channels
and does not return a pointer.
To obtain an explicit pointer allocate with
new or take the address
of a variable explicitly.
Arrays
Arrays are useful when planning the detailed layout of memory and sometimes can help avoid allocation, but primarily they are a building block for slices, the subject of the next section. To lay the foundation for that topic, here are a few words about arrays.
There are major differences between the ways arrays work in Go and C. In Go,
- Arrays are values. Assigning one array to another copies all the elements.
- In particular, if you pass an array to a function, it will receive a copy of the array, not a pointer to it.
- The size of an array is part of its type. The types
[10]int and
[20]int are distinct.
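A short sketch of the first two points (zeroFirst is an illustrative helper):

```go
package main

import "fmt"

// zeroFirst receives its own copy of the array, so the caller's is untouched.
func zeroFirst(a [3]int) {
	a[0] = 0
}

func main() {
	a := [3]int{1, 2, 3}
	b := a    // assignment copies all three elements
	b[0] = 99 // changes b only
	zeroFirst(a)
	fmt.Println(a, b) // a is unchanged by both operations
}
```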
The value property can be useful but also expensive; if you want C-like behavior and efficiency, you can pass a pointer to the array.
func Sum(a *[3]float64) (sum float64) { for _, v := range *a { sum += v } return } array := [...]float64{7.0, 8.5, 9.1} x := Sum(&array) // Note the explicit address-of operator
But even this style isn't idiomatic Go. Use slices instead.
Slices
Slices wrap arrays to give a more general, powerful, and convenient interface to sequences of data. Except for items with explicit dimension such as transformation matrices, most array programming in Go is done with slices rather than simple arrays.
Slices hold references to an underlying array, and if you assign one
slice to another, both refer to the same array.
If a function takes a slice argument, changes it makes to
the elements of the slice will be visible to the caller, analogous to
passing a pointer to the underlying array. A
Read
function can therefore accept a slice argument rather than a pointer
and a count; the length within the slice sets an upper
limit of how much data to read. Here is the signature of the
Read method of the
File type in package
os:
func (f *File) Read(buf []byte) (n int, err error)
The method returns the number of bytes read and an error value, if
any.
To read into the first 32 bytes of a larger buffer
buf, slice (here used as a verb) the buffer.
n, err := f.Read(buf[0:32])
Such slicing is common and efficient. In fact, leaving efficiency aside for the moment, the following snippet would also read the first 32 bytes of the buffer.
var n int var err error for i := 0; i < 32; i++ { nbytes, e := f.Read(buf[i:i+1]) // Read one byte. if nbytes == 0 || e != nil { err = e break } n += nbytes }
The length of a slice may be changed as long as it still fits within
the limits of the underlying array; just assign it to a slice of
itself. The capacity of a slice, accessible by the built-in
function
cap, reports the maximum length the slice may
assume. Here is a function to append data to a slice. If the data
exceeds the capacity, the slice is reallocated. The
resulting slice is returned. The function uses the fact that
len and
cap are legal when applied to the
nil slice, and return 0.
func Append(slice, data []byte) []byte { l := len(slice) if l + len(data) > cap(slice) { // reallocate // Allocate double what's needed, for future growth. newSlice := make([]byte, (l+len(data))*2) // The copy function is predeclared and works for any slice type. copy(newSlice, slice) slice = newSlice } slice = slice[0:l+len(data)] for i, c := range data { slice[l+i] = c } return slice }
We must return the slice afterwards because, although
Append
can modify the elements of
slice, the slice itself (the run-time data
structure holding the pointer, length, and capacity) is passed by value.
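This can be seen directly. In the sketch below, setFirst and grow are hypothetical helpers: element writes are visible to the caller, but the callee's reassignment of its own slice header is not.

```go
package main

import "fmt"

// setFirst writes through the shared underlying array: visible to the caller.
func setFirst(s []byte) {
	s[0] = 'X'
}

// grow reassigns only its own copy of the slice header: invisible to the caller.
func grow(s []byte) {
	s = append(s, '!')
	_ = s // the caller's slice still has its original length
}

func main() {
	s := []byte("abc")
	setFirst(s)
	grow(s)
	fmt.Println(len(s), string(s)) // still 3 elements; first one changed
}
```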
The idea of appending to a slice is so useful it's captured by the
append built-in function. To understand that function's
design, though, we need a little more information, so we'll return
to it later.
Two-dimensional slices
Go's arrays and slices are one-dimensional. To create the equivalent of a 2D array or slice, it is necessary to define an array-of-arrays or slice-of-slices, like this:
type Transform [3][3]float64 // A 3x3 array, really an array of arrays. type LinesOfText [][]byte // A slice of byte slices.
Because slices are variable-length, it is possible to have each inner
slice be a different length.
That can be a common situation, as in our
LinesOfText
example: each line has an independent length.
text := LinesOfText{ []byte("Now is the time"), []byte("for all good gophers"), []byte("to bring some fun to the party."), }
Sometimes it's necessary to allocate a 2D slice, a situation that can arise when processing scan lines of pixels, for instance. There are two ways to achieve this. One is to allocate each slice independently; the other is to allocate a single array and point the individual slices into it. Which to use depends on your application. If the slices might grow or shrink, they should be allocated independently to avoid overwriting the next line; if not, it can be more efficient to construct the object with a single allocation. For reference, here are sketches of the two methods. First, a line at a time:
// Allocate the top-level slice. picture := make([][]uint8, YSize) // One row per unit of y. // Loop over the rows, allocating the slice for each row. for i := range picture { picture[i] = make([]uint8, XSize) }
And now as one allocation, sliced into lines:
// Allocate the top-level slice, the same as before. picture := make([][]uint8, YSize) // One row per unit of y. // Allocate one large slice to hold all the pixels. pixels := make([]uint8, XSize*YSize) // Has type []uint8 even though picture is [][]uint8. // Loop over the rows, slicing each row from the front of the remaining pixels slice. for i := range picture { picture[i], pixels = pixels[:XSize], pixels[XSize:] }
Maps
Maps are a convenient and powerful built-in data structure that associate values of one type (the key) with values of another type (the element or value). The key can be of any type for which the equality operator is defined, such as integers, floating point and complex numbers, strings, pointers, interfaces (as long as the dynamic type supports equality), structs and arrays. Slices cannot be used as map keys, because equality is not defined on them. Like slices, maps hold references to an underlying data structure. If you pass a map to a function that changes the contents of the map, the changes will be visible in the caller.
Maps can be constructed using the usual composite literal syntax with colon-separated key-value pairs, so it's easy to build them during initialization.
var timeZone = map[string]int{ "UTC": 0*60*60, "EST": -5*60*60, "CST": -6*60*60, "MST": -7*60*60, "PST": -8*60*60, }
Assigning and fetching map values looks syntactically just like doing the same for arrays and slices except that the index doesn't need to be an integer.
offset := timeZone["EST"]
An attempt to fetch a map value with a key that
is not present in the map will return the zero value for the type
of the entries
in the map. For instance, if the map contains integers, looking
up a non-existent key will return
0.
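For example (a minimal sketch with an abbreviated timeZone map):

```go
package main

import "fmt"

func main() {
	// An abbreviated version of the timeZone map above.
	timeZone := map[string]int{"UTC": 0, "EST": -18000}
	fmt.Println(timeZone["PDT"]) // missing key: prints the zero value, 0
}
```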
A set can be implemented as a map with value type
bool.
Set the map entry to
true to put the value in the set, and then
test it by simple indexing.
attended := map[string]bool{ "Ann": true, "Joe": true, ... } if attended[person] { // will be false if person is not in the map fmt.Println(person, "was at the meeting") }
Sometimes you need to distinguish a missing entry from
a zero value. Is there an entry for
"UTC"
or is that the empty string because it's not in the map at all?
You can discriminate with a form of multiple assignment.
var seconds int var ok bool seconds, ok = timeZone[tz]
For obvious reasons this is called the “comma ok” idiom.
In this example, if
tz is present,
seconds
will be set appropriately and
ok will be true; if not,
seconds will be set to zero and
ok will
be false.
Here's a function that puts it together with a nice error report:
func offset(tz string) int { if seconds, ok := timeZone[tz]; ok { return seconds } log.Println("unknown time zone:", tz) return 0 }
To test for presence in the map without worrying about the actual value,
you can use the blank identifier (
_)
in place of the usual variable for the value.
_, present := timeZone[tz]
To delete a map entry, use the
delete
built-in function, whose arguments are the map and the key to be deleted.
It's safe to do this even if the key is already absent
from the map.
delete(timeZone, "PDT") // Now on Standard Time
Printing
Formatted printing in Go uses a style similar to C's
printf
family but is richer and more general. The functions live in the
fmt
package and have capitalized names:
fmt.Printf,
fmt.Fprintf,
fmt.Sprintf and so on. The string functions (
Sprintf etc.)
return a string rather than filling in a provided buffer.
You don't need to provide a format string. For each of
Printf,
Fprintf and
Sprintf there is another pair
of functions, for instance
Println.
These functions do not take a format string but instead generate a default
format for each argument. The
Println versions also insert a blank
between arguments and append a newline to the output while
the
Print versions add blanks only between operands when neither side is a string.
In this example each line produces the same output.
fmt.Printf("Hello %d\n", 23) fmt.Fprint(os.Stdout, "Hello ", 23, "\n") fmt.Println("Hello", 23) fmt.Println(fmt.Sprint("Hello ", 23))
The formatted print functions
fmt.Fprint
and friends take as a first argument any object
that implements the
io.Writer interface; the variables
os.Stdout
and
os.Stderr are familiar instances.
Here things start to diverge from C. First, the numeric formats such as
%d
do not take flags for signedness or size; instead, the printing routines use the
type of the argument to decide these properties.
var x uint64 = 1<<64 - 1 fmt.Printf("%d %x; %d %x\n", x, x, int64(x), int64(x))
prints
18446744073709551615 ffffffffffffffff; -1 -1
If you just want the default conversion, such as decimal for integers, you can use
the catchall format
%v (for “value”); the result is exactly
what
Println would produce.
Moreover, that format can print any value, even arrays, slices, structs, and
maps. Here is a print statement for the time zone map defined in the previous section.
fmt.Printf("%v\n", timeZone) // or just fmt.Println(timeZone)
which gives output
map[CST:-21600 PST:-28800 EST:-18000 UTC:0 MST:-25200]
For maps the keys may be output in any order, of course.
When printing a struct, the modified format
%+v annotates the
fields of the structure with their names, and for any value the alternate
format
%#v prints the value in full Go syntax.
type T struct { a int b float64 c string } t := &T{ 7, -2.35, "abc\tdef" } fmt.Printf("%v\n", t) fmt.Printf("%+v\n", t) fmt.Printf("%#v\n", t) fmt.Printf("%#v\n", timeZone)
prints
&{7 -2.35 abc def} &{a:7 b:-2.35 c:abc def} &main.T{a:7, b:-2.35, c:"abc\tdef"} map[string] int{"CST":-21600, "PST":-28800, "EST":-18000, "UTC":0, "MST":-25200}
(Note the ampersands.)
That quoted string format is also available through
%q when
applied to a value of type
string or
[]byte.
The alternate format
%#q will use backquotes instead if possible.
(The
%q format also applies to integers and runes, producing a
single-quoted rune constant.)
Also,
%x works on strings, byte arrays and byte slices as well as
on integers, generating a long hexadecimal string, and with
a space in the format (
% x) it puts spaces between the bytes.
Another handy format is
%T, which prints the type of a value.
fmt.Printf("%T\n", timeZone)
prints
map[string] int
If you want to control the default format for a custom type, all that's required is to define
a method with the signature
String() string on the type.
For our simple type
T, that might look like this.
func (t *T) String() string { return fmt.Sprintf("%d/%g/%q", t.a, t.b, t.c) } fmt.Printf("%v\n", t)
to print in the format
7/-2.35/"abc\tdef"
(If you need to print values of type
T as well as pointers to
T,
the receiver for
String must be of value type; this example used a pointer because
that's more efficient and idiomatic for struct types.
See the section below on pointers vs. value receivers for more information.)
Our
String method is able to call
Sprintf because the
print routines are fully reentrant and can be wrapped this way.
There is one important detail to understand about this approach,
however: don't construct a
String method by calling
Sprintf in a way that will recur into your
String
method indefinitely. This can happen if the
Sprintf
call attempts to print the receiver directly as a string, which in
turn will invoke the method again. It's a common and easy mistake
to make, as this example shows.
type MyString string func (m MyString) String() string { return fmt.Sprintf("MyString=%s", m) // Error: will recur forever. }
It's also easy to fix: convert the argument to the basic string type, which does not have the method.
type MyString string func (m MyString) String() string { return fmt.Sprintf("MyString=%s", string(m)) // OK: note conversion. }
In the initialization section we'll see another technique that avoids this recursion.
Another printing technique is to pass a print routine's arguments directly to another such routine.
The signature of
Printf uses the type
...interface{}
for its final argument to specify that an arbitrary number of parameters (of arbitrary type)
can appear after the format.
func Printf(format string, v ...interface{}) (n int, err error) {
Within the function
Printf,
v acts like a variable of type
[]interface{} but if it is passed to another variadic function, it acts like
a regular list of arguments.
Here is the implementation of the
function
log.Println we used above. It passes its arguments directly to
fmt.Sprintln for the actual formatting.
// Println prints to the standard logger in the manner of fmt.Println. func Println(v ...interface{}) { std.Output(2, fmt.Sprintln(v...)) // Output takes parameters (int, string) }
We write
... after
v in the nested call to
Sprintln to tell the
compiler to treat
v as a list of arguments; otherwise it would just pass
v as a single slice argument.
There's even more to printing than we've covered here. See the
godoc documentation
for package
fmt for the details.
By the way, a
... parameter can be of a specific type, for instance
...int
for a min function that chooses the least of a list of integers:
func Min(a ...int) int { min := int(^uint(0) >> 1) // largest int for _, i := range a { if i < min { min = i } } return min }
Append
Now we have the missing piece we needed to explain the design of
the
append built-in function. The signature of
append
is different from our custom
Append function above.
Schematically, it's like this:
func append(slice []T, elements ...T) []T
where T is a placeholder for any given type. You can't
actually write a function in Go where the type
T
is determined by the caller.
That's why
append is built in: it needs support from the
compiler.
What
append does is append the elements to the end of
the slice and return the result. The result needs to be returned
because, as with our hand-written
Append, the underlying
array may change. This simple example
x := []int{1,2,3} x = append(x, 4, 5, 6) fmt.Println(x)
prints
[1 2 3 4 5 6]. So
append works a
little like
Printf, collecting an arbitrary number of
arguments.
But what if we wanted to do what our
Append does and
append a slice to a slice? Easy: use
... at the call
site, just as we did in the call to
Output above. This
snippet produces identical output to the one above.
x := []int{1,2,3} y := []int{4,5,6} x = append(x, y...) fmt.Println(x)
Without that
..., it wouldn't compile because the types
would be wrong;
y is not of type
int.
Initialization
Although it doesn't look superficially very different from initialization in C or C++, initialization in Go is more powerful. Complex structures can be built during initialization and the ordering issues among initialized objects, even among different packages, are handled correctly.
Constants
Constants in Go are just that—constant.
They are created at compile time, even when defined as
locals in functions,
and can only be numbers, characters (runes), strings or booleans.
Because of the compile-time restriction, the expressions
that define them must be constant expressions,
evaluatable by the compiler. For instance,
1<<3 is a constant expression, while
math.Sin(math.Pi/4) is not because
the function call to
math.Sin needs
to happen at run time.
In Go, enumerated constants are created using the
iota
enumerator. Since
iota can be part of an expression and
expressions can be implicitly repeated, it is easy to build intricate
sets of values.
    type ByteSize float64

    const (
        _           = iota // ignore first value by assigning to blank identifier
        KB ByteSize = 1 << (10 * iota)
        MB
        GB
        TB
        PB
        EB
        ZB
        YB
    )
    func (b ByteSize) String() string {
        switch {
        case b >= YB:
            return fmt.Sprintf("%.2fYB", b/YB)
        case b >= ZB:
            return fmt.Sprintf("%.2fZB", b/ZB)
        case b >= EB:
            return fmt.Sprintf("%.2fEB", b/EB)
        case b >= PB:
            return fmt.Sprintf("%.2fPB", b/PB)
        case b >= TB:
            return fmt.Sprintf("%.2fTB", b/TB)
        case b >= GB:
            return fmt.Sprintf("%.2fGB", b/GB)
        case b >= MB:
            return fmt.Sprintf("%.2fMB", b/MB)
        case b >= KB:
            return fmt.Sprintf("%.2fKB", b/KB)
        }
        return fmt.Sprintf("%.2fB", b)
    }
The expression
YB prints as
1.00YB,
while
ByteSize(1e13) prints as
9.09TB.
The use here of
Sprintf
to implement
ByteSize's
String method is safe
(avoids recurring indefinitely) not because of a conversion but
because it calls
Sprintf with
%f,
which is not a string format:
Sprintf will only call
the
String method when it wants a string, and
%f
wants a floating-point value.
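To see why the format verb matters, here is a small sketch (the Percentage type is invented for illustration): a String method that formats its receiver with %f terminates, while one that used %v or %s would invoke String again and recurse forever.

```go
package main

import "fmt"

// Percentage is a hypothetical type used only to illustrate the point.
type Percentage float64

// Safe: %.1f wants a floating-point value, so Sprintf formats the
// underlying float64 directly and never calls String on the receiver.
func (p Percentage) String() string {
	return fmt.Sprintf("%.1f%%", p)
}

// Had we written fmt.Sprintf("%v%%", p) instead, %v would invoke
// p.String, which calls Sprintf, which invokes String... forever.

func main() {
	fmt.Println(Percentage(42.5)) // prints 42.5%
}
```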
Variables
Variables can be initialized just like constants but the
initializer can be a general expression computed at run time.

    var (
        home   = os.Getenv("HOME")
        user   = os.Getenv("USER")
        gopath = os.Getenv("GOPATH")
    )

The init function

Finally, each source file can define its own niladic
init function to set up whatever state is required.
(Actually each file can have multiple
init functions.)
And finally means finally:
init is called after all the variable declarations in the
package have evaluated their initializers, and those are evaluated
only after all the imported packages have been initialized.
Besides initializations that cannot be expressed as declarations,
a common use of init functions is to verify or repair
correctness of the program state before real execution begins.

    func init() {
        if user == "" {
            log.Fatal("$USER not set")
        }
        if home == "" {
            home = "/home/" + user
        }
        if gopath == "" {
            gopath = home + "/go"
        }
        // gopath may be overridden by --gopath flag on command line.
        flag.StringVar(&gopath, "gopath", gopath, "override default GOPATH")
    }
Methods
Pointers vs. Values
As we saw with
ByteSize,
methods can be defined for any named type (except a pointer or an interface);
the receiver does not have to be a struct.
In the discussion of slices above, we wrote an
Append
function. We can define it as a method on slices instead. To do
this, we first declare a named type to which we can bind the method, and
then make the receiver for the method a value of that type.
    type ByteSlice []byte

    func (slice ByteSlice) Append(data []byte) []byte {
        // Body exactly the same as the Append function defined above.
    }
This still requires the method to return the updated slice. We can
eliminate that clumsiness by redefining the method to take a
pointer to a
ByteSlice as its receiver, so the
method can overwrite the caller's slice.
    func (p *ByteSlice) Append(data []byte) {
        slice := *p
        // Body as above, without the return.
        *p = slice
    }
In fact, we can do even better. If we modify our function so it looks
like a standard
Write method, like this,
    func (p *ByteSlice) Write(data []byte) (n int, err error) {
        slice := *p
        // Again as above.
        *p = slice
        return len(data), nil
    }
then the type
*ByteSlice satisfies the standard interface
io.Writer, which is handy. For instance, we can
print into one.
    var b ByteSlice
    fmt.Fprintf(&b, "This hour has %d days\n", 7)
We pass the address of a
ByteSlice
because only
*ByteSlice satisfies
io.Writer.
The rule about pointers vs. values for receivers is that value methods
can be invoked on pointers and values, but pointer methods can only be
invoked on pointers.
This rule arises because pointer methods can modify the receiver; invoking
them on a value would cause the method to receive a copy of the value, so
any modifications would be discarded.
The language therefore disallows this mistake.
There is a handy exception, though. When the value is addressable, the
language takes care of the common case of invoking a pointer method on a
value by inserting the address operator automatically.
In our example, the variable
b is addressable, so we can call
its
Write method with just
b.Write. The compiler
will rewrite that to
(&b).Write for us.
By the way, the idea of using
Write on a slice of bytes
is central to the implementation of
bytes.Buffer.
Interfaces and other types
Interfaces
Interfaces in Go provide a way to specify the behavior of an
object: if something can do this, then it can be used
here. We've seen a couple of simple examples already;
custom printers can be implemented by a
String method
while
Fprintf can generate output to anything
with a
Write method.
Interfaces with only one or two methods are common in Go code, and are
usually given a name derived from the method, such as
io.Writer
for something that implements
Write.
A type can implement multiple interfaces.
For instance, a collection can be sorted
by the routines in package
sort if it implements
sort.Interface, which contains
Len(),
Less(i, j int) bool, and
Swap(i, j int),
and it could also have a custom formatter.
In this contrived example
Sequence satisfies both.

    type Sequence []int

    // Methods required by sort.Interface.
    func (s Sequence) Len() int           { return len(s) }
    func (s Sequence) Less(i, j int) bool { return s[i] < s[j] }
    func (s Sequence) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }

    // Method for printing - sorts the elements before printing.
    func (s Sequence) String() string {
        sort.Sort(s)
        str := "["
        for i, elem := range s {
            if i > 0 {
                str += " "
            }
            str += fmt.Sprint(elem)
        }
        return str + "]"
    }
Conversions
The
String method of
Sequence is recreating the
work that
Sprint already does for slices. We can share the
effort if we convert the
Sequence to a plain
[]int before calling
Sprint.
    func (s Sequence) String() string {
        sort.Sort(s)
        return fmt.Sprint([]int(s))
    }
This method is another example of the conversion technique for calling
Sprintf safely from a
String method.
Because the two types (
Sequence and
[]int)
are the same if we ignore the type name, it's legal to convert between them.
The conversion doesn't create a new value, it just temporarily acts
as though the existing value has a new type.
(There are other legal conversions, such as from integer to floating point, that
do create a new value.)
It's an idiom in Go programs to convert the
type of an expression to access a different
set of methods. As an example, we could use the existing
type
sort.IntSlice to reduce the entire example
to this:
    type Sequence []int

    // Method for printing - sorts the elements before printing
    func (s Sequence) String() string {
        sort.IntSlice(s).Sort()
        return fmt.Sprint([]int(s))
    }
Now, instead of having
Sequence implement multiple
interfaces (sorting and printing), we're using the ability of a data item to be
converted to multiple types (
Sequence,
sort.IntSlice
and
[]int), each of which does some part of the job.
That's more unusual in practice but can be effective.
Interface conversions and type assertions
Type switches are a form of conversion: they take an interface and, for
each case in the switch, in a sense convert it to the type of that case.
Here's a simplified version of how the code under
fmt.Printf turns a value into
a string using a type switch.
If it's already a string, we want the actual string value held by the interface, while if it has a
String method we want the result of calling the method.
    type Stringer interface {
        String() string
    }

    var value interface{}  // Value provided by caller.
    switch str := value.(type) {
    case string:
        return str
    case Stringer:
        return str.String()
    }
The first case finds a concrete value; the second converts the interface into another interface. It's perfectly fine to mix types this way.
What if there's only one type we care about? If we know the value holds a
string
and we just want to extract it?
A one-case type switch would do, but so would a type assertion.
A type assertion takes an interface value and extracts from it a value of the specified explicit type.
The syntax borrows from the clause opening a type switch, but with an explicit
type rather than the
type keyword:
value.(typeName)
and the result is a new value with the static type
typeName.
That type must either be the concrete type held by the interface, or a second interface
type that the value can be converted to.
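A plain assertion, without the comma-ok form, panics at run time if the interface does not hold the asserted type; a small sketch:

```go
package main

import "fmt"

func main() {
	var value interface{} = "hello"

	str := value.(string) // succeeds: value does hold a string
	fmt.Println(str)      // prints hello

	// n := value.(int) would panic at run time:
	//   interface conversion: interface {} is string, not int
	// Use the comma-ok form when the dynamic type is uncertain.
	n, ok := value.(int)
	fmt.Println(n, ok) // prints 0 false
}
```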
As an illustration of the capability, here's an
if-
else
statement that's equivalent to the type switch that opened this section.
    if str, ok := value.(string); ok {
        return str
    } else if str, ok := value.(Stringer); ok {
        return str.String()
    }
Generality
If a type exists only to implement an interface and will never have exported methods beyond that interface, there is no need to export the type itself. Exporting just the interface makes it clear the value has no interesting behavior beyond what is described in the interface. It also avoids the need to repeat the documentation on every instance of a common method.
In such cases, the constructor should return an interface value
rather than the implementing type.
As an example, in the hash libraries
both
crc32.NewIEEE and
adler32.New
return the interface type
hash.Hash32.
Substituting the CRC-32 algorithm for Adler-32 in a Go program
requires only changing the constructor call;
the rest of the code is unaffected by the change of algorithm.
A similar approach allows the streaming cipher algorithms
in the various
crypto packages to be
separated from the block ciphers they chain together.
The
Block interface
in the
crypto/cipher package specifies the
behavior of a block cipher, which provides encryption
of a single block of data.
Then, by analogy with the
bufio package,
cipher packages that implement this interface
can be used to construct streaming ciphers, represented
by the
Stream interface, without
knowing the details of the block encryption.
The
crypto/cipher interfaces look like this:
    type Block interface {
        BlockSize() int
        Encrypt(src, dst []byte)
        Decrypt(src, dst []byte)
    }

    type Stream interface {
        XORKeyStream(dst, src []byte)
    }
Here's the definition of the counter mode (CTR) stream, which turns a block cipher into a streaming cipher; notice that the block cipher's details are abstracted away:
    // NewCTR returns a Stream that encrypts/decrypts using the given Block in
    // counter mode. The length of iv must be the same as the Block's block size.
    func NewCTR(block Block, iv []byte) Stream
NewCTR applies not
just to one specific encryption algorithm and data source but to any
implementation of the
Block interface and any
Stream. Because they return
interface values, replacing CTR
encryption with other encryption modes is a localized change. The constructor
calls must be edited, but because the surrounding code must treat the result only
as a
Stream, it won't notice the difference.
Interfaces and methods

Since almost anything can have methods attached, almost anything can
satisfy an interface. One illustrative example is in the
http package, which defines the
Handler interface. Any object that implements
Handler can serve HTTP requests.

    type Handler interface {
        ServeHTTP(ResponseWriter, *Request)
    }
For brevity, let's ignore POSTs and assume HTTP requests are always GETs; that simplification does not affect the way the handlers are set up. Here's a trivial but complete implementation of a handler to count the number of times the page is visited.
    // Simple counter server.
    type Counter struct {
        n int
    }

    func (ctr *Counter) ServeHTTP(w http.ResponseWriter, req *http.Request) {
        ctr.n++
        fmt.Fprintf(w, "counter = %d\n", ctr.n)
    }
(Keeping with our theme, note how
Fprintf can print to an
http.ResponseWriter.)
For reference, here's how to attach such a server to a node on the URL tree.
    import "net/http"
    ...
    ctr := new(Counter)
    http.Handle("/counter", ctr)
But why make
Counter a struct? An integer is all that's needed.
(The receiver needs to be a pointer so the increment is visible to the caller.)
    // Simpler counter server.
    type Counter int

    func (ctr *Counter) ServeHTTP(w http.ResponseWriter, req *http.Request) {
        *ctr++
        fmt.Fprintf(w, "counter = %d\n", *ctr)
    }
What if your program has some internal state that needs to be notified that a page has been visited? Tie a channel to the web page.
    // A channel that sends a notification on each visit.
    // (Probably want the channel to be buffered.)
    type Chan chan *http.Request

    func (ch Chan) ServeHTTP(w http.ResponseWriter, req *http.Request) {
        ch <- req
        fmt.Fprint(w, "notification sent")
    }
Finally, let's say we wanted to present on
/args the arguments
used when invoking the server binary.
It's easy to write a function to print the arguments.
    func ArgServer() {
        fmt.Println(os.Args)
    }
How do we turn that into an HTTP server? We could make
ArgServer
a method of some type whose value we ignore, but there's a cleaner way.
Since we can define a method for any type except pointers and interfaces,
we can write a method for a function.
The
http package contains this code:
    // The HandlerFunc type is an adapter to allow the use of
    // ordinary functions as HTTP handlers.  If f is a function
    // with the appropriate signature, HandlerFunc(f) is a
    // Handler object that calls f.
    type HandlerFunc func(ResponseWriter, *Request)

    // ServeHTTP calls f(w, req).
    func (f HandlerFunc) ServeHTTP(w ResponseWriter, req *Request) {
        f(w, req)
    }
HandlerFunc is a type with a method,
ServeHTTP,
so values of that type can serve HTTP requests. Look at the implementation
of the method: the receiver is a function,
f, and the method
calls
f. That may seem odd but it's not that different from, say,
the receiver being a channel and the method sending on the channel.
To make
ArgServer into an HTTP server, we first modify it
to have the right signature.
    // Argument server.
    func ArgServer(w http.ResponseWriter, req *http.Request) {
        fmt.Fprintln(w, os.Args)
    }
ArgServer now has the same signature as
HandlerFunc,
so it can be converted to that type to access its methods,
just as we converted
Sequence to
IntSlice
to access
IntSlice.Sort.
The code to set it up is concise:
http.Handle("/args", http.HandlerFunc(ArgServer))
When someone visits the page
/args,
the handler installed at that page has value
ArgServer
and type
HandlerFunc.
The HTTP server will invoke the method
ServeHTTP
of that type, with
ArgServer as the receiver, which will in turn call
ArgServer (via the invocation
f(w, req)
inside
HandlerFunc.ServeHTTP).
The arguments will then be displayed.
In this section we have made an HTTP server from a struct, an integer, a channel, and a function, all because interfaces are just sets of methods, which can be defined for (almost) any type.
The blank identifier
We've mentioned the blank identifier a couple of times now, in the context of
for
range loops
and maps.
The blank identifier can be assigned or declared with any value of any type, with the
value discarded harmlessly.
It's a bit like writing to the Unix
/dev/null file:
it represents a write-only value
to be used as a place-holder
where a variable is needed but the actual value is irrelevant.
It has uses beyond those we've seen already.
The blank identifier in multiple assignment
The use of a blank identifier in a
for
range loop is a
special case of a general situation: multiple assignment.
If an assignment requires multiple values on the left side, but one of the values will not be used by the program, a blank identifier on the left-hand-side of the assignment avoids the need to create a dummy variable and makes it clear that the value is to be discarded. For instance, when calling a function that returns a value and an error, but only the error is important, use the blank identifier to discard the irrelevant value.
    if _, err := os.Stat(path); os.IsNotExist(err) {
        fmt.Printf("%s does not exist\n", path)
    }
Occasionally you'll see code that discards the error value in order to ignore the error; this is terrible practice. Always check error returns; they're provided for a reason.
    // Bad! This code will crash if path does not exist.
    fi, _ := os.Stat(path)
    if fi.IsDir() {
        fmt.Printf("%s is a directory\n", path)
    }
Unused imports and variables
It is an error to import a package or to declare a variable without using it. Unused imports bloat the program and slow compilation, while a variable that is initialized but not used is at least a wasted computation and perhaps indicative of a larger bug. When a program is under active development, however, unused imports and variables often arise and it can be annoying to delete them just to have the compilation proceed, only to have them be needed again later. The blank identifier provides a workaround.
This half-written program has two unused imports
(
fmt and
io)
and an unused variable (
fd),
so it will not compile, but it would be nice to see if the
code so far is correct.
    package main

    import (
        "fmt"
        "io"
        "log"
        "os"
    )

    func main() {
        fd, err := os.Open("test.go")
        if err != nil {
            log.Fatal(err)
        }
        // TODO: use fd.
    }
To silence complaints about the unused imports, use a
blank identifier to refer to a symbol from the imported package.
Similarly, assigning the unused variable
fd
to the blank identifier will silence the unused variable error.
This version of the program does compile.
    package main

    import (
        "fmt"
        "io"
        "log"
        "os"
    )

    var _ = fmt.Printf // For debugging; delete when done.
    var _ io.Reader    // For debugging; delete when done.

    func main() {
        fd, err := os.Open("test.go")
        if err != nil {
            log.Fatal(err)
        }
        // TODO: use fd.
        _ = fd
    }
By convention, the global declarations to silence import errors should come right after the imports and be commented, both to make them easy to find and as a reminder to clean things up later.
Import for side effect
An unused import like
fmt or
io in the
previous example should eventually be used or removed:
blank assignments identify code as a work in progress.
But sometimes it is useful to import a package only for its side effects,
without any explicit use.
For example, during its init function, the
net/http/pprof package registers HTTP handlers that
provide debugging information. It has an exported API, but most clients
need only the handler registration and access the data through a web page.
To import the package solely for its side effects,
rename the package to the blank identifier:

    import _ "net/http/pprof"

This form of import makes clear that the package is being imported
for its side effects, because there is no other possible use of the
package: in this file, it doesn't have a name.
(If it did, and we didn't use that name, the compiler would reject the program.)
Interface checks
As we saw in the discussion of interfaces above,
a type need not declare explicitly that it implements an interface.
Instead, a type implements the interface just by implementing the interface's methods.
In practice, most interface conversions are static and therefore checked at compile time.
For example, passing an
*os.File to a function
expecting an
io.Reader will not compile unless
*os.File implements the
io.Reader interface.
Some interface checks do happen at run-time, though.
One instance is in the
encoding/json
package, which defines a
Marshaler
interface. When the JSON encoder receives a value that implements that interface,
the encoder invokes the value's marshaling method to convert it to JSON
instead of doing the standard conversion.
The encoder checks this property at run time with a type assertion like:
m, ok := val.(json.Marshaler)
If it's necessary only to ask whether a type implements an interface, without actually using the interface itself, perhaps as part of an error check, use the blank identifier to ignore the type-asserted value:
    if _, ok := val.(json.Marshaler); ok {
        fmt.Printf("value %v of type %T implements json.Marshaler\n", val, val)
    }
One place this situation arises is when it is necessary to guarantee within the package implementing the type that
it actually satisfies the interface.
If a type—for example,
json.RawMessage—needs
a custom JSON representation, it should implement
json.Marshaler, but there are no static conversions that would
cause the compiler to verify this automatically.
If the type inadvertently fails to satisfy the interface, the JSON encoder will still work,
but will not use the custom implementation.
To guarantee that the implementation is correct,
a global declaration using the blank identifier can be used in the package:
var _ json.Marshaler = (*RawMessage)(nil)
In this declaration, the assignment involving a conversion of a
*RawMessage to a
Marshaler
requires that
*RawMessage implements
Marshaler,
and that property will be checked at compile time.
Should the
json.Marshaler interface change, this package
will no longer compile and we will be on notice that it needs to be updated.
The appearance of the blank identifier in this construct indicates that
the declaration exists only for the type checking,
not to create a variable.
Don't do this for every type that satisfies an interface, though.
By convention, such declarations are only used
when there are no static conversions already present in the code,
which is a rare event.

Embedding

Go does not provide the typical, type-driven notion of subclassing,
but it does have the ability to “borrow” pieces of an
implementation by embedding types within a struct or interface.
Interface embedding is very simple.
We've mentioned the
io.Reader and
io.Writer interfaces before;
here are their definitions.
    type Reader interface {
        Read(p []byte) (n int, err error)
    }

    type Writer interface {
        Write(p []byte) (n int, err error)
    }
The
io package also exports several other interfaces
that specify objects that can implement several such methods.
For instance, there is
io.ReadWriter, an interface
containing both
Read and
Write.
We could specify
io.ReadWriter by listing the
two methods explicitly, but it's easier and more evocative
to embed the two interfaces to form the new one, like this:
    // ReadWriter is the interface that combines the Reader and Writer interfaces.
    type ReadWriter interface {
        Reader
        Writer
    }
This says just what it looks like: A
ReadWriter can do
what a
Reader does and what a
Writer
does; it is a union of the embedded interfaces (which must be disjoint
sets of methods).
Only interfaces can be embedded within interfaces.
The same basic idea applies to structs, but with more far-reaching
implications. The
bufio package has two struct types,
bufio.Reader and
bufio.Writer, each of
which of course implements the analogous interfaces from package
io.
And
bufio also implements a buffered reader/writer,
which it does by combining a reader and a writer into one struct
using embedding: it lists the types within the struct
but does not give them field names.
    // ReadWriter stores pointers to a Reader and a Writer.
    // It implements io.ReadWriter.
    type ReadWriter struct {
        *Reader  // *bufio.Reader
        *Writer  // *bufio.Writer
    }
The embedded elements are pointers to structs and of course
must be initialized to point to valid structs before they
can be used.
The
ReadWriter struct could be written as
    type ReadWriter struct {
        reader *Reader
        writer *Writer
    }
but then to promote the methods of the fields and to
satisfy the
io interfaces, we would also need
to provide forwarding methods, like this:
    func (rw *ReadWriter) Read(p []byte) (n int, err error) {
        return rw.reader.Read(p)
    }
By embedding the structs directly, we avoid this bookkeeping.
The methods of embedded types come along for free, which means that
bufio.ReadWriter
not only has the methods of
bufio.Reader and
bufio.Writer,
it also satisfies all three interfaces:
io.Reader,
io.Writer, and
io.ReadWriter.
There's an important way in which embedding differs from subclassing. When we embed a type,
the methods of that type become methods of the outer type,
but when they are invoked the receiver of the method is the inner type, not the outer one.
In our example, when the
Read method of a
bufio.ReadWriter is
invoked, it has exactly the same effect as the forwarding method written out above;
the receiver is the
reader field of the
ReadWriter, not the
ReadWriter itself.
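A short sketch makes the distinction concrete (the types here are invented for illustration): the embedded type's method sees only the embedded value, never the outer struct that contains it.

```go
package main

import "fmt"

type Inner struct {
	name string
}

// Who has receiver Inner; it cannot see any outer struct
// that happens to embed an Inner.
func (in Inner) Who() string { return in.name }

type Outer struct {
	Inner
	name string // a separate field; does not affect Inner's methods
}

func main() {
	o := Outer{Inner: Inner{name: "inner"}, name: "outer"}
	fmt.Println(o.Who()) // prints inner: the receiver is o.Inner, not o
}
```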
Embedding can also be a simple convenience. This example shows an embedded field alongside a regular, named field.
    type Job struct {
        Command string
        *log.Logger
    }
The
Job type now has the
Log,
Logf
and other
methods of
*log.Logger. We could have given the
Logger
a field name, of course, but it's not necessary to do so. And now, once
initialized, we can
log to the
Job:
job.Log("starting now...")
The
Logger is a regular field of the
Job struct,
so we can initialize it in the usual way inside the constructor for
Job, like this,
    func NewJob(command string, logger *log.Logger) *Job {
        return &Job{command, logger}
    }
or with a composite literal,
job := &Job{command, log.New(os.Stderr, "Job: ", log.Ldate)}
If we need to refer to an embedded field directly, the type name of the field,
ignoring the package qualifier, serves as a field name, as it did
in the
Read method of our
ReadWriter struct.
Here, if we needed to access the
*log.Logger of a
Job variable
job,
we would write
job.Logger,
which would be useful if we wanted to refine the methods of
Logger.
    func (job *Job) Logf(format string, args ...interface{}) {
        job.Logger.Logf("%q: %s", job.Command, fmt.Sprintf(format, args...))
    }
This qualification provides some protection against changes made to types embedded from outside; there
is no problem if a field is added that conflicts with another field in another subtype if neither field
is ever used.
Concurrency
Share by communicating
Concurrent programming is a large topic and there is space only for some Go-specific highlights here.
Concurrent programming in many environments is made difficult by the
subtleties required to implement correct access to shared variables.
Go encourages a different approach in which shared values are passed
around on channels and, in fact, never actively shared by separate
threads of execution. Only one goroutine has access to the value at
any given time. Data races cannot occur, by design. To encourage this
way of thinking we have reduced it to a slogan:

Do not communicate by sharing memory; instead, share memory by communicating.
This approach can be taken too far. Reference counts may be best done
by putting a mutex around an integer variable, for instance. But as a
high-level approach, using channels to control access makes it easier
to write clear, correct programs.

One way to think about this model is to consider a typical
single-threaded program running on one CPU. It has no need for
synchronization primitives. Now run another such instance; it too
needs no synchronization. Now let those two communicate; if the
communication is the synchronizer, there's still no need for other
synchronization. Unix pipelines, for example, fit this model
perfectly. Although Go's approach to concurrency originates in
Hoare's Communicating Sequential Processes (CSP), it can also be seen
as a type-safe generalization of Unix pipes.
Goroutines
They're called goroutines because the existing terms—threads, coroutines, processes, and so on—convey inaccurate connotations. A goroutine has a simple model: it is a function executing concurrently with other goroutines in the same address space. It is lightweight, costing little more than the allocation of stack space. And the stacks start small, so they are cheap, and grow by allocating (and freeing) heap storage as required.
Goroutines are multiplexed onto multiple OS threads so if one should block, such as while waiting for I/O, others continue to run. Their design hides many of the complexities of thread creation and management.
Prefix a function or method call with the
go
keyword to run the call in a new goroutine.
When the call completes, the goroutine
exits, silently. (The effect is similar to the Unix shell's
& notation for running a command in the
background.)
    go list.Sort()  // run list.Sort concurrently; don't wait for it.

A function literal can be handy in a goroutine invocation.

    func Announce(message string, delay time.Duration) {
        go func() {
            time.Sleep(delay)
            fmt.Println(message)
        }()  // Note the parentheses - must call the function.
    }

In Go, function literals are closures: the implementation makes sure
the variables referred to by the function survive as long as they are
active.
These examples aren't too practical because the functions have no way of signaling completion. For that, we need channels.
Channels
Like maps, channels are allocated with
make, and
the resulting value acts as a reference to an underlying data structure.
If an optional integer parameter is provided, it sets the buffer size for the channel.
The default is zero, for an unbuffered or synchronous channel.
    ci := make(chan int)            // unbuffered channel of integers
    cj := make(chan int, 0)         // unbuffered channel of integers
    cs := make(chan *os.File, 100)  // buffered channel of pointers to Files
Unbuffered channels combine communication—the exchange of a value—with synchronization—guaranteeing that two calculations (goroutines) are in a known state.
There are lots of nice idioms using channels. Here's one to get us
started. In the previous section we launched a sort in the background.
A channel can allow the launching goroutine to wait for the sort to
complete.

    c := make(chan int)  // Allocate a channel.
    // Start the sort in a goroutine; when it completes, signal on the channel.
    go func() {
        list.Sort()
        c <- 1  // Send a signal; value does not matter.
    }()
    doSomethingForAWhile()
    <-c  // Wait for sort to finish; discard sent value.
Receivers always block until there is data to receive.
If the channel is unbuffered, the sender blocks until the receiver has
received the value.
If the channel has a buffer, the sender blocks only until the
value has been copied to the buffer; if the buffer is full, this
means waiting until some receiver has retrieved a value.
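These blocking rules can be demonstrated with a tiny buffered channel: sends proceed without a receiver until the buffer fills.

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 2) // buffer of two

	ch <- 1 // does not block: buffer has room
	ch <- 2 // does not block: buffer is now full
	// A third send here would block until a receiver drained a value.

	fmt.Println(<-ch, <-ch) // prints 1 2: values arrive in FIFO order
}
```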
A buffered channel can be used like a semaphore, for instance to
limit throughput. In this example, incoming requests are passed
to
handle, which sends a value into the channel, processes
the request, and then receives a value from the channel
to ready the “semaphore” for the next consumer.
The capacity of the channel buffer limits the number of
simultaneous calls to
process.
    var sem = make(chan int, MaxOutstanding)

    func handle(r *Request) {
        sem <- 1    // Wait for active queue to drain.
        process(r)  // May take a long time.
        <-sem       // Done; enable next request to run.
    }

    func Serve(queue chan *Request) {
        for {
            req := <-queue
            go handle(req)  // Don't wait for handle to finish.
        }
    }
Once
MaxOutstanding handlers are executing
process,
any more will block trying to send into the filled channel buffer,
until one of the existing handlers finishes and receives from the buffer.
This design has a problem, though:
Serve
creates a new goroutine for
every incoming request, even though only
MaxOutstanding
of them can run at any moment.
As a result, the program can consume unlimited resources if the requests come in too fast.
We can address that deficiency by changing
Serve to
gate the creation of the goroutines.
Here's an obvious solution, but beware it has a bug we'll fix subsequently:
    func Serve(queue chan *Request) {
        for req := range queue {
            sem <- 1
            go func() {
                process(req)  // Buggy; see explanation below.
                <-sem
            }()
        }
    }
The bug is that in a Go
for loop, the loop variable
is reused for each iteration, so the
req
variable is shared across all goroutines.
That's not what we want.
We need to make sure that
req is unique for each goroutine.
Here's one way to do that, passing the value of
req as an argument
to the closure in the goroutine:
    func Serve(queue chan *Request) {
        for req := range queue {
            sem <- 1
            go func(req *Request) {
                process(req)
                <-sem
            }(req)
        }
    }
Compare this version with the previous to see the difference in how the closure is declared and run. Another solution is just to create a new variable with the same name, as in this example:
    func Serve(queue chan *Request) {
        for req := range queue {
            req := req  // Create new instance of req for the goroutine.
            sem <- 1
            go func() {
                process(req)
                <-sem
            }()
        }
    }
It may seem odd to write
req := req
but it's legal and idiomatic in Go to do this. You get a fresh version of the variable with the same name, deliberately shadowing the loop variable locally but unique to each goroutine.
Going back to the general problem of writing the server,
another approach that manages resources well is to start a fixed
number of
handle goroutines all reading from the request
channel.
The number of goroutines limits the number of simultaneous
calls to
process.
This
Serve function also accepts a channel on which
it will be told to exit; after launching the goroutines it blocks
receiving from that channel.
    func handle(queue chan *Request) {
        for r := range queue {
            process(r)
        }
    }

    func Serve(clientRequests chan *Request, quit chan bool) {
        // Start handlers
        for i := 0; i < MaxOutstanding; i++ {
            go handle(clientRequests)
        }
        <-quit  // Wait to be told to exit.
    }
Channels of channels
One of the most important properties of Go is that a channel is a first-class value that can be allocated and passed around like any other. A common use of this property is to implement safe, parallel demultiplexing.
In the example in the previous section,
handle was
an idealized handler for a request but we didn't define the
type it was handling. If that type includes a channel on which
to reply, each client can provide its own path for the answer.
Here's a schematic definition of type
Request.
    type Request struct {
        args       []int
        f          func([]int) int
        resultChan chan int
    }
The client provides a function and its arguments, as well as a channel inside the request object on which to receive the answer.
    func sum(a []int) (s int) {
        for _, v := range a {
            s += v
        }
        return
    }

    request := &Request{[]int{3, 4, 5}, sum, make(chan int)}
    // Send request
    clientRequests <- request
    // Wait for response.
    fmt.Printf("answer: %d\n", <-request.resultChan)
On the server side, the handler function is the only thing that changes.
    func handle(queue chan *Request) {
        for req := range queue {
            req.resultChan <- req.f(req.args)
        }
    }
There's clearly a lot more to do to make it realistic, but this code is a framework for a rate-limited, parallel, non-blocking RPC system, and there's not a mutex in sight.
Parallelization
Another application of these ideas is to parallelize a calculation across multiple CPU cores. If the calculation can be broken into separate pieces that can execute independently, it can be parallelized, with a channel to signal when each piece completes.
Let's say we have an expensive operation to perform on a vector of items, and that the value of the operation on each item is independent, as in this idealized example.
    type Vector []float64

    // Apply the operation to v[i], v[i+1] ... up to v[n-1].
    func (v Vector) DoSome(i, n int, u Vector, c chan int) {
        for ; i < n; i++ {
            v[i] += u.Op(v[i])
        }
        c <- 1  // signal that this piece is done
    }
We launch the pieces independently in a loop, one per CPU. They can complete in any order but it doesn't matter; we just count the completion signals by draining the channel after launching all the goroutines.
    const numCPU = 4  // number of CPU cores

    func (v Vector) DoAll(u Vector) {
        c := make(chan int, numCPU)  // Buffering optional but sensible.
        for i := 0; i < numCPU; i++ {
            go v.DoSome(i*len(v)/numCPU, (i+1)*len(v)/numCPU, u, c)
        }
        // Drain the channel.
        for i := 0; i < numCPU; i++ {
            <-c  // wait for one task to complete
        }
        // All done.
    }
Rather than create a constant value for numCPU, we can ask the runtime what
value is appropriate.
The function
runtime.NumCPU
returns the number of hardware CPU cores in the machine, so we could write
var numCPU = runtime.NumCPU()
There is also a function
runtime.GOMAXPROCS,
which reports (or sets)
the user-specified number of cores that a Go program can have running
simultaneously.
It defaults to the value of
runtime.NumCPU but can be
overridden by setting the similarly named shell environment variable
or by calling the function with a positive number. Calling it with
zero just queries the value.
Therefore if we want to honor the user's resource request, we should write
var numCPU = runtime.GOMAXPROCS(0)
Be sure not to confuse the ideas of concurrency—structuring a program as independently executing components—and parallelism—executing calculations in parallel for efficiency on multiple CPUs. Although the concurrency features of Go can make some problems easy to structure as parallel computations, Go is a concurrent language, not a parallel one, and not all parallelization problems fit Go's model. For a discussion of the distinction, see the talk cited in this blog post.
A leaky buffer
The tools of concurrent programming can even make non-concurrent
ideas easier to express. Here's an example abstracted from an RPC
package. The client goroutine loops receiving data from some source,
perhaps a network. To avoid allocating and freeing buffers, it keeps
a free list, and uses a buffered channel to represent it. If the
channel is empty, a new buffer gets allocated.
Once the message buffer is ready, it's sent to the server on
serverChan.
var freeList = make(chan *Buffer, 100)
var serverChan = make(chan *Buffer)

func client() {
    for {
        var b *Buffer
        // Grab a buffer if available; allocate if not.
        select {
        case b = <-freeList:
            // Got one; nothing more to do.
        default:
            // None free, so allocate a new one.
            b = new(Buffer)
        }
        load(b)         // Read next message from the net.
        serverChan <- b // Send to server.
    }
}
The server loop receives each message from the client, processes it, and returns the buffer to the free list.
func server() {
    for {
        b := <-serverChan // Wait for work.
        process(b)
        // Reuse buffer if there's room.
        select {
        case freeList <- b:
            // Buffer on free list; nothing more to do.
        default:
            // Free list full, just carry on.
        }
    }
}
The client attempts to retrieve a buffer from
freeList;
if none is available, it allocates a fresh one.
The server's send to
freeList puts
b back
on the free list unless the list is full, in which case the
buffer is dropped on the floor to be reclaimed by
the garbage collector.
(The
default clauses in the
select statements execute when no other case is ready,
meaning that the
selects never block.)
This implementation builds a leaky bucket free list
in just a few lines, relying on the buffered channel and
the garbage collector for bookkeeping.
Errors
Library routines must often return some sort of error indication to
the caller.
As mentioned earlier, Go's multivalue return makes it
easy to return a detailed error description alongside the normal
return value.
It is good style to use this feature to provide detailed error information.
For example, as we'll see,
os.Open doesn't
just return a
nil pointer on failure, it also returns an
error value that describes what went wrong.
By convention, errors have type
error,
a simple built-in interface.
type error interface {
    Error() string
}
A library writer is free to implement this interface with a
richer model under the covers, making it possible not only
to see the error but also to provide some context.
As mentioned, alongside the usual
*os.File
return value,
os.Open also returns an
error value.
If the file is opened successfully, the error will be
nil,
but when there is a problem, it will hold an
os.PathError:
// PathError records an error and the operation and
// file path that caused it.
type PathError struct {
    Op   string // "open", "unlink", etc.
    Path string // The associated file.
    Err  error  // Returned by the system call.
}

func (e *PathError) Error() string {
    return e.Op + " " + e.Path + ": " + e.Err.Error()
}
PathError's
Error generates
a string like this:
open /etc/passwx: no such file or directory
Such an error, which includes the problematic file name, the operation, and the operating system error it triggered, is useful even if printed far from the call that caused it; it is much more informative than the plain "no such file or directory".
When feasible, error strings should identify their origin, such as by having
a prefix naming the operation or package that generated the error. For example, in package
image, the string representation for a decoding error due to an
unknown format is "image: unknown format".
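A minimal sketch of this convention; the package name "codec" and its error messages are invented for illustration:

```go
package main

import (
	"errors"
	"fmt"
)

// decode is a hypothetical decoder; its errors name the package that
// produced them, in the style of "image: unknown format".
func decode(data []byte) error {
	if len(data) == 0 {
		return errors.New("codec: empty input")
	}
	// %q quotes the offending byte in the message.
	return fmt.Errorf("codec: unsupported header %q", data[0])
}

func main() {
	fmt.Println(decode(nil))
	fmt.Println(decode([]byte{'x'}))
}
```

Prefixed like this, the message remains informative even when it surfaces far from the call site.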
Callers that care about the precise error details can
use a type switch or a type assertion to look for specific
errors and extract details. For
PathErrors
this might include examining the internal
Err
field for recoverable failures.
for try := 0; try < 2; try++ {
    file, err = os.Create(filename)
    if err == nil {
        return
    }
    if e, ok := err.(*os.PathError); ok && e.Err == syscall.ENOSPC {
        deleteTempFiles() // Recover some space.
        continue
    }
    return
}
The second
if statement here is another type assertion.
If it fails,
ok will be false, and
e
will be
nil.
If it succeeds,
ok will be true, which means the
error was of type
*os.PathError, and then so is
e,
which we can examine for more information about the error.
Panic
The usual way to report an error to a caller is to return an
error as an extra return value. The canonical
Read method is a well-known instance; it returns a byte
count and an
error. But what if the error is
unrecoverable? Sometimes the program simply cannot continue.
For this purpose, there is a built-in function
panic
that in effect creates a run-time error that will stop the program
(but see the next section). The function takes a single argument
of arbitrary type—often a string—to be printed as the
program dies. It's also a way to indicate that something impossible has
happened, such as exiting an infinite loop.
// A toy implementation of cube root using Newton's method.
func CubeRoot(x float64) float64 {
    z := x/3 // Arbitrary initial value
    for i := 0; i < 1e6; i++ {
        prevz := z
        z -= (z*z*z-x) / (3*z*z)
        if veryClose(z, prevz) {
            return z
        }
    }
    // A million iterations has not converged; something is wrong.
    panic(fmt.Sprintf("CubeRoot(%g) did not converge", x))
}
This is only an example but real library functions should
avoid
panic. If the problem can be masked or worked
around, it's always better to let things continue to run rather
than taking down the whole program. One possible counterexample
is during initialization: if the library truly cannot set itself up,
it might be reasonable to panic, so to speak.
var user = os.Getenv("USER")

func init() {
    if user == "" {
        panic("no value for $USER")
    }
}
Recover
When
panic is called, including implicitly for run-time
errors such as indexing a slice out of bounds or failing a type
assertion, it immediately stops execution of the current function
and begins unwinding the stack of the goroutine, running any deferred
functions along the way. If that unwinding reaches the top of the
goroutine's stack, the program dies. However, it is possible to
use the built-in function
recover to regain control
of the goroutine and resume normal execution.
A call to
recover stops the unwinding and returns the
argument passed to
panic. Because the only code that
runs while unwinding is inside deferred functions,
recover
is only useful inside deferred functions.
One application of
recover is to shut down a failing goroutine
inside a server without killing the other executing goroutines.
func server(workChan <-chan *Work) {
    for work := range workChan {
        go safelyDo(work)
    }
}

func safelyDo(work *Work) {
    defer func() {
        if err := recover(); err != nil {
            log.Println("work failed:", err)
        }
    }()
    do(work)
}
In this example, if
do(work) panics, the result will be
logged and the goroutine will exit cleanly without disturbing the
others. There's no need to do anything else in the deferred closure;
calling
recover handles the condition completely.
Because
recover always returns
nil unless called directly
from a deferred function, deferred code can call library routines that themselves
use
panic and
recover without failing. As an example,
the deferred function in
safelyDo might call a logging function before
calling
recover, and that logging code would run unaffected
by the panicking state.
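A minimal sketch of both behaviors; the helper name capture is invented:

```go
package main

import "fmt"

// capture runs f and returns the value recovered from its panic, if any.
// The deferred closure is the only place recover has any effect.
func capture(f func()) (recovered interface{}) {
	defer func() { recovered = recover() }()
	f()
	return
}

func main() {
	// Outside any panic, recover simply returns nil.
	fmt.Println(recover() == nil)

	// Inside a deferred function during a panic, recover returns the
	// value passed to panic, and execution resumes normally afterward.
	fmt.Println("recovered:", capture(func() { panic("boom") }))
}
```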
With our recovery pattern in place, the
do
function (and anything it calls) can get out of any bad situation
cleanly by calling
panic. We can use that idea to
simplify error handling in complex software. Let's look at an
idealized version of a
regexp package, which reports
parsing errors by calling
panic with a local
error type. Here's the definition of
Error,
an
error method, and the
Compile function.
// Error is the type of a parse error; it satisfies the error interface.
type Error string

func (e Error) Error() string {
    return string(e)
}

// error is a method of *Regexp that reports parsing errors by
// panicking with an Error.
func (regexp *Regexp) error(err string) {
    panic(Error(err))
}

// Compile returns a parsed representation of the regular expression.
func Compile(str string) (regexp *Regexp, err error) {
    regexp = new(Regexp)
    // doParse will panic if there is a parse error.
    defer func() {
        if e := recover(); e != nil {
            regexp = nil    // Clear return value.
            err = e.(Error) // Will re-panic if not a parse error.
        }
    }()
    return regexp.doParse(str), nil
}
If
doParse panics, the recovery block will set the
return value to
nil—deferred functions can modify
named return values. It will then check, in the assignment
to
err, that the problem was a parse error by asserting
that it has the local type
Error.
If it does not, the type assertion will fail, causing a run-time error
that continues the stack unwinding as though nothing had interrupted
it.
This check means that if something unexpected happens, such
as an index out of bounds, the code will fail even though we
are using
panic and
recover to handle
parse errors.
With error handling in place, the
error method (because it's a
method bound to a type, it's fine, even natural, for it to have the same name
as the builtin
error type)
makes it easy to report parse errors without worrying about unwinding
the parse stack by hand:
if pos == 0 { re.error("'*' illegal at start of expression") }
Useful though this pattern is, it should be used only within a package.
Parse turns its internal
panic calls into
error values; it does not expose
panics
to its client. That is a good rule to follow.
By the way, this re-panic idiom changes the panic value if an actual error occurs. However, both the original and new failures will be presented in the crash report, so the root cause of the problem will still be visible. Thus this simple re-panic approach is usually sufficient—it's a crash after all—but if you want to display only the original value, you can write a little more code to filter unexpected problems and re-panic with the original error. That's left as an exercise for the reader.
A web server
Let's finish with a complete Go program, a web server. This one is actually a kind of web re-server. Google provides a service that does automatic formatting of data into charts and graphs. It's hard to use interactively, though, because you need to put the data into the URL as a query. The program here provides a nicer interface to one form of data: given a short piece of text, it calls on the chart server to produce a QR code, a matrix of boxes that encode the text. That image can be grabbed with your cell phone's camera and interpreted as, for instance, a URL, saving you typing the URL into the phone's tiny keyboard.
Here's the complete program. An explanation follows.
package main

import (
    "flag"
    "html/template"
    "log"
    "net/http"
)

var addr = flag.String("addr", ":1718", "http service address") // Q=17, R=18

var templ = template.Must(template.New("qr").Parse(templateStr))

func main() {
    flag.Parse()
    http.Handle("/", http.HandlerFunc(QR))
    err := http.ListenAndServe(*addr, nil)
    if err != nil {
        log.Fatal("ListenAndServe:", err)
    }
}

func QR(w http.ResponseWriter, req *http.Request) {
    templ.Execute(w, req.FormValue("s"))
}

const templateStr = `
<html>
<head>
<title>QR Link Generator</title>
</head>
<body>
{{if .}}
<img src="{{.}}" />
<br>
{{.}}
<br>
<br>
{{end}}
<form action="/" name=f>
    <input maxLength=1024 size=70 name=s>
    <input type=submit value="Show QR" name=qr>
</form>
</body>
</html>
`
The pieces up to
main should be easy to follow.
The one flag sets a default HTTP port for our server. The template
variable
templ is where the fun happens. It builds an HTML template
that will be executed by the server to display the page; more about
that in a moment.
The
main function parses the flags and, using the mechanism
we talked about above, binds the function
QR to the root path
for the server. Then
http.ListenAndServe is called to start the
server; it blocks while the server runs.
QR just receives the request, which contains form data, and
executes the template on the data in the form value named
s.
The template package
html/template is powerful;
this program just touches on its capabilities.
In essence, it rewrites a piece of HTML text on the fly by substituting elements derived
from data items passed to
templ.Execute, in this case the
form value.
Within the template text (
templateStr),
double-brace-delimited pieces denote template actions.
The piece from
{{if .}}
to
{{end}} executes only if the value of the current data item, called
. (dot),
is non-empty.
That is, when the string is empty, this piece of the template is suppressed.
The two snippets
{{.}} say to show the data presented to
the template—the query string—on the web page.
The HTML template package automatically provides appropriate escaping so the
text is safe to display.
The rest of the template string is just the HTML to show when the page loads. If this is too quick an explanation, see the documentation for the template package for a more thorough discussion.
And there you have it: a useful web server in a few lines of code plus some data-driven HTML text. Go is powerful enough to make a lot happen in a few lines.
NAME
roar_vs_new_simple, roar_vs_new_playback - Create new VS objects
SYNOPSIS
#include <roaraudio.h>

roar_vs_t * roar_vs_new_simple(const char * server, const char * name,
                               int rate, int channels, int codec, int bits,
                               int dir, int * error);

roar_vs_t * roar_vs_new_playback(const char * server, const char * name,
                                 int rate, int channels, int codec, int bits,
                                 int * error);
DESCRIPTION
These functions create a new VS object with an already connected data connection. The functions connect to the server server with application name name. They take the audio parameters as arguments rate, channels, codec, bits. roar_vs_new_simple() takes the stream direction as parameter dir. roar_vs_new_playback() is equivalent to roar_vs_new_simple() except that it does not take the direction parameter but uses ROAR_DIR_PLAY (waveform playback). It may be implemented as a macro.
PARAMETERS
server
The server to connect to. NULL for defaults.
name
The application name. This should be something the user can use to identify the application. It MUST NOT be the application's binary name or the value of argv[0].
rate, channels, codec, bits
The audio parameters for the new stream: sampling rate, number of channels per frame, used codec and number of bits per sample.
dir
This is the stream direction. Common values include ROAR_DIR_PLAY for waveform playback, ROAR_DIR_MONITOR for waveform monitoring, ROAR_DIR_RECORD for waveform recording. For MIDI, ROAR_DIR_MIDI_IN and ROAR_DIR_MIDI_OUT are used.
SEE ALSO
roar_vs_new_from_file(3), roar_vs_close(3), roarvs(7), libroar(7), RoarAudio(7).
3D Arrays in C language – How to declare, initialize and access elements
In our previous tutorials we discussed that the C programming language allows arrays of multiple dimensions, such as 1D and 2D arrays. Similarly, we can have three or more dimensions. A 3D array is like a three-dimensional figure, e.g. a cube or a cuboid.
A 3D array has rows and columns like a 2D array and another dimension which contains sets of these 2D rows and columns. The following figure illustrates the three dimensions of a 3D array:
In this figure, we have an integer array. We have sets of numbers present in the rows and columns of this array. Like a two-dimensional array, the dimensions are written as 3x3x3, where the numbers are: number of row-and-column sets * number of rows * number of columns. The dimensions can take different values, like 2x3x2 or 4x3x3. To better understand how the data of a 3D array is stored in memory, have a look at the explained figure of the array mentioned above:
The above figure shows the memory mapping of a 3D array. Consecutive memory addresses differ by 2 because the size of the int data type is 2 bytes on the compiler used in this example (on most modern compilers, int is 4 bytes). The starting address has been chosen arbitrarily; in reality, the system may place the array at other memory locations. We also see that the numbers are stored in a linear fashion in the given array. In the upcoming sections, we shall learn more about declaring, accessing and displaying the data in a 3D array.
Declaration, Initialization and storing data
The declaration of a 3D array takes place in a very similar manner to that of any other array, like 1D or 2D array. A datatype is given to array to specify how the data in the array should be interpreted. Here is the syntax for declaration of an array:
int arr[3][3][3];
The int specifies that the data stored in the array will be of integer type.
arr is the variable name under which all the data is stored.
The first [3] refers to the number of sets of rows and columns in the array, the second [3] refers to the number of rows of the inner array, and the third [3] refers to the number of columns of the inner array. This is static memory allocation: we are allocating the array a size equal to 3x3x3, that is, the array can store 3x3x3 = 27 elements. The three [][][] brackets specify that the array is three dimensional.
Initialization of a 3D array needs to be done as follows:
//the following layout is used for proper readability
int arr[3][3][3] = {
    {   //set 1
        {4, 10, 6},
        {17, 0, 12},
        {5, 56, 13}
    },
    {   //set 2
        {10, 23, 15},
        {2, 5, 9},
        {1, 16, 20}
    },
    {   //set 3
        {5, 16, 0},
        {4, 35, 19},
        {8, 13, 2}
    }
};
Here we have initialized an integer array. Array of any type can be initialized using this format.
We already know that data can be stored in an array using for loops. Since we have a 3D array, we use 3 for loops for that purpose. Here is the syntax for storing data in a 3D array:
for(i = 0; i < 3; i++){
    for(j = 0; j < 3; j++){
        for(k = 0; k < 3; k++){
            scanf("%d", &arr[i][j][k]);
        }
    }
}
Note that if we do not initialize the array or store data in it by user input, it will result in an output of garbage values.
Accessing and Reading an array
We are already familiar with the concept of accessing an array using subscripts or index numbers from 2D arrays. Accessing a 3D array is done in a very similar manner. The data present in set 2, row 1, column 2 (all indices zero-based) is referred to as:
arr[2][1][2];
For the purpose of reading all the elements of the array, we use three nested for loops. This syntax is given below:
for(i = 0; i < 3; i++){ //the outer loop is for the sets of rows and columns
    for(j = 0; j < 3; j++){ //this middle loop is for the rows
        for(k = 0; k < 3; k++){ //this inner loop is for the columns
            printf("%d ", arr[i][j][k]);
        }
        printf("\n");
    }
    printf("\n");
}
Program to initialize 3D array with User input, process and print it
Here is a simple program of 3D array which adds two arrays and stores the result in another array. One array has already been initialized and the other one will have data input by the user.
#include <stdio.h>

int main()
{
    //the first array has been initialized.
    int a[3][3][3] = {
        {   //set 1
            {4, 10, 6},
            {17, 0, 12},
            {5, 56, 13}
        },
        {   //set 2
            {10, 23, 15},
            {2, 5, 9},
            {1, 16, 20}
        },
        {   //set 3
            {5, 16, 0},
            {4, 35, 19},
            {8, 13, 2}
        }
    };
    //second array shall have user input values and third array stores the sum of the other two
    int b[3][3][3], c[3][3][3];
    int i, j, k;

    //input elements into the second array
    printf("Enter the elements in the array:\n");
    for(i = 0; i < 3; i++){
        for(j = 0; j < 3; j++){
            for(k = 0; k < 3; k++){
                scanf("%d", &b[i][j][k]);
                c[i][j][k] = a[i][j][k] + b[i][j][k]; //summing up the two arrays
            }
        }
    }

    printf("The sum of the two arrays : \n");
    for(i = 0; i < 3; i++){ //the outer loop is for the sets of rows and columns
        for(j = 0; j < 3; j++){ //this middle loop is for the rows
            for(k = 0; k < 3; k++){ //this inner loop is for the columns
                printf("%d ", c[i][j][k]);
            }
            printf("\n");
        }
        printf("\n");
    }
    return 0;
}
Output:

Enter the elements in the array:
6 7 34
4 9 6
0 2 3
18 9 7
13 4 0
1 23 65
8 6 3
9 3 5
3 15 32
The sum of the two arrays:
10 17 40
21 9 18
5 58 16

28 32 22
15 9 9
2 39 85

13 22 3
13 38 24
11 28 34
Learning never exhausts the mind. So, do come back for more. Hope this helps and you like the tutorial. Do ask any queries in the comment box and provide your valuable feedback.
Share and subscribe.
Keep Coding!! Happy Coding!! 🙂
#include <OSnLNode.h>
Inheritance diagram for OSnLNodeNumber:
Definition at line 1073 of file OSnLNode.h.
default constructor.
default destructor.
Calculate the function value given the current variable values. This is an abstract method which is required to be implemented by the concrete operator nodes that derive or extend from this OSnLNode class.
value is the value of the number
Definition at line 1077 of file OSnLNode.h.
In the C++ class the type is real (double).
Definition at line 1079 of file OSnLNode.h.
Later, e.g. for stochastic programming, we may wish to give an id to a number.
Definition at line 1083 of file OSnLNode.h.
UnivEq
Safer universal equivalence for Scala & Scala.JS. (zero-dependency)
Created: Feb 2015.
Open-Sourced: Apr 2016.
Motivation
In Scala, all values and objects have the following methods:
equals(Any): Boolean
==(Any): Boolean
!=(Any): Boolean
This means that you can perform nonsensical comparisons that, at compile-time, you know will fail.
You're likely to quickly detect these kinds of errors when you're writing them for the first time, but the larger problems are:
- valid comparisons becoming invalid after refactoring your data.
- calling a method that expects universal equality to hold with a data type in which it doesn't (eg. a method that uses
Set under the hood).
It's a breeding ground for bugs.
But Scalactic/Scalaz/Cats/X already has an
Equal class
This isn't a replacement for the typical Equal typeclass you find in other libraries. Those define methods of equality, whereas this provides a proof that the underlying type's .equals(Any): Boolean implementation correctly defines equality. For example, in a project of mine, I use UnivEq for about 95% of data and scalaz.Equal for the remaining 5%.
Why distinguish? Knowing that universal equality holds is a useful property in its own right. It means a more efficient equals implementation, because typeclass instances aren't used for comparison, which means they're dead code and can be optimised away along with their construction (whether defs or lazy vals). Secondly, 99.99% of classes with a sensible .equals also have a sensible .hashCode implementation, which means it's a good constraint to apply to methods that will depend on it (eg. if you call .toSet).
Provided Here
This library contains:
- A typeclass
UnivEq[A].
- A macro to derive instances for your types.
- A compilation error if a future change to your data types' fields or their types loses universal equality.
- Proofs for most built-in Scala & Java types.
- Ops ==* / !=* to be used instead of == / != so that an incorrect type comparison yields a compilation error.
- A few helper methods that provide safety during construction of maps and sets.
- Optional modules for Scalaz and Cats.
Example
import japgolly.univeq._

case class Foo[A](name: String, value: Option[A])

// This will fail at compile-time.
// It doesn't hold for all A...
//implicit def fooUnivEq[A]: UnivEq[Foo[A]] = UnivEq.derive

// ...It only holds when A has universal equivalence.
implicit def fooUnivEq[A: UnivEq]: UnivEq[Foo[A]] = UnivEq.derive

// Let's create data with & without universal equivalence
trait Whatever
val nope = Foo("nope", Some(new Whatever{}))
val good = Foo("yay", Some(123))

nope ==* nope // This will fail at compile-time.
nope ==* good // This will fail at compile-time.
good ==* good // This is ok.

// Similarly, if you made a function like:
def countUnique[A: UnivEq](as: A*): Int =
  as.toSet.size

countUnique(nope, nope) // This will fail at compile-time.
countUnique(good, good) // This is ok.
Installation

No dependencies:
// Your SBT
libraryDependencies += "com.github.japgolly.univeq" %%% "univeq" % "1.3.0"

// Your code
import japgolly.univeq._
Scalaz:
// Your SBT
libraryDependencies += "com.github.japgolly.univeq" %%% "univeq-scalaz" % "1.3.0"

// Your code
import japgolly.univeq.UnivEqScalaz._
Cats:
// Your SBT
libraryDependencies += "com.github.japgolly.univeq" %%% "univeq-cats" % "1.3.0"

// Your code
import japgolly.univeq.UnivEqCats._
Usage
Create instances for your own types like this:
implicit def xxxxxxUnivEq[A: UnivEq]: UnivEq[Xxxxxx[A]] = UnivEq.derive
Change UnivEq.derive to UnivEq.deriveDebug to display derivation details.

If needed, you can create instances with UnivEq.force to tell the compiler to take your word.

Use ==* / !=* in place of == / !=.

Add : UnivEq to type params that need it.
Future Work
- Get rid of ==* / !=*; write a compiler plugin that checks for UnivEq at each == / !=.
- Add a separate HashCode typeclass instead of just using UnivEq for maps, sets and similar.
Note: I'm not working on these at the moment, but they'd be fantastic contributions.
Win32 API functions that allocate a string enable you to free the string by using a method such as
LocalFree. Platform invoke handles such parameters differently. For platform invoke calls, make the parameter an IntPtr type instead of a String type. Use methods that are provided by the System.Runtime.InteropServices.Marshal class to convert the type to a string manually and free it manually.
Managed definitions to unmanaged functions are language-dependent, as you can see in the following examples. For more complete code examples, see Platform Invoke Examples.
Imports System.Runtime.InteropServices

Public Class Win32
    Declare Auto Function MessageBox Lib "user32.dll" _
        (ByVal hWnd As Integer, _
         ByVal txt As String, ByVal caption As String, _
         ByVal Typ As Integer) As IntPtr
End Class
using System.Runtime.InteropServices;

[DllImport("user32.dll")]
public static extern IntPtr MessageBox(int hWnd, String text,
    String caption, uint type);
using namespace System::Runtime::InteropServices;

[DllImport("user32.dll")]
extern "C" IntPtr MessageBox(int hWnd, String* pText,
    String* pCaption, unsigned int uType);
The DllImportAttribute fields are:

BestFitMapping: Enables or disables best-fit mapping.

CallingConvention: Specifies the calling convention to use in passing method arguments. The default is WinAPI, which corresponds to __stdcall for the 32-bit Intel-based platforms.

CharSet: Controls name mangling and the way that string arguments should be marshaled to the function. The default is CharSet.Ansi.

EntryPoint: Specifies the DLL entry point to be called.

ExactSpelling: Controls whether an entry point should be modified to correspond to the character set. The default value varies by programming language.

PreserveSig: Controls whether the managed method signature should be transformed into an unmanaged signature that returns an HRESULT and has an additional [out, retval] argument for the return value. The default is true (the signature should not be transformed).

SetLastError: Enables the caller to use the Marshal.GetLastWin32Error API function to determine whether an error occurred while executing the method. In Visual Basic, the default is true; in C# and C++, the default is false.

ThrowOnUnmappableChar: Controls throwing of an exception on an unmappable Unicode character that is converted to an ANSI "?" character.
For detailed reference information, see DllImportAttribute Class.
I was inspired to write this control after looking at both Matthew Gullett's spell checking engine and Steve King's CSpellEdit control.
I liked CSpellEdit's simplicity, but the integration with the CEdit control felt a little better in CFPSSpellingEditCtrl, so I combined the two, updated the spell checking engine used to Hunspell instead of MySpell, added user dictionary code, and generalized it as much as possible.
To use this edit box, there are three steps:
To satisfy the LGPL, these directions build hunspell as a DLL, just to be safe.
You'll also need to get a dictionary from the same site (unless Mozilla Firefox or OpenOffice.org is already installed on your machine, and even then, you may wish to provide a dictionary with your program) and put the en_US.dic and en_US.aff (or the equivalent files for the language you chose) in one of the directories mentioned below. Note that Hunspell can use MySpell-compatible dictionaries.
(Steps 1 and 2 only need to be done if you choose to get updated source for Hunspell. If you choose not to, then copy the hunspell and win_api directories from the demo project into a convienent place in your solution.)
Choose either 2a or 2b at this point for the next 3 steps.
WINVER, _WIN32_WINNT, and _WIN32_WINDOWS must be defined appropriately. If you do not do this step, HSpellEdit.h will stop with a C1189 error (it hits a #error line). This is to prevent other compilation errors later. The code takes advantage of capabilities of Windows 98, Windows 2000, and/or Windows ME when they are available, but does not *require* them. However, these defines need to be set in order to use these capabilities at all.
#include "HSpellEdit.h" where needed. In your CWinApp-derived class, call CHSpellEdit::Initialize(this) in InitInstance (placing it after any call to CWinApp::SetRegistryKey()), and call CHSpellEdit::Terminate() in ExitInstance. Then hook the control up to your dialogs (e.g. those shown via DoModal) in DoDataExchange using DDX.
The only requirement that HSpellEdit actually places on your users is that if they run Windows 95 or Windows NT 4.0, that they also have an updated version (better than 4.71) of SHFolder.dll installed. The URL to get the US English version is in the comments at the appropriate point in the code. HSpellEdit.cpp detects the Windows version of the user and degrades accordingly. No code that requires Windows 98, Me, or 2000 is relied upon, but code that uses the capabilities of Windows 98, Me, or 2000 is used when these operating systems are available.
You'll want to create IDS_HSPELLEDIT_SHFOLDER_ERROR in your string table to define the message to use if the check for SHFolder.dll fails. If there is none, a default message will be used.
If you use the DLL, please make sure that you use the same number for any IDS_HSPELLEDIT_* string that the DLL uses. The DLL provides strings in US English, but the strings can be overridden or translated by your application. You may wish to specify a string table in another language for translated strings, and I'd appreciate the translations being sent to me.
Three preprocessor definitions affect finding the dictionary files:
The preprocessor definition HSPELLEDIT_NO_REGISTRY_CHECK turns off loading and saving the path information to the registry.
The preprocessor definition HSPELLEDIT_NO_EXTERNAL_PROGRAMS_CHECK turns off loading the path information from the registry entries for OpenOffice.org 2.3, 2.2, 2.0, or Mozilla Firefox 2.0.0.0 or greater. (They are tried in that order.)
The preprocessor definition HSPELLEDIT_NO_SELFREF_CHECK turns off checking the executable directory (and the "dic" subdirectory of the same) for the dictionary files.
Here is the order in which the directories are checked:
CSIDL_COMMON_APPDATA
CSIDL_APPDATA
Locations marked with * are not used to find the user dictionary, which will be written to CSIDL_APPDATA if it does not exist.
If no regular dictionaries exist, no spell checking will happen, but your program will otherwise work.
The directories that will be searched in if:
A) Program is in C:\Program Files\HSpellEditTest
B) Username is "Curtis", running on a US English version of Windows XP
C) OpenOffice 2.3 and Mozilla Firefox 2.0 are installed
are these:
Both a .dic and .aff file are required for your language in one of these directories to be a usable dictionary for HSpellEdit.
If your language is not available in a particular directory, HSpellEdit will check for other sublanguages of your language in descending order (Windows specifies the order, HSpellEdit does not), then defaults to US English (en_US) before going on to the next directory.
Once a dictionary is found, its location is saved to the registry, unless HSPELLEDIT_NO_REGISTRY_CHECK was defined.
If you are writing an installer for a program using HSpellEdit, it is probably best to put your dictionaries in locations 4 and 5, rather than locations 2 or 3.
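A rough sketch of that search-and-fallback order may make it concrete. This is a hypothetical helper for illustration only; HSpellEdit itself implements this logic in C++ against the CSIDL locations above:

```python
import os

def find_dictionary(directories, preferred_langs, default="en_US"):
    """Return the first (directory, language) pair that has both the
    .dic and .aff files, falling back to the default language last."""
    for directory in directories:
        # sublanguages in descending preference, then en_US, per directory
        for lang in list(preferred_langs) + [default]:
            dic = os.path.join(directory, lang + ".dic")
            aff = os.path.join(directory, lang + ".aff")
            if os.path.isfile(dic) and os.path.isfile(aff):
                return directory, lang
    return None  # no usable dictionary: spell checking is disabled
```

A directory only counts once both files are present, matching the requirement stated above.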
To internationalize the menus, put these constants in the string table:
IDS_HSPELLEDIT_SUGGESTIONS
IDS_HSPELLEDIT_NO_SUGGESTIONS
IDS_HSPELLEDIT_ADD
IDS_HSPELLEDIT_IGNORE
You only need to add these constants to your string table if MFC does not already define them for your language. (They will be in MFC71xxx.dll, where xxx is a 3-letter language code, and in the MFC source at atlmfc\src\mfc\l.xxx\prompts.rc if MFC DOES define them for your language.)
ID_EDIT_CUT
ID_EDIT_COPY
ID_EDIT_PASTE
ID_EDIT_SELECT_ALL
ID_EDIT_UNDO
I tried to allow for use in "least-privilege-required" situations, which is why CSIDL_APPDATA (which should be writable except in extremely limited circumstances) is used as one place to store the dictionaries.
This article, along with any associated source code and files, is licensed under A Public Domain dedication
http://www.codeproject.com/Articles/21381/Spell-Checking-Edit-Control-Using-HunSpell
hey there,
I have the following code:
import java.io.*;

public class Karakters {
    public static void main(String[] args) throws IOException {
        FileWriter in = new FileWriter("karakters.txt");
        PrintWriter outfile = new PrintWriter(in);
        char x = 'A';
        for (int i = 0; i < 300; i++) {
            System.out.println((i + 1) + " " + (char)(x + i));
            outfile.println((char)(x + i));
        }
        outfile.close();
    }
}
This class creates a txt file named karakters.txt, and puts 300 consecutive characters, starting at 'A', in the file, one character each line.
The following class is supposed to generate binary representations of each character using Java's Integer.toBinaryString(character) method. However, this is not working properly. For characters whose ASCII values range between 127 and 160 (inclusive), the same value is returned, and it equals 111111 (decimal 63).
import java.io.*;

public class Aski {
    public static void main(String[] args) throws IOException {
        FileReader x = new FileReader("karakters.txt");
        BufferedReader outfile = new BufferedReader(x);
        int i = 0;
        while (outfile.ready()) {
            i++;
            String s = outfile.readLine();
            System.out.println(i + " " + s.charAt(0) + " " + (Integer.toBinaryString(s.charAt(0))));
        }
    }
}
There are other ranges, like 256-365, for which the same occurs. I expect that a character generated as follows:
char x = (char)(270);
would return a binary equivalent of 270 after being passed to the Integer.toBinaryString() method.
please help! thank you.
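A side note that may help with the diagnosis: the value 63 is the code for '?', which is what FileWriter emits (via the platform default charset) for any character the charset cannot represent; Integer.toBinaryString itself is fine. The same replacement effect can be sketched in Python, assuming a Windows-1252-style default charset:

```python
# chr(270) is 'Ď'; Windows-1252 has no byte for it, so encoding with
# replacement turns it into '?' (decimal 63) before it ever hits the file.
ch = chr(270)
encoded = ch.encode("cp1252", errors="replace")
print(encoded, encoded[0])  # b'?' 63
```

Reading such a file back then yields character 63 for every unmappable character, which matches the ranges reported above.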
https://www.daniweb.com/programming/software-development/threads/335526/converting-chars-into-binary-strings-and-back
Introduction
A reminder that all code examples come directly from the Timer CodeSandbox I put together. You are encouraged to open it, fork it, play around with the code, follow along, whatever helps you learn best!
In my first article in the React Hooks Series I wrote about the useState hook. This iteration will focus on useEffect (my Timer example calls the useRef hook first, but I think it makes more sense to understand what's happening with useEffect before we tackle useRef).
Part Two - useEffect
What is useEffect?
From the React docs: "The Effect Hook lets you perform side effects in function components."
If you’re familiar with React class lifecycle methods, you can think of useEffect Hook as componentDidMount, componentDidUpdate, and componentWillUnmount combined.
Does useEffect run after every render? Yes! By default, it runs both after the first render and after every update.
In my own words: useEffect runs anytime something changes. This could be the user interacting with a form, a button, etc. State changing, like counter in my Timer app counting down every second, or start being set from false to true when the user hits START. Or the component itself being loaded (mounted) or unloaded (unmounted) from the screen.
Getting started
Add useEffect to our React import.
import React, { useState, useEffect } from "react";
Let's take a look at the first useEffect function.
useEffect(() => {
  if (start === true) {
    pauseTimer.current = counter > 0 && setTimeout(() => setCounter(counter - 1), 1000)
  }
  return () => {
    clearTimeout(pauseTimer.current)
  }
}, [start, counter, setCounter])
A lot going on here. Remember that we set the state of start to false. Therefore, even if our Timer component updates, this useEffect() will not run until start === true.
Inside of our if (start === true) conditional block is the meat and potatoes of our useEffect (and really the whole point of the app!):
pauseTimer.current = counter > 0 && setTimeout(() => setCounter(counter - 1), 1000)
However, we are going to ignore pauseTimer.current for now (this logic is tied to our PAUSE button and the useRef hook).
Let's examine the following:
When start === true, run the code inside the block:
counter > 0 && setTimeout(() => setCounter(counter - 1), 1000)
If counter > 0, run:
setTimeout(() => setCounter(counter - 1), 1000)
(Remember that we use setCounter(input) to update counter. Let's say a user selects 10 seconds: input === 10, and when the user hits submit, counter === 10.)
This is where the magic happens. Counter is 10. setTimeout accepts a function to run and a time in milliseconds. When that time expires, setTimeout will run the function. In our case setTimeout accepts our setCounter() function and will run it after 1000 milliseconds (1 second).
setCounter(counter - 1) will run after 1 second, changing 10 to 9.
Every single time the state of ANYTHING changes/updates, useEffect is called. Therefore, when counter changes from 10 to 9, useEffect is called again! Is 9 greater than 0? YES! Then run the code to the right of counter > 0, which happens to be our setTimeout function. This process repeats until counter > 0 is no longer true. When counter === 0, counter is no longer greater than 0 (the condition is false) and the setTimeout to the right is skipped.
Next, take a look at this.
return () => {
  clearTimeout(pauseTimer.current)
}
What is this return function inside our useEffect?
This has to do with cleanup. I had to deal with this in my GIF FIT app (the inspiration for this entire series of React hooks articles), where I am dealing with several setTimeouts (6 in total) running in sync.
They are separate components in my app. When one timer ended, another began. I quickly discovered that if you do not "clean up" certain functions inside a useEffect, you will get something called a "memory leak". Basically, my setTimeouts were still running in the background, taking up memory. NOT GOOD.
Luckily, useEffect has a simple solution. It accepts a final function which can clean up effects from the previous render and when the component finally unmounts. The above function inside our useEffect is effectively killing the setTimeout and avoiding any memory leaks! Cool, huh?
Putting it together
{ start === false && counter !== null && counter !== 0
  ? <button style={{fontSize: "1.5rem"}} onClick={handleStart}>START</button>
  : null }
{ start === true && counter !== 0
  ? <button style={{fontSize: "1.5rem"}} onClick={handlePause}>PAUSE</button>
  : null }
In Part One, useState(), I showed how we rendered the START button if start === false && counter !== null && counter !== 0, which gives us access to onClick={handleStart}. From there the flow is:
- User clicks START.
- const handleStart = () => { setStart(true) }
- start === true
- State changes and useEffect() runs.
- Our setTimeout decrements counter by one.
- State changes and useEffect runs again.
- Repeat until counter === 0 and is no longer greater than 0.
Yay! Our timer is working!
I'm about to blow your mind. Maybe. Did you know you can have multiple useEffect functions in the same component? Once my timer is finished (counter === 0), I needed a way to reset the state of start back to false.
Enter a second useEffect!
useEffect(() => {
  if (counter === 0) {
    setStart(false)
  }
}, [counter, setStart])
Pretty straightforward. When useEffect detects that counter === 0, it will call setStart(false), which means start === false.
This is a good time to talk about what [start, counter, setCounter] and [counter, setStart] do at the end of our two useEffects. These are dependencies that we are calling inside our useEffects, and we are explicitly telling our useEffects that when one of these changes, do your thing! You don't always need that array to wrap up a useEffect, but it's a good habit to get into. And if you want a useEffect to run only once, you place an empty array [] at the end of your useEffect function; because there aren't any dependencies, it won't know to run when state changes again.
Wrapping up
Thank you for reading Part Two of my React Hooks series. If you missed Part One, please check it out and let me know what you think.
Part Three will focus on the useRef hook and I am really excited about this one. The useRef hook is my least comfortable in terms of use and understanding. But so far it has been one of my favorites to work with. I am really impressed by how much the useRef hook can accomplish.
As always, thank you for making it this far and I look forward to any questions, comments, corrections, and even criticism!
HAPPY CODING
Top comments (2)
Hey James,
thanks for your articles, I love how you take time to explain things!
I am wondering about one thing:
Did you have a particular reason to add the "setters" for the state variables created with useState (I am talking about setStart() and setCounter() ) to the dependency arrays of both useEffect() calls?
This is a great question, and I want to do some more digging into this and promise to reply again with some more resources that better explain, but the short answer for now is: the linter asks me to put in those dependencies.
I guess you could argue that you don’t have to. But according to the docs “it’s totally free”. And for me I don’t like having linter warnings.
I will see if I can dig up any other better explanations and resources. Maybe someone reading this will have a better answer too. Thanks for asking and I will get back to you soon!
https://dev.to/jamesncox/react-hook-series-useeffect-in2
In this light, it's even more important to come up with ideas and projects that empower people to make better decisions and improve their standard of living. Machine learning (AI, to use the marketing term) has a huge potential to provide us with smarter tools, tools that can make small decisions and present their operators with higher level choices. It's not like getting a bigger hammer, it's a hammer that suggests what nails to use. This post is the start of forging such a hammer.
Small scale farming
Farming is one of the areas where the digital "disruption" (read: influence) is still fairly limited, especially on smaller scales. Available solutions are often too expensive or complex to deploy on fields that really should only feed the family and some close friends! In particular, the space around permacultures relies on human experts or infographics. If this information were available to computers, however, imagine what a robot like farmbot could do with it!
Permacultures
To allow the design and creation of sustainable ecosystems, a wide range of information is required. This information is collected inside the framework that is permaculture. This framework provides principles created, implemented, and expanded by a worldwide community of enthusiasts, but its origins go back to the 1970s book "Permaculture One" by Bill Mollison and David Holmgren.
While those principles touch every aspect of living in an "integrated, evolving system of perennial or self-perpetuating plant and animal species useful to man" (cited from Permaculture One), our experiment focuses on the agricultural aspect: creating plant polycultures. To find out more about the definition, origin, and anything else head over to!
In essence, followers of these principles divide their living space into zones, each of which can then be refined to reflect a more natural, cooperative, and self-perpetuating approach. Polycultures are only a part of the tools available to shape these zones.
Polycultures
An ecosystem inspired by nature naturally provides a wealth of improvements over regular growing. The plants complement each other, and those so-called guilds yield benefits such as:
- Fewer or no pesticides required
- Growth nutrients are exchanged between plants (fertilization)
- Larger and more nutritious yields due to better growing conditions
- Improved efficiency
… or in a word: sustainability.
There are a ton of approaches and ways to achieve this, most of which require a lot of experience and humans “looking at the plants” as well as in-depth knowledge of the plants and soil conditions. A daunting task for many newcomers to that space and unavailable to those without experts nearby. Here, we are going to focus on an engineering approach, starting with an MVP 😁 to create valid (and sometimes new) guilds from easily accessible information.
Important influences include:
- Wind
- Sunlight
- Temperature and climate
- Soil properties and type
- Water levels, availability, and rainfall
- Insects, pollination, flowering and harvesting times
Information about the plants themselves is hard to come by, but we were lucky that the people over at collected and structured data on about 2000 plants. This will be the data we are working with, and it includes:
- Root depth and spread. Clearly roots need water, but they can only work well if they don't compete for the same reservoir. These need to be as far apart as possible.
- Flowering and harvest times. The most efficient garden generates yield year-round without “empty” months. Yet there can’t be too many plants flowering at the same time since this adds stress to their companions!
- Soil types and PH values. Plants won’t grow large in suboptimal soil conditions and while they can be changed (e.g. with fertilization), a guild has to overlap on soil properties as much as possible.
- USDA hardiness zones. These are zones defined by the US Department of Agriculture that classify climate, elevation and soil properties. This is a great proxy for these values and they need to overlap!
Goals, and how to get there
Based on the ideas of a colleague of mine, we - a group of seven Microsoft employees - took on the challenge of starting this project during a one-week internal hackathon. The goal was to create a system that is able to generate an optimized polyculture for home use. Since then, our efforts have come a long way and it has become - mathematically speaking - a combinatorics problem, where we want to maximize yield while satisfying constraints for each plant as well as possible.
However, these constraints change with every plant that is added to the mix, letting us determine the "value" of a mix of plants only after the combination is known. Effectively it's necessary to come up with a combination first, then start to evaluate. By comparing these evaluations (e.g. via a score) we could then rank and find the best ones.
However, using a brute-force approach (enumerating all possible combinations of a given set), the runtime grows factorially, at O(n!), which means that it might take decades or longer to calculate!
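To put a rough number on that growth (the 2000-plant catalogue is from above; the cap of 8 companions per guild is an assumption for illustration), a quick count of the candidate combinations:

```python
from math import comb

# candidate guilds of 1 to 8 companion plants from a 2000-plant catalogue
candidates = sum(comb(2000, k) for k in range(1, 9))
print(f"{candidates:.2e}")  # on the order of 10^21 subsets to enumerate
```

And that is before scoring each candidate, which is exactly why enumeration is hopeless here.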
So to generate this result set in a reasonable time, a more structured approach is required. One option is to frame the problem as an optimization problem and apply a metaheuristic! In our case, genetic algorithms with their permutating power seem well suited.
Guild Wars
For our approach an individual (i.e. a potential solution to the problem) is essentially a guild - a set of plants growing together, which will be chosen at random from all the available plants.
Next, we require a fitness function to evaluate the fitness of each of those individuals! This is the tricky part, since we want guilds to score highly when they work out in the real world. As mentioned above, we only have a few features to go on, so - for now - the fitness function will be evaluated against the main plant:
def fitness(individual, main_plant_id):
    if len(individual) == 0:
        return (-10000,)
    main_plant = plants_by_id[main_plant_id]  # resolve the plant id to all plant data
    score = 0
    # initialize the score with penalties/bonusses for known companions
    if "_companions" in main_plant:
        compatible_ids = [plants_by_commonname[p["Companion"]]["id"]
                          for p in main_plant["_companions"] if p["Compatible"] == "Yes"]
        incompatible_ids = [plants_by_commonname[p["Companion"]]["id"]
                            for p in main_plant["_companions"] if p["Compatible"] == "No"]
        for i in individual:
            if i in compatible_ids:
                score += 1000
            if i in incompatible_ids:
                score -= 1000
    # add the main plant to the mix for further scoring
    plants = [main_plant] + [plants_by_id[i] for i in individual]
    # standard deviation on roots
    roots = [root_depth(p) for p in plants]
    root_depth_spread = np.std(roots)
    # overlap in soil requirements
    soil_intersect = reduce(lambda x, y: x.intersection(y),
                            [enum_set(ph(p)) for p in plants], set())
    # overlap in USDA zones
    usda_intersect = reduce(lambda x, y: x.intersection(y),
                            [enum_set(usda(p)) for p in plants], set())
    # common memberships
    membership_intersect = reduce(lambda x, y: x.union(y),
                                  [memberships(p) for p in plants],
                                  set()).intersection(memberships(main_plant))
    # common uses (max 10)
    uses = set()
    for plant in individual:
        u = [u["Use"] for u in plant["_uses"]] if "_uses" in plant else []
        uses.update(u)
    # finalize scoring
    score += 10 - len(uses)
    score += len(soil_intersect)
    score += len(membership_intersect) * 100
    score += len(usda_intersect)
    score *= (root_depth_spread if not np.isnan(root_depth_spread) else 0)
    # make score independent of guild size
    return (score / len(individual),)
In short, the fitness function prefers high overlap in soil ph values, USDA zones, known guild membership and use category, as well as a known compatibility (while punishing known incompatibility heavily). Currently the only thing to prefer a large spread (i.e. a high standard deviation) in is the root depth. As a first draft this fitness function serves its purpose, but it will likely get a lot more complex over time.
Fitness of known compatible (x) and incompatible (o) plants
As expected, the function separates known compatibilities from incompatibilities.
Mutation and Crossover
Mutation and recombination/crossover are two very important operators that introduce some randomness into the population. This can be very sophisticated, but for now they either add another plant to the guild with a 50% chance (mutation) or use something called a two-point crossover.
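For illustration, a two-point crossover amounts to swapping the middle slice of two parents. This is a simplified, deterministic version of what DEAP's cxTwoPoint does with random cut points; the plant names are made up:

```python
def two_point_crossover(a, b, cut1, cut2):
    """Swap the slice between cut1 and cut2 of the two parents."""
    child_a = a[:cut1] + b[cut1:cut2] + a[cut2:]
    child_b = b[:cut1] + a[cut1:cut2] + b[cut2:]
    return child_a, child_b

parent1 = ["apple", "clover", "garlic", "chive", "comfrey"]
parent2 = ["pear", "mint", "yarrow", "dill", "fennel"]
c1, c2 = two_point_crossover(parent1, parent2, 1, 3)
print(c1)  # ['apple', 'mint', 'yarrow', 'chive', 'comfrey']
print(c2)  # ['pear', 'clover', 'garlic', 'dill', 'fennel']
```

Each child keeps the outer genes of one parent and inherits the middle segment from the other.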
Selection
Simple - only the best µ (MU in the code below) parents are selected to create the next population. This makes sense as a starting point but might change in the future when there's deeper insight into the problem.
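In plain Python, this truncation selection is just a sort and a slice (a sketch of what DEAP's tools.selBest provides; guild names and fitness values are made up):

```python
def sel_best(population, fitnesses, mu):
    # rank individuals by fitness, highest first, and keep the top mu
    ranked = sorted(zip(fitnesses, population), reverse=True)
    return [ind for _, ind in ranked[:mu]]

pop = ["guild_a", "guild_b", "guild_c", "guild_d"]
fits = [3.0, 10.0, 7.0, 1.0]
print(sel_best(pop, fits, 2))  # ['guild_b', 'guild_c']
```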
Let the science begin!
The framework we used to work with genetic algorithms is called DEAP, which is nicely documented and comes with a lot of prebuilt tools and algorithms.
Its basis is a toolbox class that registers the individual steps and calls them as appropriate. Since we are working with a custom population (a list of string ids), the initial population has to be created using a specialized function (see below). To get to know all the ins and outs of the toolbox etc, please check out their excellent tutorials. This is our first shot at solving this, so we started with a randomized set of plants!
creator.create("FitnessMax", base.Fitness, weights=(1.0,))  # Maximization, change weights to -1.0 to minimize
creator.create("Individual", list, fitness=creator.FitnessMax)

toolbox = base.Toolbox()

# Population creation functions
toolbox.register("individual_guess", initIndividual, creator.Individual)
toolbox.register("population_guess", randomPopulation, list, toolbox.individual_guess, main_plant_id)

# GA operators
toolbox.register("evaluate", fitness)
toolbox.register("mate", tools.cxTwoPoint)
toolbox.register("mutate", mutSet)
toolbox.register("select", tools.selBest)

# GA parameters
NGEN = 15      # number of generations
MU = 100       # number of individuals to select each round
LAMBDA = 1000  # population size of each round
CXPB = 0.5     # crossover percentage
MUTPB = 0.5    # mutation percentage

pop = toolbox.population_guess()
hof = tools.ParetoFront()  # hall of fame to keep the absolute best individuals

# statistics output
stats = tools.Statistics(lambda ind: ind.fitness.values)
stats.register("avg", np.mean, axis=0)
stats.register("std", np.std, axis=0)
stats.register("min", np.min, axis=0)
stats.register("max", np.max, axis=0)

algorithms.eaMuPlusLambda(pop, toolbox, MU, LAMBDA, CXPB, MUTPB, NGEN, stats, halloffame=hof)
Each generation produces nice fitness statistics, and in order to see if the algorithm converges properly, the standard deviation has to go down while the average should go up 🖖:
gen nevals avg                std               min                max
0   100    [1134.57323392]    [10255.55109512]  [-92857.64144998]  [25895.75306028]
1   1000   [12593.19622808]   [12802.6623178]   [5873.33588997]    [120468.65434407]
2   1000   [50537.4606231]    [31176.55527062]  [25895.75306028]   [224921.81501051]
3   1000   [138444.15673245]  [44540.70143468]  [98907.68243759]   [325421.74504245]
4   1000   [272949.4089385]   [45394.31901551]  [224390.73227114]  [369011.82987117]
5   1000   [366902.72634401]  [44232.56633966]  [325870.60453851]  [454566.55372268]
6   1000   [457777.01465387]  [42910.76824181]  [425915.83348434]  [623003.72087292]
7   1000   [566856.99261163]  [49954.55599536]  [526699.08958865]  [722884.90679263]
8   1000   [664096.17029895]  [45880.71422853]  [624681.71366634]  [724121.04313177]
9   1000   [724327.28180196]  [4460.43376957]   [722884.90679263]  [746783.22084422]
10  1000   [728910.31333747]  [9025.99582626]   [724121.04313177]  [746783.22084422]
11  1000   [747677.8868938]   [8901.81479744]   [746783.22084422]  [836249.82580286]
12  1000   [756624.54738967]  [27993.18428005]  [746783.22084422]  [836249.82580286]
13  1000   [833629.86000009]  [15286.39903347]  [746783.22084422]  [842653.06040163]
14  1000   [837044.78445508]  [2083.14210111]   [836249.82580286]  [843809.60703609]
15  1000   [843002.72909254]  [2057.02973603]   [842653.06040163]  [863168.44631953]
Plotting these numbers shows great convergence over these 15 generations and we could even try and run a bit longer to get better solutions.
So far the experiment looks very promising, we will continue working on validating the results in order to achieve real usefulness 😊.
To our future selves
The current results are quite promising and already show a good direction. We have several next steps planned:
- Improve data quality (normalize verbose fields, etc.)
- Improve the scoring to reflect real world usefulness
- Generally improve the algorithm and validate results 😊
As this is a really hard real-world problem, we also needed to reduce complexity, and thus the decision was made to simplify some things:
- Plant placement and size are left out completely
- Soil and climate data is mostly approximated by the USDA hardiness zones
- Growth time is left out as a factor
- At most one target plant is required (i.e. I have an apple tree, what goes with it?)
- Guilds are optimized for a single purpose (e.g. food only)
As we continue the project there will be more of those that come into consideration, but so far we think we are on a promising path. There will be more blog posts covering this topic, so look out for those!
Until then there will be more Rust content, and a new format! To stay updated on those exciting developments, subscribe via RSS (like feedly) and follow me on twitter.
https://blog.x5ff.xyz/blog/automated-farming-permaculture/
As a result I got the same dreaded message:
Error XPST0003 in, at line 1, column 13: syntax error, unexpected @, expecting end of file
Has any of the readers experienced similar issues with QXmlQuery? Is this a known limitation of QtXmlPatterns? Other queries seem to be working as expected.
PS. I’m attaching the source code of QDomNodeModel, BSD licensed. I hope QXmlQuery starts working one day 😉
Update 2010/01/08: Yay! It actually works. Qt requires expressions like this: /root/element/@attr instead of /root/element@attr to get attributes. I was under the impression that the latter was correct XPath but it turns out that I was wrong.
17 Comments
Hi,
Great work!
Except that I can’t get the element when searching with an attribute query.
For example:
/optional/@optional
The query is positive but : QDomElement elem = m.toDomNode(res.current().toNodeModelIndex()).toElement();
Returns an empty element??
Could this be fixed pls. ?
Thanks,
Frank Vieren
Hi,
I will look into this next week. Could you provide an example XML document in which the problem manifests itself? I have already tested some basic cases and was under the impression that @attr queries work fine.
Cheers,
—
Stan
Hi Stan,
Thx for the quck reply!
That would be great! Thx.
Here is the xml file:
So, I have success when doing a query like this:
query.setQuery(“//productdefinition”)
but when doing this:
query.setQuery("//functionkeys/@title")
or with value:
query.setQuery("//functionkeys/@title='Function keys'");
are not successful.
The evaluator finds results but when validating the dom element the element is null/empty.
I have tried it further for other attributes and combinations but with no result. Only querying tag elements is successful.
I would appreciate it a lot if you could fix it. It is remarkable that Qt doesn't support XmlPatterns/XPath and QDomDocument together in a direct way!
So your work is very useful!
BTW I’m using Qt 4.7 and mingw as build environment.
Thanks in advance,
Have a nice weekend.
Frank
frank.vieren@telenet.be
frank.vieren@barco.com
Hi Stan,
I’m not able to copy past my XML file in the message but if you provide me your email address I can send it to you.
Thanks again for looking at this.
Best regards,
Frank
Hi Stan,
It seems my problem is solved when I’m using the following syntax:
//functionkeys[@title]
instead of
//functionkeys/@title
Thanks for following up. I’ll let you know if I encounter ‘real’ problems.
Best regards,
Frank Vieren
frank.vieren@barco.com
frank.vieren@telenet.be
Wow, it’s quite interesting as I had the exact opposite problem. Or maybe not …. Are you using square brackets literally or to denote that attribute query is optional?
Hi Stan,
You really need the square brackets when the query is based on the attribute.
Thus exactly as I stated before:
//functionkeys[@title]
I also did something like this:
//functionKeys[@value=’F1′]
with success.
So far I have not encountered anymore problems.
Best regards,
Frank
Good day!
Your module is very good. But can You upgrade QDomNodeModel to support namespace-s ?
You are my last hope. Thanks! =)
It seems feasible, implementing the missing namespaceBindings() method should do the trick. Can you provide a unit test for me? Then, I’ll give it a go.
Hi!
To support namespaces I made small upgrade.
Just replace n.nodeName() with n.localName() in the function QDomNodeModel::name(…).
Don't forget to set namespaceProcessing when setting content for QDomDocument.
Hi,
No more problems with the parsing but as my XML file gets longer the queries take much more time to get resolved. Any remedy? The file is 94kbytes (500 lines…)
thanks,
Best regards,
Frank
First, you have to determine whether this is QDomNodeModel’s or QtXmlPatterns’ fault. Are queries on the same XML file faster when run using one of the other techniques:
??
Hi
Appreciate your efforts here, really had my hopes up.
I have to agree with Frank, unfortunately this is very very slow, possibly something like O^N performance.
Either the QAbstractXmlModel is poorly designed, or something with your implementation. But having had played a bit with the filetree example, I think the QAbstractXmlModel has a major flaw. Unfortunately my profiling skills are poor and I don’t know the best way to diagnose the cause – any suggestions?
Quick (< 1 second):
// 300kb file with 100s of nodes
char* filename = "C:\\Development\\Projects\\Looksie\\Documentation\\MockupGenerator\\Language Tools.bmml";
QXmlNamePool pool;
QXmlQuery query(pool);
query.bindVariable("tree", QVariant(QString(filename)));
query.setQuery("doc($tree)//@controlTypeID/string()");
QString s;
query.evaluateTo(&s);
qDebug() << s;
Slow ( ~ 10 seconds):
QXmlNamePool pool;
QDomNodeModel model(pool, m_currentDocument);
QXmlQuery query(pool);
query.bindVariable("doc", model.fromDomNode(mockupElement));
query.setQuery("$doc//@controlTypeID/string()");
QXmlResultItems items;
query.evaluateTo(&items);
Hi, I commented on your original QtCentre post with a solution to degrading performance.
Very good. I was about to suggest what amounts to pretty much the same thing – do an initial pass over whole tree and enumerate all nodes assigning them increasing ordinals and then just compare ordinals. Line/column number approach is even better because it doesn’t require fiddling with QDomNode trying to put an ordinal in there 🙂 Although it makes me wonder what will happen when you start inserting nodes dynamically…
For my use the Dom is always read-only so not an issue.
Once again thanks for your good work.
Crazy that QDom* doesn’t already include query support…
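The ordinal idea discussed in the comments boils down to a single depth-first pass over the tree. Here is a language-neutral sketch in Python (the actual fix would live in QDomNodeModel's C++ compareOrder implementation; the Node class is a stand-in for QDomNode):

```python
class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def assign_ordinals(root):
    """One DFS pass assigning each node an increasing document-order
    ordinal, so ordering two nodes becomes an integer comparison."""
    ordinals, stack, counter = {}, [root], 0
    while stack:
        node = stack.pop()
        ordinals[node.name] = counter
        counter += 1
        stack.extend(reversed(node.children))  # keep document order
    return ordinals

tree = Node("root", [Node("a", [Node("a1")]), Node("b")])
ords = assign_ordinals(tree)
print(ords["a"] < ords["a1"] < ords["b"])  # True
```

With the ordinals precomputed, comparing document order is O(1) instead of walking the tree on every comparison, which is where the degrading performance came from. As noted above, this works because the DOM is read-only; dynamic insertion would invalidate the ordinals.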
https://adared.ch/qdomnodemodel-qxmlquery/
|
Writing a Managed Wrapper for COM Components
Introduction
Up until a couple of months ago, I was a convinced spectator of the whole .NET revolution. I thought to myself that, like any other new technology, it would take some time until it became mature enough to use. But as I learned more about .NET, it became clear that this was something bigger than any of Microsoft's previous attempts to revolutionize the whole industry.
Fortunately, Microsoft put a lot of time and effort into making sure that managed code can talk to unmanaged code. You can still call your existing COM components from a .NET program using an RCW (Runtime Callable Wrapper). The RCW will take care of marshalling the .NET calls into COM calls. You can do this with very little development time. The performance hit should also be minimal in most cases. If your COM component is "heavy", i.e. if it does a lot of work inside, then the overhead is insignificant.
On the other hand, if your COM component contains some "chatty interfaces" where all it does just returns some values, the overhead can be significant compared to the component execution time. On the top of that, CLRs (Common Language Runtime) default behavior is to use Proxy / Stub combination for calling COM components, so even if your COM component is apartment-threaded you still pay the penalty of marshaling your calls. You can overwrite the CLRs default behavior with STAThreadAttribute, but not for all cases.
Moreover, if you create many .NET clients for your COM component it generates more problems because your managed code clients cannot take the full advantage of the .NET Framework features like: parametirized constructros, inheritance, or static methods.
Thus, if you decide not to use RCW there are basically two options: one is to fully migrate your code to .Net, and second option that works with C++ COM components is to write a managed code wrapper around it. I will give you an example of how easy it is to use the second option.
Writing a Wrapper for COM Components
Suppose that you have the following COM component written in C++ using ATL.
// SimpleATL.h : Declaration of the CSimpleATL #ifndef __SIMPLEATL_H_ #define __SIMPLEATL_H_ #include "resource.h" // main symbols #include
/////////////////////////////////////////////////////////// // CSimpleATL class ATL_NO_VTABLE CSimpleATL : public CComObjectRootEx<CCOMSINGLETHREADMODEL>, public CComCoClass<CSIMPLEATL &CLSID_SimpleATL,>, public IDispatchImpl<ISIMPLEATL &LIBID_SIMPLECOMLib &IID_ISimpleATL,,> { public: CSimpleATL() { } DECLARE_REGISTRY_RESOURCEID(IDR_SIMPLEATL) DECLARE_PROTECT_FINAL_CONSTRUCT() BEGIN_COM_MAP(CSimpleATL) COM_INTERFACE_ENTRY(ISimpleATL) COM_INTERFACE_ENTRY(IDispatch) END_COM_MAP() // ISimpleATL public: STDMETHOD(get_ThreadID)(/*[out, retval]*/ long *pVal); STDMETHOD(GetManagerName)(/*[out, retval]*/ BSTR* pbstrName); STDMETHOD(SetManagerName)(/*[in]*/ BSTR bstrName); private: _bstr_t _bstrName; }; #endif //__SIMPLEATL_H_
In order to write a managed code wrapper around your class, here is what you have to do:
1. In Visual Studio .Net create a blank solution; lets say it called COM Wrapper.
2. Add a new Visual C++ project to the solution using Managed C++ Class Library Template. Let's call it Simple .Net
3. Copy both SimpleATL.h and SimpleATL.cpp from your old ATL project directory to the new Simple.Net project directory.
4. Add the files to your project by selecting Add Existing Item from your project contact menu. Both files should appear in Solution Explorer window under the Simple.Net project.
5. Now it is time to do some modifications to the SimpleATL.h and SimpleATL.cpp. Specifically, you would have to delete a bunch of stuff, like macros and all the inheritance relationship. You dont need that anymore in your header file. What you need instead is additional include files -- atlctl.h and atlbase.h. Here is what the header looks like after the editing:
// SimpleATL.h : Declaration of the CSimpleATL #ifndef __SIMPLEATL_H_ #define __SIMPLEATL_H_ #include <atlbase.h> #include <atlctl.h> #include <comdef.h> //////////////////////////////////////////////////////////// // CSimpleATL class CSimpleATL { public: CSimpleATL(){} // ISimpleATL public: STDMETHOD(get_ThreadID)(/*[out, retval]*/ long *pVal); STDMETHOD(SetManagerName)(/*[in]*/ BSTR bstrName); STDMETHOD(GetManagerName)(/*[out, retval]*/ BSTR* pbstrName); private: _bstr_t _bstrName; }; #endif //__SIMPLEATL_H_
As you can see it became a lot smaller than it used to be.
The only editing you have to do with the SimpleATL.cpp file is to delete the reference to SimpleCOM.h file, so the line: #include "SimpleCOM.h" should be gone.
6. Now it is time to create the actual wrapper class. Notice that Visual Studio .Net created the initial class definition with the key __gc, which means that this class considered a managed code. Add the #include statement for the SimpleATL.h file just above the using namespace System; statement. Add another namespace: using namespace System::Runtime::InteropServices; You need InteropServices for converting types from managed to unmanaged code and vice versa.
7. Add a private pointer to the CSimpleATL class. Class1 has to handle the lifetime of the object by instantiating CSimpleATL pointer in the constructor and deleting it inside the destructor.
8. Add a proxy function for every function that you would like to call from the CSimpleATL class. Here is how it turns out:
// SimpleNet.h #pragma once #include "SimpleATL.h" using namespace System; using namespace System::Runtime::InteropServices; namespace SimpleNet { public __gc class Class1 { public: Class1() {_pSimpleATL = new CSimpleATL();} ~Class1() {delete _pSimpleATL;} public: void get_ThreadID (/*[out, retval]*/ Int32* pVal) { long res; HRESULT hRes = _pSimpleATL->get_ThreadID(&res); if(FAILED(hRes)) { Marshal::ThrowExceptionForHR(hRes); } else { IntPtr ptrInt((void*)&res); *pVal = Marshal::ReadInt32(ptrInt); } } void SetManagerName (/*[in]*/ String* bstrName) { IntPtr ptrBstr = Marshal::StringToBSTR(bstrName); HRESULT hRes = _pSimpleATL->SetManagerName((BSTR)ptrBstr.ToPointer()); if(FAILED(hRes)) { Marshal::ThrowExceptionForHR(hRes); } } void GetManagerName(/*[out, retval]*/ String** pbstrName) { BSTR pbstrTemp; HRESULT hRes = _pSimpleATL->GetManagerName(&pbstrTemp); if(FAILED(hRes)) { Marshal::ThrowExceptionForHR(hRes); } else { (*pbstrName) = Marshal::PtrToStringBSTR(pbstrTemp); Marshal::FreeBSTR(pbstrTemp); } } private: CSimpleATL* _pSimpleATL; }; }
All the parameters to Class1 functions are of managed types now. There is some conversion required from managed to non-managed types and vice versa for what is called non-blittable types. VB BSTR for example, is considered a non-blittable type, therefore it requires conversion. This is where the Marshal class becomes handy. Not only it can convert types but it also can throw a .NET type exception based on COM HRESULT return. Pretty cool.
You can now call this code from any .NET application. All you have to do is to add a reference to the Simple.Net.dll in your .NET project, declare and instantiate the object for Class1, and start calling functions.
You have to decide for yourself whether you need to use RCW and call your components from managed code, merge your code into .NET, or write a wrapper. Note however, that the last option is only available for code written in C++. New Visual C++ compiler is the only compiler that can compile managed and unmanaged code at the same time.
References
Microsoft Corporation
.NET Framework Developer's Guide
Blittable and Non-Blittable Types
Steve Busby and Edward Jezierksi
Microsoft Corporation
August 2001
Microsoft .NET/COM Migration and Interoperability
Stanley B. Lippman
MSDN Magazine
February 2002
Still in Love with C++
Modern Language Features Enhance the Visual C++ .NET Compiler
Jeffrey Richter
Applied Microsoft .Net Framework Programming
Microsoft Press 2002
CoolPosted by Legacy on 05/07/2002 12:00am
Originally posted by: Shaique
Great stuff !Posted by Legacy on 04/19/2002 12:00am
Originally posted by: Gast�n Nusimovich
Alex:
This is the right way to go for all of us. Thanks a lot for bringing some light to a rather cloudy aspect of the migration from Windows DNA (legacy ?) code to .NET development.
|
http://www.codeguru.com/cpp/com-tech/activex/wrappers/article.php/c5565/Writing-a-Managed-Wrapper-for-COM-Components.htm
|
CC-MAIN-2017-17
|
refinedweb
| 1,230
| 57.77
|
import games
Moderator: Gaijin Punch
Postby RadiantSvgun » Mon Nov 02, 2009 11:25 pm
Gaijin Punch wrote:Rule of thumb if someone wants to hire you as a contractor: You should get paid about 50% more... if not double. It's like fucking in a whore house without a condom.
Postby zinger » Wed Feb 17, 2010 11:19 am
Postby Gaijin Punch » Wed Feb 17, 2010 12:23 pm
Rade wrote:Finally received a reply by posting in a thread at that Gaijin forum:
Postby RadiantSvgun » Thu Jul 15, 2010 5:17 am
Return to “Score Attack”
Users browsing this forum: No registered users and 1 guest
|
http://forums.gamengai.com/viewtopic.php?f=9&t=1954&sid=632b5f42644ed12e9286fdb03469494e&start=25
|
CC-MAIN-2018-22
|
refinedweb
| 107
| 63.02
|
list of objects vs list of pointers in C++
Hi guys, today we will talk about the topic list of objects vs list of pointers in C++.
Before moving to the differences, let us discuss what is a list and what is a linked list.
LIST: A list is also referred to as an array. It is a collection of elements of the same data types. Continuous memory is allocated to all the elements in a list. And hence, random access to any element is possible.
LINKED LIST: A linked list is ordered collection of elements of the same type but they are connected using pointers. So, no continuous memory is allocated. And hence, only sequential access is possible. In other words, if I want to print the third element, then I need to traverse from the first element to the second element and then I will reach the third element.
List Of Objects vs List Of Pointers
Now, let us discuss the difference between a list of objects and a list of pointers.
List Of Object: It means an array that will store the objects of a class.
Example:
#include<bits/stdc++.h> using namespace std; class student { int marks; public: void getdata() { cin>>marks; } int return_data() { return marks; } }; int main() { student s[5]; // LINE 1 int sum; for(int i=0;i<5;i++) { s[i].getdata(); } for(int i=0;i<5;i++) { sum=sum+s[i].return_data(); } cout<<"sum = "<<sum; }
In this example, I have created a class student. The student class has data member i.e. marks and 2 member functions getdata and return_data. The getdata function will take the marks of the student from the user. The return_data function will return the marks of the student.
In line1, I have created an array S that will store the data of 5 objects of student type.
This program will take marks as input from the user of 5 students and then calculate the sum of marks obtained by all the students.
Let us assume that the user gives the input as:
1 2 3 4 5
So the output for this input will be:
sum = 15
List Of Pointers: It means an array that will store the addresses of different variables.
Example,
#include<bits/stdc++.h> using namespace std; int main() { int *arr[5]; // LINE 1 int sum=0,a[5]; for(int i=0;i<5;i++) { cin>>a[i]; arr[i]=&a[i]; } for(int i=0;i<5;i++) { sum=sum+*arr[i]; } cout<<"sum = "<<sum; }
I have taken an array a that will take 5 inputs from the user. In line1, I have created another array, but this array will store the addresses and hence will be called as an array of pointers.
In first for loop, the user will give the input which will be stored in array a, and then its address will be copied in the other array arr.
This program will display the sum of inputs given by the user.
Let us assume that the user gives the input as:
1 3 5 7 9
Then the output for this input will be:
sum = 25
That is all for today, so we learned about the list of objects vs pointers in C++ language.
|
https://www.codespeedy.com/list-of-objects-vs-list-of-pointers-in-cpp/
|
CC-MAIN-2022-27
|
refinedweb
| 546
| 69.72
|
I am following this example which I got from
class MyContext : DbContext { public DbSet<Post> Posts { get; set; } public DbSet<Tag> Tags { get; set; } protected override void OnModelCreating(ModelBuilder modelBuilder) { modelBuilder.Entity<PostTag>() .HasKey(t => new { t.PostId, t.TagId }); modelBuilder.Entity<PostTag>() .HasOne(pt => pt.Post) .WithMany(p => p.PostTags) .HasForeignKey(pt => pt.PostId); modelBuilder.Entity<PostTag>() .HasOne(pt => pt.Tag) .WithMany(t => t.PostTags) .HasForeignKey(pt => pt.TagId); } } public class Post { public int PostId { get; set; } public string Title { get; set; } public string Content { get; set; } public List<PostTag> PostTags { get; set; } } public class Tag { public string TagId { get; set; } public List<PostTag> PostTags { get; set; } } public class PostTag { public int PostId { get; set; } public Post Post { get; set; } public string TagId { get; set; } public Tag Tag { get; set; } }
Now my question is how would I construct my query to get posts given TagId? Something like:
public List<Post> GetPostsByTagId(int tagId) { //linq query here }
Please keep in mind this is EF7.
My first advice is change your collection properties to
ICollection<T> instead of
List<T>. You can find a really good explanation in this post.
Now going back to your real problem, this is how I would do your query:
public List<Post> GetPostsByTadId(int tagId) { using(var context=new MyContext()) { return context.PostTags.Include(p=>p.Post) .Where(pt=> pt.TagId == tagId) .Select(pt=>pt.Post) .ToList(); } }
You will need to eager load
Post navigation property because EF7 doesn't support lazy loading, and also, as @Igor recommended in his solution, you should include
PostTags as a
DbSet in your context:
public DbSet<PostTags> PostTags { get; set; }
Explanation:
Your query start in
PostTags table because is in that table where you can find all the post related with an specific tag. See the
Include like a inner join with
Post table. If you apply a join between
PostTags and
Posts filtering by
TagId, you will get the columns that you need. With the
Select call you are telling you only need the columns from
Post table.
If you remove the
Include call, it should still work. With the
Include you're telling explicitly that you need to do a join, but with the
Select, the Linq provider of EF is smart enough to see it needs to do implicitly a join to get the
Posts columns as result.
|
https://entityframeworkcore.com/knowledge-base/36725543/many-to-many-query-in-entity-framework-7
|
CC-MAIN-2022-40
|
refinedweb
| 393
| 52.94
|
Pygmentize to clipboard for macOS.
Project description
pygmentize to clipboard for macOS
This utility package is designed to send code through
pygmentize and save it as rich HTML in your macOS
clipboard. It can then be pasted easily anything accept styled HTML input like Evernote, OneNote, Gmail, etc.
Usage
pygclip offers a couple ways to receive code:
- From a file
- Via the standard input
- Pulling it from your clipboard
Examples below:
File
def foo(): return 'bar'
$ pygclip -s monokai -l python path/to/file.py
Standard input
$ pygclip -s monokai -l python def foo(): bar()
Clipboard
$ pygclip -s monokai -l python -c
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|
https://pypi.org/project/pygclip/0.0.3/
|
CC-MAIN-2021-43
|
refinedweb
| 125
| 67.89
|
Introduction
As you may remember, I’ve been struggling with a tricky build at my current client. Whilst many of the problems have been around how we’ve used the tools available to us (well, Ant), I realised that Ant itself might just not be up to the job. Once a build becomes non-trivial, you inevitably want to start using it as a program, something Ant itself is not really good at.
“SCons”: is a build API form python. Being based on a proper programming API the promise of testable build processes, nice syntax (no more ghastly constructs just to do the build equivalent of an @if@) and above all a system more intuitive to the average programmer than Ant is.
When looking to adopt any open source tool it has to prove itself to fairly quickly – in the case of SCons I set it a few challenges and see how it stacked up.
Test 1: Getting my attention
Looking at the docs, SCons claims that building a Java program should be this simple:
Java('classes', 'src')
Which, when placed in a file called @SConstruct@ and run Will the @scons@ command. The source files in directory @src@ get compiled and placed in directory @classes@. So that locks really nice. Taken together with the fact that SCons’ builders (of which @Java@ is but one) handles dependencies, and you can integrate normal python code (the @SConstruct@ file is just a python source file) makes it a very attractive proposition. It is also the build tool used by Doom 3 – so if nothing else it seems able to handle building large C programs.
So now I’m interested – test passed.
Test 2: Doing something trivial in less than five minutes
I fell at the first hurdle. When running this for a simple “Hello World” example, I got the following error:
scons scons: Reading SConscript files ... scons: done reading SConscript files. scons: Building targets ... javac -d classes -source path src srcHello.java 'javac' is not recognised as an internal or external command, operable program or batch file. scons: *** [classesHello.class] Error 1 scons: building terminated because of errors.
I ran the command @javac -d classes -source path src srcHello.java@ on the command line and it worked. I send a mail to the mailing list and got a very helpful reply very promptly to tell me that SCons doesn’t look in either the @PATH@ or in @JAVA_HOME@ environment variables for the @javac@ command, which seems frankly bizarre. Anyway, I got a fix (thanks Jean-Baptiste), which involves me creating my own @Environment@ (which in SCons represents the tools available, paths, environment variables and the like) and explicitly telling it where to find @javac@:
import os env = Environment(ENV = {'PATH' : os.environ['PATH']}) env.Java('classes', 'src')
This worked, but took significantly more than 5 minutes. However the mailing list had already proven itself to be friendly and responsive, which is important when looking at any open source tool. Test failed.
Test 3: Prove that doing something non-trivial is possible in less that 30 minutes
So, the simple task now accomplished, I turned my attention to something a little more complex. I decided to take a medium sized Java project which has little in the way of external dependencies, and attempt to create a build file for it. I chose log4j. Running the same script on the log4j source code however bombed out with the following less than helpful error:
scons: *** [classesorgapachelog4jAppender.class] Error 1 scons: building terminated because of errors.
Again I look at the command line it was trying to run, and realised it was explicitly passing all source files on the command line – which ends up being rather large. When running manually, I get the expected “@The input line is too long@”. Again, the mailing list responded quickly at informed me that this is a known problem (thanks Werner Schiendl) and has been “logged(scons – [ 797472 ] For command lengths > cmd.exe can handle)”: as a bug. So I either have to apply this patch to be able to compile log4j, or I have to heavily edit the @Java@ build in SCons to use a file to contain source files rather than passing them in.
The more worrying aspect is that I would expect any non-trivial Java program to quickly hit the same problem. This leads me to believe that SCons has not been used on non-trivial Java programs – I might get this problem fixed/worked around, but what other problems might I encounter? Test failed.
Conclusion
It may not of passed all of its tests, but the responsive nature of the mailing list, taken together with how horrible Ant can be to use, makes me want to continue looking at it. It clearly works for C & C++ programs, but it’s Java support needs some love. Right now without work I couldn’t suggest its use in a serious Java development project, but it shouldn’t need that much work to get it that level. In the meantime, I’ll continue to play with it, and may even look to improve the Java support. Look for more on SCons soon
4 Responses to “SCons – passes 1 out of three tests – could do better”
Great summary. What a shame that Scons doesn’t handle Java builds well. I’ve had a cursory look at C# support and that doesn’t look to be there yet either.
Thanks to “Simon”: for spotting this excellent quote which might be of interest to those who’ll say “Ant is good enough” –
“If you decide to go with the same tools and technology as everyone else, you make sure that you won’t fail any worse than they do. However, you also ensure that you won’t succeed any better.”
I was recently enticed into trying scons for my java builds. I ran into the same first problem you described,but managed to stumble into a fix by looking at some extant SConstruct files.
In talking to an scons “guru” he convinved me that the fact that scons does not parse your path is an advantage. In theory this feature makes it easier to port to different environments. Everyone has an example where some tweak of your personal environment leads to something broken, so I actually do buy this argument. The result is a little lengthening of the SCconstruct file, but it still ends up smaller than my equivalent build.xml.
Next problem for me was getting jar to work with a manifest for “Main-Class”. I ended up kludging a Manifest.txt file, so had to learn how to open/write files in python. Basically a couple hours and I had everything “working”.
The first real problem (if you are not offended by kludges) is that scons appears to be broken for anonymous inner classes. And for the first time I can remember recently, google cant find a solution. Uusually you can type in the exact error message or part of it, and google takes you to twenty pages describing how to fix it. The problem appears to be that the naming for anon innner classes is broken, and so scons compiles the source every time, instead of realizing it is up to date.
Occam’s razor suggests maybe I screwed up the source code, but from the command line, or make, or ant, there seems to be no problem. So unless someone from the java community can give scons “some love” (nice!), I will have to look into hacking the javac parser they use. Not a pretty thought.
Cheers,
John
Someone has showered some love! (Greg Ward)
A couple patches which spawn some complaints applied to 0.97, but do the job. I applied 01, and 02, because I think 03 makes the output uglier.
My hope is the next person who types “scons java inner classes broken” into google can find that page.
Still no support for javadoc in scons, so I had do something a little kludgey. Overall I am happy with the system and am moving to use it everywhere.
|
https://blog.magpiebrain.com/2004/12/17/scons-passes-1-out-of-three-tests-could-do-better/?replytocom=608
|
CC-MAIN-2021-25
|
refinedweb
| 1,353
| 69.62
|
:
Sean, let me respond to your inquiry with a few definitions:
graph: A collection of nodes and edges. (FOLDOC)
node: A point or vertex in a graph. (FOLDOC)
directed graph: A graph with one-way edges. (FOLDOC)
connected graph: A graph such that there is a path between any pair
of nodes via zero or more other nodes. (FOLDOC)
colored graph: A graph together with a colouring function which
assigns to each node in the graph a colour. (NIST DADS)
labeled graph: A graph which has labels associated with
each edge or each vertex. (NIST DADS)
function: A rule of correspondence betweeen two sets such
that there is a unique element in the second set
assigned to each element in the first set. (Websters)
(A more boring definition from FOLDOC is below)
rooted graph: A directed graph, with a special 'root' node
such that every node is connected, at least
indirectly, through a series of edges
starting at the root. (my definition)
In YAML we use these terms with exactly these meanings!
I think of a YAML graph as the 'rooted' variety, that is
a directed graph which is connected. Further, I picture
a YAML graph has having coloured nodes (the data type)
and each edge being labeled. A YAML graph has three
sorts of nodes (vertexes):
(a) scalar values, where the node has only 'incoming'
edges visualized as arrows pointing to it;
(b) sequences (arrays); where the 'values' of the sequence
are pointed to by outgoing edges labeled using
subsequent integers
(c) mappings (hashtables); where the 'values' of the mapping
are pointed to by outgoing edges labeled using, a set
of unique 'keys'.
The only 'tricky' thing about this visualization is
that the keys are not just limited to strings, but
can in fact be other nodes, including sequences and
hashtables. This is a slightly mind-bending
trick; but it is needed to model complex keys which
are being found with increasing regularity as
dictionary keys.
Now to address your questions...
On Mon, Oct 20, 2003 at 07:24:44PM -0700, Sean O'Dell wrote:
| Graphs are also "related" values; such as the result set of a function,
| generally not from an arbitrary data set such as in YAML (aside from
| specific specifications).
I think you are using graph in the following sense:
1. A diagram that exhibits a relationship, often functional, between
two sets of numbers as a set of points having coordinates
determined by the relationship. Also called plot.
2. A pictorial device, such as a pie chart or bar graph, used to
illustrate quantitative relationships. Also called chart.
(American Heritage Dictionary)
While this is a good definition, this is not the definition
typically used in 'computer science' or 'abstract math' speak.
The definition of 'graph' in this sense is much more fluid
and fundamental. While a graph (especially directed graphs)
are most often functional in computer systems, they typically
do not have integers or real numbers for vertexes.
| specifications). The purpose of a graph is to show relationship. You would
| draw up a graph to illustrate how the data in a series of YAML nodes are
| related, but the series itself is not a graph.
Ahh, but the point of using a graph *is* to show relationship
between objects -- that is what computer science is all about.
Two objects are connected when they have something 'in common'
For example:
- first: clark
last: evans
- first: oren
last: ben-kiki
- first: brian
last: ingerson
This specific 'series' is illustrating a relation between three
people; and the subordinate mappings, draw a linkage from a
shared anchor 'first' or 'last' to each one of these people.
It is very much a visual picture. Very much a graph. In
fact, we even have a way to specify a 'path' in the graph:
/1/first -> 'oren'
In YPath, a path named with 'edge labels' can be used to
uniquely locate any node in a YAML graph from its root.
This is both simple and powerful.
If you need more convincing, read up a bit on Petri-Nets, as
these are used heavily in manufacturing to solve all sorts
of scheduling constraints on the production line. It converts
what is essentially an 'algebraic' problem (non-numerical)
into a picture that humans can peek at, poke, and sometimes
even solve.
| A YAML element is a value and its associated name, which may be qualified
| with a type identifier and a namespace. A YAML element may be a simple
| string value, an array of elements or an associative hash of elements. YAML
| elements are grouped into sets and may be arranged according to a particular
| specification or arbitrarily. A YAML document is a file or other storage
| unit which contains a set of elements embodied as a YAML stream. A YAML
| stream is a series of Unicode characters which represent the YAML elements,
| plus delimiting characters such as spaces and newlines character.
Well, two problems:
1) You use element instead of node (or vertex). Node is just
right for YAML (see above).
2) Your definition of element is just plain wrong for YAML
(although it does match how XML uses the word) -- you seem
to be trying to attach the 'label' and the 'value' of
a node together (mixing edges and vertexes). Besides
not accurately modeling native data structures, it
actually prevents some very useful graphs (or you can still
represent things, but it takes more elements).
The rest of your paragaph seems like an alternative way of writing
what I've done, rather than rewriting (which I've done again and again
and again and again based on lots of people's feedback), I could
actually use low-level help with verifying that it reads well and
makes sense. So, for example, if you could pick out particular
sentances and explain why they don't make sense or how they could
be imprroved, this woudl help greatly. Or, alternatively, if you
could suggest a better "outline" this could also be helpful.
Thank you so much for thinking about this stuff. It is always nice
to have people following along and poking us so we make sure to
cross all of the eyes and dots to the Ts.
Best,
Clark
PS. Following is the more formal, boring definition of function,
to which both our sequence (array) and mapping (hash) satisfy:.
View entire thread
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
|
https://sourceforge.net/p/yaml/mailman/message/10965795/
|
CC-MAIN-2016-44
|
refinedweb
| 1,101
| 59.53
|
go to bug id or search bugs for
Description:
------------
The Fast CGI standard require that error be reported through the FastCGI connection as a Stderr data stream.
But PHP Fast CGI processes still write errors to original stderr (file handle 3) which prevent from clean standard centralized FCGI logging, especially when the Fast CGI PHP process is not started by the web server (remote Fast CGI).
In most cases, it makes debugging PHP scripts impossible.
Add a Patch
Add a Pull Request
This one turns out to be easy to fix, thus:
--- cgi_main.c.orig Mon Jan 10 14:57:04 2005
+++ cgi_main.c Mon Jan 10 14:53:44 2005
@@ -481,7 +481,14 @@
static void sapi_cgi_log_message(char *message)
{
- fprintf(stderr, "%s\n", message);
+#if PHP_FASTCGI
+ if (!FCGX_IsCGI()) {
+ FCGX_Request *request = (FCGX_Request *)SG(server_context);
+ FCGX_FPrintF( request->err, "%s\n", message );
+ /* ignore return code */
+ } else
+#endif /* PHP_FASTCGI */
+ fprintf(stderr, "%s\n", message);
}
static int sapi_cgi_deactivate(TSRMLS_D)
However, there is another similar bug, which is that a stream opened on "php://stderr" should also direct its output to the FCGI error stream (rather than just to file descriptor #2).
-- Chris Lightfoot
This is an ugly change for users who redirect PHP's stderr to a log file due to file permissions (PHP is not allowed to open the log file).
Instead of writing to the log file, the Apache log now contains tons of rows like this:
[Thu Apr 21 11:18:27 2005] [error] [client 129.0.10.119] FastCGI: server "/home/www/PHP/php/bin/php" stderr: array (
[Thu Apr 21 11:18:27 2005] [error] [client 129.0.10.119] FastCGI: server "/home/www/PHP/php/bin/php" stderr: 'rcs' =>
[Thu Apr 21 11:18:27 2005] [error] [client 129.0.10.119] FastCGI: server "/home/www/PHP/php/bin/php" stderr: array (
etc.
This needs to be addressed -- make logging to fastcgi's stderr optional.
Feel free to fix it then.
Added fastcgi.logging php.ini option to make it possible for people like Sascha to disable this.
|
https://bugs.php.net/bug.php?id=28074
|
CC-MAIN-2016-36
|
refinedweb
| 340
| 60.55
|
Hacking Lightning
By me on Sep 08, 2008
I've been using the Lightning calendaring extension for Mozilla Thunderbird for the last couple of months for basic calendaring and it works pretty awesome.
After Rama hassled me about keeping my task list on my whiteboard at work, I decided to try out the Task lists functionality in Lightning. Its pretty light weight, easy to add a task, almost exactly what I need.
However, one thing that bugged was the predefined views. There's about four or five including a "Not Started" view and a "Completed" view, but none for what I really wanted, a "Not Completed" view. So I decided to add one.
What I really wanted was to add another view, but even though I appeared to tweak all the files necessary, it didn't work. So in the end I just changed the "Overdue" view to "Overview & Open". And it was really easy one I found my way around.
The tweak involves modifying files in two jars; calendar.jar & calendar-en-US.jar both located in ~/.thunderbird/(profile)/extensions/(uuid)/chrome.
In calendar-en-US.jar I wanted to change the label of the button from "Overdue" to "Overview & Open". This is done by unzipping the file, editing locale/en-US/calendar/calendar.dtd. and rezipping the file, changing the line
<!ENTITY calendar.task.filter.overdue.label "Overdue">
to
<!ENTITY calendar.task.filter.overdue.label "Overdue & Open">
Now, in the second jar, calendar.jar, I needed to change the logic for deciding what was "overdue". This logic is in content/calendar/calendar-task-view.js. The original logic is in the lines
overdue: function filterOverdue(item) {
    // in case the item has no due date
    // it can't be overdue by definition
    if (item.dueDate == null) {
        return false;
    }
    return (percentCompleted(item) < 100) && !(item.dueDate.compare(now()) > 0);
},
As you can see, it makes a specific provision for excluding items with no dueDate. Many of my tasks do not have a dueDate, and I want to see those too. So I changed the logic to
overdue: function filterOverdue(item) {
    return (percentCompleted(item) < 100 &&
            (item.dueDate == null || !(item.dueDate.compare(now()) > 0)));
},
Once I repacked the jars, voila, I now have the behavior I desired.
Very nifty, Joe! Nice to see you in the blogsphere!
Posted by Dana Nourie on September 08, 2008 at 08:03 AM PDT #
Does your modified logic display tasks that are both overdue and also due 'today'?
Also, can one make a similar change to the 'Today' pane (Mail tab) so that tasks are filtered with the above logic? Now it doesn't seem to allow any filtering, predefined or otherwise.
I'm using Lightning 0.9RC1
Thanks for thinking out of the box!
Posted by Elliot on September 12, 2008 at 09:25 PM PDT #
https://blogs.oracle.com/mock/entry/hacking_lightning
Why Lazy Functional Programming Languages Rule 439
Posted by ScuttleMonkey
from the laziness-is-the-mother-of-all-invention dept.
Da Massive writes "Techworld has an in-depth chat with Simon Peyton-Jones about the development of Haskell and his philosophy of do one thing, and do it well. Peyton-Jones describes his interest in lazy functional programming languages, and chats about their increasing relevance in a world with rapidly increasing multi-core CPUs and clusters. 'I think Haskell is increasingly well placed for this multi-core stuff, as I think people are increasingly going to look to languages like Haskell and say 'oh, that's where we can get some good ideas at least', whether or not it's the actual language or concrete syntax that they adopt.'"
I would have RTFA (Score:5, Funny)
But I was too lazy to click on 13 pageviews.
Mmmm, Kay. (Score:4, Informative)
Hascal, and other functional languages may be good for multi-core development. However not to many programmers program in them... Plus I find they do not scale well for larger application. Its good for true computing problem solving. But today most developopment is for larger application which doesn't necessarly solve problems per-say but create a tool that people can use.
Re:Mmmm, Kay. (Score:5, Insightful)
I spent a whole term at uni learning Miranda, which is similar to Haskell, I really liked it. I have *never* seen it being used since. To my mind they both belong in the category 'interesting, but pointless'.
It's not just because they're old. If age were what killed languages, C and Lisp would be long dead. There just isn't anything I could do with either that I wouldn't be able to do more easily with another, more 'mainstream' language.
Re:Mmmm, Kay. (Score:4, Interesting)
I call FUD. There's a lot of things that Lazy functional languages make easy, that mainstream languages don't. Here's just a few examples:
Infinite data structures can be handled naturally. Here's a function that generates the infinite list of fibbonacci numbers:
fibs x y = x : (fibs y (x + y))
Here's a list that does a complex calculation on every one of the fibbonacci numbers:
map complexCalculation (fibs 1 1)
While we're at it, Haskell programs are very easily parallelisable. Here's the same list, but computed on multiple cores:
(parMap rnf) complexCalculation (fibs 1 1)
Haskell only evaluates what it has to -- this program for example which looks up the 3000th element of the sequence will not compute the complexCalculation on the 2999 fibbonaccis before hand like a traditional program would:
(parMap rnf) complexCalculation (fibs 1 1) !! 3000
That's only a small sample of what's so powerful about these languages. If you'd bothered to RTFA, you'd know there are a lot more. But then, I guess this is Slashdot.
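For readers more comfortable with mainstream languages, the same demand-driven idea can be sketched with Python generators (an analogy only: Python generators are not Haskell's call-by-need, and complex_calculation is a hypothetical stand-in for the expensive work mentioned above):

```python
from itertools import islice

def fibs(x, y):
    # lazy, conceptually infinite stream of Fibonacci numbers
    while True:
        yield x
        x, y = y, x + y

def complex_calculation(n):
    return n * n  # hypothetical stand-in for some expensive work

# Only the elements actually demanded are ever computed:
mapped = (complex_calculation(n) for n in fibs(1, 1))
print(list(islice(mapped, 5)))  # [1, 1, 4, 9, 25]
```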
Re:Mmmm, Kay. (Score:5, Insightful)
I call FUD. There's a lot of things that Lazy functional languages make easy, that mainstream languages don't. Here's just a few examples: Infinite data structures can be handled naturally.
Might that be because infinite data structures don't often exist in mainstream and/or commercial software applications?
Re:Mmmm, Kay. (Score:5, Insightful)
Might that be because infinite data structures don't often exist in mainstream and/or commercial software applications?
Might that be because mainstream programming languages don't support infinite data structures?
Re: (Score:3, Insightful)
Yup, lazy, functionnal, imperative, object : for each problem there is a better solution. However maybe there would be at then end a language where
:
1) syntax could be made a little less strange so that people knowing C (like python by example and then adding its own concepts) could make simple things, simply
2) different paradigms should be coexisting peacefully - fast, simple inner loops in C, structured, object in ($dynamic language), algorithms needing such kind of concurrency or metaprogramming, possible also.
3) a pony
Can't help you with #3. I've been thinking about solutions to #2 (Integration between languages is still a real weak point... I hope this will change over time - part of that will be increased cooperation between the tools themselves. Seems like Microsoft has taken a decent stab at this with
.NET...)
I don't really agree with #1. I don't believe any language yet (especially not C++) has "the perfect syntax", so when it comes to defining new languages I don't believe emulation (of syntax) is necessarily t
Re: (Score:2)
Sure they do. How many lines in the next web page? How many variables? How many on the whole site? How many simultaneous users? How many files do they need access to?
Not having to worry about these constraints saves tons of time.
Re:Mmmm, Kay. (Score:5, Insightful)
Might that be because infinite data structures don't often exist in mainstream and/or commercial software applications?
Sure they do. On my computer, there's an infinite stream of ethernet frames arriving, an infinite stream of video frames leaving, an infinite stream of keyboard events arriving, etc.
The thing about functional languages, and non-strict (lazy) functional languages like Haskell in particular, is that the underlying principles are quite different from procedural languages like C. In C, you tell the computer to do things. In Haskell, you tell the computer the relationships between things, and it figures out what to do all on its own.
Personally, I suck at Haskell --- I'm too much of a procedural programmer. My mind's stuck in the rails of doing thing procedurally. But I'd very much like to learn it more, *because* it will teach me different ways of thinking about problems. If I can think of an ethernet router as a mapping between an input and output stream of packets rather than as a sequence of discrete events that get processed sequentially, then it may well encourage me to solve the problem in a some better way.
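The stream-mapping view of a router described above can be sketched in Python (a toy analogy: generators stand in for lazy streams, and the transform is a made-up rewrite rule):

```python
def route(transform, packets):
    # A router as a mapping from an input stream of packets to an
    # output stream, not a sequence of discretely handled events.
    for pkt in packets:
        yield transform(pkt)

# Toy usage with a hypothetical rewrite rule and a finite sample stream:
incoming = iter([b"pkt1", b"pkt2", b"pkt3"])
print(list(route(lambda p: p.upper(), incoming)))  # [b'PKT1', b'PKT2', b'PKT3']
```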
Re:Mmmm, Kay. (Score:4, Informative)
On my computer, there's an infinite stream of ethernet frames arriving, an infinite stream of video frames leaving, an infinite stream of keyboard events arriving, etc.
You keep on using that word. I do not think it means what you think it means.
Re:Mmmm, Kay. (Score:5, Interesting)
When the last argument given is "look it up", I suspect there's no counter-argument.
In what way is there an "infinite stream of ethernet frames ariving" on your computer with its finite lifetime?
If you have an explanation, why not give it?
Re: (Score:3, Informative)
Well, in this case the "infinite stream" is really a stream of indefinite size. However, you can treat it as being infinite.
Re: (Score:3, Informative)
Re: (Score:3, Insightful)
we don't usually have wires with infinite bandwidth, disks capable of storing infinite frames of video, or users that can press infinite keys.
You completely missed the point. An infinite data structure in a lazy language isn't something that you write down to disk or something you store in memory; it's a concept with which you structure your code. For example, instead of having a while-loop to poll an event from an event queue like you would do in C, you could have an infinite list of all the events that your system would ever create. But it isn't a list/array in the C sense; it's much more like a generator in Python in that the elements are only gene
Re:Mmmm, Kay. (Score:5, Insightful)
Re:Mmmm, Kay. (Score:5, Informative)
Absolutely, and this is why there's one of freenode's biggest IRC channels, a pair of mailing lists with thousands of subscribers, and the Hackage [haskell.org] library/tool repository just waiting to help you solve your real world problem. Be it compiler building [haskell.org], version control [darcs.net], writing interpreters for popular imperative languages [perlfoundation.org], writing 3D shooters [haskell.org], or a whole host of other tasks.
Re:Mmmm, Kay. (Score:5, Insightful)
I call meme misuse.
Re: (Score:2)
I call meme misuse.
Thank you. There is no fear, uncertainty, or doubt here.
Re: (Score:2)
Mmmm... yes, I see you picked a few practical, real world examples, and didn't try to stack the deck with tasks that Haskell (or a similar language) would be particularly well-suited for.
Or not.
Re:Mmmm, Kay. (Score:5, Insightful)
Here's a function that generates the infinite list of fibbonacci numbers: fibs x y = x : (fibs y (x + y))
You have just demonstrated thermian's point.
How often do you actually need to generate infinite sequences? I have never needed to do that outside of a functional programming class.
I'm a big fan of alternative programming languages, I've used some 20 or so since I started 20 years ago. I did a fair amount of commercial Prolog development after I left university, I really like Prolog. It makes certain things really easy and it's a joy to code certain types of solutions in, but I'm never going to write a web-app, or a word-processor in Prolog.
Many of these languages are very clever when it comes to doing certain things, but how often do you actually need to do those things?
The truth is that the vast majority of the software out there does pretty dull, mostly procedural jobs. That's why the main languages in use are just dull variations on the procedural, C/Java/Perl style. No matter how much maths geeks go on about functional programming, procedural systems will always be more suited and easier to use for most of the problems out there.
That isn't to say there is no place for these alternative languages, but it's a smaller one which you probably won't see very often.
Paul
Re: (Score:2)
How often do you actually need to generate infinite sequences?
Actually, when you're able to do it naturally in your language, it becomes a very useful thing to do. For example, when you want fresh variables in a compiler, it's very useful to have the infinite list of variable names you haven't used yet hanging about.
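The fresh-variable-names example above translates naturally to a lazy stream; a minimal Python sketch (the function name and prefix are invented for illustration):

```python
from itertools import count

def fresh_names(prefix="t"):
    # conceptually infinite supply of unused variable names: t0, t1, t2, ...
    return (prefix + str(i) for i in count())

names = fresh_names()
print(next(names), next(names), next(names))  # t0 t1 t2
```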
How is X better than Y? (Score:4, Informative)
How is that better than doing (or basically the same in Java/C/perl/ruby/etc.):
I hate to say "you're missing the point" because I feel that's unreasonably dismissive - though I kind of feel you are...
IMO the point isn't necessarily that one method is "better" than another - it's that this idea represents an important and useful way of approaching programming problems. If you understand the style you can appropriate it - it becomes a useful concept for expressing problems and their solutions.
So for instance - while you may not use recursion in C for general problem solving (due to the lack of tail-recursion optimizations which turn the thing into a loop) - understanding the recursive expression of a problem is useful for structuring your solutions, understanding what assertions must be made with respect to the state of the data at what points in the code, etc. - even if you structure your solution as a loop rather than a recursion.
And it should be noted that you can implement infinite sequences in C++, etc. - generally the way to do this is with iterators, and the use case would be for feeding those iterators to algorithms that expect iterators... What Haskell brings to the table is that it encourages you to think of problems and solutions in those terms - learn the method and what you can do with it, how it affects the expression of your code - if you find it a useful idea it's easy enough to implement in most object-oriented languages...
Re:Mmmm, Kay. (Score:4, Insightful)
Functors and generators will do the same thing for you in more mainstream languages like C++ and Python. And they'll be a hell of a lot more understandable to your average still-wet-behind-the-ears programmer. And you can certainly write code in those languages to do lazy evaluation.
Now, I will grant you that, in general, one can do it more concisely in Haskell than one could in C++ and even Python. But these languages are more well rounded, IMO, than Haskell.
Haskell is fun, but painful for procedural code... (Score:2)
Features of Haskell I really enjoy are how it handles constructors, pattern-matched function arguments, user-definable infix operators (with configurable precedence!) and the way it handles higher-order programming and lazy programming. But I never got monads...
Of course, my philosophy is that that's not necessarily a bad thing. Haskell is, IMO, a domain-specific language specialized on functional problems. I feel it's better to have different specialized languages working together well, rather than have
Re: (Score:2)
That is very cool, but does anyone need to do any of those things in the programs that are being created and used today? I'm talking websites, media players, and video games? Haskell may be the coolest thing in the world but maybe not many people need or want the coolest thing in the world. This is always the problem with arguing "but it's better!": things generally only need to be good enough.
If you don't believe me, look at the human race. Do you think that high intelligence or great beauty are driv
Re: (Score:2)
Did you know that FUD does not really mean 'disagrees with what I think'? He never said lazy languages are useless, he mentioned LISP as a language that's still being used. In my opinion lazyness is useful, so let's just wait for an imperative language to steal that feature so people can actually use it, just like what happened with the rest of functional languages' features... (I know people are gonna mod me troll for this, don't worry)
Re:Mmmm, Kay. (Score:5, Insightful)
Haskell only evaluates what it has to -- this program for example which looks up the 3000th element of the sequence will not compute the complexCalculation on the 2999 fibbonaccis before hand like a traditional program would
And then when you actually do use the other values, your program is ridiculously slow because the generator function is recalculating the Fibonacci number over and over again.
Except you hope that Haskell automatically memoizes the results, but that destroys your SMP performance as the CPUs contend over the result cache. So maybe you have separate caches per thread. Then your program grows larger, and all the memoization takes too much memory, and the system starts dropping out results (the 3000th fib is what, 520? bytes). Then you have to go back and tell it to keep the results longer for that function.
In the end maybe you just make it a list that's precomputed.
But that's really beside the point, because you can do the exact same thing in Smalltalk, Ruby, JavaScript, etc., with most of the same costs and benefits. So really the question is: what makes it better than Smalltalk? It's faster at maths, but that's about it. And it has a harder/alien paradigm, the syntax is foreign, etc. Maybe that's why, AFAIK, Haskell is only being used by people that crunch numbers?
Re:Mmmm, Kay. (Score:4, Insightful)
Except you hope that Haskell automatically memoizes the results
Please don't take this the wrong way, but please stop spreading your misinformation. One doesn't hope Haskell memoizes because it doesn't memoize functions. One requires that Haskell implement call-by-need. Overlapping but very different concepts. I'll assume it was an honest mistake but the difference makes the rest of your post nonsense.
Haskell is only being used by people that crunch numbers
Another point of disagreement. If anything, number crunchers (e.g. scientific computing, in which I've done a fair bit of work) avoid Haskell (along with anything that is not Fortran, Matlab or C). Haskell finds favor more among "programming language" types who are interested in writing their own language (Haskell is a phenomenal language to write another language in) or in "elegant" ways to write compact solutions to traditionally verbose problems (e.g. merge sort in two "statements").
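The call-by-need vs. function-memoization distinction debated above can be illustrated with a small sketch (Python used for illustration; `Lazy` is a hand-rolled thunk type, not anything from the standard library):

```python
calls = 0

def thunk():
    # under call-by-name this expression would be re-evaluated at every use
    global calls
    calls += 1
    return 21 * 2

class Lazy:
    """Call-by-need: evaluate a thunk at most once, then share the result."""
    def __init__(self, fn):
        self.fn, self.value, self.forced = fn, None, False
    def force(self):
        if not self.forced:
            self.value, self.forced = self.fn(), True
        return self.value

shared = Lazy(thunk)
print(shared.force(), shared.force(), calls)  # 42 42 1
```

Note this shares one particular result, which is different from memoizing a function over all of its arguments.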
Re:Mmmm, Kay. (Score:4, Insightful)
paraphrase comment #1: Haskell is too academic to be useful.
paraphrase comment #2: No it isn't, here's how to do something with a fibbonacci sequence.
Uh, you failed at "fibbonacci" to make your point. :)
Re: (Score:2)
It's only difficult to read if you don't know it. I personally (being experienced in both Haskell and a wide variety of imperative languages) find it much easier to read than any of the imperative languages I've looked at.
Re:Mmmm, Kay. (Score:5, Insightful)
It's only difficult to read if you don't know it.
That is true of almost any language. The point is that there's nothing those languages can do that can't be done, often more easily, with the current crop of popular languages. Elegance cannot beat convenience in the workplace, or in most at any rate.
All that aside, how many Haskell programing jobs have you seen advertised lately? Like it or not, that's what decides which languages people use.
That's why I have to deal with languages I'd prefer to never use, because that's what pays the rent. In my own time I use C.
Re:Mmmm, Kay. (Score:5, Interesting)
I've seen quite a few Haskell jobs advertised recently actually; I even got one of them :).
Re: (Score:2)
That is true of almost any language. The point is that there's nothing those languages can do that can't be done, often more easily, with the current crop of popular languages. Elegance cannot beat convenience in the workplace, or in most at any rate.
That was the point of the examples he gave: there are lots and lots of things that are easier to do in lazy languages than in mainstream languages. Huge blocks of code simply aren't written because they aren't needed.
Re:Mmmm, Kay. (Score:4, Insightful)
No. It's not because of a huge lib. It's because code can be so much more generic than in other languages. And that's the biggest point/plus in Haskell. You don't have to write a new for loop for everything. Even the for loop is abstracted (map, fold, zipWith, ...). And this works with everything. I have yet to find something that's not generalizable in Haskell.
Your 4GL claim turns out to either be true for all languages with a compiler, or the Haskell compiler is just an abstraction you do *once* instead of reinventing it every time and using it in a crude and ugly way, like in C or similar languages.
Your real problem with Haskell is that it is more complex per written token, and so you have to think more per token. Most people seem to generate some inner fear of things they don't understand as well as they expect to. And that's the base of all your motivation to find reasons why you dislike Haskell.
Of course you could simplify it, and get something like Python. But this is a bad idea in the long run, because then nature will only create bigger idiots. It's better to wise up a bit, because what you get then is really, really nice!
P.S.: I once tried to design the "perfect language". I stopped as soon as I learned Haskell, because it was not only extremely similar to what I had created myself, but even much better.
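The abstracted-loop point earlier in this comment (map, fold, zipWith) has direct counterparts in mainstream languages; a Python sketch for comparison:

```python
from functools import reduce

nums = [1, 2, 3, 4]
doubled = list(map(lambda n: n * 2, nums))        # map
total = reduce(lambda acc, n: acc + n, nums, 0)   # fold (left)
pairs = [x + y for x, y in zip(nums, doubled)]    # zipWith (+)
print(doubled, total, pairs)  # [2, 4, 6, 8] 10 [3, 6, 9, 12]
```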
quantification of productivity (Score:3, Insightful)
Re:Mmmm, Kay. (Score:5, Funny)
That underlying C code is what needs to be written carefully, because you use Haskell itself to write its own compiler.
There's a Haskell compiler [haskell.org] written in Haskell already. Where does C fit in to that?
After the 'l' and before the 'o'.
Re:Mmmm, Kay. (Score:5, Interesting)
The point is that there's nothing those languages can do that can't be done, often more easily, with the current crop of popular languages.
At University, I studied SML, and came out with an opinion similar to yours concerning functional languages. But when I started to learn Haskell on my own, really learn it, I found that all the concepts in my CompSci courses that seemed so pointlessly complex before just fell neatly into place.
I'll give you an example of a case where my solution in Haskell to a problem was rather better than any solution I came up with in C#. A while ago I was designing a program to export some hierarchical data held in a database to XML. Because SQL resultsets are 2D grids, I needed a way of converting a 2D grid into a list of fixed-depth tree structures.
In Haskell, the solution is two lines of code, and assuming you know Haskell pretty well, it's fairly clear as well:
I'd be interested to see if you could mirror the functionality in one of the 'popular' languages you mentioned. Perhaps something in Java?
Re: (Score:3, Informative)
import java.util.*;
class Node {
    Map<String, Node> map = new HashMap<String, Node>();
    static Node asNodes(List<List<String>> input) {
        Node nodes = new Node();
        for (List<String> row : input) {
            Node node = nodes;
            for (String column : row) {
                if (node.map.containsKey(column) == false) {
                    node.map.put(column
Re:Mmmm, Kay. (Score:5, Insightful)
I'm not going to take the time to look up the appropriate API calls at the moment, but I believe it's somewhere in the range of 2-3 lines of code to accomplish this feat.
If you're not going to take the time to actually produce any code to back up your point, why bother replying?
The biggest problem with talking about the advantages and disadvantages of programming languages is that people tend to make vague claims without producing any evidence for their case. Can JPA lazily produce a hierarchical tree of objects from a single database query? I don't know; it's not an answer you're likely to find in an FAQ or a tutorial. And how much work is it to actually set JPA and XStream up? Does it really only take 2-3 lines of code?
Without providing working code, who knows whether your assertion has any merit or not.
Re: (Score:3, Informative)
Except nobody can read that. The line noise is up there with Perl!
Nonsense. You need to be familiar with Haskell's syntax, but once you are, it's as clear as it can be for what it does.
Reading the Haskell in evaluation order:
Filters out blank lists.
Groups rows whose first column is identical.
This uses a function from Control.Applicative, but once you know that:
It's not so scary. It takes the first cell from the first column
Re: (Score:3, Insightful)
1. Start with a grid of data
2. Filter out empty rows
3. Group rows together that have identical values for the first column.
4. Make a tree branch from each route, using the value of the first column as the head of the branch, and use recursion to find the child branches from the remaining columns (i.e. every column but the first) of the grid.
So:
Becomes:
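The original Haskell did not survive in this copy, but the four steps above can be sketched in Python (a hypothetical reconstruction; `as_trees` and the tuple-based tree shape are invented for illustration):

```python
from itertools import groupby

def as_trees(grid):
    # 2. filter out empty rows
    rows = [row for row in grid if row]
    trees = []
    # 3. group adjacent rows sharing the same first-column value
    for head, group in groupby(rows, key=lambda row: row[0]):
        # 4. the first column heads the branch; recurse on the remaining columns
        children = as_trees([row[1:] for row in group])
        trees.append((head, children))
    return trees

grid = [["a", "x"], ["a", "y"], ["b", "z"]]
print(as_trees(grid))  # [('a', [('x', []), ('y', [])]), ('b', [('z', [])])]
```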
Re:Mmmm, Kay. (Score:5, Interesting)
Depends on how you determine "easy to read". The main difference between Haskell and, say, C is that Haskell is very dense while C is not so much, so one line of Haskell holds a lot more information than one of C. That of course makes Haskell look harder to read, since understanding a line of code requires effort, while it's trivial in C. However, that line of Haskell might contain the same information as a whole function in C, and reading that whole function in C might turn out harder than comprehending that single line of Haskell.
All that said, I don't consider the syntax of Haskell very good; the lack of parentheses around function arguments certainly doesn't help readability, and the error messages can be rather obscure as well.
A broader perspective (Score:2)
Just because a language is not widely used outside academia does not mean it isn't a good teaching tool. You can pick up a new language in a few weeks, but a strong understanding of the fundamentals lasts forever. In a teaching setting, you want a language and dev tools that do not get in the way of illustrating the concepts.
At my university, a professor chose C++ to teach computational linguistics to non-programmers. HUGE mistake! We spent about 80% of the course learning C++ badly, rather than linguis
Re: (Score:2)
Plus I find they do not scale well for larger application.
And you discovered that through your own experience with functional programming languages, did you? It's remarkable that you could have gained such insight, yet not know how to spell the name of one of the best-known FPLs in town.
I rather suspect that, like many critics of FPLs, you have never actually tried to use them for a big project, and just assume that they won't scale because you read it somewhere.
But today most developopment is for larger application which doesn't necessarly solve problems per-say but create a tool that people can use.
I'll gloss over whether your unverified assertion about "most development" is actually true. Even assum
Too constrained and academic (Score:5, Insightful)
A separate, but related problem is that the community doesn't seem interested in practical use of it: there aren't lots of bindings to libraries to make easy things easy. Heck, even doing I/O at all isn't really supported very well. Functional programming is very good for the pure computer science part of programming, but unfortunately that's going to make up less than half of any given program. You also need to be able to interface.
So I think the quote in the summary is right: people won't be adopting Haskell or similar pure-functional languages any time soon. What will happen is the next generation of dynamic languages will adopt the best features from functional programming; we've seen that happen already in python and ruby, and it'll happen again. And people will start using them there.
Re:Too constrained and academic (Score:5, Interesting)
A separate, but related problem is that the community doesn't seem interested in practical use of it
Really? That's why there's a new book called Real World Haskell [realworldhaskell.org]. The IRC channel on freenode is one of the largest on the entire network, and full of people doing real world things with haskell.
there aren't lots of bindings to libraries to make easy things easy.
I call FUD. The Hackage database [haskell.org]has literally hundreds of libraries sat there ready for you to use. And if you really need something that isn't there, the FFI is mature, and very easy to use.
Re:Too constrained and academic (Score:4, Insightful)
Re: (Score:2)
I think the real problem with Haskell is that you're required to use your brain to make any use of it.
What, you mean you can't just bash your head against a keyboard repeatedly and expect your program to compile (but not necessarily work)? Call me irrational, but hey, I see that as a good thing! Making programmers think before they write something, and then having it work first time, is great!
Re: (Score:2)
And even if I give you that, it's still an order of magnitude fewer than you'd get with more popular languages.
Really? RubyForge has a fairly similar number of projects available. PyPI has only slightly more. Last time I checked with the guys who manage Hackage, they're getting 100 submissions of new projects/versions every *day*.
Please. No such automatic interface is reliable enough to depend on - they always break on the one function you particularly need. The FFI is not a real solution.
Automati
visibility of hackage (Score:4, Insightful)
Hackage is well known to haskell programmers. It is linked to directly from the front page of haskell.org, it is mentioned frequently on the haskell-cafe mailing list, and that's where hundreds of haskell projects [haskell.org] are hosted. If you're an average passer-by and not an active haskell developer, it's not necessarily going to jump out and say "boo!", but it isn't hiding under a rock, either.
Note: hackage is not the standard libraries. Those are documented elsewhere [haskell.org].
Re:Too constrained and academic (Score:5, Interesting)
You also need to be able to interface.
Which is exactly why I find Scala [scala-lang.org] to be interesting. Note: I haven't had time to learn or use it yet, but the design concept there is very interesting.
Re: (Score:2)
I've been using Scala and it's the only way to do anything with Java, imho. Sure, it's no Haskell but it's pretty darn nice. A lot nicer than Java without the performance penalty of other languages that target the JVM, such as Groovy or Python.
Even if you don't have to use Java, the popularity of the language combined with the pervasive NIH syndrome of the community means there is a very rich library of software components to use.
Re:Too constrained and academic (Score:5, Interesting)
Very true. I did a large project in Haskell for a CS class once; the three of us working on it hated the language after we were done.
Before that we were pretty happy with Haskell, because the programming assignments leading up to the final project were all fitted to the language, but the instant we had to do something that wasn't, we realized what a mess it is.
Re: (Score:2)
Right in the middle of reading your comment I realized they must have changed something in the Matrix again.
Re: (Score:2)
Re: (Score:2, Interesting)
This is only true for specific implementations. The Haskell 98 report only specifies library support for textual input/output, and is completely useless for portably handling Binary IO. That everyone just uses GHC anyway, which has great support for Binary IO, doesn't negate the point that Haskell itself sucks when it comes to IO.
Other serious shortcomings would have to include lack of proper locale/encoding support (characters are read in as Unicode values; but what is the original encoding assumed to be,
Re: (Score:2)
Correct-a-mundo. On the Haskell part at least. Javascript is a functional language that has managed to go mainstream. Whether you would call it "pure" or not is open for debate.
Re:Too constrained and academic (Score:5, Interesting)
No, I mean it is a true, honest to God, functional language. If you check Wikipedia, for example, you will see it listed as "Multi-paradigm: prototype-based, functional, imperative, scripting".
See, most people look at the C-style syntax and think it means that Javascript is a C-style language. Then they get frustrated when the C-style code they write looks like crap. Instead, they should be coding with closures and lambda, much like they would with a LISP program. The result is a far more sophisticated program that performs more work with greater elegance than a C-style equivalent.
I highly recommend Douglas Crockford's "The JavaScript Programming Language" [yahoo.com] introduction to the language if you're not familiar with Javascript's functional aspect.
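The closure-and-lambda style recommended above, as opposed to C-style structs plus functions, can be sketched like this (Python used for illustration; the counter is a made-up example):

```python
def make_counter():
    # state lives in a closure instead of a C-style struct plus functions
    n = 0
    def tick():
        nonlocal n
        n += 1
        return n
    return tick

counter = make_counter()
print(counter(), counter(), counter())  # 1 2 3
```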
Re: (Score:2, Insightful)
That a language has "functional aspects" doesn't make the language itself "functional". By the same reasoning you use, Python is a "functional language" too. Hell, even Perl has closures and lambda. And Python has list comprehensions and such. Pretty much every high-level language has these things; that doesn't make them functional.
Re: (Score:3, Informative)
What would you describe as "Functional" then? i.e. What is the key difference between a Functional language and a language that has "Functional Aspects"? Show me the difference you believe is there and I'll show you the Javascript goods.
Oh, and BTW? Academia agrees with me that Javascript is a functional language.
Re:Too constrained and academic (Score:5, Interesting)
So, if you can show me that you can't have mutable variables in Javascript, we can call it functional.
Re:Too constrained and academic (Score:4, Interesting)
An excellent post, sir. Much more on target than cylence's attempts.
This is correct. However, I did not claim that Javascript is a *pure* functional language, only that it is a functional language. If we go by the definition of a pure functional language, then we must discount a long list of functional languages as well. That includes the poster child for functional programming: LISP.
Javascript allows for pure functional programming just like any other functional language. However, it does not enforce the purity of functional code. Something that is a very difficult goal to reach. This combined with the C-Style of the language has the negative effect of encouraging imperative programming. With suitably predictable results. (How many times have we heard, "Javascript sucks!")
This is why I often refer to Javascript as a "LISP-style functional language". The two don't line up exactly, but the foundations of Functional List Processing are present in both.
An interesting aspect of the language's real-world use is that the designers of Javascript's web APIs could have created APIs that force imperative thinking. Yet they didn't. Even for traditionally imperative operations (e.g. networking), the designers of Javascript's web APIs have seen fit to ensure its functional nature through the use of an event system. e.g. I do not read values from XMLHttpRequest; I wait for a callback.
I personally think it's fascinating that the functional underpinnings of the language have been so well preserved despite the lack of general knowledge about the language. The efforts they are putting into the language will pave the way for more mainstream functional languages in the future.
Re:Too constrained and academic (Score:4, Informative)
There's a more in-depth article on Javascript's functional capabilities here: [hunlock.com]
Other stuff I pulled out of Google for your perusing: [typepad.com] [ucr.edu] [joelonsoftware.com] [ibm.com]
What this all means is that Javascript is the most widely deployed functional language in existence! And that's a fact you can take to the bank.
Re: (Score:3, Interesting)
Has a lot of lazy evaluation stuff in libraries.
Has modifiable syntax/grammar.
Very efficient - number 2 after "C".
Re: (Score:2)
Them there what ?
Why they don't rule: (Score:5, Funny)
Re:Why they don't rule: (Score:5, Informative)
The picture in the linked article is missing a beard.
I was going to mod you funny, then I thought maybe a lot of people wouldn't get the beard reference [microsoft.co.il], so I decided to post instead. Anyone else want to mod parent funny?
Re: (Score:2, Funny)
Re:Why they don't rule: (Score:5, Funny)
Anyone else want to mod parent funny?
Yes, I will!
Oh crap.
It's not for dumb people (Score:5, Insightful)
I'm just an old C hacker and haven't used any real math in like 10 years, so I'm not that effective in Haskell
:(
Re: (Score:2)
Personally I'm pretty happy with pthreads and C anyway... I've yet to be sold on any of these new languages.
Re: (Score:2)
I can get things done in C, and it pays the bills. So I'm not complaining. But I think that letting the compiler figure out the threading for you will be the way of the future when something as simple as a cellphone consists of 300+ minimalistic processing cores working together.
It seems unnecessary today to thread out for more than a dozen cores, but maybe some future technology will require a fair amount of processing power to operate. Perhaps holographic projection will need it to encode/decode a data stream.
Re:It's not for dumb people (Score:5, Funny)
Don't worry.
In a few years you will see c++++ which will be c++ with functional programming tacked on. Of course you will also see Functional Object C but only Apple will use that.
Re: (Score:2)
Well, that C++0x thing or however it's called has lambdas and functions closer to being a first-class construct... so that's pretty much exactly what's happening already, give or take
:)
Re: (Score:2)
I should have learned this a long time ago.
Don't try and make stuff up to be funny. The truth is silly enough without my help.
All I can say is ewwwww..
Re: (Score:2)
C++'s template system is actually a Turing-complete functional programming language.
-Chris
Re: (Score:2)
Monads and all that kind of confuses me.
And that's exactly the problem. What are Monads? I'm not really sure (I sort of have a vague intuition of what they are, but that's it). But I do know that you need them in order to do anything that involves side-effects in the world outside of Haskell. And that includes I/O. Yes, that's right, you need to understand a rigorous mathematical construct in order to write a "Hello World" program. There's simply *no way in hell* a language like that will become popular.
Re: (Score:2)
That's like saying you don't *have* to understand how I/O works to write data out to a file. Yeah, it's true, but any developer worth his title must.
Besides, basic I/O is just one case of side-effects, and Monads are ridiculously hard to understand and work with relative to procedural solutions to the same problems.
Re: (Score:2)
Welcome Perl. Read Higher-Order Perl ( [plover.com] ) if you don't believe me.
Best of all, it's always installed on all unix boxes (including macs!)
The only thing that's lacking is automatic parallelization across many cores for map{} operations, but I'm sure I can write a custom iterator to bucket things into N threads on N-core CPUs if such a need arises.
Arrrr, it's Stephenson again! (Score:3, Funny)
Monads, lambda calculus... arrr, matey, now I know why Haskell's such a bloody mess - Neal Stephenson wrote it! That be why it seems good at first and falls apart at the end.
Re:It's not for dumb people (Score:4, Interesting)
People will get more intelligent just by using functional languages.
Don't be ridiculous. Functional vs procedural isn't a matter of intelligence. It's simply a way of thinking. And the reality is that procedural languages better match the way the human mind works.
IMHO, learning to program in a functional style is like a right-handed person learning to write with their left. Yeah, they can do it, but it requires a ton of work for dubious real-world benefit, and in the end, it's never really natural, simply because that's not the way the brain is wired (except for the odd freakish exception
;).
Re: (Score:3, Insightful)
And the reality is that procedural languages better match the way the human mind works.
To quote you -- don't be ridiculous. From my personal experience of teaching people functional programming, it's nothing to do with imperative programming matching the brain better. Instead, it's to do with people learning to write imperative programs first, and then having a very steep unlearning curve to deal with at the same time as the learning curve for functional programming.
Re: (Score:3, Insightful)
I don't think there is any evidence that procedural languages are easier to understand; SQL, which (like functional languages) is declarative rather than procedural seems to be at least as simple as most programming languages for even neophytes to understand.
Procedural programming is probably easier to understand if you started out for years learning procedural languages, or if you come from a hardware-centric background;
Re: (Score:3, Insightful)
There, fixed for you.
There's nothing natural in the "change-one-byte-at-a-time" von-Neumann bottleneck [stanford.edu] languages, other than they're the ones taught in standard programming courses.
Oh, you mean that the human mind keeps track for changes of state in the environment? Sorry to break the news to you, but that can also be done in functional languages. [wikipedia.org]
Re: (Score:3, Insightful)
As for an "instance of a normal, everyday situation in which humans use a functional model of computation":
SELECT Name, Age FROM My_country
WHERE Age > 18
GROUP BY Age
You'd be hard pressed to find someone thinking of the age groups in their country using a "for..next" model.
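For what it's worth, the same declarative query style is available in mainstream Java these days. The stream pipeline below is a sketch of the SQL above; the class and data names are invented purely for illustration:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class AgeGroups {
    record Person(String name, int age) {}

    // Declarative, SQL-like: no explicit for..next loop in sight.
    static Map<Integer, List<String>> groupAdults(List<Person> country) {
        return country.stream()
                .filter(p -> p.age() > 18)                       // WHERE Age > 18
                .collect(Collectors.groupingBy(Person::age,      // GROUP BY Age
                        TreeMap::new,
                        Collectors.mapping(Person::name, Collectors.toList())));
    }

    public static void main(String[] args) {
        List<Person> country = List.of(
                new Person("Ann", 17), new Person("Bob", 25),
                new Person("Cid", 25), new Person("Dee", 40));
        System.out.println(groupAdults(country)); // {25=[Bob, Cid], 40=[Dee]}
    }
}
```

You still say *what* you want (filter, group), not *how* to iterate over the rows.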
Re: (Score:2)
Schools teach functional languages to this purpose. There is no evidence that this is true, but it fills a couple hours of lecture time every semester.
Rule? (Score:2)
one of the best practices (Score:3, Interesting)
We can see this philosophy in C where, except where huge data structures are involved, it is best to make a copy of data rather than work on someone else's copy. Likewise, functions do one thing, do it well, and, again except for huge data structures, return a copy. Memory is manually allocated and deallocated, possibly from the function's local heap.
It is interesting to me how it all comes back to just best practices. Make sure you know how much memory you need. Make sure you are only doing what needs to be done right now. Make sure the interfaces are clear. If data is ready, then it should be OK to work on it. To me, functional programming is what happens after data models, or objects, are defined. Once the data structure is defined, you can treat it as stateless immutable input. Output is another structure. Again, the only limitation is if the object is so large that duplicating it becomes a performance cost issue.
Re: (Score:2)
We need a simple language - C
This OO thing is nice - C++
This Virtual machine thing is nice - C#
Lazy Functional languages are nice - C@ ?
If in doubt bolt it onto C
... ;-)
Interview (Score:2)
It figures that this article would be an interview. That way Simon Peyton-Jones didn't need to evaluate the answer to any question until it was asked.
Lazy Functional vs Lisp? (Score:3)
What's the difference between a functional language like Lisp and a lazy functional language?
Just asking.
Re: (Score:3, Interesting)
Lazy evaluation means that a value won't be calculated until it is absolutely necessary. So in the example in the above discussion, you can make things like an array of all the Fibonacci numbers. This works because the environment won't actually calculate anything until it is necessary.
Lisp doesn't support this behavior by default. In LISP all function arguments are evaluated left to right. Although using macros, you could probably write your own lazy evaluation mechanism.
The lazy language (Score:3, Funny)
Re:Lazy Functional vs Lisp? (Score:4, Interesting)
In a lazy language (more properly called "call-by-need"), every expression has a thunk wrapped around it and thunks are automatically forced. (They have thunks in Lisp/Scheme but they must be explicit rather than implicit.)
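In Java terms (a sketch with invented helper names, not anything from a real library), an explicit thunk with the memoization that makes it call-by-need rather than call-by-name looks like this:

```java
import java.util.function.Supplier;

public class Thunks {
    /** Wraps a computation; forces it at most once and caches the result. */
    static <T> Supplier<T> thunk(Supplier<T> computation) {
        return new Supplier<T>() {
            private T value;
            private boolean forced = false;

            @Override
            public synchronized T get() {      // calling get() "forces" the thunk
                if (!forced) {
                    value = computation.get(); // run the computation exactly once
                    forced = true;
                }
                return value;                  // cached thereafter
            }
        };
    }

    static int evaluations = 0;

    public static void main(String[] args) {
        Supplier<Integer> lazy = thunk(() -> { evaluations++; return 6 * 7; });
        // Nothing has been computed yet: laziness.
        System.out.println(lazy.get());     // 42, computed now
        System.out.println(lazy.get());     // 42, served from the cache
        System.out.println(evaluations);    // 1
    }
}
```

In a lazy language the compiler wraps and forces these for you everywhere; here it has to be explicit, just like in Lisp/Scheme.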
Lazy languages allow you to "use" values that haven't been defined yet. For example
a = 1 : b
b = 2 : a
Which would produce two lists of alternating ones and twos (a starts with 1 and b starts with 2; also ":" is infix "cons").
In Scheme (which I know better than Lisp), you could accomplish the same with
(let ([a (cons 1 #f)]
      [b (cons 2 #f)])
  (set-cdr! a b)
  (set-cdr! b a)
  ...)
But being lazy (1) frees you from thinking about order of evaluation and (2) allows some constructs to be expressed more easily (e.g. taking the fixed point of a composition of functions; example too complicated to post here).
At the end of the day, any program written in one style can be written in the other, so it's all differences of ease and different ways of thinking about the program.
Oh my god! (Score:4, Funny)
Combinatorial Parsers, etc. (Score:3, Interesting)
One thing in Haskell that always intrigued me was the idea of combinatorial parsers - a parser that accepts a string and yields not a single parse tree, but all possible parse trees based on the language rules... Simple versions of these can be written in Haskell pretty straightforwardly. Combinatorial parsers for individual rules are stuck together - and while rules may generate a bunch of possible parse trees, these are culled by successive rules...
I don't know too much about the practical efficiency of these - but in terms of how they're expressed I think they're great...
Transactional Memory (Score:3, Informative)
Thoughts on Haskell, from a Haskell programmer.
Haskell has a very nice software transactional memory [haskell.org] library, which makes a lot of otherwise-difficult concurrency problems much easier. It's statically safe, too, unlike similar libraries for imperative/OO languages.
Certain other language features are very nice. Monads are also extremely powerful once you wrap your head around them, and the type-class system and standard libraries make a lot of math programming problems much easier.
The language also has downsides. The laziness makes it possible to build up an arbitrarily long chain of suspended computations, which amounts to a hidden memory leak. Laziness also complicates the semantics for "unchecked" exceptions, most notably division by zero. The combination of laziness and purity can make the language very difficult to debug and optimize. While the compiler has very powerful optimization capabilities, sometimes code needs to be just so to use them (like flagging things "const" or "restrict" in C), and this can make it hard to write clean code that runs fast.
The other problem is that most programs need some amount of imperative code somewhere to do the I/O. This code has a tendency to be verbose, nasty and slow in Haskell.
There are also some problems that would be relatively easy to solve in a very nice way within the semantics of the language (give or take), but are not solved well in the standard libraries. These include exception handling, global mutable state, strings and regular expressions, certain I/O operations, arrays and references. It would be very nice if the ST and IO monads were unified, and if references and arrays had nice syntax; this would reduce the ugliness needed to write those occasional bits of imperative code.
Re:Why "lazy"? (Score:5, Informative)
They're "lazy" because they don't do any work until necessary. For example, a function can return an infinitely long list, but only the elements you request will actually be calculated. Or, to compare them to Python, it's like having everything function like an iterator.
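The Python-iterator comparison carries over to Java streams too. In the sketch below the stream conceptually describes all Fibonacci numbers, but only the requested prefix is ever computed:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class LazyFib {
    /** First n Fibonacci numbers, pulled from a conceptually infinite stream. */
    static List<Long> fib(int n) {
        return Stream.iterate(new long[] {0, 1},
                              p -> new long[] {p[1], p[0] + p[1]}) // next (a, b) pair
                     .limit(n)        // only n elements are ever generated
                     .map(p -> p[0])
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(fib(8)); // [0, 1, 1, 2, 3, 5, 8, 13]
    }
}
```

Without the `limit`, the pipeline never terminates; laziness is what makes the "infinite list" description usable at all.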
Re: (Score:3, Insightful)
Well Haskell isn't "smart" in the sense that it goes out and arbitrarily caches things it thinks may be helpful (that way leads to madness).
It does cache computations it's made if you store them in a variable or part of a data structure, so if you plan things right, you can get the behaviour you want.
Basically, here'
Re: (Score:2)
so I'm not the only one thinking along those lines - good to know.
Java Garbage Collection – ‘Coz there’s no space for unwanted stuff in Java
Garbage Collection is one of the most important features in Java which makes it popular among all the programming languages. The process of garbage collection is implicitly done in Java. Therefore, it is also called Automatic Garbage Collection in Java. The programmer does not need to explicitly write the code to delete the objects.
Today in this article, we are going to learn the concept of Garbage Collection in Java in detail, along with its methods and algorithms. But before that, have you checked out our previous article on Wrapper Class in Java? If not, you should take a quick sneak peek at it to brush up on your basics with TechVidvan.
Let’s start discussing the concept of the Garbage Collection in Java.
Garbage Collection in Java
Garbage collection is the technique used in Java to deallocate or remove unreachable objects and unused memory. From the name itself, we can understand that Garbage Collection deals with tracking and deleting the garbage from the memory area.
However, in reality, Garbage Collection tracks each and every object available in the JVM heap space and removes the unused ones.
We know that all the objects that we dynamically create are allocated in the heap memory of the application. With manual memory management it is the duty of the programmer to both create and delete objects, and programmers usually neglect deletion. Failing to delete unwanted objects eventually exhausts the heap and causes an OutOfMemoryError.
In Java, the programmer does not have to worry about the problem of releasing the memory of these unused or unwanted objects, as the garbage collection system always runs in the background, and its main aim is to free the memory heap by deleting unreachable objects.
Essentially, Garbage Collection is the process of tracking down all the objects that are still in use and marking the rest of them as garbage. The Garbage Collection process in Java is considered an automatic memory management schema because programmers do not have to explicitly deallocate the objects. The garbage collection in Java runs on low-priority threads.
The implementation of the Garbage Collection is present in the JVM (Java Virtual Machine). Each JVM can implement garbage collection. But there is only one requirement; that it should meet the JVM specification. Oracle’s HotSpot is one of the most common JVMs that offers a robust and mature set of garbage collection options.
Object Life Cycle in Java
The object life cycle in Java can be divided into 3 stages:
1. Object Creation
To create an object, generally, we use a new keyword. For example:
MyClass obj = new MyClass();
We created the object obj of class MyClass. When we create the object, a specific amount of memory is allocated for storing that object. The amount of memory allocated for objects can vary on the basis of architecture and JVM.
2. Object Usage
In this stage, the Object is in use by the other objects of the application in Java. During its usage, the object resides in memory and may refer to or contain references to other objects.
3. Object Destruction
The garbage collection system monitors objects and tracks which of them are still referenced. If no references to an object remain, the program can no longer use it, so it makes perfect sense to deallocate the memory it occupies.
Unreachable Objects in Java
When an object does not contain any “reachable” reference to it, then we call it an unreachable object. These objects can also be known as unreferenced objects.
Example of unreachable objects:
Double d = new Double(5.6); // the new Double object is reachable via the reference in 'd'
d = null; // the Double object is no longer reachable; it has become an unreachable object
Eligibility for Garbage Collection in Java
An object is eligible for garbage collection in Java if and only if it is unreachable. In the above example, after we assign null to d, the Double object in the heap area becomes eligible for garbage collection.
Object Eligibility
Though Java has automatic garbage collection, the programmer still controls when an object becomes unreachable. There are different ways to make an object eligible for Garbage Collection in Java.
There are generally four ways in Java to make an object eligible for garbage collection:
- Nullifying the reference variable
- Reassigning the reference variable
- Island of isolation
- Creating objects inside a class
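The first two techniques can be shown directly in code (the class names here are invented for illustration; the commented lines sketch the other two):

```java
public class Eligibility {
    static class Payload {
        byte[] data = new byte[1024];
    }

    public static void main(String[] args) {
        // 1. Nullifying the reference variable
        Payload a = new Payload();
        a = null;            // the first Payload is now unreachable

        // 2. Reassigning the reference variable
        Payload b = new Payload();
        b = new Payload();   // the Payload that b used to point at is unreachable

        // 3. Island of isolation: objects that reference each other,
        //    but that nothing reachable references any more.
        // 4. Objects created inside a method become unreachable once
        //    the method returns (unless a reference escapes).

        System.gc();         // only a request: the JVM may or may not collect now
        System.out.println(b != null); // true
    }
}
```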
Dive a little deep into the concept of Variables of Java with Techvidvan.
Ways of requesting JVM to run Garbage Collector
Even if we make an object eligible for Garbage Collection in Java, the Java Virtual Machine (JVM) may not destroy it immediately. There are, however, ways to request that the JVM destroy such objects and perform garbage collection.
There are two ways to request JVM for Garbage collection in Java which are:
- Using System.gc() method
- Using Runtime.getRuntime().gc() method
Code to understand the above two methods:
package com.techvidvan.garbagecollection;

public class Demo {
    public static void main(String[] args) throws InterruptedException {
        Demo obj1 = new Demo();
        Demo obj2 = new Demo();

        // requesting JVM to run the Garbage Collector
        System.gc();

        // Nullifying the reference variable
        obj2 = null;

        // requesting JVM to run the Garbage Collector
        Runtime.getRuntime().gc();
    }

    // finalize method is called on an object once before garbage collecting it
    @Override
    protected void finalize() throws Throwable {
        System.out.println("Garbage Collector ");
        System.out.println("Garbage collected object: " + this);
    }
}
Output:
Garbage collected object: com.techvidvan.garbagecollection.Demo@17f18626
Before removing an object from memory, the garbage collection thread invokes the finalize() method of that object and gives an opportunity to perform any sort of cleanup required.
A Real-life Example of Garbage Collection
Let’s take a real-life example of a garbage collector.
Suppose you go for an internship at a particular company and you have to write a program that counts the number of employees working in the company, excluding interns. To implement this task, you have to use the concept of a garbage collector.
The actual task given by the company:
Question. Write a program to create a class Employee having the following data members.
- An ID for storing unique id for every employee.
And, the class will have the following methods:
- A default constructor to initialize the id of the employee.
- A method show() to display ID.
- A method showNextId() for displaying the ID of the next employee.
Any beginner, who doesn’t know the concept of garbage collector will write the following code to count the number of employees:
Code to count the number of employees in the company without using garbage collection:
class Employee {
    private int ID;
    private static int nextId = 1;
    // we make it static because it should be common among all and shared by all the objects

    public Employee() {
        this.ID = nextId++;
    }

    public void show() {
        System.out.println("Id=" + ID);
    }

    public void showNextId() {
        System.out.println("Next employee id will be=" + nextId);
    }
}

public class UseEmployee {
    public static void main(String[] args) {
        Employee A = new Employee();
        Employee B = new Employee();
        Employee C = new Employee();
        A.show(); B.show(); C.show();
        A.showNextId(); B.showNextId(); C.showNextId();
        { // a sub-block for the two interns
            Employee X = new Employee();
            Employee Y = new Employee();
            X.show(); Y.show();
            X.showNextId(); Y.showNextId();
        }
        // After this brace, X and Y can no longer be used.
        // Therefore, it should now show nextId as 4.
        A.showNextId(); // the output of this line should be 4, but the output we get is 6
    }
}
Output:
Id=1
Id=2
Id=3
Next employee id will be=4
Next employee id will be=4
Next employee id will be=4
Id=4
Id=5
Next employee id will be=6
Next employee id will be=6
Next employee id will be=6
Now to get the correct output:
If we write the same code using the garbage collection technique, the garbage collector can reclaim the two sub-block objects. To decrement the value of the variable nextId when that happens, we override the finalize() method in the class; the garbage collector calls finalize() only when the programmer has overridden it.
And as we know, we have to request the garbage collector explicitly; to do this, we write the following three steps before the closing brace of the sub-block:
- Set references to null (that is, X = Y = null;)
- Call System.gc();
- Call System.runFinalization();
Correct Code to count the number of employees (excluding interns) using garbage collection
//Correct code to count the number of employees excluding interns.
class Employee {
    private int ID;
    private static int nextId = 1;
    // we declare it static because it should be common among all and shared by all the objects

    public Employee() {
        this.ID = nextId++;
    }

    public void show() {
        System.out.println("Id=" + ID);
    }

    public void showNextId() {
        System.out.println("Next employee id will be=" + nextId);
    }

    protected void finalize() {
        --nextId;
        // In this case, gc will call finalize()
        // 2 times, once for each of the 2 objects.
    }
}

public class UseEmployee {
    public static void main(String[] args) {
        Employee E = new Employee();
        Employee F = new Employee();
        Employee G = new Employee();
        E.show(); F.show(); G.show();
        E.showNextId(); F.showNextId(); G.showNextId();
        { // a sub-block for the two interns
            Employee X = new Employee();
            Employee Y = new Employee();
            X.show(); Y.show();
            X.showNextId(); Y.showNextId();
            X = Y = null;
            System.gc();
            System.runFinalization();
        }
        E.showNextId();
    }
}
Output:
Id=1
Id=2
Id=3
Next employee id will be=4
Next employee id will be=4
Next employee id will be=4
Id=4
Id=5
Next employee id will be=6
Next employee id will be=6
Next employee id will be=4
Garbage Collection Algorithms in Java
Garbage Collection algorithms in Java remove unreferenced or unreachable objects, and they always run in the background. There are several different Garbage Collection algorithms; one of the fundamental ones is the Mark and Sweep algorithm.
Mark and Sweep Algorithm
Mark and Sweep is the fundamental, original algorithm for Garbage Collection in Java. It performs two primary functions: first, it tracks down the reachable objects so the unreachable ones can be detected, and second, it releases the unreachable objects from the heap memory area so that the memory can be used again.
1. Mark phase – Mark live objects
It is the first phase of the algorithm, in which all the objects that are still alive are detected; this is the stage where the garbage collector identifies which parts of memory are in use and which are not.
Initially, the marked bit of every object is set to 0 (false). During the mark phase, we set the marked bit to 1 (true) for every reachable object.
Here we can consider each object as a node. Starting from a root, we visit all the objects/nodes reachable from it, and repeat until we have visited all the reachable nodes.
- A root is a variable that refers to an object and is directly accessible, such as a local variable. Here we will assume that we have only one root.
- We can use markedBit(obj) to access the mark bit for an object.
Mark Phase Algorithm:
Mark(root)
    If markedBit(root) = false then
        markedBit(root) = true
        For each v referenced by root
            Mark(v)
Note: We can call Mark() for all the root variables if we have more than one root.
2. Sweep phase – Get rid of dead objects
The sweep phase "clears" all the inaccessible or unreachable objects, that is, it releases the heap memory occupied by every object whose marked bit is still false. For every reachable object, the marked bit is reset to false in preparation for the next cycle.
After the sweep, the marked bit of all the surviving (reachable) objects is again false.
Sweep Collection Algorithm:
Sweep()
    For each object p in the heap
        If markedBit(p) = true then
            markedBit(p) = false
        else
            heap.release(p)
The ‘Mark and Sweep’ algorithm is also called a tracing garbage collector because it traces out the set of reachable objects. In summary:
- All marked bits start out as false.
- Reachable objects have their marked bit set to true.
- Non-reachable objects are cleared from the heap.
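To make the two phases concrete, here is a toy model of mark-and-sweep over an explicit object graph in Java. This is only an illustration of the algorithm above, not how a real JVM collector is implemented:

```java
import java.util.ArrayList;
import java.util.List;

public class MarkSweep {
    static class Obj {
        boolean marked = false;
        List<Obj> refs = new ArrayList<>();
    }

    /** Mark phase: set the marked bit on everything reachable from root. */
    static void mark(Obj root) {
        if (root == null || root.marked) return;
        root.marked = true;
        for (Obj v : root.refs) mark(v);
    }

    /** Sweep phase: release unmarked objects, reset marks on survivors. */
    static void sweep(List<Obj> heap) {
        heap.removeIf(o -> !o.marked);   // "heap.release(p)" for dead objects
        for (Obj o : heap) o.marked = false;
    }

    public static void main(String[] args) {
        Obj root = new Obj(), live = new Obj(), dead = new Obj();
        root.refs.add(live);
        List<Obj> heap = new ArrayList<>(List.of(root, live, dead));

        mark(root);
        sweep(heap);
        System.out.println(heap.size()); // 2: 'dead' was collected
    }
}
```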
Advantages of the Mark and Sweep Algorithm
- It handles cyclic references; even when the object graph contains a cycle, the algorithm never loops forever.
- There are no additional overheads incurred during normal program execution; work happens only when the collector runs.
Disadvantages of the Mark and Sweep Algorithm
- While the Java garbage collection algorithm runs, normal program execution is suspended ("stop-the-world" pauses).
- It may run many times over the life of a program, pausing it each time.
Implementations or Types of Garbage Collection
JVM has four types of Garbage Collector implementations which are –
- Serial Garbage Collector
- Parallel Garbage Collector
- CMS Garbage Collector
- G1 Garbage Collector
Now, we will briefly discuss each type of garbage collector.
1. Serial Garbage Collector
It is the simplest Garbage Collector implementation: it works with a single thread, and all garbage collection events are conducted serially in that one thread. Because it freezes all application threads when it runs, it is not preferred for multi-threaded applications like server environments.
We can use the following argument to enable Serial Garbage Collector:
java -XX:+UseSerialGC -jar Application.java
2. Parallel Garbage Collector
It is the default Garbage Collector of the JVM, sometimes called the Throughput Collector. Unlike the Serial Garbage Collector, it uses multiple threads to manage heap space, but it still suspends other application threads while performing garbage collection. With this Garbage Collector we can specify the maximum number of garbage collection threads, as well as the desired throughput, footprint (heap size), and maximum pause time.
We can use the following argument to enable Parallel Garbage Collector,
java -XX:+UseParallelGC -jar Application.java
3. CMS (Concurrent Mark Sweep) Garbage Collector
The CMS Garbage Collection implementation uses multiple threads for garbage collection. This Garbage Collector is designed for applications that can afford to share processor resources with the garbage collector while the application is running and that prefer shorter garbage collection pauses. Simply we can say that applications using CMS respond slower on average but do not stop responding to perform garbage collection.
We can use the following flag to enable the CMS Garbage Collector:
java -XX:+UseConcMarkSweepGC -jar Application.java
4. G1(Garbage First) Garbage Collector
G1 (Garbage First) Garbage Collector is the newest garbage collector which is designed as a replacement for CMS. It performs more efficiently as compared to CMS Garbage Collector. It is similar to CMS and is designed for applications running on multiprocessor machines with large memory space.
To enable the G1 Garbage Collector, we can use the following argument:
java -XX:+UseG1GC -jar Application.java
Advantages of Garbage Collection:
- There is no need to manually handle the memory allocation/deallocation because the JVM automatically performs the Garbage Collection for unused space in Java.
- There is no overhead of handling the Dangling Pointer.
- Garbage Collection takes care of a good portion of Automatic Memory Leak management.
Disadvantages of Garbage Collection:
- Garbage collection requires extra CPU power beyond what the application itself needs, as the JVM has to keep track of object reference creation/deletion. This may affect the performance of requests that use a lot of memory.
- Programmers do not have any control over the scheduling of CPU time dedicated to freeing unreachable objects.
- With some Garbage Collection implementations, an application may pause unpredictably.
- Automatic memory management is generally not as efficient as careful manual memory allocation/deallocation.
Summary
Garbage Collection in Java is useful for preventing memory leaks and for utilizing the space. In this Java tutorial, we learned about the garbage collection in Java and its working. We discussed the important terms related to Java Garbage Collection. We also covered the algorithm for garbage collection. There are four types of Java Garbage Collectors which we learned in this article. We discussed the Java Mark and Sweep algorithm along with its pros and cons. We also had a look at the advantages and disadvantages of the Garbage Collection in Java.
Hope this article helped you in clearing your concepts in Garbage Collection.
Thank you for reading our article. Do share your feedback through the comment section below.
asn alternatives and similar packages
Based on the "Networking" category:
- ejabberd: Robust, ubiquitous and massively scalable Jabber/XMPP Instant Messaging platform.
- socket: Socket wrapping for Elixir.
- ExIrc: IRC client adapter for Elixir projects.
- sshkit: An Elixir toolkit for performing tasks on one or more servers, built on top of Erlang’s SSH application.
- sshex: Simple SSH helpers for Elixir.
- slacker: A bot library for the Slack chat service.
- hedwig: XMPP Client/Bot Framework for Elixir.
- reagent: A socket acceptor pool for Elixir.
- kaguya: A small, powerful, and modular IRC bot.
- download: Download files from the internet easily.
- yocingo: Create your own Telegram Bot.
- SftpEx: Elixir library for streaming data through SFTP.
- ExPcap: PCAP parser written in Elixir.
- wifi: Various utility functions for working with the local Wifi network in Elixir.
- chatty: A basic IRC client that is most useful for writing a bot.
- eio: Elixir server of engine.io.
- chatter: Secure message broadcasting based on a mixture of UDP multicast and TCP.
- Guri: Automate tasks using chat messages.
- tunnerl: SOCKS4 and SOCKS5 proxy server.
- hades: A wrapper for NMAP written in Elixir.
- torex: Simple Tor connection library.
- pool: Socket acceptor pool for Elixir.
- mac: Can be used to find a vendor of a MAC given in hexadecimal string (according to IEEE).
- wpa_supplicant
README
IP-to-AS-to-ASname lookup
Uses approximately the algorithm and resources described here:
We support only IPv4 at this point (Until someone wants IPv6 and dares to update this :D)
ASN databases
We use the APNIC files:
- IP-to-AS:
- AS-to-ASN:
setup
In your mix.exs file:

    def application do
      # simply add asn to your loaded applications
      [applications: [:asn]]
    end

    def deps do
      [{:asn, ">= 0.1.0"}]
    end
Note that the initial compilation might take a few more seconds since it compiles the lookup table.
In case you don't want the application's single-process solution, you can also start ASN.Matcher.start_link processes by hand and use them through a similar API to the ASN module; you just need to pass the matcher process as the first value before the function args.
usage
Due to the sheer size of the table, the compiler refuses to statically put it into the matcher module within a reasonable amount of time, and with a reasonable usage of memory. That's why we pre-compile the data into erlang-terms in external format and store that, and load it again on demand into a process.
BEWARE of wrongly formatted IP addresses! This accepts strings and tuples for IPs and integers for AS IDs, where IP-Strings need to be formatted like 'a.b.c.d' where a-d are integers between 0-255.
    # standard usage:
    ASN.ip_to_asn("8.8.8.8")     # => {:ok, "Google Inc."}
    ASN.ip_to_asn({8, 8, 8, 8})  # => {:ok, "Google Inc."}
    ASN.ip_to_as("8.8.8.8")      # => {:ok, 15169}
    ASN.ip_to_as({8, 8, 8, 8})   # => {:ok, 15169}
    ASN.as_to_asn(15169)         # => {:ok, "Google Inc."}
is it any good?
bien sûr.
*Note that all licence references and agreements mentioned in the asn README section above are relevant to that project's source code only.
|
https://elixir.libhunt.com/asn-alternatives
|
CC-MAIN-2020-34
|
refinedweb
| 648
| 66.84
|
Create a new Engine instance.
The standard method of specifying the engine is via URL as the first positional argument, to indicate the appropriate database dialect and connection arguments, with additional keyword arguments sent as options to the dialect and resulting Engine.
The URL is a string in the form dialect://user:password@host/dbname.
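A hedged sketch of engine creation, using an in-memory SQLite database so no server or credentials are needed. Note that the text() wrapper shown is what recent SQLAlchemy releases require; the 0.6-era API documented here also accepted plain strings:

```python
from sqlalchemy import create_engine, text

# Dialect and connection arguments are encoded in the URL itself; SQLite
# needs no host or credentials, so the URL is simply "sqlite:///:memory:".
engine = create_engine("sqlite:///:memory:")

with engine.connect() as conn:
    result = conn.execute(text("SELECT 1 + 1"))
    print(result.scalar())  # 2
```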
Translate url attributes into a dictionary of connection arguments.
Returns attributes of this url (host, database, username, password, port) as a plain dictionary. The attribute names are used as the keys by default. Unset or false attributes are omitted from the final dictionary.
Connects a Pool and Dialect together to provide a source of database connectivity and behavior.
An Engine object is instantiated publicly using the create_engine() function.
Return a Connection object which may be newly allocated, or may be part of some ongoing context.
This Connection is meant to be used by the various “auto-connecting” operations.
- When a dropped connection is detected, it is assumed that all connections held by the pool are potentially dropped, and the entire pool is replaced.
- An application may want to use dispose() within a test suite that is creating multiple engines.
It is critical to note that dispose() does not guarantee that the application will release all open database connections - only those connections that are checked into the pool are closed. Connections which remain checked out or have been detached from the engine are not affected.
When True, enable log output for this element.
This has the effect of setting the Python logging level for the namespace of this element’s class and object reference. A value of boolean True indicates that the loglevel logging.INFO will be set for the logger, whereas the string value debug will set the loglevel to logging.DEBUG.
Return a list of all table names available in the database.

Update the execution_options dictionary of this Engine.
For details on execution_options, see Connection.execution_options() as well as sqlalchemy.sql.expression.Executable.execution_options().
Provides high-level functionality for a wrapped DB-API connection.
Provides execution support for string-based SQL statements as well as ClauseElement, Compiled and DefaultGenerator objects. Provides a begin() method to return Transaction objects.
The Connection object is not thread-safe.
Construct a new Connection.
Connection objects are typically constructed by an Engine, see the connect() and contextual_connect() methods of Engine.
Begin a transaction and return a Transaction handle.
Repeated calls to begin on the same Connection will create a lightweight, emulated nested transaction. Only the outermost transaction may commit. Calls to commit on inner transactions are ignored. Any transaction in the hierarchy may rollback, however.
Begin a nested transaction and return a Transaction handle.
Nested transactions require SAVEPOINT support in the underlying database. Any transaction in the hierarchy may commit and rollback, however the outermost transaction still controls the overall commit or rollback of the transaction of a whole.
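A minimal sketch of the begin/commit pattern described above, again with an in-memory SQLite database. Here the Transaction is used as a context manager, which commits on normal exit and rolls back if an exception escapes (API per modern SQLAlchemy releases, which differs slightly from 0.6):

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///:memory:")
with engine.connect() as conn:
    # Outermost transaction: commits when the block exits normally,
    # rolls back if an exception escapes it.
    with conn.begin():
        conn.execute(text("CREATE TABLE t (x INTEGER)"))
        conn.execute(text("INSERT INTO t (x) VALUES (1)"))
    count = conn.execute(text("SELECT COUNT(*) FROM t")).scalar()
    print(count)  # 1
```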
Begin a two-phase or XA transaction and return a Transaction handle.
Returns self.
This Connectable interface method returns self, allowing Connections to be used interchangeably with Engines in most situations that require a bind.
Returns self.
This Connectable interface method returns self, allowing Connections to be used interchangeably with Engines in most situations that require a bind.
Executes and returns the first column of the first row.
The underlying result/cursor is closed after execution.
Execute the given function within a transaction boundary.
This is a shortcut for explicitly calling begin() and commit() and optionally rollback() when exceptions are raised. The given *args and **kwargs will be passed to the function.
See also transaction() on engine.
Interface for an object which supports execution of SQL constructs.
The two implementations of Connectable are Connection and Engine.
Connectable must also implement the ‘dialect’ member which references a Dialect instance.
Fetch many rows, just like DB-API cursor.fetchmany(size=cursor.arraysize).
If rows are present, the cursor remains open after this is called. Else the cursor is automatically closed and an empty list is returned.
Fetch one row, just like DB-API cursor.fetchone().
If a row is present, the cursor remains open after this is called. Else the cursor is automatically closed and None is returned.
Fetch the first row and then close the result set unconditionally.
Returns None if no row is present.
Deprecated. Use inserted_primary_key instead.
Return last_inserted_params() from the underlying ExecutionContext.
See ExecutionContext for details.
Return last_updated_params() from the underlying ExecutionContext.
See ExecutionContext for details.
Return lastrow_has_defaults() from the underlying ExecutionContext.
See ExecutionContext for details.

The inserted_primary_key attribute provides a tuple of primary key values for a newly inserted row, regardless of database backend.
Return postfetch_cols() from the underlying ExecutionContext.
See ExecutionContext for details.
Fetch the first column of the first row, and close the result set.
Returns None if no row is present.
Represent a Transaction in progress.
The Transaction object is not threadsafe.
Close this transaction.
If this transaction is the base transaction in a begin/commit nesting, the transaction will rollback(). Otherwise, the method returns.
This is used to cancel a Transaction without affecting the scope of an enclosing transaction.
Decorator, memoize a function in a connection.info stash.
Only applicable to functions which take no arguments other than a connection. The memo will be stored in connection.info[key].
Define the behavior of a specific database and DB-API combination.
Any aspect of metadata definition, SQL query generation, execution, result-set handling, or anything else which varies between databases is defined under the general category of the Dialect. The Dialect acts as a factory for other database-specific object implementations including ExecutionContext, Compiled, DefaultGenerator, and TypeEngine.
All Dialects implement the following attributes:
A mapping of DB-API type objects present in this Dialect’s DB-API implementation mapped to TypeEngine implementations used by the dialect.
This is used to apply types to result sets based on the DB-API types present in cursor.description; it only takes effect for result sets against textual statements where no explicit typemap was present.
Build DB-API compatible connection arguments.
Given a URL object, returns a tuple consisting of a *args/**kwargs suitable to send directly to the dbapi’s connect function.
Create a two-phase transaction ID.
This id will be passed to do_begin_twophase(), do_rollback_twophase(), do_commit_twophase(). Its format is unspecified.
Convert the given name to a case-insensitive identifier for the backend if it is an all-lowercase name.

This method is only used if the dialect defines requires_name_normalize=True.
Return information about columns in table_name.
Given a Connection, a string table_name, and an optional string schema, return column information as a list of dictionaries with these keys:
Additional column attributes may be present.
Return information about foreign_keys in table_name.
Given a Connection, a string table_name, and an optional string schema, return foreign key information as a list of dicts with these keys:
Return information about indexes in table_name.
Given a Connection, a string table_name and an optional string schema, return index information as a list of dictionaries with these keys:
Return information about the primary key constraint on table_name.
Given a string table_name, and an optional string schema, return primary key information as a dictionary with these keys:
Return information about primary keys in table_name.
Given a Connection, a string table_name, and an optional string schema, return primary key information as a list of column names.
Return view definition.
Given a Connection, a string view_name, and an optional string schema, return the view definition.
Return a list of all view names available in the database.
Check the existence of a particular sequence in the database.
Given a Connection object and a string sequence_name, return True if the given sequence exists in the database, False otherwise.
Check the existence of a particular table in the database.
Given a Connection object and a string table_name, return True if the given table (possibly within the specified schema) exists in the database, False otherwise.
Called during strategized creation of the dialect with a connection.
Allows dialects to configure options based on server version info or other properties.
The connection passed here is a SQLAlchemy Connection object, with full capabilities.
The initialize() method of the base dialect should be called via super().
Convert the given name to lowercase if it is detected as case insensitive.

This method is only used if the dialect defines requires_name_normalize=True.
Return a callable which sets up a newly created DBAPI connection.

The callable accepts a single argument “conn” which is the DBAPI connection itself. It has no return value.
Load table description from the database.
Given a Connection and a Table object, reflect its columns and properties from the database. If include_columns (a list or set) is specified, limit the autoload to the given column names.
The default implementation uses the Inspector interface to provide the output, building upon the granular table/column/ constraint etc. methods of Dialect.
Transform a generic type to a dialect-specific type.
Dialect classes will usually use the adapt_type() function in the types module to make this job easy.
The returned result is cached per dialect class so can contain no dialect-instance state.
Bases: sqlalchemy.engine.base.Dialect
Default implementation of Dialect
Create a random two-phase transaction ID.
This id will be passed to do_begin_twophase(), do_rollback_twophase(), do_commit_twophase(). Its format is unspecified.
Return a callable which sets up a newly created DBAPI connection.
Provide a database-specific TypeEngine object, given the generic object which comes from the types module.
This method looks for a dictionary called colspecs as a class or instance-level variable, and passes on to types.adapt_type().
Bases: sqlalchemy.engine.base.ExecutionContext
return self.cursor.lastrowid, or equivalent, after an INSERT.
This may involve calling special cursor functions, issuing a new SELECT on the cursor (or a new one), or returning a stored value that was calculated within post_exec().
This function will only be called for dialects which support “implicit” primary key generation, keep preexecute_autoincrement_sequences set to False, and when no explicit id value was bound to the statement.
The function is called once, directly after post_exec() and before the transaction is committed or ResultProxy is generated. If the post_exec() method assigns a value to self._lastrowid, the value is used in place of calling get_lastrowid().
Note that this method is not equivalent to the lastrowid method on ResultProxy, which is a direct proxy to the DBAPI lastrowid accessor in all cases.
A messenger object for a Dialect that corresponds to a single execution.
ExecutionContext should have these data members:
Return a new cursor generated from this ExecutionContext’s connection.
Some dialects may wish to change the behavior of connection.cursor(), such as postgresql which may return a PG “server side” cursor.
Return the number of rows produced (by a SELECT query) or affected (by an INSERT/UPDATE/DELETE statement).
Note that this row count may not be properly implemented in some dialects; this is indicated by the supports_sane_rowcount and supports_sane_multi_rowcount dialect attributes.
Return a dictionary of the full parameter dictionary for the last compiled INSERT statement.
Includes any ColumnDefaults or Sequences that were pre-executed.
Return a dictionary of the full parameter dictionary for the last compiled UPDATE statement.
Includes any ColumnDefaults that were pre-executed.
Called after the execution of a compiled statement.
If a compiled statement was passed to this ExecutionContext, the last_insert_ids, last_inserted_params, etc. datamembers should be available after this method completes.
Called before an execution of a compiled statement.
If a compiled statement was passed to this ExecutionContext, the statement and parameters datamembers must be initialized after this statement is complete.
Return a result object corresponding to this ExecutionContext.
Returns a ResultProxy.
|
https://codepowered.com/manuals/SQLAlchemy-0.6.1-doc/html/reference/sqlalchemy/connections.html
|
CC-MAIN-2022-33
|
refinedweb
| 1,925
| 50.53
|
10. Oja’s hebbian learning rule¶
Book chapters
See Chapter 19 Section 2 on the learning rule of Oja.
Grey points: Datapoints (two presynaptic firing rates, presented sequentially in random order). Colored points: weight change under Oja’s rule.
Python classes
The ojas_rule.oja module contains all code required for this exercise.
At the beginning of your exercise solution file, import the contained functions by
import neurodynex.ojas_rule.oja as oja
You can then simply run the exercise functions by executing, e.g.
    cloud = oja.make_cloud()    # generate data points
    wcourse = oja.learn(cloud)  # learn weights and return timecourse
A complete script using these functions could look like this:
    %matplotlib inline  # used for Jupyter Notebook
    import neurodynex.ojas_rule.oja as oja
    import matplotlib.pyplot as plt

    cloud = oja.make_cloud(n=200, ratio=.3, angle=60)
    wcourse = oja.learn(cloud, initial_angle=-20, eta=0.04)
    plt.scatter(cloud[:, 0], cloud[:, 1], marker=".", alpha=.2)
    plt.plot(wcourse[-1, 0], wcourse[-1, 1], "or", markersize=10)
    plt.axis('equal')
    plt.figure()
    plt.plot(wcourse[:, 0], "g")
    plt.plot(wcourse[:, 1], "b")
    print("The final weight vector w is: ({},{})".format(wcourse[-1, 0], wcourse[-1, 1]))
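For readers without the neurodynex package, the rule itself fits in a few lines of plain numpy. The make_cloud and learn below are simplified, hypothetical stand-ins for the module's functions (not the originals), implementing Oja's update dw = eta * y * (x - y*w) with output y = w.x:

```python
import numpy as np

def make_cloud(n=200, ratio=0.3, angle=60, rng=None):
    """Stand-in for oja.make_cloud: an elliptic Gaussian cloud of data points."""
    rng = np.random.default_rng(0) if rng is None else rng
    pts = rng.standard_normal((n, 2)) * np.array([1.0, ratio])  # squash one axis
    a = np.deg2rad(angle)
    rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return pts @ rot.T                                          # rotate the cloud

def learn(cloud, eta=0.005, w0=(1.0, 0.0)):
    """Stand-in for oja.learn: apply Oja's rule dw = eta * y * (x - y*w)."""
    w = np.array(w0, dtype=float)
    course = []
    for x in cloud:
        y = w @ x                    # postsynaptic output
        w += eta * y * (x - y * w)   # Hebbian term minus Oja's decay term
        course.append(w.copy())
    return np.array(course)

wcourse = learn(make_cloud(n=2000))
# Oja's rule keeps the weight vector close to unit length while it rotates
# toward the principal axis of the data cloud.
print(np.linalg.norm(wcourse[-1]))
```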
10.1. Exercise: getting started¶
The figure below shows the configuration of a neuron learning from the joint input of two presynaptic neurons. Run the above script. Make sure you understand what the functions are doing.
10.2. Exercise: Circular data¶
Now we study Oja’s rule on a data set which has no correlations.
Use the functions make_cloud and learn to get the timecourse for weights that are learned on a circular data cloud (ratio=1). Plot the time course of both components of the weight vector. Repeat this many times (learn will choose random initial conditions on each run), and plot this into the same plot. Can you explain what happens? Try different learning rates eta.
10.3. Exercise: What is the neuron learning?¶
- Repeat the previous question with an elongated elliptic data cloud (e.g. ratio=0.3). Again, repeat this several times.
- What difference in terms of learning do you observe with respect to the circular data clouds?
- The “goal” of the neuron is not to change weights, but to produce a meaningful output y. After learning, what does the output y tell about the input?
- Take the final weights [w31, w32], then calculate a single input vector (v1=?, v2=?) that leads to a maximal output firing y. Constrain your input to norm([v1,v2]) =1.
- Calculate an input which leads to a minimal output firing y.
10.4. Exercise: Non-centered data¶
The above exercises assume that the input activities can be negative (indeed the inputs were always statistically centered). In actual neurons, if we think of their activity as their firing rate, this cannot be less than zero.
Try the previous exercise again, but apply the learning rule on a non-centered data cloud. E.g., use cloud = (3,5) + oja.make_cloud(n=1000, ratio=.4, angle=-45), which centers the data around (3,5). What conclusions can you draw? Can you think of a modification to the learning rule?
|
https://neuronaldynamics-exercises.readthedocs.io/en/latest/exercises/ojas-rule.html
|
CC-MAIN-2018-43
|
refinedweb
| 517
| 60.82
|
As you learned at the end of the last chapter, one of the great things about ASP.NET is that we can pick and choose which of the various .NET languages we like. In this chapter, we’ll look at some key programming principles using our two chosen languages, VB.NET and C#. We’ll start off with a run-down of some basic programming concepts as they relate to ASP.NET using both languages. We’ll introduce programming fundamentals such as control and page events, variables, arrays, functions, operators, conditionals, and loops. Next, we’ll dive into namespaces and address the topic of classes – how they’re exposed through namespaces, and which you’ll use most often.
The final sections of the chapter cover some of the ideas underlying modern, effective ASP.NET design, starting with that of code-behind and the value it provides by helping us separate code from presentation. We finish with an examination of how object-oriented programming techniques impact the ASP.NET developer. Note that you can download these chapters in PDF format if you’d rather print them out and read them offline.
Programming Basics
One of the building blocks of an ASP.NET page is the application logic: the actual programming code that allows the page to function. To get anywhere with this, you need to grasp the concept of events. All ASP.NET pages will contain controls, such as text boxes, check boxes, lists, and more, each of these controls allowing the user to interact with it in some way. Check boxes can be checked, lists can be scrolled, items on them selected, and so on. Now, whenever one of these actions is performed, the control will raise an event. It is by handling these events with code that we get our ASP.NET pages to do what we want.
For instance, say a user clicks a button on an ASP.NET page. That button (or, strictly, the ASP.NET Button control) raises an event (in this case it will be the Click event). When the ASP.NET runtime registers this event, it calls any code we have written to handle it. We would use this code to perform whatever action that button was supposed to perform, for instance, to save form data to a file, or retrieve requested information from a database. Events really are key to ASP.NET programming, which is why we’ll start by taking a closer look at them. Then, there’s the messy business of writing the actual handler code, which means we need to check out some common programming techniques in the next sections. Specifically, we’re going to cover the following areas:
- Control events and handlers
- Page events
- Variables and variable declaration
- Arrays
- Functions
- Operators
- Conditionals
- Loops
It wouldn’t be practical, or even necessary, to cover all aspects of VB.NET and C# in this book, so we’re going to cover enough to get you started, completing the projects and samples using both languages. Moreover, I’d say that the programming concepts you’ll learn here will be more than adequate to complete the great majority of day-to-day Web development tasks using ASP.NET.
Control Events and Subroutines
As I just mentioned, an event (sometimes more than one) is raised, and handler code is called, in response to a specific action on a particular control. For instance, the code below creates a server-side button and label. Note the use of the OnClick attribute on the Button control:
Example 3.1.
ClickEvent.aspx (excerpt)
<form runat="server">
<asp:Button id="btn1" runat="server" OnClick="btn1_Click" Text="Click Me" />
<asp:Label id="lblMessage" runat="server" />
</form>
When the button is clicked, it raises the Click event, and ASP.NET checks the OnClick attribute to find the name of the handler subroutine for that event. Here, we tell ASP.NET to call the btn1_Click() routine. So now we have to write this subroutine, which we would normally place within a code declaration block inside the <head> tag, like this:
Example 3.2.
ClickEvent.aspx (excerpt)
<head>
<script runat="server" language="VB">
Public Sub btn1_Click(s As Object, e As EventArgs)
lblMessage.Text = "Hello World"
End Sub
</script>
</head>
Example 3.3.
ClickEvent.aspx (excerpt)
<head>
<script runat="server" language="C#">
public void btn1_Click(Object s, EventArgs e) {
lblMessage.Text = "Hello World";
}
</script>
</head>
This code simply sets a message to display on the label that we also declared with the button. So, when this page is run and users click the button, they’ll see the message “Hello World” appear next to it.
I hope you can now start to come to grips with the idea of control events and how they’re used to call particular subroutines. In fact, there are many events that your controls can use, some of which are only found on certain controls – not others. Here’s the complete set of attributes the Button control supports for handling events:
OnClick
As we’ve seen, the subroutine indicated by this attribute is called for the
Click event, which occurs when the user clicks the button.
OnCommand
As with OnClick, the subroutine indicated by this attribute is called when the button is clicked.
OnLoad
The subroutine indicated by this attribute is called when the button is loaded for the first time – generally when the page first loads.
OnInit
When the button is initialized, any subroutine given in this attribute will be called.
OnPreRender
We can run code just before the button is rendered, using this attribute.
OnUnload
This subroutine will run when the control is unloaded from memory – basically, when the user goes to a different page or closes the browser entirely.
OnDisposed
This occurs when the button is released from memory.
OnDataBinding
This fires when the button is bound to a data source.
Don’t worry too much about the intricacies of all these events and when they happen; I just want you to understand that a single control can produce a number of different events. In the case of the
Button control, you’ll almost always be interested in the
Click event, as the others are only useful in rather obscure circumstances.
When a control raises an event, the specified subroutine (if there is one) is executed. Let’s now take a look at the structure of a typical subroutine that interacts with a Web control:
Public Sub mySubName(s As Object, e As EventArgs)
' Write your code here
End Sub
public void mySubName(Object s, EventArgs e) {
// Write your code here
}
Let’s break down all the components that make up a typical subroutine:
Public, public
Defines the scope of the subroutine. There are a few different options to choose from, the most frequently used being Public (for a global subroutine that can be used anywhere within the entire page) and Private (for subroutines that are available to the specific class only). If you don't yet understand the difference, your best bet is to stick with Public for now.
Sub, void
Defines the chunk of code as a subroutine. A subroutine is a named block of code that doesn’t return a result; thus, in C#, we use the void keyword, which means exactly that. We don’t need this in VB.NET, because the Sub keyword already implies that no value is returned.
mySubName(...)
This part gives the name we’ve chosen for the subroutine.
s As Object, Object s
When we write a subroutine that will function as an event handler, it must accept two parameters. The first is the control that generated the event, which is an Object. Here, we are putting that Object in a variable named s (more on variables later in this chapter). We can then access features and settings of the specific control from our subroutine using the variable.
e As EventArgs, EventArgs e
The second parameter contains certain information specific to the event that was raised. Note that, in many cases, you won’t need to use either of these two parameters, so you don’t need to worry about them too much at this stage.
As this chapter progresses, you’ll see how subroutines associated with particular events by the appropriate attributes on controls can revolutionize the way your user interacts with your application.
Page Events
Until now, we’ve considered only events that are raised by controls. However, there is another type of event – the page event. The idea is the same as for control events .),books:
Objects
Properties hold specific information relevant to that class of object. You can think of properties as characteristics of the objects that they represent. Our
Dog class might have properties such as Color, Height, and Length.
Methods.
>>IMAGE.
Properties
- Height
-.”
Methodsbooks.
Classes.
>>IMAGE!";
Scope:.
Understanding Inheritance.
Separating Code From Content:
sample.aspx
layout, presentation, and static content
sample.vb, sample.cs.
Summary.
Look out for more chapters from Build Your Own ASP.NET Website Using C# And VB.NET in coming weeks. If you can’t wait, download all the sample chapters, or order your very own copy now!
No Reader comments
|
http://www.sitepoint.com/vb-dot-net-c-sharp-programming/
|
CC-MAIN-2015-40
|
refinedweb
| 1,512
| 63.19
|
What is the correct or most robust way to tell from Python if an imported module comes from a C extension as opposed to a pure Python module? This is useful, for example, if a Python package has a module with both a pure Python implementation and a C implementation, and you want to be able to tell at runtime which one is being used.
One idea is to examine the file extension of module.__file__.
First, I don't think this is at all useful. It's very common for modules to be pure-Python wrappers around a C extension module—or, in some cases, pure-Python wrappers around a C extension module if it's available, or a pure Python implementation if not.
For some popular third-party examples:
numpy is pure Python, even though everything important is implemented in C;
bintrees is pure Python, even though its classes may all be implemented either in C or in Python depending on how you build it; etc.
And this is true in most of the stdlib from 3.2 on. For example, if you just import pickle, the implementation classes will be built in C (what you used to get from cpickle in 2.7) in CPython, while they'll be pure-Python versions in PyPy, but either way pickle itself is pure Python.
But if you do want to do this, you actually need to distinguish three things:

- Built-in modules, like sys, which are compiled into the interpreter itself.
- C extension modules, like cpickle.
- Pure-Python modules, like pickle.
And that's assuming you only care about CPython; if your code runs in, say, Jython, or IronPython, the implementation could be JVM or .NET rather than native code.
You can't distinguish perfectly based on __file__, for a number of reasons:

- Built-in modules don't have a __file__ at all. (This is documented in a few places—e.g., the Types and members table in the inspect docs.) Note that if you're using something like py2app or cx_freeze, what counts as "built-in" may be different from a standalone installation.
- Modules loaded from zip archives or eggs (more common with easy_install, less so with pip) will have either a blank or useless __file__.
In 3.1+, the import process has been massively cleaned up, mostly rewritten in Python, and mostly exposed to the Python layer.
So, you can use the importlib module to see the chain of loaders used to load a module, and ultimately you'll get to BuiltinImporter (builtins), ExtensionFileLoader (.so/.pyd/etc.), SourceFileLoader (.py), or SourcelessFileLoader (.pyc/.pyo).

You can also see the suffixes assigned to each of the four, on the current target platform, as constants in importlib.machinery. So, you could check that a module's path ends with an extension suffix (any(pathname.endswith(suffix) for suffix in importlib.machinery.EXTENSION_SUFFIXES)), but that won't actually help in, e.g., the egg/zip case unless you've already traveled up the chain anyway.
The best heuristics anyone has come up with for this are the ones implemented in the inspect module, so the best thing to do is to use that.

The best choice will be one or more of getsource, getsourcefile, and getfile; which is best depends on which heuristics you want.
A built-in module will raise a TypeError for any of them.
An extension module ought to return an empty string for getsourcefile. This seems to work in all the 2.5-3.4 versions I have, but I don't have 2.4 around. For getsource, at least in some versions, it returns the actual bytes of the .so file, even though it should be returning an empty string or raising an IOError. (In 3.x, you will almost certainly get a UnicodeError or SyntaxError, but you probably don't want to rely on that…)
Pure Python modules may return an empty string for getsourcefile if in an egg/zip/etc. They should always return a non-empty string for getsource if source is available, even inside an egg/zip/etc., but if they're sourceless bytecode (.pyc/etc.) they will return an empty string or raise an IOError.
The best bet is to experiment with the version you care about on the platform(s) you care about in the distribution/setup(s) you care about.
|
https://codedump.io/share/Ol1jM292dmDP/1/in-python-how-can-one-tell-if-a-module-comes-from-a-c-extension
|
CC-MAIN-2016-50
|
refinedweb
| 684
| 71.44
|
Investors in Wynn Resorts Ltd (Symbol: WYNN) saw new options begin trading this week, for the July 15th expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the WYNN options chain for the new July 15th contracts and identified one put contract of particular interest. Selling-to-open the put at the $62.50 strike could represent an attractive alternative to paying the current $63.10/share price, because the put seller collects the premium up front; that premium would drive a return on the cash commitment of 55.27% annualized, which at Stock Options Channel we call the YieldBoost.
Below is a chart showing the trailing twelve month trading history for Wynn Resorts Ltd, and highlighting in green where the $62.50 strike is located relative to that history:
Turning to the calls side of the option chain, the call contract at the $65.00 strike price has a current bid of $4.80. If an investor was to purchase shares of WYNN stock at the current price level of $63.10/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $65.00. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 10.62% if the stock gets called away at the July 15th expiration (before broker commissions). Of course, a lot of upside could potentially be left on the table if WYNN shares really soar, which is why looking at the trailing twelve month trading history for Wynn Resorts Ltd, as well as studying the business fundamentals becomes important. Below is a chart showing WYNN's trailing twelve month trading history, with the $65.00 strike highlighted in red:
The premium collected represents a 7.61% boost of extra return to the investor, or 49.58% annualized, which we refer to as the YieldBoost.
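The figures behind these YieldBoost numbers follow from simple arithmetic on the quoted WYNN data ($63.10 share price, $65.00 call strike, $4.80 call bid); a quick sketch with hypothetical helper names:

```python
def covered_call_total_return(share_price, strike, premium):
    """Total return (excluding dividends) if the stock is called away at expiration."""
    return (strike - share_price + premium) / share_price

def yieldboost(share_price, premium):
    """Extra return from the collected premium alone, relative to the share price."""
    return premium / share_price

# WYNN figures quoted in the article
print(f"{covered_call_total_return(63.10, 65.00, 4.80):.2%}")  # 10.62%
print(f"{yieldboost(63.10, 4.80):.2%}")                        # 7.61%
```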
The implied volatility in the put contract example, as well as the call contract example, are both approximately 59%.
Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 253 trading day closing values as well as today's price of $63.10) to be 50%. For more put and call options contract ideas worth looking at, visit StockOptionsChannel.com.
The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.
|
https://www.nasdaq.com/articles/interesting-wynn-put-and-call-options-for-july-15th
|
CC-MAIN-2022-27
|
refinedweb
| 368
| 62.98
|
Use JavaCPP and JavaCPP presets with ease. Base plugin for JavaCPP-related projects.
infoMap.put(new Info("ANNOY_NODE_ATTRIBUTE").cppTypes().annotations())
Sadly, I won't be of much help in this discussion as I'm fairly inexperienced at writing custom presets.
One thing I will say though is that it might be easier to use the Maven flow (as used in the official JavaCPP presets) to write and publish your presets first, and then bring them in as dependencies in a Scala project later.
I did some tests, and it seems most of the functions are working; the only thing remaining is this code:
void getNnsByItem(AnnoyIndexInterface<int,float> *ptr, int item, int n, int search_k,
                  int *result, float *distances) {
    vector<int> resultV;
    vector<float> distancesV;
    ptr->get_nns_by_item(item, n, search_k, &resultV, &distancesV);
    std::copy(resultV.begin(), resultV.end(), result);
    std::copy(distancesV.begin(), distancesV.end(), distances);
}
Basically result and distances are two arrays I send from Java full of 0s. They are supposed to be overwritten by the C++ part with the generated result. When I call this function they are not overwritten; it is as if nothing happens. Because all the other functions are working, I think there is something to change in the preset. The very same function works in JNA (called in the very same way). Have you any idea how I can fix the preset?
Yep I think I misunderstood what the plugin is doing. Thanks for highlighting this point
Where should I call
get() ?
Currently the C++ code looks like that (modified since previous message):
void get_nns_by_item(S item, size_t n, size_t search_k, vector<S>* result, vector<T>* distances) {
    std::cout << "size d av " << distances->size() << "\n"; // av -> avant in French means previous
    const Node* m = _get(item);
    _get_all_nns(m->v, n, search_k, result, distances);
    std::cout << "size d ap " << distances->size() << "\n"; // ap -> après in French means after
}
The function is called from Scala through the code generated by the preset.
val r3b = Array[Int]()
val distances3 = Array[Float]()
annoy3.get_nns_by_item(elementToTest, 100, -1, r3b, distances3)
Parameters result and distances are two arrays that I initialize empty in Scala.
When the function is called, it prints in the console:
size d av 0
size d ap 100
So the work is done.
When I check the content of the arrays in Scala they are both empty, for
distances:
println(distances3.toSeq)
it prints:
WrappedArray()
You're providing arrays of length 0, we can't change the size of arrays. You're going to need to provide non-zero-length arrays if you're expecting anything in return.
When I did so, the size of the array was the original size + 100 (because I was retrieving 100 items), which doesn't make sense if the size of the array can't change. Moreover, how does it work for get()?
When I init the java Array with 100 values (100 is the size of the result I expect), in CPP, when I print the size of the vector after retrieving the result, it is 200.
When I keep the java Array empty, in CPP, when I print the size of the vector after retrieving the result, it is 100.
Finally I made the function work by modifying the original source code:
void get_nns_by_item(S item, size_t n, size_t search_k, vector<S>* result, vector<T>* distances) {
    vector<S> resultV;
    vector<T> distancesV;
    const Node* m = _get(item);
    _get_all_nns(m->v, n, search_k, &resultV, &distancesV);
    distances->swap(distancesV);
    result->swap(resultV);
}
As a reminder original source code is there:
As you can see, I just copy the retrieved vectors into the Java arrays.
All unit tests pass now. However the solution is ugly and we can't share the preset in this state.
Is there some test I can do to help you understand?
Can I force in some way Javacpp to override the function with this code? (the function is inside a structure)
Instead of calling swap(), this seems to work:
*distances = distancesV;
*result = resultV;
I am under the feeling there is something simple to do in the preset.
Right now it looks like that:
@Properties(target="org.bytedeco.javacpp.annoy", value={@Platform(include={
    "/mnt/workspace/justice-data/source_code/src/main/cpp/annoyjavacpp.cpp", // very light instance creator
    "/mnt/workspace/justice-data/source_code/src/main/cpp/annoylib.h",
    "/mnt/workspace/justice-data/source_code/src/main/cpp/kissrandom.h"
}, link="z@.1") })
public class annoy_preset implements InfoMapper {
    public void map(InfoMap infoMap) {
        infoMap
            .put(new Info("AnnoyIndexInterface<int,float>").pointerTypes("AnnoyIndexInterface"))
            .put(new Info("AnnoyIndex<int,float,Euclidean,Kiss64Random>").pointerTypes("AnnoyIndexE").base("AnnoyIndexInterface"))
            .put(new Info("AnnoyIndex<int,float,Angular,Kiss64Random>").pointerTypes("AnnoyIndexA").base("AnnoyIndexInterface"))
            .put(new Info("ANNOY_NODE_ATTRIBUTE").cppTypes().annotations())
        ;
    }
}
May be interesting, the function which is called inside the one I am working on:
void _get_all_nns(const T* v, size_t n, size_t search_k, vector<S>* result, vector<T>* distances) {
    ... // no call to result and distances
    size_t m = nns_dist.size();
    size_t p = n < m ? n : m; // Return this many items
    std::partial_sort(nns_dist.begin(), nns_dist.begin() + p, nns_dist.end());
    for (size_t i = 0; i < p; i++) {
        if (distances)
            distances->push_back(D::normalized_distance(nns_dist[i].first));
        result->push_back(nns_dist[i].second);
    }
}
If I understand correctly, the call to push_back explains why, when I directly provide a 100-element array, I end up in C++ with a 200-element array, and for some reason, when I get it back in Java, I lose the added elements.
Here size_t n is the number of elements I am expecting.
Right now, for my own project, I have what I need. The issue is more about how to modify the preset to share it in JavaCPP-presets.
For that purpose I need to keep the lib like it is (otherwise it would require to apply the same modification to any update in the future which is not a good solution).
I have played with the std::vector template, but I am not sure how I am supposed to leverage it for Annoy.
Anyway, thinking about all of that, it seems to me the issue I have is only related to the call to push_back() and is not about passing pointers and references.
push_back() starts at position 0; when it is finished, the copy puts the values in the Java array starting at position 0.
Therefore the natural solution would be to inform the preset that the array may change size. Clearly in Java/Scala, changing the size of an array makes no sense, but to me C++ is like dark magic, so maybe there is a way to perform such a thing?
Another workaround would be to make preset wrap the original function automatically and adding the new vector stuff like I did by modifying the original source code.
A third solution would be to apply a patch automatically to the original source code of Annoy (the ugliest solution I think).
What do you think?
This concerns get_nns_by_item and get_nns_by_vector (both have the same issue for the same reason).
|
https://gitter.im/bytedeco/sbt-javacpp?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge
|
CC-MAIN-2020-16
|
refinedweb
| 1,164
| 61.26
|
29 March 2009 20:38 [Source: ICIS news]
HOUSTON (ICIS news)--US olefins market participants were sceptical that a small uptrend which began at the start of the year would last into the second quarter, they said this weekend as the International Petrochemical Conference in San Antonio, Texas, got underway.
The price improvement in the first quarter was to be expected given the condition of the market at the end of last year.
“Anything compared with December would look good,” a propylene producer said.
While demand did seem to improve in January, many had questioned whether there was indeed true new demand or restocking, with buyers taking just enough material to get by.
Price trends in March seemed to support that view. Spot prices for both ethylene and propylene fell sharply in the second week of the month.
US ethylene for March dropped by as much as 10 cents/lb to 23.00-26.25 cents/lb ($507-579/tonne, €380-434/tonne) in the week ended 13 March, while propylene shed 3.00 cents/lb to 20.00 cents/lb, according to global chemical market intelligence service ICIS pricing.
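The per-tonne figures follow from the standard cents-per-pound conversion (1 tonne is about 2,204.62 lb) and the $1 = €0.75 rate quoted at the end of the article; a quick sketch:

```python
LB_PER_TONNE = 2204.62  # pounds per metric tonne

def cents_lb_to_usd_tonne(cents_per_lb):
    """Convert a US cents-per-pound price to dollars per tonne."""
    return cents_per_lb / 100 * LB_PER_TONNE

low, high = cents_lb_to_usd_tonne(23.00), cents_lb_to_usd_tonne(26.25)
print(f"${low:.0f}-{high:.0f}/tonne")                # $507-579/tonne
print(f"€{low * 0.75:.0f}-{high * 0.75:.0f}/tonne")  # €380-434/tonne
```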
Prices for both products have remained largely steady since then.
Market participants attributed the drop to stalled demand and a sudden increase in supply from the restart of several crackers that were idled when demand plummeted last year.
Among the restarts were
DuPont in February also started up its Orange unit in
US cracker operating rates jumped to about 70% in February, from 59% in January and 56% in December 2008.
An industry consultant expected March operating rates to be near 80%, indicating more supply - and downward price pressure - could be in the pipeline for the second quarter, provided energy prices do not spike.
The key to the outlook is the business cycle. As goes the economy, so goes monomer demand. However, the timing of a recovery is anybody’s guess. In the most optimistic of forecasts, the
An upturn in ethylene could take up to two years to happen, a bank analyst predicted this week, saying the market usually lags the economy by 6-12 months.
In such a scenario, National Petrochemical & Refiner Association (NPRA) participants could still be waiting for a clearer picture for olefins at the 2010 meeting.
The NPRA 2009 International Petrochemical Conference runs Sunday through Tuesday.
($1 = €0.75)
For more on ethylene and propylene visit ICIS chemical intelligence
To discuss issues facing the chemical industry go to ICIS connect
|
http://www.icis.com/Articles/2009/03/29/9204047/npra-09-us-olefins-back-pedals-after-initial-uptrend.html
|
CC-MAIN-2015-11
|
refinedweb
| 411
| 60.95
|
0
Hi
I'm having a little trouble understanding C++ string arguments if that's what you call them.
The below example illustrates.
I'm not sure I understand what line 14 does. The vector 'myvector' is passed to myfunction (called on line 35). I'm assuming string w takes in the first value of the vector (hence myvector.begin()) and moves this begin position to the next position in the vector (post-increment).
Does this keep moving the begin() position (hence begin++) as long as there are values in the vector. I'm just not sure how it continually moves through the vector without a loop of some kind.
Also, would anyone know why a pointer (as in 'begin') is used in this situation. Thanks
1. #include <iostream>
2. #include <string>
3. #include <vector>
4. #include <cstdlib>
5.
6. using namespace std;
7.
8.
9. template <typename Iterator>
10. std::string myfunction(Iterator begin, Iterator end)
11. {
12.
13.
14. std::string w(1, *begin++); // I'm not sure what this means. Can't find it
15. //in C++ reference?
16. std::string result = w;
17. std::string entry;
18.
19. for ( ; begin != end; begin++)
20. {
21. int k = *begin;
22. }
23.
24.
25. int main()
26. {
27. vector<char> myvector;
28.
29. for (int i = 0; i < 256; i++)
30. {
31. myvector[i] = string(1,i); //fills vector with //ASCII values.
32.
33. }
34. // now call myfunction
35. cout << myfunction(myvector.begin(),mvector.end());
36. }
Edited by daino: formatting
|
https://www.daniweb.com/programming/software-development/threads/471768/c-strings-arguments
|
CC-MAIN-2018-30
|
refinedweb
| 251
| 71.92
|
How Seven Segment Display Works & Interface it with Arduino
Arduino Code
Now, it’s time to light up the display with some code.
Before you can start writing code to control the 7-segment displays, you’ll need to download the SevSeg Arduino Library first. You can do that by visiting the GitHub repo and manually downloading the library or, just click this button to download the zip:
To install it, open the Arduino IDE, go to Sketch > Include Library > Add .ZIP Library, and then select the SevSeg ZIP file that you just downloaded. If you need more details on installing a library, visit this Installing an Arduino Library tutorial.
Once you have the library installed, you can copy this sketch into the Arduino IDE. The following test sketch will count up from 0 to 9. Try the sketch out; and then we will explain it in some detail.
#include "SevSeg.h"

SevSeg sevseg;

void setup() {
  //Set to 1 for single digit display
  byte numDigits = 1;

  //defines common pins while using multi-digit display. Left empty as we have a single digit display
  byte digitPins[] = {};

  //Defines arduino pin connections in order: A, B, C, D, E, F, G, DP
  byte segmentPins[] = {3, 2, 8, 7, 6, 4, 5, 9};
  bool resistorsOnSegments = true;

  //Initialize sevseg object. Uncomment second line if you use common cathode 7 segment
  sevseg.begin(COMMON_ANODE, numDigits, digitPins, segmentPins, resistorsOnSegments);
  //sevseg.begin(COMMON_CATHODE, numDigits, digitPins, segmentPins, resistorsOnSegments);

  sevseg.setBrightness(90);
}

void loop() {
  //Display numbers one by one with 2 seconds delay
  for(int i = 0; i < 10; i++) {
    sevseg.setNumber(i);
    sevseg.refreshDisplay();
    delay(2000);
  }
}
Code Explanation:
The sketch starts by including the SevSeg library, which simplifies the control signals sent to the 7-segment display. Next we create a SevSeg object that we can then use throughout the sketch.

#include "SevSeg.h"
SevSeg sevseg;
Next, we have to specify how many digits the display has. As we are using a single digit display, we set it to 1. In case you’re using a 4 digit display, set this to 4.
//Set to 1 for single digit display
byte numDigits = 1;
The digitPins array simply defines the ‘common pins’ when using a multi-digit display. Leave it empty if you have a single digit display. Otherwise, provide the arduino pin numbers that the ‘common pins’ of individual digits are connected to. Order them from left to right.
//defines common pins while using multi-digit display
//Left empty as we have a single digit display
byte digitPins[] = {};
The second array we see being initialized is the segmentPins array. This is an array of all the Arduino pin numbers which are connected to pins on the LED display which control the segments; so in this case these are the ones that we connected directly from the breadboard to the Arduino. These also have to be put in the correct order as the library assumes that the pins are in the following order: A, B, C, D, E, F, G, DP.
//Defines arduino pin connections in order: A, B, C, D, E, F, G, DP
byte segmentPins[] = {3, 2, 8, 7, 6, 4, 5, 9};
After creating these variables, we then pass them to the SevSeg constructor using the begin() function.

//Initialize sevseg object
sevseg.begin(COMMON_ANODE, numDigits, digitPins, segmentPins, resistorsOnSegments);
In the ‘loop’ section, the program counts up from 0 to 9 using a ‘for’ loop and the variable ‘i’. Each time, it uses the SevSeg library function setNumber() along with refreshDisplay() to set the number on the display.
Then there is a two-second delay before ‘i’ is incremented and the next number displayed.

for(int i = 0; i < 10; i++) {
  sevseg.setNumber(i);
  sevseg.refreshDisplay();
  delay(2000);
}
Arduino Project
Rolling Dice
As a supplement, here’s another project, which allows people who need accessibility tech to “roll” the dice. You can use it to play games like Yahtzee, ludo etc. It uses the same Arduino setup except we use a tactile switch for quick rolling.
The whole point of a die is to provide a way to randomly come up with a number from 1 to 6. And the best way to get a random number is to use the built-in function random(min, max). This takes two parameters: the first specifies the lower bound of the random value (inclusive) and the second specifies the upper bound (exclusive), meaning the random number will be generated between min and max-1.
#include "SevSeg.h"

SevSeg sevseg;

const int buttonPin = 10;  // the number of the pushbutton pin

// variables will change:
int buttonState = 0;       // variable for reading the pushbutton status

void setup() {
  byte numDigits = 1;
  byte digitPins[] = {};
  byte segmentPins[] = {3, 2, 8, 7, 6, 4, 5, 9};
  bool resistorsOnSegments = true;

  sevseg.begin(COMMON_ANODE, numDigits, digitPins, segmentPins, resistorsOnSegments);
  sevseg.setBrightness(90);

  // initialize the pushbutton pin as an input:
  pinMode(buttonPin, INPUT);
}

void loop() {
  // read the state of the pushbutton value:
  buttonState = digitalRead(buttonPin);
  if (buttonState == HIGH) {
    sevseg.setNumber(random(1,7));
    sevseg.refreshDisplay();
  }
}
|
https://lastminuteengineers.com/seven-segment-arduino-tutorial/
|
CC-MAIN-2020-16
|
refinedweb
| 843
| 60.65
|
Features and Improvements in ArangoDB 3.1
The following list shows in detail which features have been added or improved in ArangoDB 3.1. ArangoDB 3.1 also contains several bugfixes that are not listed here.
SmartGraphs
ArangoDB 3.1 adds a first major Enterprise Edition feature called SmartGraphs. SmartGraphs form an addition to the already existing graph features and allow graphs to scale beyond a single machine while keeping almost the same query performance. The SmartGraph feature is suggested for all graph database use cases that require a cluster of DB-Servers for whatever reason. You may have a graph that is too large to be stored on a single machine; or you may have a small graph but also need additional data which has to be sharded, and want to keep all of it in the same environment; or you may simply use the cluster for high availability. In all the above cases SmartGraphs will significantly increase the performance of graph operations. For more detailed information read the SmartGraphs section.
Data format
Communication Layer
ArangoDB up to 3.0 used libev for the communication layer. ArangoDB starting from 3.1 uses Boost ASIO.
Starting with ArangoDB 3.1 we begin to provide the VelocyStream protocol (VST) as an addition to the established HTTP protocol.
A few options concerning communication have changed; please check out Incompatible changes in 3.1.
Cluster
For its internal cluster communication, a bundled version of curl is now being used. This enables asynchronous operation throughout the cluster and should improve general performance slightly.
Authentication is now supported within the cluster.
Document revisions cache
The ArangoDB server now provides an in-memory cache for frequently accessed document revisions. Documents that are accessed during read/write operations are loaded into the revisions cache automatically, and subsequently served from there.
The cache has a total target size, which can be controlled with the startup
option
--database.revision-cache-target-size. Once the cache reaches the
target size, older entries may be evicted from the cache to free memory. Note that
the target size currently is a high water mark that will trigger cache memory
garbage collection if exceeded. However, if all cache chunks are still in use
when the high water mark is reached, the cache may still grow and allocate more
chunks until cache entries become unused and are allowed to be garbage-collected.
The cache is maintained on a per-collection basis, that is, memory for the cache
is allocated on a per-collection basis in chunks. The size for the cache memory
chunks can be controlled via the startup option
--database.revision-cache-chunk-size.
The default value is 4 MB per chunk.
Bigger chunk sizes allow saving more documents per chunk, which can lead to more
efficient chunk allocation and lookups, but will also lead to memory waste if many
chunks are allocated and not fully used. The latter will be the case if there exist
many small collections which all allocate their own chunks but not fully utilize them
because of the low number of documents.
AQL
Functions added
The following AQL functions have been added in 3.1:
OUTERSECTION(array1, array2, …, arrayN): returns the values that occur only once across all arrays specified.
DISTANCE(lat1, lon1, lat2, lon2): returns the distance between the two coordinates specified by (lat1, lon1) and (lat2, lon2). The distance is calculated using the haversine formula.
JSON_STRINGIFY(value): returns a JSON string representation of the value.
JSON_PARSE(value): converts a JSON-encoded string into a regular object
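As a sketch of what DISTANCE() computes, here is the haversine formula in plain Python (a spherical-Earth approximation for illustration, not ArangoDB's actual implementation):

```python
from math import asin, cos, radians, sin, sqrt

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) pairs in degrees."""
    R = 6371000  # mean Earth radius in meters
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Paris to Berlin: roughly 880 km
print(round(haversine(48.8566, 2.3522, 52.5200, 13.4050) / 1000))
```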
Index usage in traversals
3.1 allows AQL traversals to use other indexes than just the edge index. Traversals with filters on edges can now make use of more specific indexes. For example, the query
FOR v, e, p IN 2 OUTBOUND @start @@edge FILTER p.edges[0].foo == "bar" RETURN [v, e, p]
may use a hash index on
["_from", "foo"] instead of the edge index on just
["_from"].
Optimizer improvements
Make the AQL query optimizer inject filter condition expressions referred to by variables during filter condition aggregation. For example, in the following query
FOR doc IN collection LET cond1 = (doc.value == 1) LET cond2 = (doc.value == 2) FILTER cond1 || cond2 RETURN { doc, cond1, cond2 }
the optimizer will now inject the conditions for
cond1 and
cond2 into the
filter condition
cond1 || cond2, expanding it to
(doc.value == 1) || (doc.value == 2)
and making these conditions available for index searching.
Note that the optimizer previously already injected some conditions into other conditions, but only if the variable that defined the condition was not used elsewhere. For example, the filter condition in the query
FOR doc IN collection LET cond = (doc.value == 1) FILTER cond RETURN { doc }
already got optimized before because
cond was only used once in the query and the
optimizer decided to inject it into the place where it was used.
This only worked for variables that were referred to once in the query. When a variable was used multiple times, the condition was not injected as in the following query
FOR doc IN collection LET cond = (doc.value == 1) FILTER cond RETURN { doc, cond }
3.1 allows using this condition so that the query can use an index on
doc.value
(if such index exists).
Miscellaneous improvements
The performance of the
[*] operator was improved for cases in which this operator
did not use any filters, projections and/or offset/limits.
The AQL query executor can now report the time required for loading and locking the
collections used in an AQL query. When profiling is enabled, this time is reported in the
extra.profile value of the result. The loading and locking time can also be viewed in the
AQL query editor in the web interface.
Audit Log
Audit logging has been added, see Auditing.
Client tools
Added option
--skip-lines for arangoimp
This allows skipping the first few lines of the import file in case CSV or TSV
import is used and some initial lines should be skipped from the input.
Web Admin Interface
The usability of the AQL editor significantly improved. In addition to the standard JSON output, the AQL Editor is now able to render query results as a graph preview or a table. Furthermore the AQL editor displays query profiling information.
Added a new Graph Viewer in order to replace the technically obsolete previous version. The new Graph Viewer is based on Canvas but also includes a first WebGL implementation (limited functionality; this will change in the future). The new Graph Viewer offers a smooth way to discover and visualize your graphs.
The shard view in cluster mode now displays a progress indicator while moving shards.
Authentication
Up to ArangoDB 3.0 authentication of client requests was only possible with HTTP basic authentication.
Starting with 3.1 it is now possible to also use JSON Web Tokens (JWT) for authenticating incoming requests.
For details check the HTTP authentication chapter. Both authentication methods are valid and will be supported in the near future. Use whatever suits you best.
Foxx
GraphQL
It is now easy to get started with providing GraphQL APIs in Foxx, see Foxx GraphQL.
OAuth2
Foxx now officially provides a module for implementing OAuth2 clients, see Foxx OAuth2.
Per-route middleware
It’s now possible to specify middleware functions for a route when defining a route handler. These middleware functions only apply to the single route and share the route’s parameter definitions. Check out the Foxx Router documentation for more information.
|
https://www.arangodb.com/docs/devel/release-notes-new-features31.html
|
CC-MAIN-2020-16
|
refinedweb
| 1,254
| 56.76
|
Rail Spikes - Home tag:railspikes.com,2009:mephisto/ Mephisto Drax 2009-07-01T01:44:29Z Luke Francl tag:railspikes.com,2009-07-01:2004 2009-07-01T01:44:00Z 2009-07-01T01:44:29Z Testing HTTP Authentication <p>If you ever need to test <span class="caps">HTTP</span> Authentication in your <em>functional</em> tests, here is how you do it:</p> <table class="CodeRay"><tr> <td title="click to toggle" class="line_numbers"><pre>1<tt> </tt>2<tt> </tt>3<tt> </tt>4<tt> </tt><strong>5</strong><tt> </tt>6<tt> </tt></pre></td> <td class="code"><pre><span class="r">def</span> <span class="fu">test_http_auth</span><tt> </tt> <span class="iv">@request</span>.env[<span class="s"><span class="dl">'</span><span class="k">HTTP_AUTHORIZATION</span><span class="dl">'</span></span>] = <span class="co">ActionController</span>::<span class="co">HttpAuthentication</span>::<span class="co">Basic</span>.encode_credentials(<span class="s"><span class="dl">"</span><span class="k">quentin</span><span class="dl">"</span></span>, <span class="s"><span class="dl">"</span><span class="k">password</span><span class="dl">"</span></span>)<tt> </tt> get <span class="sy">:show</span>, <span class="sy">:id</span> => <span class="iv">@foobar</span>.id<tt> </tt><tt> </tt> assert_response <span class="sy">:success</span><tt> </tt><span class="r">end</span><tt> </tt></pre></td> </tr></table> <p>This is much like <a href="">testing <span class="caps">SSL</span></a>.</p> <p>Hat tip: Philipp Führer for <a href="">Functional test for <span class="caps">HTTP</span> Basic Authentication in Rails 2</a>.</p> <img src="" height="1" width="1"/> Luke Francl tag:railspikes.com,2009-06-22:1999 2009-06-22T22:25:00Z 2009-06-25T06:41:10Z Adding Routes for Tests <p>I like to be extremely judicious with use of routes. 
Fewer routes means less memory consumption and fewer confusing magical methods.</p> <p>I always delete the default route <code>map.connect ':controller/:action/:id'</code> (you should too, otherwise all your pretty RESTful routing is easily circumvented). Since Rails now has the ability to remove unneeded RESTful routes, I’ve been removing those, too.</p> <p>However, this judiciousness recently painted me into a corner. I have a controller action that I would like to test and it’s wired up like this:</p> <p><code>map.logout '/logout', :controller => 'user_sessions', :action => 'destroy', :method => 'delete'</code></p> <p>I don’t have this mapped any other way, because why should></pre></td> <td class="code"><pre><span class="r">def</span> <span class="fu">test_logout_should_redirect_to_root_path</span><tt> </tt> <span class="co">UserSession</span>.create(<span class="co">User</span>.first)>Unfortunately, the test fails with <code>ActionController::RoutingError: No route matches {:action=>"destroy", :controller=>"user_sessions"}</code>! Huh?</p> <p>The problem is that the <code>delete</code> (and <code>get</code>, <code>post</code>, etc.) method can’t find the route that I created.</p> <p>Initially, I worked around this using <code>with_routing</code> to define a whole new set of routes just for that>with_routing <span class="r">do</span> |set|<tt> </tt> set.draw <span class="r">do</span> |map|<tt> </tt> map.resource <span class="sy">:user_sessions</span>, <span class="sy">:only</span> => [<span class="sy">:destroy</span>]<tt> </tt> map.root <span class="sy">:controller</span> => <span class="s"><span class="dl">'</span><span class="k">foobars</span><span class="dl">'</span></span>, <span class="sy">:action</span> => <span class="s"><span class="dl">'</span><span class="k">index</span><span class="dl">'</span></span><tt> </tt> <span class="r">end</span>>But that was annoying. 
And after I had more than one route exhibiting this problem, it got <em>really</em> annoying.</p> <p>Fortunately, I found Sam Ruby’s post <a href="">Keeping Up With Rails</a> about the challenge of Rails’ minor, quasi-documented <span class="caps">API</span> changes. Sam’s post has a bit about how you can add new routes without clearing the existing routes in Rails 2.3.2, which I knew was possible. Following Sam’s link to <a href="">the commit</a> (there’s no docs for this) showed how to do it.</p> <p>Now, I’ve added this to <code>test_helper.rb</code>:<">ActionController::TestCase</span><tt> </tt> <span class="c"># add a catch-all route for the tests only.</span><tt> </tt> <span class="co">ActionController</span>::<span class="co">Routing</span>::<span class="co">Routes</span>.draw { |map| map.connect <span class="s"><span class="dl">'</span><span class="k">:controller/:action/:id</span><span class="dl">'</span></span> }<tt> </tt><span class="r">end</span><tt> </tt></pre></td> </tr></table> <p>The downside to this is that real problems with broken routes may get swept under the rug. You could be more restrictive with the routes you are adding just for tests to overcome that problem.</p> <p><strong>Update:</strong> Thanks to Adam Cigánek in the comments for pointing out my error in why the route didn’t get picked up in the tests. <em>I had the condition hash wrong!</em></p> <p>Instead of:</p> <p><code>map.logout '/logout', :controller => 'user_sessions', :action => 'destroy', :method => 'delete'</code></p> <p>It should be:</p> <p><code>map.logout '/logout', :controller => 'user_sessions', :action => 'destroy', :conditions => {:method => :delete}</code></p> <p>The first way I had worked correctly when testing manually, but only because without <code>:method</code>, the route responds to all <span class="caps">HTTP</span> methods (still no clue why my test didn’t pick it up, though).</p> <p>Interestingly enough, there’s another gotcha here. 
Notice that I specified <code>:method => 'delete'</code>. Even when put into the <code>:conditions</code> hash, that doesn’t work. You <span class="caps">MUST</span> pass a symbol (<code>:delete</code>) for the <span class="caps">HTTP</span> method.</p> <p>This fixed my problem, but if I ever do need to add routes for tests, now I know how…</p> <img src="" height="1" width="1"/> Luke Francl tag:railspikes.com,2009-06-17:1989 2009-06-17T20:51:00Z 2009-06-17T20:54:57Z JavaScript gotcha: storing objects in an associative array <p>I just ran into a tricky gotcha in JavaScript.</p> <p>I was trying to store some objects in an associative array. Based on my experience with Java, Ruby, and other languages, I expected that givenvar</span> dictionary = {};<tt> </tt><tt> </tt><span class="r">var</span> obj1 = {}; <tt> </tt><span class="r">var</span> obj2 = {};<tt> </tt><tt> </tt>dictionary[obj1] = <span class="s"><span class="dl">'</span><span class="k">foo</span><span class="dl">'</span></span><tt> </tt>dictionary[obj2] = <span class="s"><span class="dl">'</span><span class="k">bar</span><span class="dl">'</span></span><tt> </tt></pre></td> </tr></table> <p>The result of <code>dictionary[obj1]</code> would be ‘foo’ and <code>dictionary[obj2]</code> would be ‘bar’.</p> <p>This is not the case!</p> <p>The problem is that JavaScript objects are not really hash tables. They’re associative arrays, and the key can <em>only</em> be a String. When you insert an object into a associative array, <code>toString()</code> is called and that is used as the key. Unfortunately, the default <code>toString</code> implementation for JavaScript objects returns “[object Object]”...which is not only very unhelpful when debugging, but doesn’t provide you with a unique key for your associative array.</p> <p>You can work around this problem by overriding <code>toString</code>. Or you can figure out another way to associate your object with a value. 
D’oh!</p> <img src="" height="1" width="1"/> Luke Francl tag:railspikes.com,2009-06-04:1978 2009-06-04T16:39:00Z 2009-06-04T16:42:50Z Sprinkle: the provisioning tool for people who don't have huge server clusters <p>I’ve recently been trying to find a good server automation tool that meets my needs. I looked at <a href="">Chef</a> and <a href="">Puppet</a>.</p> <p>They are both awesome for what they do, but what I don’t like is all the infrastructure I have to maintain to run Chef or Puppet. You need a server to host your server configuration on. But I only have one server![1] Chef does have a <a href="">solo version</a> which can download configuration from a web server and run it. That’s cool, but I don’t want to have a web server just for putting server configuration on.</p> <p>When the time commitment to set up one of these tools greatly exceeds how long it takes me to bring up a new slice and run through the standard Apache/DB/Passenger stack, I lose interest. In the end, these are great tools for managing a cluster of machines and bringing up a new app in the cluster quickly—and keeping it up to date automatically. If you have big infrastructure needs, they make sense. If you just want to set up a single slice…ugh.</p> <p>After reading a bit about how Puppet and Chef work, what I really wanted was the ability to <em>push</em> server provisioning recipes. I want to maintain the server config in my repository and then provision a new server with a command I run on my machine. Sort of like <a href="">deprec</a>, but understandable.</p> <p>Fortunately, I found <a href="">Sprinkle</a> and <a href="">passenger-stack</a>.</p> <p>Sprinkle lets me quickly define which packages I want installed and push it out to a server to run (via Capistrano, Vlad, or Net::SSH). Sprinkle makes it easy to install software using apt, gem, or source. 
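Each package is a short Ruby DSL block. Here’s a hypothetical sketch (the package name, apt target, and verify step are my own illustration, not taken from passenger-stack):</p>

```ruby
# Hypothetical Sprinkle package: install git from apt, and verify
# by checking for the executable, so re-running it is a no-op
# once git is already present.
package :git do
  description 'Git distributed version control'
  apt 'git-core'
  verify do
    has_executable 'git'
  end
end
```

<p>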
And unlike a simple shell script, Sprinkle tests whether or not the software is installed before running, and has a concept of dependencies.</p> <p>Passenger-stack removes the pain of writing my own rules for what to install. It comes with the standard stuff you’d need, and you can customize it from there.</p> <p>Here’s how you install all the software you need for a fresh server, after downloading <a href="">passenger-stack</a>:</p> <p><code>sprinkle -c -s config/install.rb</code></p> <p>The best part is that you can run that command again, and it won’t do anything. So you can add new software to your stack, then run it against your server, and only the new software will get installed.</p> <p>This gives you a great way to manage natively compiled gems and ensure that if you ever need to spin up a staging server or a demo server, everything you need gets installed.</p> <p>Check out this screencast by Ben Schwarz, author of passenger-stack.</p> <p><a href="">Passenger-stack demo</a> from <a href="">Ben Schwarz</a> on <a href="">Vimeo</a>.</p> <p>It’s not as smart as Chef and Puppet. It’s not transactional and servers don’t check for new software to install automatically. But it sure is easy. That’s why I call <a href="">Sprinkle</a> “the provisioning tool for people who don’t have huge server clusters.”</p> <p><sup>1</sup> Basically. I have many servers with many different applications on them. And I have a few servers that have multiple environments, but the same software. 
That’s my big driver for wanting a provisioning tool.</p> <img src="" height="1" width="1"/> Jon tag:railspikes.com,2009-06-02:1980 2009-06-02T16:06:00Z 2009-06-02T16:27:48Z Estimating software: a rule of thumb <p>Estimating software is hard, but most of us have to do it – whether we’re estimating an entire project for a client, or a new feature for a boss, or a change to one of our own projects.</p> <p>I’ve found the following rule helpful when estimating software. This comes from about four years of estimating Rails projects to consulting clients, and moving from bad – dramatically underestimating fixed-bid projects – to pretty good – usually overestimating time & materials projects slightly. (And more importantly, knowing when I can’t estimate, because the scope is too vague or too large.)</p> <h2>Jon’s Law of Estimates</h2> <p>Software difficulty is primarily determined by <em>volume</em>, <em>logic</em>, and <em>integration</em>.</p> <h2>Jon’s Law of Estimates, explained</h2> <p>1. <strong>Volume</strong> is easy to understand. If you’re building software that does more, it will require more work. So if you’re estimating a project that stores recipes, and you’re estimating another project that stores recipes <span class="caps">AND</span> shopping lists, you can expect that the second one will take more work (if everything else is equal).</p> <p>2. <strong>Logic</strong> refers to the rules or business logic behind a feature. The more rules there are, the more work there is. Imagine that our recipe system requires that recipes from some users are manually approved by an administrator, and checks to see that each ingredient in the recipe is present in the step-by-step instructions, and only allows a user to post 3 recipes per hour, and lets users propose alternative versions of a recipe, and lets an alternative version replace the regular version if it achieves a certain rating, etc. 
That’s more work than a recipe system that just lets users create and rate recipes, even though the volume of features may not be any larger.</p> <p>Interestingly, a technology can make some logic trivial and some logic hard. <a href="">Nested forms</a> are a great example of this. Before Rails 2.3, Rails made it trivial to do <span class="caps">CRUD</span> on a single table at a time, but difficult to handle multiple tables. Now it is (almost) trivial to do <span class="caps">CRUD</span> on multiple tables at a time.</p> <p>3. <strong>Integration</strong> points are usually deserving of special consideration in an estimate. This includes talking to a web services <span class="caps">API</span>, another local software system, a data feed, a complex library, etc. Not only do integration points often take time to get right, but they can become sinkholes of time when the documentation is inadequate or incorrect, the other system doesn’t play nice, or you can’t easily test the integration. And your estimate depends on something out of your control: the other system.</p> <h2>External factors</h2> <p>These rules only apply to the difficulty of the software. Several external factors are important as well. These include, most notably, the <strong>client</strong> and the <strong>team</strong>. The client can make a project easy, or they can make a project difficult. Similarly, the right team might be able to blaze through a project quickly, while the wrong team may never finish at all.</p> <h2>The other side of estimating</h2> <p>Here’s the thing about these rules: they’re relative, not absolute. There is no rule that says “Features take 5 days, and integration points take 10”. So estimating requires comparisons. This means that if you’ve never built a Rails app before, you’ll have trouble estimating a Rails project. 
But once you’ve built a few, you can compare the volume, logic, and integration points of a new project to the volume, logic, and integration points of the previous ones.</p> <p>So estimating requires intuition and experience as well as analysis (e.g. Jon’s Law of Estimates). The key to estimating is to combine analysis and intuition, and to let each side refine the other.</p> <img src="" height="1" width="1"/> Luke Francl tag:railspikes.com,2009-05-10:1972 2009-05-10T20:07:00Z 2009-05-10T20:08:46Z Announcing VeloTweets, Pulse of the Peloton <p>I’m pleased to announce <a href="">VeloTweets</a>, the pulse of the peloton, a curated collection of professional cycling Twitter activity. The idea and driving force came from <a href="">Jamie Thingelstad</a>. I did most of the development, and <a href="">Norm Orstad</a> designed the site. <a href="">Chris Hatch</a> helped a lot on the back end, providing a list of cyclists on Twitter, filling out profiles and affiliations, and doing research.</p> <h2>What’s Different about VeloTweets?</h2> <p>We wanted to make VeloTweets different than the other subject matter aggregators out there. We wanted a hook that would combine the immediacy of Twitter with pro cycling in a compelling way.</p> <p>Here’s what we came up with.</p> <p>First, we focused on who to include. Instead of everyone who’s talking about cycling, this contains only pro cyclists (and a few others associated with the sport, like managers or team mechanics).</p> <p>Second, we extended the data that is given to us by Twitter. 
We can enter every cyclist’s real name, nationality, and team, as well as expanded biographical data (here’s <a href="">Lance Armstrong’s profile</a> for instance).</p> <p>Third, we collected cycling events in a <a href="">calendar</a> that’s displayed on the site, and added a Message of the Day that’s tuned to what’s happening in the racing world each day.</p> <p>Fourth, we brought in photos from the tweets (only <a href="">TwitPic</a> is supported right now). We store references to the photos in our DB so we can show the latest photos, along with photos that individuals have posted, and <a href="">all of them</a>. This turns out to be really cool because where else are you going to see <a href="">photos like this one</a> as they happen?</p> <p>After all this we still weren’t totally satisfied with what we’d come up with, because it still looked too much like Twitter (long list of messages in reverse chronological order). Then Jamie came up with the idea of only displaying each cyclist’s most recent tweet in a grid. We really like how this works because people who tweet a lot (like Lance) don’t dominate the page. It gives you an overview of what the whole peloton is talking about without letting a few people dominate it.</p> <h2>Behind the scenes</h2> <p>This application uses Rails 2.3, the <a href="">Suspenders base app</a>, <a href="">make_resourceful</a>, <a href="">semantic_form_builder</a> and the excellent <a href="">HTTPClient</a> library for interacting with Twitter (give up on net/http – <a href="">it is full of fail</a>).</p> <p>Twitter <span class="caps">API</span> access is done directly with <span class="caps">JSON</span>. We pull the <code>friends_timeline</code> and insert those tweets into the database.</p> <h2>Developing for Twitter</h2> <p>I’ve been doing a number of Twitter-related projects lately. The first was <a href="">Twistr</a>, which combines Twitter and Flickr LOLcat style for occasionally amusing results. 
Then <a href="">Barry Hess</a> and I built <a href="">Follow Cost</a>, which tells you how much someone tweets <em>before</em> you follow them. I created <a href="">a prototype for FanChatter’s next product</a> based on Twitter conversation aggregation. Now comes VeloTweets and another project that’s not public yet.</p> <p>I really enjoy working with the Twitter APIs. It’s fun to develop applications that utilize the platform that the Twitter folks have built.</p> <p>On that front, I recently received a copy of <a href=";tag=justlook-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=0596154615">Twitter <span class="caps">API</span>: Up and Running</a> (Follow Cost is mentioned on page 70!) which I will review in full soon. You don’t <em>need</em> a book on the Twitter <span class="caps">API</span> to develop applications for it, but it does provide some ideas and a useful reference, as well as details on some interesting aspects of Twitter (for example, I did not know that direct messages disappear if they are deleted by either party).</p> <img src="" height="1" width="1"/> Jon tag:railspikes.com,2009-04-30:1966 2009-04-30T18:49:00Z 2009-04-30T18:50:06Z Music and programming: interviews with Chad Fowler and Dave Thomas <p>I’ll be speaking at <a href="">RailsConf 2009</a> this year on music and software development (<a href="">Five musical patterns for programmers</a>). The basic premise is that software development and music actually have quite a bit in common. This may be surprising to some people, who see programming as a cold, rational left-brain sort of thing, like science. But we programmers know that this is not really the case at all.</p> <p>So as a prelude to my talk, I decided to interview two programmer-musicians on the subject: Chad Fowler and Dave Thomas. Both compose and perform music, and both are noted programmers. 
Here is the interview.</p> <p><em><strong>Rail Spikes:</strong> Tell us a little about your background with both programming and music.</em></p> <p><strong>Chad Fowler:</strong> I started my professional life as a saxophonist in Memphis. I played the Beale Street clubs and all the typical Memphis professional musician stuff. Among others, I played for a while with <a href="">Ann Peebles</a> and her husband <a href="">Don Bryant</a> with the rhythm section from all the old <a href="">Hi Records</a> recordings. I did mostly R&B and jazz professionally but I was probably most well known in the Memphis community for making “strange” music. Before playing music professionally, I played guitar in punk bands in high school. I was a fan of punk, heavy metal, hip hop, pop, (new) classical and pretty much everything else. As I immersed myself in the world of jazz, it became quickly clear that the jazz community doesn’t like punk and other less “serious” types of music and has an almost religious negative reaction to jazz musicians who do.</p> <p>It was almost as if any deviation from the “normal” world of jazz made you a traitor. So I did the natural thing: started a group called The Jazz Traitors, which played music that 1) we loved and 2) offended the jazz community (not necessarily in that order).</p> <p>I was also very interested in composing “classical” music. I studied with a composer named <a href="">Kamran Ince</a>, who is still my favorite such composer.</p> <p>As for programming, I’ve been interested in programming since I was a young child using my Commodore 64. I wasn’t really that good at it as a kid but I played around a lot. I didn’t get serious until I picked up programming again as a hobby while I was a professional musician. After a late night gig at a bar, it was relaxing to go home and unwind to some C programming tutorials. 
I didn’t have a <em>need</em> to program, nor did I have a project in mind (except that I have always loved video games and wanted to learn how they worked). But I got so into it that I ended up getting a job in computer support because a friend filled out an application for me.</p> <p>Being the gamer I am, as soon as I started in computer support, I naturally wanted to “level up”. That meant becoming a network administrator. Then a system administrator. Then a programmer, then a designer, then an architect, then a <span class="caps">CTO</span>, etc. Now here I am. It’s been fun.</p> <p><strong>Dave Thomas:</strong> There was always a lot of music in our house. My father liked to play the piano and the organ (I learned to solder as he built a Heathkit organ from a kit in the late 60s). My mother liked Broadway musicals. So we’d often experience alternating hours of Chopin and South Pacific. My brother was also musical. I wasn’t particularly, but I enjoyed noodling on the piano, and spent hours just playing with chords and progressions.</p> <p>I’ve been programming since I was 15 or so.</p> <p><em><strong>Rail Spikes:</strong> Some developers – yourself included – have suggested a similarity between programming and music composition or performance. How exactly are music and programming similar?</em></p> <p><strong>Dave Thomas:</strong> I’m not sure, but I think it might be something to do with the discovery of patterns. Both music and code consist of nested sets of variations and repetitions. There’s a rhythm to executing code, in the same way there’s a rhythm to music. It is never exact, but it’s there. After a while, I found I could imagine the rhythm and structure of my programs as they run, in the same way you can pick apart the structure of a piece of music as you listen to it. And, just as with music, it takes experience to be able to feel the deeper structures and notice the more extreme variations. 
But being able to spot them in programs makes coding simpler and more interesting. The basic coding structures—loops, method calls, and so on—provide the framework for composing in the same way that staff and bar lines do for music. Algorithms are like the progressions, and data becomes the notes. And in the same way that good music takes all these things and then surprises you, good code does the same thing. It isn’t mechanical and repetitive: instead it uses the constraints to build something bigger and more interesting.</p> <p><strong>Chad Fowler:</strong> It’s hard for me to put my finger on. There’s something similar in the way I think when I do each.</p> <p>I think it all boils down to language, though. In all of these cases (including learning actual language), you take a bunch of tokens (notes, sounds, grunts, functions, classes) and combine them into a grammar which you use to express ideas. The way you do that is totally up to you as long as the intended ideas are communicated. With computer programs, they have to do what they’re meant to do. With music, they express or evoke emotions, paint pictures, cause anxiety or whatever.</p> <p>Some computer programs evoke emotions and cause anxiety as well.</p> <p><em><strong>Rail Spikes:</strong> Is Ruby development more like improvised jazz or composed classical music?</em></p> <p><strong>Chad Fowler:</strong> I think it’s both. And I don’t think Ruby is any different in this than other languages. Much of the discussion about the relationship between programming and music focuses on the more obvious idea of programming as composition. It makes sense, since programmers tend to sit and type their ideas into an editor and then eventually execute it. The programs can be checked, tested, refactored, etc. before the actual performance. This is how classical composition works as well.</p> <p>But the less obvious angle is that in many situations, programming is like performance. 
In fact, even in music, improvisation is really just real time composition. You don’t get a chance to refactor because your “code” is executed as you write it.</p> <p>I’ve had this same feeling while debugging production problems, hacking new features on a tight deadline, or sometimes during the initial creation of an application. The same synapses are firing as when I was trying to play Cherokee at 200 beats per minute. Mistakes can’t be erased, so they have to be nuanced into (worst case) insignificant events or (best case) important drivers behind the work.</p> <p>From a purely development-oriented perspective, <span class="caps">TDD</span> is more like improvisation than composition. I think that’s what I like about it. It’s motivating and creative in an exciting, time-sensitive way. You take small steps and see where they lead you. Sure, you can always revert your changes if you paint yourself into a corner but part of the fun and challenge is to not paint yourself into a corner.</p> <p>One thing jazz musicians like to say is that every wrong note is just a half step away from a right note. <span class="caps">TDD</span> is like that. You might take a slightly wrong turn. It’s fun to see if you can course-correct without starting over.</p> <p><em><strong>Rail Spikes:</strong> Do developers need to be musically inclined? Does it help?</em></p> <p><strong>Chad Fowler:</strong> Obviously not. Some of the best programmers I know are not musicians. I can’t tell if it helps, but I would guess that developers who are also musicians are different than developers who aren’t. I don’t think that’s because being a musician changes people, though. I think it’s because the people who are both are the kind of people who <em>need</em> to do both.</p> <p>This usually means they’re “right brain” people. 
This leads to a way of thinking that changes how they approach programming problems.</p> <p>I think learning music (or another right brain discipline) is a good way to exercise your mind. So I wouldn’t be surprised if learning music helps people exercise their thought processes in ways that will benefit their work as programmers (or authors, or lawyers, or doctors or whatever).</p> <p>I also think, though, that if we were all musicians at heart, we wouldn’t get much done. I rely heavily on my less artsy colleagues to ground me and be sometimes more pragmatic than I am. So I don’t think we all need to be a “right brain” programmer. It would be disastrous if we were.</p> <p><strong>Dave Thomas:</strong> Do they need to be? No. But many of the good ones I know are. I’d guess that the density of musicians in software development is many times the population norm. But that means you could also ask the question “Do musicians have to know software development?”</p> <p>I think the more interesting question is to ask “how can people best express what they enjoy doing?” because both music and software development are outlets for this.</p> <p><em><strong>Rail Spikes:</strong> What sort of music do you listen to? Any recommendations for Ruby developers looking to expand their musical horizons?</em></p> <p><strong>Chad Fowler:</strong> As I mentioned earlier, I like all kinds of music (with a few exceptions). Lately I’ve been listening to a lot of instrumental hip hop, such as DJ Qbert and Mixmaster Mike. 
I’ve also been getting into a genre of electronic music called “electro”, which sounds like the bleeps and bloops that are the soundtrack of my dreams (if a computer is going to generate music I always like it to sound like a computer generated it).</p> <p>As for recommendations, here are a few ideas for things that most developers probably haven’t listened to:</p> <ul> <li><a href="">Kamran Ince</a> – He was my composition teacher and, I think, an accessible introduction to the world of “new music”, which is what we call new composed “classical” music. The term “classical” is a widespread misnomer. It actually refers to music written in the late 18th and early 19th centuries, but most people use it to mean highbrow music written for instruments like violins. So whatever you call it, Kamran Ince writes some beautiful instances of it. Specifically check out his chamber music, such as Domes and Arches.</li> </ul> <ul> <li><a href="">Charlie Wood</a> – I have had the pleasure of playing with Charlie on a few occasions. He is an R&B singer/organist/composer from Memphis and writes some of the most intelligent songs you’ll hear. My favorite album of his is “Who I Am”.</li> </ul> <ul> <li><a href="">John Zorn</a> – Zorn has been around for a long time and is a leader in the world of Avant Garde music. He’s also one of the most amazing saxophonists ever. If you’re new to this kind of thing, his Masada quartet (“radical Jewish music”) produces some great stuff that’s accessible to first time listeners. If you’re looking for something to shock your aural taste buds, try Painkiller (metal-tinged noise) or Naked City.</li> </ul> <p><strong>Dave Thomas:</strong> I listen to just about anything that’s interesting. My playlist here is very varied, and I try to add new stuff to it fairly regularly. I know people who are trained as musicians, and I tend to ask them what they’re listening to. Sometimes that leads to challenges: my ear isn’t as developed as their ears. 
But often it leads to whole new areas of cool stuff. So I’d recommend everyone find a friend who knows more than you do about music and ask them to surprise and challenge you. (That advice probably applies to just about everything, thinking about it.) It’s easy to find music that stimulates your lizard brain. Get into the habit of looking for the stuff that engages at a higher level too. And, like everything, have fun with it.</p> <img src="" height="1" width="1"/> Jon tag:railspikes.com,2009-04-08:1957 2009-04-08T19:10:00Z 2009-04-08T19:13:18Z Anonymize sensitive data with rake <p>When troubleshooting a nasty bug, it’s often useful to take a look at actual production or staging data, or even pull it down into your development database. But this is a huge potential privacy and security concern. Your local environment likely isn’t as secure as your production environment, and you might not want to access this sensitive data (or give it to another team member).</p> <p>Similarly, you might want to replicate your production data on a staging or QA environment to see how new code will interact with real data. Also a privacy concern.</p> <p>Simple solution: anonymize the data!</p> <p>In my current project, I put together an <a href="">anonymize.rake</a> task to deal with this. The most sensitive data in our app is name and phone number. Without that, private information can’t really be linked back to someone. So I pulled the 200 most common first names and 1000 most common last names (in the United States) and put them into an Anonymizer class. Call Anonymizer.random_name for a random, but realistic, name. 
The class also includes a simple phone number and email anonymizer.</p> <pre><code>class Anonymizer
  def self.random_name
    "#{random_first_name} #{random_last_name}"
  end

  def self.random_first_name
    FIRSTNAMES[rand(FIRSTNAMES.size)]
  end

  def self.random_last_name
    LASTNAMES[rand(LASTNAMES.size)]
  end

  def self.random_phone
    "612-555-#{rand(8000) + 1000}"
  end

  FIRSTNAMES = %w(James
    John
    Robert
    Michael
    # etc.
</code></pre> <p>The rake task is simple:</p> <pre><code>namespace :db do
  namespace :data do
    desc "Anonymize sensitive information"
    task :anonymize => :environment do
      if RAILS_ENV == 'production'
        puts "Refusing to anonymize production data. You don't really want to do that."
      else
        puts "Anonymizing all name and email records in the #{RAILS_ENV} database."

        # User.find(:all).each do |user|
        #   user.name = Anonymizer.random_name
        #   user.email = Anonymizer.random_email(user.name)
        #   puts "Saving #{user.name} (#{user.email})"
        #   user.save!
        # end
      end
    end
  end
end
</code></pre> <p>You’ll need to do the actual implementation yourself (see the sample <code>User.all.each {} block</code>). It would be easy enough to extend this to work with social security numbers, addresses, etc. Run with:</p> <p><code>rake db:data:anonymize</code></p> <p>Code: <a href="">anonymize.rake</a></p> <img src="" height="1" width="1"/> Jon tag:railspikes.com,2009-04-03:1946 2009-04-03T14:46:00Z 2009-04-06T21:56:31Z Benchmarking your Rails tests (updated) <p><em>Update: stubbing a single integration point shaved 22 seconds off of my unit tests, reducing test time from 35 seconds to 13. 
See below.</em></p> <p>The first step to <a href="">faster tests</a> is knowing what is slow. Fortunately, this is dead simple with the <a href="">test_benchmark plugin</a> by <a href="">Tim Connor</a>, and originally built by <a href="">Geoffrey Grosenbach</a>. Install the plugin, and when you run your tests via Rake, you’ll see handy output showing you the slowest tests, and the slowest test classes.</p> <h2>Step 1: Install the plugin.</h2> <pre><code>script/plugin install git://github.com/timocratic/test_benchmark.git</code></pre> <h2>Step 2: Run your tests</h2> <p><code>rake test</code></p> <p>Here is a bit of output when I run the unit tests for <a href="">FanChatter</a>:</p> <pre><code>Finished in 34.838173 seconds.

Test Benchmark Times:
Suite Totals:
25.393 MailReceiverTest
 4.520 PhotoTest
 1.429 REXMLTest
 0.961 TeamTest
 0.846 MessageTest
</code></pre> <p>Pretty useful information. Almost 75% of our unit testing time is taken up in the MailReceiverTest. So if we want to speed up our tests, we need to make our <span class="caps">MMS</span> testing faster. Looking at that code, I see this line over and over:</p> <pre><code>MailReceiver.receive(fixture_mms(:fixture_name))</code></pre> <p>This method reads a test email message from the filesystem, and runs it through our mail parsing method. This is basically an integration test, hitting at least two integration points. So if we can remove these bottlenecks, we can reasonably expect a fairly large improvement in our unit test speed.</p> <p>I think we could realistically reduce our unit testing time from 34 seconds to less than 15.</p> <h2>Other options</h2> <p>The test_benchmark plugin fires whenever you run your tests with <code>rake</code>. Tim recently patched the plugin to <em>not</em> fire when run with <a href="">autotest</a>, which is great. 
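The plugin’s core idea can be sketched in a few lines of plain Ruby (a toy illustration, not the plugin’s actual implementation; the test names and workloads are made up):</p>

```ruby
require 'benchmark'

# Toy per-test timer: run each named block, record its wall-clock
# time with Benchmark.realtime, and report the slowest first --
# the same idea test_benchmark applies to real Test::Unit cases.
def benchmark_tests(tests)
  tests.map { |name, block| [name, Benchmark.realtime(&block)] }
       .sort_by { |_name, time| -time }
end

results = benchmark_tests(
  'fast_test' => -> { 1_000.times { 2 + 2 } },
  'slow_test' => -> { 200_000.times { 'abcdef'.reverse } }
)
results.each { |name, time| puts format('%8.3f %s', time, name) }
```

<p>Sorting by the recorded time is all the report above does; the rest of the plugin is knowing where to hook into the test runner.</p> <p>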
Personally, though, I don’t want to see this benchmark information every time I run my tests. So I added the following line to my test.rb environment file:</p> <p><code>ENV['BENCHMARK'] ||= 'none'</code></p> <p>Now, the benchmarks don’t run by default. If I want to see them, I call:</p> <p><code>rake test BENCHMARK=true</code></p> <p>And to see full output, showing the time it takes to run every test in the system, just call:</p> <p><code>rake test BENCHMARK=full</code></p> <p>That’s it. You still have to speed up your tests, and there are many ways to do that (from mocking to simply reducing the number of calls to expensive methods), but knowing what’s slow is half the battle.</p> <h2>The stirring conclusion (update)</h2> <p>I spent a few minutes optimizing these slow tests today. First, I tried rearranging the tests to reduce unnecessary calls to the slow method (<code>MailReceiver.receive(message)</code>). I was able to speed MailReceiverTest from about 25 seconds to 17. Not bad, but still slow.</p> <p>The real problem is that this method saves a photo. It creates a Photo record that includes a file, treated sort of like an upload, like this:</p> <pre><code>photo.uploaded_data = mms.file</code></pre> <p>This is what was slow. But my unit tests don’t actually deal with the file being saved to the filesystem; they test other things, like the right records being created, confirmation emails being sent, etc.</p> <p>So I decided to try bypassing this file save/upload by stubbing the <code>uploaded_data=</code> method. 
I put the following at the top of my test class:</p> <table class="CodeRay"><tr> <td title="click to toggle" class="line_numbers"><pre>1<tt> </tt>2<tt> </tt>3<tt> </tt></pre></td> <td class="code"><pre><span class="r">def</span> <span class="fu">setup</span><tt> </tt> <span class="co">Photo</span>.any_instance.stubs(<span class="sy">:uploaded_data=</span>)<tt> </tt> <span class="r">end</span><tt> </tt></pre></td> </tr></table> <p>And <em>voila!</em> <code>MailReceiverTest</code> went from 25 seconds to 17 seconds to 3 seconds.</p> <img src="" height="1" width="1"/> Luke Francl tag:railspikes.com,2009-03-30:1941 2009-03-30T14:00:00Z 2009-03-31T04:19:14Z 10 Cool Things in Rails 2.3 <p><em>This was presented to the <a href="">Ruby Users of Minnesota</a> on March 30, 2009.</em></p> <p>Here’s a quick look at 10 new Rails features that I think are cool. Not all of them are huge new features, but instead help solve annoying problems. I’ve also created a simple application that demonstrates most of these features. You can <a href="">get it at BitBucket</a></p> <div><a href="" title="10 Cool Things About Rails 2.3">10 Cool Things About Rails 2.3</a><object height="355" width="425"><param /><param /><param /><embed src=";stripped_title=10-cool-things-about-rails-23" height="355" width="425"></embed></object><div>View more <a href="">presentations</a> from <a href="">lukefrancl</a>.</div></div> <h2>1. Rails Boots Faster in Development Mode</h2> <p>This is something all Rails developers can appreciate. In development mode, Rails now lazy loads as much as possible so that the server starts up much faster.</p> <p>This is so fast that, instead of relying on reloading (which doesn’t pick up changes to gems, lib directory, etc.), one developer wrote a script (does anyone have the link for this?)
that watches for file system changes and restarts your <code>script/server</code> process.</p> <p>Using an empty Rails app, I got the following (totally non-scientific) real times for <code>time script/server -d</code>:</p> <p>Rails 2.2: 1.461s<br /> Rails 2.3: 0.869s</p> <p>Presumably this difference would grow as more libraries were used, because Rails 2.3 will lazy load them. However I was too lazy to build up equivalent Rails 2.2 and 2.3 applications to try that out.</p> <h2>2. Rails Engines Officially Supported</h2> <p>Inspired by Merb’s slices implementation, Rails added official support for Engines, which are self-contained Rails apps that you can install into another application. Engines can have their own models, controllers, and views, and add their own routes.</p> <p>Previously this was possible using the Engines plugin, but Engines would often break between Rails versions. Now that they are officially supported, this should be less frequent.</p> <p>There are still some features from the unofficial Engines plugin that are not part of Rails core. You can read about that <a href="">at the Rails Engines site</a>.</p> <h2>3. Routing Improvements</h2> <p>RESTful routes now use less memory because <code>formatted_*</code> routes are no longer generated, resulting in a 50% memory savings.</p> <p>Given this route:</p> <p><code>map.resources :users</code></p> <p>If you want to access the <span class="caps">XML</span> formatted version of a user resource, you would use:</p> <p><code>user_path(123, :format => 'xml')</code></p> <p>In Rails 2.3, <code>:only</code> and <code>:except</code> options to <code>map.resources</code> are not passed down to nested routes. 
The previous behavior was rather confusing so I think this is a good change.</p> <table class="CodeRay"><tr> <td title="click to toggle" class="line_numbers"><pre>1<tt> </tt>2<tt> </tt>3<tt> </tt>4<tt> </tt></pre></td> <td class="code"><pre>map.resources <span class="sy">:users</span>, <span class="sy">:only</span> => [<span class="sy">:index</span>, <span class="sy">:new</span>, <span class="sy">:create</span>] <span class="r">do</span> |user|<tt> </tt> <span class="c"># now will generate all the routes for hobbies</span><tt> </tt> user.resources <span class="sy">:hobbies</span><tt> </tt><span class="r">end</span><tt> </tt></pre></td> </tr></table> <h2>4. <span class="caps">JSON</span> Improvements</h2> <p><code>ActiveSupport::JSON</code> has been improved.</p> <p><code>to_json</code> will always quote keys now, per the <span class="caps">JSON</span> spec.</p> <p>Before:</p> <p><code>{123 => 'abc'}.to_json</code><br /> <code>=> '{123: "abc"}'</code></p> <p>Now:</p> <p><code>{123 => 'abc'}.to_json</code><br /> <code>=> '{"123": "abc"}'</code></p> <p>Escaped Unicode characters will now be unescaped.</p> <p>Before:</p> <p><code>ActiveSupport::JSON.decode("{'hello': 'fa\\u00e7ade'}")</code><br /> <code>=> {"hello"=>"fa\\u00e7ade"}</code></p> <p>Now:</p> <p><code>ActiveSupport::JSON.decode("{'hello': 'fa\u00e7ade'}")</code><br /> <code>=> {"hello"=>"façade"}</code></p> <p>See <a href="">ticket 11000 for details</a>.</p> <h2>5. Default scopes</h2> <p>Prior to Rails 2.3, if you executed a find without any options, you’d get the objects back unordered (technically, the database does not <strong>guarantee</strong> a particular ordering, but it would typically be by primary key, ascending).</p> <p>Now, you can define the default sort and filtering options for finding models. 
The default scope works just like a named scope, but is used by default.</p> <table class="CodeRay"><tr> <td title="click to toggle" class="line_numbers"><pre>1<tt> </tt>2<tt> </tt>3<tt> </tt></pre></td> <td class="code"><pre><span class="r">class</span> <span class="cl">User</span> < <span class="co">ActiveRecord</span>::<span class="co">Base</span><tt> </tt> default_scope <span class="sy">:order</span> => <span class="s"><span class="dl">'</span><span class="k">`users`.name asc</span><span class="dl">'</span></span><tt> </tt><span class="r">end</span><tt> </tt></pre></td> </tr></table> <p>The default options can always be overridden using a custom finder.</p> <p><code>User.all # will use default scope</code><br /> <code>User.all(:order => 'name desc') # will use passed in order option.</code></p> <p>Example:</p> <table class="CodeRay"><tr> <td title="click to toggle" class="line_numbers"><pre><tt> </tt></pre></td> <td class="code"><pre><span class="co">User</span>.create(<span class="sy">:name</span> => <span class="s"><span class="dl">'</span><span class="k">George</span><span class="dl">'</span></span>)<tt> </tt><span class="co">User</span>.create(<span class="sy">:name</span> => <span class="s"><span class="dl">'</span><span class="k">Bob</span><span class="dl">'</span></span>)<tt> </tt><span class="co">User</span>.create(<span class="sy">:name</span> => <span class="s"><span class="dl">'</span><span class="k">Alice</span><span class="dl">'</span></span>)<tt> </tt><tt> </tt>puts <span class="co">User</span>.all.map { |u| <span class="s"><span class="dl">"</span><span class="il"><span class="dl">#{</span>u.id<span class="dl">}</span></span><span class="k"> - </span><span class="il"><span class="dl">#{</span>u.name<span class="dl">}</span></span><span class="dl">"</span></span> }<tt> </tt><tt> </tt><span class="i">3</span> - <span class="co">Alice</span><tt> </tt><span class="i">2</span> - <span class="co">Bob</span><tt> </tt><span class="i">1</span> - <span class="co">George</span><tt> </tt></pre></td> </tr></table> <p>Note how the default order is
respected.</p> <h2>6. Nested Transactions</h2> <p>Pass <code>:requires_new => true</code> to <code>ActiveRecord::Base.transaction</code> and a nested transaction will be created.</p> <table class="CodeRay"><tr> <td title="click to toggle" class="line_numbers"><pre><tt> </tt></pre></td> <td class="code"><pre><span class="co">User</span>.transaction <span class="r">do</span><tt> </tt> user1 = <span class="co">User</span>.create(<span class="sy">:name</span> => <span class="s"><span class="dl">"</span><span class="k">Alice</span><span class="dl">"</span></span>)<tt> </tt><tt> </tt> <span class="co">User</span>.transaction(<span class="sy">:requires_new</span> => <span class="pc">true</span>) <span class="r">do</span><tt> </tt> user2 = <span class="co">User</span>.create(<span class="sy">:name</span> => <span class="s"><span class="dl">"</span><span class="k">Bob</span><span class="dl">"</span></span>)<tt> </tt> <span class="r">end</span><tt> </tt><span class="r">end</span><tt> </tt></pre></td> </tr></table> <p>If the inner transaction is rolled back, only its changes are undone; without <code>:requires_new</code>, a nested transaction is merged into its parent.</p> <h2>7. Asset Host Objects</h2> <p>Since Rails 2.1, you could configure Rails to use an <code>asset_host</code> that was a <code>Proc</code> with two arguments, <code>source</code> and <code>request</code>.</p> <p>For example, some browsers complain if an <span class="caps">SSL</span> request loads images from a non-secure source. To make sure <span class="caps">SSL</span> always loads from the same host, you could write this (<a href="">from the documentation</a>):</p> <table class="CodeRay"><tr> <td title="click to toggle" class="line_numbers"><pre><tt> </tt></pre></td> <td class="code"><pre><span class="co">ActionController</span>::<span class="co">Base</span>.asset_host = <span class="co">Proc</span>.new { |source, request|<tt> </tt> request.ssl? ? <span class="s"><span class="dl">"</span><span class="k">https://ssl.example.com</span><span class="dl">"</span></span> : <span class="s"><span class="dl">"</span><span class="k">http://assets.example.com</span><span class="dl">"</span></span><tt> </tt>}<tt> </tt></pre></td> </tr></table> <p>This works but it’s kind of messy and it’s difficult to implement complicated logic.
Rails 2.3 allows you to implement the logic in an object that responds to call with one or two parameters, like the <code>Proc</code>.</p> <p>The above Proc could be implemented like this:</p> <table class="CodeRay"><tr> <td title="click to toggle" class="line_numbers"><pre><tt> </tt></pre></td> <td class="code"><pre><span class="r">class</span> <span class="cl">SslAssetHost</span><tt> </tt> <span class="r">def</span> <span class="fu">call</span>(source, request)<tt> </tt> request.ssl? ? <span class="s"><span class="dl">"</span><span class="k">https://ssl.example.com</span><span class="dl">"</span></span> : <span class="s"><span class="dl">"</span><span class="k">http://assets.example.com</span><span class="dl">"</span></span><tt> </tt> <span class="r">end</span><tt> </tt><span class="r">end</span><tt> </tt><tt> </tt><span class="co">ActionController</span>::<span class="co">Base</span>.asset_host = <span class="co">SslAssetHost</span>.new<tt> </tt></pre></td> </tr></table> <p>David Heinemeier Hansson has already created a better plugin that handles this case: <a href="">asset-hosting-with-minimum-ssl</a>. It takes into account the peculiarities of the different browsers to use <span class="caps">SSL</span> as little as possible, reducing load on your server.</p> <h2>8. Easily update Rails timestamp fields</h2> <p>If you’ve ever wanted to update Rails’ automatic timestamp fields <code>created_at</code> or <code>updated_at</code> you’ve noticed how painful it can be. Rails <span class="caps">REALLY</span> didn’t want you to change those fields.</p> <p>Not any more!</p> <p>Now you can easily change <code>created_at</code> and <code>updated_at</code>:</p> <table class="CodeRay"><tr> <td title="click to toggle" class="line_numbers"><pre>1<tt> </tt>2<tt> </tt>3<tt> </tt>4<tt> </tt></pre></td> <td class="code"><pre><tt> </tt><span class="co">User</span>.create(<span class="sy">:name</span> => <span class="s"><span class="dl">"</span><span class="k">Alice</span><span class="dl">"</span></span>, <span class="sy">:created_at</span> => <span class="i">3</span>.weeks.ago, <span class="sy">:updated_at</span> => <span class="i">2</span>.weeks.ago)<tt> </tt><tt> </tt>=> <span class="c">#<User id: 3, name: "Alice", created_at: "2009-03-08 00:06:58", updated_at: "2009-03-15 00:06:58"></span><tt> </tt></pre></td> </tr></table> <p>Remember, if you don’t want your users changing these fields, you should make them <code>attr_protected</code>.</p> <h2>9.
Nested Attributes and Forms</h2> <p>This greatly simplifies complex forms that deal with multiple objects.</p> <p>First, nested attributes allow a parent object to delegate assignment to its child objects.</p> <table class="CodeRay"><tr> <td title="click to toggle" class="line_numbers"><pre><tt> </tt></pre></td> <td class="code"><pre><span class="r">class</span> <span class="cl">User</span> < <span class="co">ActiveRecord</span>::<span class="co">Base</span><tt> </tt> has_many <span class="sy">:hobbies</span>, <span class="sy">:dependent</span> => <span class="sy">:destroy</span><tt> </tt><tt> </tt> accepts_nested_attributes_for <span class="sy">:hobbies</span><tt> </tt><span class="r">end</span><tt> </tt><tt> </tt><span class="co">User</span>.create(<span class="sy">:name</span> => <span class="s"><span class="dl">'</span><span class="k">Stan</span><span class="dl">'</span></span>, <tt> </tt> <span class="sy">:hobbies_attributes</span> => [{<span class="sy">:name</span> => <span class="s"><span class="dl">'</span><span class="k">Water skiing</span><span class="dl">'</span></span>},<tt> </tt> {<span class="sy">:name</span> => <span class="s"><span class="dl">'</span><span class="k">Hiking</span><span class="dl">'</span></span>}])<tt> </tt></pre></td> </tr></table> <p>Nicely, this will save the parent and its associated models together and if there are any errors, none of the objects will be saved.</p> <p>Forms with complex objects are now straightforward.
To use this in your forms, use the <code>FormBuilder</code> instance’s <code>fields_for</code> method:</p> <table class="CodeRay"><tr> <td title="click to toggle" class="line_numbers"><pre><tt> </tt></pre></td> <td class="code"><pre><% form_for(<span class="iv">@user</span>) <span class="r">do</span> |f| <span class="s"><span class="dl">%></span><span class="k"><tt> </tt> <div</span><span class="dl">></span></span><tt> </tt> <%= f.label <span class="sy">:name</span>, <span class="s"><span class="dl">"</span><span class="k">User name:</span><span class="dl">"</span></span> <span class="s"><span class="dl">%></span><span class="k"><tt> </tt> <%= f.text_field :name %</span><span class="dl">></span></span><tt> </tt> <<span class="rx"><span class="dl">/</span><span class="k">div><tt> </tt><tt> </tt> <div><tt> </tt> <h2>Hobbies<</span><span class="dl">/</span></span>h2><tt> </tt><tt> </tt> <% f.fields_for(<span class="sy">:hobbies</span>) <span class="r">do</span> |hf| <span class="s"><span class="dl">%></span><span class="k"><tt> </tt> <div</span><span class="dl">></span></span><tt> </tt> <%= hf.label <span class="sy">:name</span>, <span class="s"><span class="dl">"</span><span class="k">Hobby name:</span><span class="dl">"</span></span> <span class="s"><span class="dl">%></span><span class="k"><tt> </tt> <%= hf.text_field :name %</span><span class="dl">></span></span><tt> </tt> <<span class="rx"><span class="dl">/</span><span class="k">div><tt> </tt> <% end %><tt> </tt> <</span><span class="dl">/</span></span>div><tt> </tt><tt> </tt> <%= f.submit <span class="s"><span class="dl">'</span><span class="k">Create</span><span class="dl">'</span></span> <span class="s"><span class="dl">%></span><span class="k"><tt> </tt><% end %</span><span class="dl">></span></span><tt> </tt></pre></td> </tr></table> <p>One catch is that a form is displayed for every associated object.
New objects obviously have no associations so you have to create a dummy object in your controller.</p> <table class="CodeRay"><tr> <td title="click to toggle" class="line_numbers"><pre><tt> </tt></pre></td> <td class="code"><pre><span class="r">class</span> <span class="cl">UsersController</span> < <span class="co">ApplicationController</span><tt> </tt> <span class="r">def</span> <span class="fu">new</span><tt> </tt> <span class="c"># In this contrived example, I create 3 dummy objects so I'll get</span><tt> </tt> <span class="c"># 3 blank form fields.</span><tt> </tt> <span class="iv">@user</span> = <span class="co">User</span>.new<tt> </tt> <span class="iv">@user</span>.hobbies.build<tt> </tt> <span class="iv">@user</span>.hobbies.build<tt> </tt> <span class="iv">@user</span>.hobbies.build<tt> </tt> <span class="r">end</span><tt> </tt><span class="r">end</span><tt> </tt></pre></td> </tr></table> <p>There are a lot of options for nested forms including deleting associated objects, so be sure to read the documentation. Ryan Daigle also has <a href="">a great write-up</a>.</p> <h2>10. Rails Metal <code>\m/</code></h2> <p>You can now write very simple Rack endpoints for highly trafficked routes, like an <span class="caps">API</span>. These are slotted in before Rails picks up the route.</p> <p>A Metal endpoint is any class that conforms to the Rack spec (i.e., it has a <code>call</code> method that takes an environment and returns an array of status code, headers, and content).</p> <p>Put your class in <code>app/metal</code> (not generated by default). Return a 404 response code for any requests you don’t want to handle. These will get passed on to Rails.</p> <p>There’s a generator you can use to create an example Metal end point:</p> <p><code>script/generate metal classname</code></p> <p>In my sample app, I have what I would consider the “minimally useful” Rails Metal endpoint.
It responds to /users.js and returns the list of users as <span class="caps">JSON</span>.</p> <table class="CodeRay"><tr> <td title="click to toggle" class="line_numbers"><pre><tt> </tt></pre></td> <td class="code"><pre><span class="r">class</span> <span class="cl">UsersApi</span><tt> </tt> <span class="r">def</span> <span class="pc">self</span>.call(env)<tt> </tt> <span class="c"># if this path was /users.js, reply with the list of users</span><tt> </tt> <span class="r">if</span> env[<span class="s"><span class="dl">'</span><span class="k">PATH_INFO</span><span class="dl">'</span></span>] =~ <span class="rx"><span class="dl">/</span><span class="k">^</span><span class="ch">\/</span><span class="k">users.js</span><span class="dl">/</span></span><tt> </tt> [<span class="i">200</span>, {<span class="s"><span class="dl">'</span><span class="k">Content-Type</span><span class="dl">'</span></span> => <span class="s"><span class="dl">'</span><span class="k">application/json</span><span class="dl">'</span></span>}, <span class="co">User</span>.all.to_json]<tt> </tt> <span class="r">else</span><tt> </tt> <span class="c"># otherwise, bail out with a 404 and let Rails handle the request</span><tt> </tt> [<span class="i">404</span>, {<span class="s"><span class="dl">'</span><span class="k">Content-Type</span><span class="dl">'</span></span> => <span class="s"><span class="dl">'</span><span class="k">text/html</span><span class="dl">'</span></span>}, <span class="s"><span class="dl">'</span><span class="k">not found</span><span class="dl">'</span></span>]<tt> </tt> <span class="r">end</span><tt> </tt> <span class="r">end</span><tt> </tt><span class="r">end</span><tt> </tt></pre></td> </tr></table> <p>If you want a little bit more help, you can use any other Rack-based framework, for example Sinatra.</p> <p>For more details on how Rails Metal works, check out <a href="">Jesse Newland’s article about it</a>.</p> <p>Thanks for reading!
For more details about new features in Rails 2.3, read the excellent <a href="">release notes</a>.</p> <img src="" height="1" width="1"/> Jon tag:railspikes.com,2009-03-10:1917 2009-03-10T16:34:00Z 2009-03-10T16:35:09Z Slow tests are a bug <p><img src="" />I’ve been doing <span class="caps">TDD</span> for about three years now. Once I figured out how to do it right, it became a natural part of how I program, and I can’t really imagine doing development without it. This isn’t to say that <span class="caps">TDD</span> is the <a href="">only approach</a> to writing quality software or that <a href="">unit testing is the only kind of testing</a> that matters. But it sure is useful.</p> <p>The Ruby world talks a lot about <span class="caps">TDD</span>, more so than many other developer communities. We have not one, not two, but at least half a dozen testing libraries that are actively being used and developed. For most Ruby developers, the question isn’t “Do you test?” but “BDD or <span class="caps">TDD</span>?” or even “RSpec, Shoulda, or Bacon?” We often use at least 2-3 layers of automated testing, and sometimes use different tools for each layer. Most Ruby conferences devote at least a few talks each day to testing-related topics. We’re test fanboys and -girls, for better or for worse.</p> <p>But in spite of this, we rarely talk about <strong>test speed</strong>. Sure, there are purists who believe that <a href="">unit tests shouldn’t touch the database</a> because anything that touches the DB is actually an integration test. But few Ruby testers actually take this long and lonely road, and I personally <em>prefer</em> tests that talk to a database, at least some of the time.</p> <p>And it’s true that others have written libraries to <a href="">distribute their tests</a> across multiple machines.
But that’s the exception that proves the rule – the only reason to distribute your tests is that they’re too slow to begin with.</p> <p>Most Rails projects I’ve worked on have ended up at around 3,000-15,000 lines of code, with roughly as many lines of test code, and most have test suites that take a minute or more to run. Our test suite for <a href="">Tumblon</a>, for instance, churns along for 2.5 minutes. This is too slow. And slow tests are a problem for at least two reasons: they <strong>slow down your development</strong> and <strong>decrease code quality</strong>.</p> <p>1. <strong>Slow tests slow down development.</strong> If you’re practicing <span class="caps">TDD</span>, you want to see a test fail before you make it succeed. Two minutes is far too long for this feedback loop to be effective. Of course, you can (and should) just run the test classes that correspond to your code as you program – no need to run your entire test suite every time you write your failing tests. But even still, the test time bar should ideally be set quite low. Frequent 5-10 second delays are enough to break my concentration, and I find myself cmd-tabbing over to other programs if I have to wait more than a few seconds for a test to run. I don’t know of any hard-and-fast rules, but I know that as soon as my test suite runs longer than 30-45 seconds, and individual test classes take longer than 2-3 seconds, I’m less happy and less productive.</p> <p>2. <strong>Slow tests decrease code quality.</strong> There are two simple reasons for this. First, if slow tests break your flow, you’re not only going to write code more slowly: you’re also going to write worse code. Second, if your tests are too slow, you’re not going to wait for them to finish before you move on to the next task. Or worse, you’re not going to run them at all.</p> <h4>So, how can I speed up my tests?</h4> <p>Fortunately, this problem can be addressed. There are plenty of ways to speed up tests.
On a current project, we’ve managed to cut our test time substantially – a recent test refactoring cut test time from 129.45 seconds to 31.04 seconds, without removing any tests. That’s a 76% speedup. But we still have room for improvement.</p> <p>Really quickly, here are at least five ways to speed up your test suite. I hope to post more on each of these over the next month or two.</p> <p>1. Use a test database instead of fixtures/factories/etc.</p> <p>2. Only touch the database when necessary</p> <p>3. Organize your tests to avoid duplicate execution</p> <p>4. Separate slow tests out into a lazier testing layer</p> <p>5. <a href="">Run a Rails test server</a></p> <p>I’d love to see the Rails community devote more of its enthusiasm for testing to the question of test speed. There’s nothing wrong with improving our test frameworks, and let’s keep doing that. But let’s also make these frameworks fast.</p> <img src="" height="1" width="1"/> Luke Francl tag:railspikes.com,2009-03-06:1912 2009-03-06T00:26:00Z 2009-03-06T00:26:39Z Dealing with 'duplicate key violates unique constraint' on the primary key <p>I recently had to work through a problem where inserts were failing due to duplicate primary keys.</p> <p>Here’s the error (edited for clarity):</p> <p><code>PGError: ERROR: duplicate key violates unique constraint "contracts_pkey": INSERT INTO "contracts" ('column_1', 'column_2', 'column_3') VALUES('abc', '123', 'xyz') RETURNING "id"</code></p> <p>What is going on here? I’m not even providing the primary key <code>id</code>—that comes from the sequence.</p> <p>Hey wait a second…</p> <p>What was happening is that we had a data import that didn’t use the sequence. 
So the integers returned by the sequence have already been used, causing a duplicate primary key.</p> <p>To solve the problem, I had to reset the sequence, like this:</p> <p><code>select setval('contracts_id_seq', (select max(id) + 1 from contracts));</code></p> <img src="" height="1" width="1"/> Luke Francl tag:railspikes.com,2009-03-05:1911 2009-03-05T18:49:00Z 2009-03-05T18:49:59Z Today's hard-won lesson <p>A float subtracted from an integer results in a float. When typecast by ActiveRecord, this is converted to an integer.</p> <p><code>validates_numericality_of</code> with <code>:only_integer => true</code> results in a rather obscure error message if a non-integer is present (“is not a number”).</p> <p><code>validates_numericality_of</code> uses the attribute value before type cast for its validation.</p> <p>...</p> <p>That means if you calculate a value before validation, what is printed out by using the attribute method is <em>different</em> than what <code>validates_numericality_of</code> is using for validation. You need to <em>ensure</em> that the value of <code>attr_name_before_type_cast</code> is an integer!</p> <p>I just spent way too much time figuring this out.</p> <img src="" height="1" width="1"/> Luke Francl tag:railspikes.com,2009-02-16:1906 2009-02-16T03:27:00Z 2009-02-16T03:32:18Z Fetcher moved to GitHub <p>A quick <span class="caps">FYI</span> for those who have been using the Fetcher plugin that we wrote (and use on <a href="">FanChatter Events</a>)...</p> <p>I have moved the <a href="">Fetcher plugin repository</a> to GitHub.
You can get it at <a>git://github.com/look/fetcher.git</a></p> <p>Happy forking!</p> <p>(And a shameless plug for <a href="">Mike Mondragon</a> and my book: if you need more details about how to make your app speak email, look no further than <a href="">Receiving Email with Ruby</a>!)</p> <img src="" height="1" width="1"/> Jon tag:railspikes.com,2009-02-14:1903 2009-02-14T19:28:00Z 2009-02-14T19:30:36Z Rescuing autotest from a conflicting plugin <p>For the longest time, I wasn’t able to run <a href="">autotest</a> on one of my projects. That was OK; I was intrigued by autotest, but had never really committed to it. The problem: whenever I would try to run autotest, I’d get the following error:</p> <pre><code> loading autotest/rails_rspec Autotest style autotest/rails_rspec doesn't seem to exist. Aborting. </code></pre> <p>I’m running <a href="">Shoulda</a>, not <a href="">RSpec</a>, so I had no idea why this was happening. I tried installing (and uninstalling) RSpec in various configurations, to no avail. Nothing worked.</p> <p>Then I started a new project. Autotest worked just fine on it. After a few days, I got used to autotest, and a few days later, I came to really like it. It helps me get into a <span class="caps">TDD</span> “flow” – all tests pass; write failing tests; write code; all tests pass.</p> <p>So when I came back to my previous project where autotest didn’t work, I decided to dig deeper. Eventually I found a plugin that was causing the problem: <a href="">acts-as-taggable-on</a>. The plugin was written to allow autotesting, as explained in a <a href="">blog post</a>. This is supposed to be a different autotest instance from your app’s main instance, but it wasn’t working that way for me.</p> <p>The fix? <strong>Delete lib/discover.rb</strong> from the acts-as-taggable-on plugin.
That’s it – autotest works now.</p> <p>In the end, I maybe could have solved the problem by getting RSpec configured properly, but just running the gem locally didn’t do the trick for me, and I don’t want to add any code to my app to support autotesting of a plugin that I never want to test.</p> <p>So should plugins even ship with test code? Yes, they should. Not for normal use; I never run plugin tests, assuming instead that the plugin is tested by the author. But if an open source plugin ships without tests, it’s that much harder for other developers to fork/fix/improve the plugin. But really, that’s about the only reason for plugin/gem tests. And they should never touch application tests.</p> <img src="" height="1" width="1"/>
http://feeds.feedburner.com/RailSpikes
Suppose we have two strings s and t; we have to check whether s can be converted to t in k moves or fewer. In the ith move, we can perform one of these operations:
Select any index j (1-based) in s, such that 1 <= j <= size of s and j has not been selected in any previous move, and shift the character at that index i times.
Leave the string as it is.
Here, shifting a character means replacing it with the next letter in the alphabet (if the letter is 'z', it wraps around to 'a'). Shifting a character i times means applying this operation i times.
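As a quick illustration (the helper name below is mine, not from the article), the shift operation on a single lowercase letter can be written as:

```python
def shift_char(c, n):
    # Replace a lowercase letter with the letter n places later in the
    # alphabet, wrapping 'z' around to 'a'.
    base = ord('a')
    return chr((ord(c) - base + n) % 26 + base)

print(shift_char('p', 6))  # v
print(shift_char('z', 1))  # a
```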
So, if the input is s = "poput", t = "vwput", k = 9, then the output will be True, because at move i = 6 we can shift 'p' to 'v', and at move i = 8 we can shift 'o' to 'w'.
To solve this, we will follow these steps:
if size of s is not same as size of t, then
return False
count := an array holding (min(1, k - i + 1) + floor((k - i) / 26)) for all i from 0 to 25
for each character c1 from s and c2 from t, do
if c1 is not same as c2, then
diff :=(ASCII of c2 - ASCII of c1 + 26) mod 26
if count[diff] <= 0, then
return False
count[diff] := count[diff] - 1
return True
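The count formula works because a move i produces a shift of i mod 26, so the moves that can yield a shift of d are i = d, d + 26, d + 52, ... up to k. The following sketch (function names are mine, for illustration) checks the closed form against direct enumeration:

```python
def count_formula(k, d):
    # Closed form used in the algorithm: positive iff at least one
    # usable move index remains for shift amount d.
    return min(1, k - d + 1) + (k - d) // 26

def count_direct(k, d):
    # Directly count move indices i in 1..k with i % 26 == d.
    return sum(1 for i in range(1, k + 1) if i % 26 == d)

# The formula may go negative when no move fits; clamp at 0 to compare.
for k in (0, 9, 26, 100):
    for d in range(1, 26):
        assert max(0, count_formula(k, d)) == count_direct(k, d)
```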
Let us see the following implementation to get a better understanding:
def solve(s, t, k):
   if len(s) != len(t):
      return False
   count = [min(1, k - i + 1) + (k - i) // 26 for i in range(26)]
   for c1, c2 in zip(s, t):
      if c1 != c2:
         diff = (ord(c2) - ord(c1) + 26) % 26
         if count[diff] <= 0:
            return False
         count[diff] -= 1
   return True

s = "poput"
t = "vwput"
k = 9
print(solve(s, t, k))
"poput","vwput",9
True
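To build further confidence in the counting argument, the solution can be cross-checked on small inputs against a brute-force search that tries to assign a distinct move index to every differing position (this checker is an illustration, not part of the original article):

```python
def brute_force(s, t, k):
    # Try to assign a distinct move index in 1..k to each position where
    # s and t differ; move i shifts a character by i places.
    if len(s) != len(t):
        return False
    diffs = [(ord(c2) - ord(c1)) % 26 for c1, c2 in zip(s, t) if c1 != c2]

    def assign(remaining, used):
        if not remaining:
            return True
        d = remaining[0]
        # Only indices congruent to d modulo 26 can produce shift d.
        for i in range(d, k + 1, 26):
            if i not in used and assign(remaining[1:], used | {i}):
                return True
        return False

    return assign(diffs, frozenset())

print(brute_force("poput", "vwput", 9))  # True
```

For two positions needing the same shift amount, the backtracking correctly demands two different move indices, which is exactly what the count array in the main solution enforces.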
https://www.tutorialspoint.com/program-to-check-whether-we-can-convert-string-in-k-moves-or-not-using-python
validation.....
validation..... hi..........
thanks for ur reply for validation code.
but i want a very simple code in java swings where user is allowed to enter... give a message box that only numerical values should be entered. How can
Combo Box operation in Java Swing
Combo Box operation in Java Swing
... the Combo
Box component, you will learn how to add items to the
combo box, remove items from the combo box.
This program shows a text field, a combo box
URL validation in JavaScript
URL validation in JavaScript
In this tutorial we are going to discuss how to validate a URL in JavaScript.
JavaScript is a lightweight scripting language. JavaScript code is written
JavaScript - JavaScript Tutorial
most of
the things about java script. JavaScript tutorial is classified...;
Navigation
with combo box and JavaScript
In this lesson you will learn how... email address in you JSP
program using JavaScript.
Java Script/>
--of validation.xml--
<form name="logonForm">
<
remove item from list box using java script - Java Beginners
remove item from list box using java script remove item from list box using java script Hi friend,
Code to remove list box item using java script :
Add or Remove Options in Javascript
function addItem
combo box
combo box Hi,
[_|] dropdown box
[ ] [INCLUDE... a screen like this using jsp-servlet(or DAO,DTO),in that drop down box i should get
please send me java script for html validation - Java Beginners
please send me java script for html validation please send me code for javascript validation .........please send me its urgent
a.first:link { color: green;text-decoration:none; }
a.first:visited{color:green;text
retrieving from oracle database using jsp combo box
;
<script type="text/javascript">
function validateFormOnSubmit(theForm...;
<script src="../js/treeviewer.js" type="text/javascript"></script>...retrieving from oracle database using jsp combo box hi this is my
Dojo Combo Box
Dojo Combo Box
... box and
how to create a combo box in dojo. For creating the Combo box you need "dijit.form.ComboBox".
Combo Box: A
combo box is a graphical user
form to java script conversion
form to java script conversion I am using a form one more form cotain a piece of code .
how to convert it to java script to function it properly...;
</form>
<script type="text/javascript" src="... and password with combo box....please help me
I want to validate username... help me to validate username and password by using combo box.....
Please Help Me
how to insert list box in java script dynamically and elements retrieving from database like oracle
how to insert list box in java script dynamically and elements retrieving from... box in javascript when elements retrieving from database..
That is whenever I insert new course in a table.. It should be seen in my list box
alert for validation
alert for validation i want to put alert on the text box that enter data should be in DD/MM/YYYY format only through javascript validation.
<html>
<script type='text/javascript'>
var
struts validation
-form-elements.js" type="text/javascript"></script>
<script src="../../js/jquery.js" type="text/javascript"></script>
<script src...struts validation I want to apply validation on my program.But i am
jsp and java script
it to a javascript variable if the value is less than 3 i want to display a alert box am getting value into javascript but alert box is not displaying what could...jsp and java script <%@taglib prefix="logic" uri="http
JavaScript password validation
JavaScript password validation
We are going to discuses about JavaScript... used JavaScript password validation and
check for an alphabet , number...
<html>
<head>
<title>Java Script Strong Password
Java Script
Java Script Hello Sir, Is it possible to select only current date from datetimepicker in struts2 through java script validation. Means when i select....
Try to send me java script validation code for this purpose that user won't
Java Script
.
You can learn Java Script from our JavaScript - JavaScript Tutorial pages...Java Script What is Java Script? How to learn Java Script?
Hi
JavaScript is a client-side programming language. It runs in the browser
Registration page with JavaScript Validation
Registration page with JavaScript Validation HTML Registration page with JavaScript Validation - required source code
Registration page in HTML with JavaScript Validation source code
<html>
<head>
Validation probs in javascript
;
<HTML>
<head>
<SCRIPT language="javascript">...Validation probs in javascript This is my sports1.jsp file
<... file my validation works correctly but when my validation fails i am getting
Form Validation with JavaScript
Form Validation with JavaScript I created a simple form including nine or ten fields in HTML, and i want to validate it in JavaScript. How can i do...:
<html>
<script src="validateForm.js">
</script>
<form
Java Script.
Java Script. Hi Sir,
The below java script code is not working in Google Chrome; can you give me the solution as soon as possible.
<script type="text/javascript">
function setValue(){
var val="";
var frm
combo program - Java Beginners
combo program I want to give DOB in combo box for that i have to give 1 to 30 for days and 1900 to 2010. Without giving all, using for loop i have to get the values. Please help me..
Hi Friend,
Please visit
javascript script tag fields - Java Beginners
javascript script tag fields hi all,
i wanted to know what is meaning of type property in javascript script tag?
In most cases we write type... of tag specifies the MIME type of a script. Javascript is the default scripting
javascript-email validation - Java Beginners
javascript-email validation give the detail explanation for this code:
if (str.indexOf(at)==-1 || str.indexOf(at)==0 || str.indexOf(at)==lstr... about email validation at:
Java Script.
Java Script. The below code is nit working in Google chrome can any one tell me the solution as soon as possible.
<script type="text/javascript">
<!--
function setValue(){
var val="";
var frm
javascript validation
javascript validation validation of comparing dropdownlist and textbox in javascript
Combo box In Java
Combo box In Java
This tutorial will help you to create a Combo box in java... a value in the
editable field. Combo box can be created using JCombobox in java..., with the help of the example we are going to
create.
Example : Combo Box in Java import
jtable combo - Java Beginners
jtable combo i am using jtable (using defaulttablemodel) ,when i am click on a particular cell of jtable i want to display the combo box in that cell,plz provide program Hi Friend,
Try the following code:
import
Mobile Number Validation Form in JavaScript
Mobile Number Validation Form in JavaScript
In this example we have created... No. and email id entered by the user. We have used Java
Script to validate the Mobile... Validation Form:
<html>
<head>
<script type="text
Loading combo box from oracle how can i load values into a combobox from oracle database when a value is selected in another combo box
java script
java script how will you identify whether the number entered... digit or two digit number.
<HTML>
<HEAD>
<SCRIPT LANGUAGE="JavaScript">
function getvalue(){
var v=document.getElementById("text
please send me javascript validation code - Java Beginners
please send me javascript validation code hallo sir , please send me java script code for this html page.since i want to do validation.i am a new user in java ....please send me its urgent
javascript date validation using regex
javascript date validation using regex Hi,
I want to validate...;title>Validating Date format</title>
<script type="text/javascript">...;Validating Date format</title>
<script type="text/javascript">
javascript - Java Beginners
TextBox JavaScript Validation Can you please give an example of text box validation in JavaScript? hi<html><head><script...;script language=javascript>document.write("<input type=text name=limit
Javascript Form validation Example
Javascript Form validation Example
In this section we will discuss about how... am giving a simple example for form
validation using Javascript.
Example...;head>
<title>Form Validation Example</title>
<script type
Form Validation
Form Validation Java script validation for checking special characters and white spaces and give an alert. Please help me urgent
Thanks in advance
Validation code - Java Interview Questions
Validation code Hi,
Anyone can please send me a javascript function...
*************
Java Script Calender Date Picker
function isNumberKey(value...
****************
Java Script Calender Date Picker
function isNumberKey
Source: http://www.roseindia.net/tutorialhelp/comment/99898
Problem :
stringstream is a stream class to operate on strings. It basically implements input/output operations on memory (string) based streams. stringstream can be helpful in different types of parsing. The following operators/functions are commonly used here
- Operator >> Extracts formatted data.
- Operator << Inserts formatted data.
- Method str() Gets the contents of underlying string device object.
- Method str(string) Sets the contents of underlying string device object.
Its header file is sstream.
One common use of this class is to parse comma-separated integers from a string (e.g., “23,4,56”).
stringstream ss("23,4,56");
char ch;
int a, b, c;
ss >> a >> ch >> b >> ch >> c;  // a = 23, b = 4, c = 56
You have to complete the function vector<int> parseInts(string str). str will be a string consisting of comma-separated integers, and you have to return a vector<int> representing the integers.
Sample Input :
23,4,56
Sample Output :
23 4 56
Solution :
#include <sstream>
#include <vector>
#include <iostream>
using namespace std;

vector<int> parseInts(string str) {
    vector<int> str2(3);
    stringstream ss(str);
    char ch;
    ss >> str2[0] >> ch >> str2[1] >> ch >> str2[2];
    return str2;
}

int main() {
    string str;
    cin >> str;
    vector<int> integers = parseInts(str);
    for (int i = 0; i < integers.size(); i++) {
        cout << integers[i] << "\n";
    }
    return 0;
}
Source: https://sltechnicalacademy.com/stringstream-hackerrank-solution/
Ishita Saha
Ranch Hand
Recent posts by Ishita Saha
calling a perl script from java program
Hi,
I need to execute a perl script from a java program, I got some stuff on google:
import java.io.*;

public class RunCommand {
    public static void main(String args[]) {
        Process process;
        try {
            process = Runtime.getRuntime().exec("cmd.exe /c:/perl/bin perl.exe basic.pl");
            process.waitFor();
            if (process.exitValue() == 0) {
                System.out.println("Command Successful");
                try {
                    BufferedReader in = new BufferedReader(new InputStreamReader(process.getInputStream()));
                    String line = null;
                    while ((line = in.readLine()) != null) {
                        System.out.println(line);
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            } else {
                System.out.println("Command Failure");
            }
        } catch (Exception e) {
            System.out.println("Exception: " + e.toString());
        }
    }
}
It gives me the output as: Command Successful
but I need to see the output which should be: "I have a no" (which is there in basic.pl)
also I tried running another script - which should create a txt file ( that is not happening either)
any help is appreciated.
Thanks!
12 years ago
Java in General
handling CLOB from java
I tried it other way and it worked:
tempClob = CLOB.createTemporary(connection, true, CLOB.DURATION_SESSION);
// open the temporary CLOB in readwrite mode to enable writing
tempClob.open(CLOB.MODE_READWRITE);
// get the output stream to write
Writer tempClobWriter = tempClob.getCharacterOutputStream();
// write the data into the temporary CLOB
tempClobWriter.write(sClobData);
I found it from my colleague
12 years ago
JDBC and Relational Databases
handling CLOB from java
It gives me following error:
ORA-22920: row containing the LOB value is not locked
12 years ago
JDBC and Relational Databases
handling CLOB from java
Hi,
I need to write a simple program to fetch a column data which is clob (oracle 9.2)
and insert/modify it and save it back to database.
I tried to use java.sql.Clob which has a setString function but that gives me
**************************************.sql.CLOB.setString(CLOB.java:1148)
Please help me with this, I need this working urgently.
I also came to know that I can get a clob as a String and save back a String as a clob but that is not working in oracle 9.2.
I think that only works with oracle 10g
Regards
12 years ago
JDBC and Relational Databases
ship out a java program
I shall give it a try. Thanks
12 years ago
Java in General
Insert multiple rows in database
Hi,
I need to code a program in Java with JDBC connection to ORACLE where I need to get say 200 rows from a table in database, insert 50 more rows in the resultset and do some changes in the existing 200 rows (some column data), and then write it back to database.
Is there a way I can do this directly in the resultset and write the result set (250 rows this time) back to database?
The other lengthy process will be to store the resultset in a multidimensional array, do all changes in the array, and then write the rows one by one, maybe in a loop using a prepared statement.
Can anybody please suggest a cleaner and faster approach.
Thanks in advance
12 years ago
JDBC and Relational Databases
ship out a java program
Hi,
I have a small java application, which has a program that connects with database and update some data. Now I have to ship this application to a unix server.
could anybody please tell me how should i package this application (jar or something) and then what will be the command to run this application. There will not be any command line arguments needed to run this.
Thanks in advance.
12 years ago
Java in General
IBM HTTP Server redirection - version 610
Hi,
We have recently migrated our application from WAS5.x to WAS6.
The previous version of WAS was having IHS 1.3.28 and WAS6 is having IHS 610 installed.
we had a RewriteRule which was working fine in IHS 1.3.28 (when I copy the same rule to the 610 version, it works fine, but the application context root starts giving a 404: File not found error)
GET /App_root/index.html HTTP/1.1" 404 63 (from access_log)
the sample of the RewriteRule
RewriteEngine On
RewriteRule ^/$ /index.html [PT]
RewriteRule ^/(.*) /App_Root/$1 [PT]
RewriteRule ^/App_Root/(.*)/$ /App_Root/$1/index.html [PT]
I wanted to know if there is a difference between how these two version behave in terms of rewriteRule and what could be the resolution for this.
can anybody help there please.
Thanks!
13 years ago
WebSphere
remote debugging with tomcat and eclipse
Hi All,
I have tried following what is available on internet for enabling remote debugging of web application using tomcat and eclipse but somehow it is not working for me.
I am using jdk1.5.0_09, eclipse Version: 3.2.1, apache-tomcat-6.0.18 and com.sysdeo.eclipse.tomcat_3.2.1
I have set JVM arguments in eclipse -> windows -> perferences -> tomcat -> JVM Setting as
-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=1044
now i see the three icons for tomcat start, stop and restart but when I click on start icon I am getting following error:
ERROR: JDWP unable to get necessary JVMTI capabilities. ["debugInit.c",L279]
any idea what is missing or wrong here.........
quick response is appreciated
Thanks in advance.
13 years ago
Tomcat
difference between App server and web server
ok, I understand that web application server provides servlet and EJB container in addition to web server.
what I still want to understand is why we need a web server if I am using a web application server.
Web Application server should be able to take care of everything - right?
but all the j2ee web applications I have seen are deployed on app server and web server, could you please help me to identify the unique features or responsibilities of web servers which are not present in app server.
Thanks
Ishita
13 years ago
WebSphere
difference between App server and web server
Hi,
this is a very basic questions but I am still confused on this.
I want to understand difference between Web application server and web server, like we are using IBM WAS and IHS.
also Can I use only one of them without using other, like can I host a JSP on WAS without using IHS - I am not sure how to try and test this out.
also for one of the application we are using apache web server to host a perl based application
what is the difference between apache and IHS and how to make a choice between them.
Thanks in advance
13 years ago
WebSphere
Submit from a JSP page based on a choice
Hi,
I have a JSP page where I have given two choices to the user.
Now based on the selected choice - I have to submit the form to a servlet.
To be specific there are two radio buttons -
if user select radio button 1 - he should be redirected to URl abc
if user select radio button 2 - he should be redirected to URl xyz
could anybody please guide me what approach I should take
Thanks
14 years ago
JSP
need a good book
Hi,
Please let me know a good book on portal devlopment using Java/J2EE
which talks about decoraters, theme, access level security and all related stuff.
Thanks!
14 years ago
Portals and Portlets
reading a file record by record
I have a requirement of reading a file record by record so this would be a text file being built from mainframe data.
I have to read it and display all column information with proper heading.
I want to know - what should be the best approach to read a file which doesn't contain just raw data but it contains records from a database.
Also this information needs to be displayed in a web page which keeps refreshing every 30 secs
14 years ago
Java in General
Ksh script
Yeah - I have the commands and i am trying my hands on the same.
I run into a few questions like:
I am using something like:
1 START=$(date +%s)
2 remain=30
3 sleep $remain
4 END=$(date +%s)
5 DIFF=$(( $END - $START))
6 echo "it took $DIFF secs"
7 sqlplus -s user_id/PWD@host <<EOF
8 exec stored_proc;
9 EXIT;
10 EOF
This logic may not seem very relevant, but it is in draft version for now.
Now I need to perform this while I am in the SQL connection, but if I insert statements 1-6 inside 7-10, it gives a "not found" error for each of the commands, which is expected since they are not SQL commands.
Can somebody tell me what should be the way to do this. Thanks.
14 years ago
GNU/Linux
Source: https://www.coderanch.com/u/149559/Ishita-Saha
This page was generated from doc/source/methods/iforest.ipynb.
Isolation Forest¶
Overview¶
Isolation Forest is a tree-based ensemble method for outlier detection: anomalous instances are easier to isolate with random feature splits, so they receive higher outlier scores. The algorithm is suitable for low to medium dimensional tabular data.
Usage¶
Initialize¶
Parameters:
threshold: threshold value for the outlier score above which the instance is flagged as an outlier.
n_estimators: number of base estimators in the ensemble. Defaults to 100.
max_samples: number of samples to draw from the training data to train each base estimator. If int, draw max_samples samples. If float, draw max_samples times the number of samples. If ‘auto’, max_samples = min(256, number of samples).
max_features: number of features to draw from the training data to train each base estimator. If int, draw max_features features. If float, draw max_features times the number of features.
bootstrap: whether to fit individual trees on random subsets of the training data, sampled with replacement.
n_jobs: number of jobs to run in parallel for fit and predict.
data_type: can specify data type added to metadata. E.g. ‘tabular’ or ‘image’.
Initialized outlier detector example:
from alibi_detect.od import IForest

od = IForest(
    threshold=0.,
    n_estimators=100
)
Fit¶
We then need to train the outlier detector. The following parameters can be specified:
X: training batch as a numpy array.
sample_weight: array with shape (batch size,) used to assign different weights to each instance during training. Defaults to None.
od.fit(X_train)

The fitted detector then assigns an outlier score to each new instance.
Source: https://docs.seldon.io/projects/alibi-detect/en/latest/methods/iforest.html
Hello everybody!
I'm building an automated camera pan and tilt platform to take panorama pictures. I don't have much electronics experience but I'm jumping in the deep end anyway
I'm trying to connect my LV168 with the Micro Serial Servo Controller.
As far as I can figure out the serial lines on the LV168 are PD0 and PD1, are these the user I/O lines 8 and 7?
So should I connect:
(on LV168) - (on servo controller)
User I/O signal 7 - RS232 serial input
User I/O ground 7 - GND
Are there by any chance example programs for the LV-168 that demonstrate the micro servo controller?
Thanks in advance!
Hey Marcel, it's not quite the same as your setup, but have you taken a look at this thread?
Anyway, you want to connect PD1 (I/O pin 7) to the logic-level serial input (SIN) pin on your micro serial servo controller, not the RS-232 level serial input pin. Also, you must have a ground connection between the two devices, but which ground pin you use isn't of particular importance.
As for a code example, if you want to use the Arduino language, that thread I linked to above should help. If you're writing in C, this should work, but I hate posting code I haven't actually tested:
#define F_CPU 20000000//CPU clock
#define BAUD 9600//baud rate for UART
#define MYUBRR 129//(F_CPU/16/BAUD-1) baud rate variable for UART hardware
#include <avr/io.h>
void USART_Init(unsigned int ubrr);
void USART_Trans(unsigned char data);
int main(){
USART_Init(MYUBRR);
while(1);
}
void USART_Init(unsigned int ubrr){//Initialize USART hardware
UBRR0H=(unsigned char)(ubrr>>8);//set baud rate
UBRR0L=(unsigned char) ubrr;
UCSR0B=(1<<TXEN0);//enable transmitter
UCSR0C=(3<<UCSZ00);//Set frame format for 8bit with 1 stop
}
void USART_Trans (unsigned char data){//Transmit a byte of data over USART
while(!(UCSR0A&(1<<UDRE0)));//wait for the transmit buffer to be empty
UDR0=data;
}
That should set servo 0 to its neutral position, using Pololu mode (mode select jumper removed on the servo controller) with the LV168 running at full 20MHz. Is everything working now?
-Adam
Hello.
I assume that by "user I/O lines 7 and 8" you mean counting the header pins from left to right on the LV-168 starting from 1? If so, then you are correct. If you look on the silkscreen on the back of the board you can see the serial pins labeled as D0 and D1:
So when the board is flipped over and you're looking down at the top side of the PCB, the LV-168's serial receive pin (PD0) is the rightmost one on the 3x8 header strip, and the serial transmit pin (PD1) is the one just to the left of it.
- Ben
Thanks for your answers Nexisnet and Ben!
Is using the connection from PD7 (ground+signal) to the RS232 not possible?
The reason I want to use the RS232 input is because the LV168 and the Servo controller are using diferent voltages. The LV168 is using 3.6V and the Servo Controller is using 4.8V.
I tried your example program, but unfortunately it doesn't seem to work. I see the following:
(Both LV168 and Servo Controller are off)
Turn on Servo Controller -> Yellow LED is on continuously
Turn on LV168 -> Red LED is on continuously, green LED blinks
Could this be because I'm using the RS232 input?
I tried hooking up the LV168 to the same battery pack as the servo controller (4 x 1.2V) but then the LV168 made a high pitched squeaky sound, so I turned it off directly. It didn't sound healthy at all.
The Orangutan LV168 is actually running at 5V (it uses a step-up voltage regulator to produce 5V from source voltages as low as 2V), so it can communicate just fine with the micro serial servo controller.
The serial output of the LV168 must be connected to the logic level serial input pin of the servo controller; the RS-232 input will invert the serial signal, which is probably why you saw the servo controller's LEDs indicating a serial error.
I hope your LV168 is alright, you should never connect more than 5V to it. The rating of your battery pack is the "nominal" voltage, but the voltage actually varies as the battery is charged and discharged. A fully charged NiMH four-cell pack can reach 6.4V, well out of spec of the LV168.
Also, just to check, are you using the VCC=VS jumper on your servo controller? This bypasses the on-board voltage regulator, so you should not apply more than 5V this way. Good power options are:
How's it going now? Did your LV168 survive?
Ah, I didn't know that! Luckily it seems my LV-168 did survive, first thing I did was reprogram it with the test program. To my relief it seems to work ok (funky music!)
I tried hooking up the Servo controller to the 3 cell battery pack, but it doesn't seem to have enough power that way. When I use 4 cell pack the yellow LED is on as soon as I connect, when I use the 3-cell pack none of the LEDs turn on. (Or are all LEDs supposed to be off?)
That leaves me with the problem on how to lay the ground wire between the two boards. Here is my current setup (without a GND wire, so it won't work):
Laying a GND wire between the GND pins on the two board seems like a bad idea, since they are on different voltages. Or isn't that a problem?
I don't see the image in your post (I just see the word "image") but yes, you will need a ground wire connecting your two boards, and no, it won't be a problem if they're at different voltages. The ground connection just serves as a reference for the signal power. Voltages aren't absolute, they're relative differences, so the signals sent from your Orangutan to your servo controller need a common reference (i.e. 5V more than what?).
Also, since the electronics on the servo controller are powered through a 5V linear regulator (which limits higher input voltages down to 5V) and the electronics on your LV168 Orangutan are powered through a step-up regulator (which increases lower voltages up to 5V) the two boards are actually 'running' at the same voltage anyway, even though they're being powered from different sources.
Actually, the LV168 5V step-up regulator can source about 150mA above what the Orangutan uses itself, and the servo controller's regulator can take 5V as its input. If you're not doing anything else with it, you could power the electronics of your servo controller (voltage input on the left side of the board in the pictures) from the LV168's 5V output header, and the servo bus (right side of the board in the pictures) from a separate, higher battery. Just MAKE SURE you don't have the VCC=VS jumpers connected!
I'm back at it after a break, and connecting the ground wire has worked! My servo is making tiny ticking noises indicating that it's getting some signals at least. Not much movement though.
Sorry to have to ask help again, but which values should I send to make it ping-pong between the maximum values?
I tried something like this:
[code] int main(){
USART_Init(MYUBRR);
while(1){
_delay_ms(1000);34);//position low 7 bits
_delay_ms(1000);
}
}[/code]
Hey Marcel, welcome back. I see two little bugs in the code you pasted above.
First, you had no way of knowing this, but at a 20MHz clock speed the stock _delay_ms function maxes out at 13ms, so your second-long delays are really only 13ms long. You can achieve longer delays by calling a shorter delay multiple times in a loop.
Also, the two different positions you're commanding are very close together, since you're only changing the low byte of the two-byte absolute position command. The two-byte protocol has a range from 500 to 5500, so on a scale with 5000 increments you're commanding a change of only 4 increments. Frankly I'm surprised the servo is even making noise for such a small change.
You don't want to slam your servo into its mechanical stops, so let's try swinging it between 2000 and 4000. Using the Pololu mode absolute position protocol, the two position bytes would be:
2000: 0x0F, 0x50
4000: 0x1F, 0x20
So you would want to change your code to something like this:
int main(){
USART_Init(MYUBRR);
unsigned char i;
while(1){
USART_Trans(0x80);//start byte
USART_Trans(0x01);//device ID (servo controller)
USART_Trans(0x04);//command number 4 (absolute position)
USART_Trans(0x00);//servo number
USART_Trans(0x0F);//position high bits (2000: 0x0F,0x50)
USART_Trans(0x50);//position low 7 bits
for(i=0;i<100;i++){
_delay_ms(10);
}
USART_Trans(0x80);//start byte
USART_Trans(0x01);//device ID (servo controller)
USART_Trans(0x04);//command number 4 (absolute position)
USART_Trans(0x00);//servo number
USART_Trans(0x1F);//position high bits (4000: 0x1F,0x20)
USART_Trans(0x20);//position low 7 bits
for(i=0;i<100;i++){
_delay_ms(10);
}
}
}
Did that do it? For a little more explanation of the Pololu-mode absolute position protocol, check out this thread.
Hmmm, no luck so far.
I've added a bit of code that prints to the LCD; the program is running fine, switching the position every second. The green LED on the servo controller is blinking every second, but the servo is not moving.
And here is a picture of my setup:
old.cgtextures.com/.Temp/LV168_2.jpg
And in case the wires are hard to see, here is a schematic of how I connected them.
old.cgtextures.com/.Temp/LV168.jpg
I've also tried a different servo (the one I'm using is the huge HiTec 805BB) but that gave me no movement either. The batteries are fully charged.
Thanks again for your help, I really appreciate that you are taking the time to help me out here. Once I get the servo moving I'm sure I can do the rest myself.
Hmm, that all looks right. Have you tried checking the servo number settings? Sometimes in figuring out how to use the servo controller, you can accidentally send a string of bytes that configures the controller to respond to a different set of servo numbers (I've done it).
Detailed instructions are in the manual, but basically you can check the servo number range by putting the servo controller in Pololu mode (pull off the jumper then start it up) and sending the byte string {0x80, 0x02, 0x10}. The red and yellow LEDs should turn on, and the green led should flash a number of times over and over again with a 1 second pause in between. If the green LED flashes just once that's not the problem, but if it flashes any other number of times the servo number range has been changed. To reset it to normal (servos 0-7) reset the servo controller and send it the byte string {0x80, 0x02, 0x00}. You should now see the green LED flash just once every second. You'll need to reset the servo controller again before it will move any servos.
If that's not it I'm running out of ideas.
It works! My servo is happily moving back and forth.
I'm ashamed to say the problem was that I had the Servo Controller in Mini SSC II mode, I thought the jumper was to turn Pololu mode on...
Many thanks for your help, I'll be sure to post once I have some practical results out of my camera rig (or when I have more questions, which is more likely )
By Beyang Liu on May 30, 2016
This was originally a talk at Google I/O 2014. Check out the slides and YouTube video. Thanks to the Go team for inviting us!
Sourcegraph is a large-scale, multi-language code search and cross-reference engine that indexes hundreds of thousands of open-source repositories. Sourcegraph lets developers:
Unlike other code search engines, Sourcegraph actually analyzes and understands code like a compiler, instead of just indexing the raw text of the code. (This is what enables the features above.)
If you haven’t used Sourcegraph, check out the homepage and then come back here. Since the rest of this post is about how we built Sourcegraph, it’s important for you to know what it is.
Sourcegraph has two main parts:
We’ll start with the web app. Next week we’ll cover Part 2 and discuss how we designed and built our language analysis toolchain.
Our web app has 3 layers:
Basically, our frontend is just another API client to our own API. (We dogfood our own API.)
Here’s what it looks like:
For example, if you go to a repository’s page on Sourcegraph, our web frontend will make a bunch of calls to our API to gather the data it needs. The API will in turn call into the data store, which queries the database. The API turns the results into JSON and sends them back to the web frontend, which renders an HTML template using the data.
This structure is pretty common for large web apps, and there are some nice ways Go makes it super simple to implement.
We made several deliberate choices that have helped simplify the development of our web app:
Our goal, based on our experience with similar systems, was to avoid complexity and repetition. Large web apps can easily become complex because they almost always need to twist the abstractions of whatever framework you’re using. And “service-oriented” architectures can require lots of repetitive code because not only do you have a client and server implementation for each service, but you often find yourself representing the same concepts at multiple levels of abstraction.
Fortunately, Go and a few libraries make it relatively simple to avoid these issues. Let’s run through each point and see how we achieve it and how it helps us.
(Side note: When we were thinking about how to build our web app, we looked around for examples of large Go web apps and couldn’t find many good examples. One notable good example was godoc.org’s source code, which we learned a lot from.)
We don’t use a web framework because we found we didn’t need one in Go. Go’s net/http and a few wrapper/helper functions suffice for us. Here are the techniques and glue code we use to make it all work in the absence of a framework.
Handler functions: We define our handlers with an error return value, and we use a simple wrapper function to make them implement http.Handler. This means we can centralize error handling instead of having to format error messages and pass them to the http.Error func for each possible error. Our handler functions look like:
func serveXYZ(w http.ResponseWriter, r *http.Request) error { ... }
Global variables: For virtually all request “context”, such as DB connections, config, etc., we use global variables. We chose this simple solution instead of relying on a framework to inject context for the request.
Router: We use gorilla/mux for routing.
Rendering HTML: We use html/template and a simpler helper function to render the template:
func executeTemplate(req *http.Request, resp http.ResponseWriter, tmplName string, status int, header http.Header, tmplData interface{}) error { ... }
Returning JSON: We just use a simpler helper function:
// writeJSON writes a JSON Content-Type header and a JSON-encoded object to the
// http.ResponseWriter.
func writeJSON(w http.ResponseWriter, v interface{}) error {
    data, err := json.MarshalIndent(v, "", "  ")
    if err != nil {
        return &httpError{http.StatusInternalServerError, err}
    }
    w.Header().Set("content-type", "application/json; charset=utf-8")
    _, err = w.Write(data)
    return err
}
We have one “service” interface for each noun in our system: repositories, users, code definitions, etc. Our HTTP API client and data store both implement the same interfaces. That is:
Here’s a simplified version of our repositories interface.
type RepositoriesService interface {
    Get(name string) (*Repo, error)
    List() ([]*Repo, error)
    Search(opt *SearchOptions) ([]*Repo, error)
    // ...
}
When we began, the client and data store implementations were a bit different, but they basically accomplished the same thing. They accepted different (but similar) parameters and returned different types that represented the same nouns. These differences were initially motivated by the need for greater performance (in the data store) and user-friendliness (in the API client). For example, the API client methods returned structs with some additional fields populated (which required a few additional queries), for greater convenience.
Unifying the sets of interfaces took away a tiny bit of performance and user-friendliness but made our API way cleaner and simpler overall. We now have a single set of interfaces that runs through our entire system, with a single set of method behaviors, parameters, return values, error behaviors, etc., no matter whether you’re using our API client or data store.
Here’s a (simplified) example of one of those data store methods, to make it concrete. This method implements the RepositoriesService.Get method described above.
type repoStore struct{ db *db }
func (s *repoStore) Get(name string) (*Repo, error) {
    var repo *Repo
    return repo, s.db.Select(&repo, "SELECT * FROM repo WHERE name=$1", name)
}
And here’s a (simplified) example of a method implementation in our HTTP API client library. Again, this is the same RepositoriesService.Get method.
type repoClient struct{ baseURL string }
func (s *repoClient) Get(name string) (*Repo, error) {
    resp, err := http.Get(fmt.Sprintf("%s/api/repos/%s", s.baseURL, name))
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    var repo Repo
    return &repo, json.NewDecoder(resp.Body).Decode(&repo)
}
(Notice that we’ve hardcoded the URL here. We’ll revisit that soon and find a better solution.)
Initially our frontend web app just called the data store functions directly. This meant that each handler mixed HTTP and database code in ad-hoc ways. It was messy. This messiness made it hard to correctly implement HTTP caching and authorization because our handlers were already very complex.
Now our handlers have very clearly delineated responsibilities. The frontend handlers do HTML templating and call an API client method. The API handlers do HTTP authentication/authorization/caching and then call a data store method. In all cases, our HTTP handlers are concerned with HTTP (which is exactly as it should be) and they delegate the rest of the work.
For example, here’s what a (simplified) frontend handler looks like. It reads the HTTP request parameters, calls the API client (to do the real work), and renders the HTTP (HTML) response.
var repoAPIClient RepositoriesService = &repoClient{""}
func handleRepoPage(w http.ResponseWriter, r *http.Request) {
    name := mux.Vars(r)["Name"]
    repo, _ := repoAPIClient.Get(name)
    fmt.Fprintf(w, "<h1>%s</h1>Clone URL: %s", repo.Name, repo.CloneURL)
}
And here’s what a (simplified) API handler looks like. Again, it reads the HTTP request parameters, calls the data store (to do the real work), and renders the HTTP (JSON) response with a cache header.
var repoStore RepositoriesService = &repoStore{dbh}
func serveRepository(w http.ResponseWriter, r *http.Request) error {
    repo := mux.Vars(r)["Repo"]
    rp, err := repoStore.Get(repo)
    if err != nil {
        return repositoryError(err)
    }
    writeLastModifiedHeader(rp.UpdatedAt)
    return writeJSON(w, rp)
}
The key point here is that our HTTP handlers are concerned with handling HTTP, and they call out to implementations of our API to do the real work. This greatly simplifies our HTTP handling code.
Remember back to our API client implementation? We used Sprintf to construct the URL string. And here in the router definition (below), we repeat the same URL pattern. This is bad because we have more than 75 routes, some with fairly complex matching logic, and it’s easy to get them out of sync.
To solve this, we use a router package (such as gorilla/mux) that lets us define routes and mount handlers separately. Our server mounts handlers by looking up named routes we’ve defined, but our API client will just use the route definitions to generate URLs.
const (
    RepoGetRoute  = "repo"
    RepoListRoute = "repo.list"
)
func NewAPIRouter() *mux.Router {
    // Define the routes but don't attach handlers.
    m := mux.NewRouter()
    m.Path("/api/repos/{Name:.*}").Name(RepoGetRoute)
    m.Path("/api/repos").Name(RepoListRoute)
    return m
}
// init is called at server startup.
func init() {
    m := NewAPIRouter()
    // Attach handlers to the routes.
    m.Get(RepoGetRoute).HandlerFunc(handleRepoGet)
    m.Get(RepoListRoute).HandlerFunc(handleRepoList)
    m.Get(RepoSearchRoute).HandlerFunc(handleRepoSearch)
    http.Handle("/api/", m)
}
Then to generate URLs in our HTTP API client using the router, we use the existing route definition:
var apiRouter = NewAPIRouter()
func (s *repoClient) List() ([]*Repo, error) {
    url, _ := apiRouter.Get(RepoListRoute).URL()
    resp, err := http.Get(s.baseURL + url.String())
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    var repos []*Repo
    return repos, json.NewDecoder(resp.Body).Decode(&repos)
}
Notice that we’re no longer hardcoding URLs, so if we update the route definitions, our generated URLs will automatically be updated as well.
We’ve actually made our route definitions open source, as part of our open source client library. Our API server imports the client library, which is where the route definitions live, and just mounts handlers as we saw on the previous slide. So, it’s much easier for us to keep our API client library in sync with our servers. (In fact, we think all web services should open-source their routing! It would make building API clients much easier.)
Most of the methods in our API take some main arguments and some extra parameters. In URLs, the parameters are encoded in the querystring; in Go function calls, they’re just a final struct pointer argument.
Initially we had 3 different parameter sets for each logical interface method:
This was needlessly complex and required a lot of code to convert among the various forms. Thankfully, because the structs were very similar, it was easy to agree on a single struct that all implementations could use. Doing this simplified our code quite a bit.
To do this, we defined the querystring as a Go struct, like SearchOptions here. In each HTTP handler, we use gorilla/schema to decode the querystring into the Go struct. And in the API client, we use Will Norris’ go-querystring to convert a Go struct back into a querystring. These two libraries perform essentially the inverse operations. And in our data store, of course, we still get the parameters as a Go struct.
Having a single definition of each parameter set is much simpler and it lets us rely on the Go compiler’s type checker to warn us if we make mistakes (unlike the non-type-checked querystring map[string]strings we used to use).
Here’s an example parameter struct that is shared by all implementations (frontend and backend):
// this is the options struct for the method:
//   Search(opt *SearchOptions) ([]*Repo, error)
type SearchOptions struct {
    Owner    string
    Language string
}
To decode the parameter struct from a querystring like ?Owner=alice&Language=go, in the API HTTP handler:
import "github.com/gorilla/schema"
var d = schema.NewDecoder()
func handleRepoSearch(w http.ResponseWriter, r *http.Request) {
    var opt SearchOptions
    // decode querystring with github.com/gorilla/schema
    d.Decode(&opt, r.URL.Query())
    // now opt is populated with values from the querystring
    // ...
}
And to encode a parameter struct like SearchOptions{“alice”, “go”} back into ?Owner=alice&Language=go, in the HTTP API client:
import "github.com/google/go-querystring/query"
func (s *repoClient) Search(opt *SearchOptions) ([]*Repo, error) {
    url, _ := apiRouter.Get(RepoSearchRoute).URL()
    q, _ := query.Values(opt)
    resp, err := http.Get(s.baseURL + url.String() + "?" + q.Encode())
    // ...
}
We do a few other things to make it easier to build our web app:
We’ve used Go to build a large-scale code search engine that compiles and indexes hundreds of thousands of repositories. We hope that we’ve conveyed some of the ways we use Go’s features and that these patterns will be useful you when building large Go apps.
Follow me (@sqs) and @srcgraph on Twitter. And check out Sourcegraph!
I need to create a simple Java program for a commission calculator with the following requirements:
I need to ask the user to enter their total sales (as a double) in at the keyboard and I need to use "if" statements to tell them
their commission (sales * commission rate) using the following criteria:
o < $8000 in sales – Commission rate = 10%
o $8000 to $12000 in sales – Commission rate = 12%
o $12000.01 to $18000 in sales – Commission rate = 15%
o $18000.01 to $20000 in sales – Commission rate = 18%
o > $20000 in sales – Commission rate = 20%
Then for the output the user’s Commission in a well formatted way (use the DecimalFormat class)
In comment blocks have a description of what the program does. thank you! I think I did something wrong here is mine
import java.util.Scanner;

public class CommissionCalculator //This program calculates the commission of sales
{
    public static void main(Strings[] args)
    {
        // create a scanner
        Scanner input = new Scanner(System.in);

        //define variable
        double sales;          //inputted sales amount
        double commission = 0; //commission pay amount

        System.out.print("Please enter sales amount:");
        double sales = input.nextDouble();

        if (sales > 20000.01 )
            commission = sales * .20;
        else if (sales > 18000.01 && sales <= 20000)
            commission = sales * .18;
        else if (sales > 12000.01 && sales <= 18000)
            commission = sales * .15;
        else if (sales > 8000.01 && sales <= 12000)
            commission = sales * .12;
        else (sales <= 8000)
            commission = sales * .10;
        {
            System.out.println("Your Commission + sales total is :$" + mask.format((sales));
        }
    }
}
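For comparison, here is one possible corrected sketch (the class and method names are mine, not from the assignment). It fixes the Strings[] typo, the duplicate declaration of sales, the invalid else (condition) form, and the missing DecimalFormat object, and it uses <= comparisons so no sales amount falls between brackets:

```java
import java.text.DecimalFormat;
import java.util.Scanner;

// Calculates a salesperson's commission from total sales using tiered rates.
public class CommissionCalculatorFixed {

    // Tiered commission: the rate depends on which sales bracket applies.
    public static double commissionFor(double sales) {
        double rate;
        if (sales < 8000) {
            rate = 0.10;
        } else if (sales <= 12000) {
            rate = 0.12;
        } else if (sales <= 18000) {
            rate = 0.15;
        } else if (sales <= 20000) {
            rate = 0.18;
        } else {
            rate = 0.20;
        }
        return sales * rate;
    }

    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        DecimalFormat money = new DecimalFormat("$#,##0.00");
        System.out.print("Please enter sales amount: ");
        if (input.hasNextDouble()) { // avoid an exception when no input is given
            double sales = input.nextDouble();
            System.out.println("Your commission is: " + money.format(commissionFor(sales)));
        }
    }
}
```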
Custom Templates for Jupyter Notebooks with Jinja2
In data science, you will often need to create reports of your work to show to decision makers or other non-technical personnel. Converting your Jupyter Notebook into a stable PDF or HTML document is more transferable to colleagues who do not have Python or Jupyter installed. Python uses a library called
nbconvert and a templating language called
Jinja2 for converting documents. Templates define how a document will be displayed on a webpage or another output format. Understanding how to customize templates is beneficial to making beautified reports of your Notebooks.
Jupyter notebooks are a way to run Python code in your browser. If you'd like to learn more about Python, be sure tot take a look at our free Intro to Python for Data Science course.
In this tutorial, you'll cover the following topics:
- Templates and the Jinja2 template designer.
- Rendering templates and inheritance.
- How to use nbconvert for exporting your Notebooks.
- The syntax and structure for extending the Jupyter default templates for HTML export.
- The differences for exporting to LaTeX and PDF formats.
What are templates?
From the Python wiki: "Templating, and in particular web templating, is a way to represent data in different forms... Frequently, templating solutions involve a document (the template) and data. Templates usually look much like the final output, with placeholders instead of actual data"
The Jupyter Notebook can be exported easily by using File -> Download As (in JupyterLab, you will see Export Notebook As). This option uses the default templates held in the main Jupyter environment.
Jinja2 is a powerful templating language for Python to define blocking and typesetting. Templates have sections, defined by tags, which tell the template how to render the input data. The data replaces the variables, or expressions, when the template is rendered.
The delimiters for the different sections of a template are:
{% ... %} for Statements
{{ ... }} for Expressions to print to the template output
{# ... #} for Comments not included in the template output
Take a look at this short example of a Jinja template (courtesy of the Jinja2 documentation):

<!DOCTYPE html>
<html lang="en">
<head>
    {% block head %}
    <link rel="stylesheet" href="style.css" />
    <title>{% block title %}{% endblock %} - My Webpage</title>
    {% endblock %}
</head>
<body>
    <div id="content">{% block content %}{% endblock %}</div>
</body>
</html>
You can see above that much of the template resembles a normal HTML document.
Jinja needs a few extra lines to interpret the code as a template. The
{% block head %} defines the head section of the HTML document and how this template will extend its formatting. The
{% block title %} section describes where the input title will be displayed. And the
{% endblock %} appears in multiple places because that ends any corresponding template block.
Now you are ready to learn how to use
Jinja to create custom templates of your own!
Introduction to Jinja
Jinja is a templating engine that allows you to define how your documents will be displayed. Specifically, for this tutorial, you will focus on how to export your Jupyter Notebook with the help of
Jinja templates.
First, from
Jinja2 import
Template:
from jinja2 import Template
Understanding Basic Rendering with Templates
Rendering data in
Jinja templates is pretty straightforward. Using brackets and variable names, you can display your data in the template:
myTemplate = Template("My template is {{ something }}!")
myTemplate.render(something="awesome")
'My template is awesome!'
Calling
.render(input) allows you to display the template in your Notebook with your input replacing the generic
{{ something }} from the template. This is an example of
expression usage in
Jinja.
You can use a template more dynamically by defining a Python
statement, which
Jinja will interpret.
myFancyTemplate = Template("Let's count to 10: {% for i in range(11) %}{{i}} {% endfor %}")
myFancyTemplate.render()
"Let's count to 10: 0 1 2 3 4 5 6 7 8 9 10 "
Notice you defined a for loop and the template rendered the numbers from 0 to 10 with spaces between. The {% ... %} statement syntax provides a lot of flexibility for making a custom template.
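The same statement syntax also covers conditionals; for instance (a small illustrative example, not from the tutorial text above):

```python
from jinja2 import Template

# {% if %} / {% else %} are statements, just like the for loop above.
sizeTemplate = Template("{% if n > 5 %}big{% else %}small{% endif %}")
print(sizeTemplate.render(n=10))  # big
print(sizeTemplate.render(n=2))   # small
```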
Template Inheritance
In order to make templates that extend the default Jupyter exports, you can use template inheritance. Inheritance is the concept in programming that you can implement something in a child object that can utilize the parent's defined functionality. A simple example is if you have a
Dog object that can bark, eat, and walk, you can use inheritance to create a
Dalmatian object that can inherit all those attributes and add others, like has spots. You will see in this tutorial how inheritance is very useful for templating, because typically only small changes are needed to customize the output. Like other inheritance languages,
Jinja uses keywords like
extends and
super to access parent definitions.
Take a look at the example parent template from above, called base.html:

<!DOCTYPE html>
<html lang="en">
<head>
    {% block head %}
    <link rel="stylesheet" href="style.css" />
    <title>{% block title %}{% endblock %} - My Webpage</title>
    {% endblock %}
</head>
<body>
    <div id="content">{% block content %}{% endblock %}</div>
</body>
</html>
A child template might look like this (courtesy of Jinja2 documentation):
{% extends "base.html" %}
{% block title %}Index{% endblock %}
{% block head %}
    {{ super() }}
    <style type="text/css">
        .important { color: #336699; }
    </style>
{% endblock %}
{% block content %}
    <h1>Index</h1>
    <p class="important">
        Welcome to my awesome homepage.
    </p>
{% endblock %}
Notice how the child template begins with {% extends "base.html" %}. This declaration tells the Jinja templating engine how to treat this document (as an HTML document with inheritance from base.html). Using this child template allows for specifying different attributes for the homepage (like a specific CSS color). Also, you can see in the head block that the child template inherits the parent's style with the call to super().
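To watch inheritance work end to end without creating any files on disk, you can load both templates from an in-memory DictLoader (an illustrative sketch; the template names and contents here are made up):

```python
from jinja2 import Environment, DictLoader

# Parent and child templates held in memory instead of in .html files.
templates = {
    "base.txt": "Title: {% block title %}{% endblock %}\n"
                "{% block body %}default body{% endblock %}",
    "child.txt": "{% extends 'base.txt' %}"
                 "{% block title %}Report{% endblock %}"
                 "{% block body %}{{ super() }} + extra details{% endblock %}",
}

env = Environment(loader=DictLoader(templates))
print(env.get_template("child.txt").render())
# Title: Report
# default body + extra details
```

The child overrides both blocks, and super() pulls the parent's body text back in before appending to it.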
Using nbconvert for Exporting Jupyter Notebooks
Now that you understand the basic syntax and inheritance of templates, you can learn how to export your Jupter Notebook with
nbconvert and define templates to customize the output.
First import
nbconvert:
import nbconvert
nbconvert can create output to a variety of formats. This tutorial will focus on HTML and LaTeX/PDF output. To view the output in the notebook, use the
IPython display function and the
--stdout in your call to nbconvert.
example = !jupyter nbconvert --to html 'Example.ipynb' --stdout
from IPython.display import HTML, display
display(HTML('\n'.join(example)))
[NbConvertApp] Converting notebook Example.ipynb to html
This snippet of a Notebook looks very similar to the original .ipynb file. The output displays as a standalone document with headings and text the way it typically displays in a Notebook. The code cell blocks are displayed as gray boxes (known as "Notebook style"). The main difference you can see is that the Jupyter default HTML template does not display "Out:" prompts, like active Notebooks do.
Using nbconvert and a Custom Child Template to Export Jupyter Notebooks
Typically, you'll want to extend the Jupyter Notebook's default export templates and make small design changes to your output.
For example, look at this simple template that removes Markdown cells from the output (called rmMkdwn.tpl):
{% extends 'basic.tpl' %}
{% block markdowncell -%}
{% endblock markdowncell %}
Applying a template to
nbconvert is done with the
--template= option:
example = !jupyter nbconvert --to html 'Example.ipynb' --template='rmMkdwn.tpl' --stdout
display(HTML('\n'.join(example)))
[NbConvertApp] Converting notebook Example.ipynb to html
Even a very simple template, like rmMkdwn.tpl, can help you customize your output tremendously.
Look at this more complicated template, which boxes cells in red (called boxRed.tpl):
{% extends 'full.tpl' %}
{% block any_cell %}
<div style="border:thin solid red">
    {{ super() }}
</div>
{% endblock any_cell %}
example = !jupyter nbconvert --to html 'Example.ipynb' --template='boxRed.tpl' --stdout
display(HTML('\n'.join(example)))
[NbConvertApp] Converting notebook Example.ipynb to html
Above, you can see that the only style change is the red boxes around each cell.
super() is used to make sure that each cell maintains its individual styling from the parent template full.tpl.
Differences in Templates for Exporting to LaTeX and PDF
Since { } and % are special characters in LaTeX, the LaTeX templates instead use ((* ... *)) for statements and ((( ... ))) for expressions. Also, the default LaTeX templates are base.tplx, article.tplx, and report.tplx, which correspond to LaTeX document classes.
Look at removing Markdown cells with a LaTeX child template:
((* extends 'article.tplx' *))
((* block markdowncell -*))
((* endblock markdowncell *))
nbconvert can make LaTeX or PDF output - the PDF output is compiled from the LaTeX templates. For ease of viewing in this tutorial, look at what happens when you export to PDF using rmMkdwn.tplx.
!jupyter nbconvert --to pdf 'Example.ipynb' --template='rmMkdwn.tplx'
from IPython.display import IFrame
IFrame('Example.pdf', width=800, height=500)
[NbConvertApp] Converting notebook Example.ipynb to pdf
[NbConvertApp] Support files will be in Example_files/
[NbConvertApp] Making directory Example_files
[NbConvertApp] Writing 16102 bytes to Example.pdf
Notice, this output turned out very differently than the HTML version. LaTeX feeds the first Markdown cell into the article title, so it still displays in the output. The template did correctly remove the subtitle (so it regarded that Markdown cell as true Markdown). The article document class updates with today's date - which is not something in the original Notebook. The font and spacing are different from the HTML version. Lastly, by default the output uses the classic IPython display, not the "Notebook style" display.
If you choose to extend a LaTeX document class that lives in Jupyter, make sure you understand what formatting it will use.
Conclusion
There are some key things to remember about templates,
Jinja2 and
nbconvert.
Templates define how a document will be displayed on a webpage or another output format. Jupyter Notebooks implement
Jinja templates to display different export formats. Templates have inheritance rules that allow you to define parent and child templates for similar pages, with slightly different formatting.
Jinja templates have statements, expressions, and (optionally) comments. Statements, defined by {% statement %}, define the template structure. Expressions, defined by {{ expression }}, fill the template with your data. And comments, defined by {# comment #}, do not get displayed in the output - they are internal to the template only.
nbconvert is a Python library that allows you to convert your Jupyter Notebook into other formats, like HTML, LaTeX and PDF.
nbconvert uses
Jinja templates to define how Jupyter Notebooks will be displayed in these formats. You can define custom templates or extend the default Jupyter templates using inheritance.
Now you are ready to start making your own templates and exporting beautiful Jupyter Notebook documents!
References:
Python templating documentation
Caching
Automatically set HTTP cache headers and save full responses in a cache.
Production apps often rely on caching for scalability.
A single GraphQL request consists of running many different resolvers, each of which can have different caching semantics. Some fields may be uncacheable. Some fields may be cacheable for a few seconds, and others for a few hours. Some fields may have values that are the same for all users of your app, and other fields may vary based on the current session.
Apollo Server provides a mechanism for server authors to declare fine-grained cache control parameters on individual GraphQL types and fields, both statically inside your schema using the
@cacheControl directive and dynamically within your resolvers using the
info.cacheControl.setCacheHint API.
For each request, Apollo Server combines all the cache hints from all the queried fields and uses it to power several caching features. These features include HTTP caching headers for CDNs and browsers, and a GraphQL full response cache.
Defining cache hints
You can define cache hints statically in your schema and dynamically in your resolvers.
Important note on compatibility: Setting cache hints is currently incompatible with the graphql-tools implementation of schema stitching, because cache hints are not appropriately communicated from one service to the other.
Adding cache hints statically in your schema
The easiest way to add cache hints is directly in your schema using the
@cacheControl directive. Apollo Server automatically adds the definition of the
@cacheControl directive to your schema when you create a new
ApolloServer object with
typeDefs and
resolvers. Hints look like this:
type Post @cacheControl(maxAge: 240) {
  id: Int!
  title: String
  author: Author
  votes: Int @cacheControl(maxAge: 30)
  comments: [Comment]
  readByCurrentUser: Boolean! @cacheControl(scope: PRIVATE)
}

type Comment @cacheControl(maxAge: 1000) {
  post: Post!
}

type Query {
  latestPost: Post @cacheControl(maxAge: 10)
}
You can apply
@cacheControl to an individual field or to a type.
Hints on a field describe the cache policy for that field itself; for example,
Post.votes can be cached for 30 seconds.
Hints on a type apply to all fields that return objects of that type (possibly wrapped in lists and non-null specifiers). For example, the hint @cacheControl(maxAge: 240) on Post applies to the field Comment.post, and the hint @cacheControl(maxAge: 1000) on Comment applies to the field Post.comments.
Hints on fields override hints specified on the target type. For example, the hint @cacheControl(maxAge: 10) on Query.latestPost takes precedence over the hint @cacheControl(maxAge: 240) on Post.
See below for the semantics of fields which don't have
maxAge set on them (statically or dynamically).
@cacheControl can specify
maxAge (in seconds, like in an HTTP
Cache-Control header) and
scope, which can be
PUBLIC (the default) or
PRIVATE.
Adding cache hints dynamically in your resolvers
If you won't know if a field is cacheable until you've actually resolved it, you can use the dynamic API to set hints in your resolvers:
const resolvers = {
  Query: {
    post: (_, { id }, context, info) => {
      info.cacheControl.setCacheHint({ maxAge: 60, scope: 'PRIVATE' });
      return find(posts, { id });
    },
  },
};
If you're using TypeScript, you need the following to teach TypeScript that the GraphQL
info object has a
cacheControl field:
import 'apollo-cache-control';
Setting a default maxAge
By default, root fields (ie, fields on
Query and
Mutation) and fields returning object and interface types are considered to have a
maxAge of 0 (ie, uncacheable) if they don't have a static or dynamic cache hint. (Non-root scalar fields inherit their cacheability from their parent, so that in the common case of an object type with a bunch of strings and numbers which all have the same cacheability, you just need to declare the hint on the object type.)
The power of cache hints comes from being able to set them precisely to different values on different types and fields based on your understanding of your implementation's semantics. But when getting started with the cache control API, you might just want to apply the same
maxAge to most of your resolvers.
You can achieve this by specifying a default max age when you create your
ApolloServer. This max age will be used instead of 0 for root, object, and interface fields which don't explicitly set
maxAge via schema hints (including schema hints on the type that they return) or the dynamic API. You can override this for a particular resolver or type by setting
@cacheControl(maxAge: 0). For example:
const server = new ApolloServer({
  // ...
  cacheControl: {
    defaultMaxAge: 5,
  },
});
The overall cache policy

Apollo Server's cache API lets you declare fine-grained cache hints on specific resolvers. Apollo Server then combines these hints into an overall cache policy for the response. The maxAge of this policy is the minimum maxAge across all fields in your request. As described above, the default maxAge of all root fields and non-scalar fields is 0, so the overall cache policy for a response will have maxAge 0 (ie, uncacheable) unless all root and non-scalar fields in the response have cache hints (or if defaultMaxAge is specified).
If the overall cache policy has a non-zero maxAge, its scope is PRIVATE if any hints have scope PRIVATE, and PUBLIC otherwise.
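The combination rules above can be sketched in plain JavaScript. This is only an illustration of the stated rules, not Apollo Server's actual implementation:

```javascript
// Combine per-field hints into an overall cache policy:
// maxAge is the minimum across all field hints; scope is PRIVATE if any
// hint is PRIVATE, and PUBLIC otherwise.
function overallPolicy(hints) {
  const maxAge = Math.min(...hints.map((h) => h.maxAge));
  const scope = hints.some((h) => h.scope === 'PRIVATE') ? 'PRIVATE' : 'PUBLIC';
  return { maxAge, scope };
}

// A response touching three fields: one uncached-for-longer public field,
// one short-lived private field, one public field.
const policy = overallPolicy([
  { maxAge: 240, scope: 'PUBLIC' },
  { maxAge: 60, scope: 'PRIVATE' },
  { maxAge: 120, scope: 'PUBLIC' },
]);
// policy is { maxAge: 60, scope: 'PRIVATE' }
```

Note how a single field with maxAge 0 (the default for unhinted root fields) would drag the whole response's maxAge down to 0, which is why every root and non-scalar field needs a hint for the response to be cacheable.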
Serving HTTP cache headers

For any response whose overall cache policy has a non-zero maxAge, Apollo Server will automatically set the Cache-Control HTTP response header to an appropriate value describing the maxAge and scope, such as Cache-Control: max-age=60, private. If you run your Apollo Server instance behind a CDN or other caching proxy, it can use this header's value to know how to cache your GraphQL responses.
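A sketch of how such a header value can be assembled from the overall policy (our own illustration of the described behavior, not Apollo Server's internal code):

```javascript
// Build a Cache-Control header value like "max-age=60, private" from an
// overall cache policy; return null for uncacheable (maxAge 0) responses,
// since no header is set in that case.
function cacheControlHeader(policy) {
  if (!policy || policy.maxAge === 0) return null;
  const scope = policy.scope === 'PRIVATE' ? 'private' : 'public';
  return `max-age=${policy.maxAge}, ${scope}`;
}

cacheControlHeader({ maxAge: 60, scope: 'PRIVATE' }); // → 'max-age=60, private'
cacheControlHeader({ maxAge: 0, scope: 'PUBLIC' });   // → null (no header)
```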
As many CDNs and caching proxies only cache GET requests (not POST requests) and may have a limit on the size of a GET URL, you may find it helpful to use automatic persisted queries, especially with the useGETForHashedQueries option to apollo-link-persisted-queries.
If you don't want to set HTTP cache headers, pass cacheControl: {calculateHttpHeaders: false} to new ApolloServer().
Saving full responses to a cache

Apollo Server lets you save cacheable responses to a Redis, Memcached, or in-process cache. Cached responses respect the maxAge cache hint.

To use the response cache, you need to install its plugin when you create your ApolloServer:
```javascript
import responseCachePlugin from 'apollo-server-plugin-response-cache';

const server = new ApolloServer({
  // ...
  plugins: [responseCachePlugin()],
});
```
By default, the response cache plugin will use the same cache used by other Apollo Server features, which defaults to an in-memory LRU cache. When running multiple server instances, you'll want to use a shared cache backend such as Memcached or Redis instead. See the data sources documentation for details on how to customize Apollo Server's cache. If you want to use a different cache backend for the response cache than for other Apollo Server caching features, just pass a KeyValueCache as the cache option to the responseCachePlugin function.
If you have data whose response should be cached separately for different users, set @cacheControl(scope: PRIVATE) hints on the data, and teach the cache control plugin how to tell your users apart by defining a sessionId hook:
```javascript
import responseCachePlugin from 'apollo-server-plugin-response-cache';

const server = new ApolloServer({
  // ...
  plugins: [
    responseCachePlugin({
      sessionId: (requestContext) =>
        requestContext.request.http.headers.get('sessionid') || null,
    }),
  ],
});
```
Responses whose overall cache policy scope is PRIVATE are shared only among sessions with the same session ID. Private responses are not cached if the sessionId hook is not defined or returns null.

Responses whose overall cache policy scope is PUBLIC are shared separately among all sessions with sessionId null and among all sessions with non-null sessionId. Caching these separately allows you to have different caches for all logged-in users vs all logged-out users, if there is easily cacheable data that should only be visible to logged-in users.
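The session-partitioning rules just described can be sketched as a cache-key function. This is our own illustration of the stated sharing rules; the plugin's real key format is an internal detail:

```javascript
// Map (query, scope, sessionId) to a cache key per the rules above:
// - PRIVATE responses are cached per session, and not at all without one.
// - PUBLIC responses use two partitions: sessions with a null sessionId
//   (e.g. logged-out users) and sessions with any non-null sessionId.
function cacheKey(queryHash, scope, sessionId) {
  if (scope === 'PRIVATE') {
    return sessionId === null ? null : `private:${sessionId}:${queryHash}`;
  }
  return sessionId === null
    ? `public:anon:${queryHash}`
    : `public:auth:${queryHash}`;
}
```

So two logged-in users with different session IDs share PUBLIC entries but never PRIVATE ones, and a logged-out user never sees entries cached for logged-in users.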
Responses containing GraphQL errors or no data are never cached.
The plugin allows you to define a few more hooks to affect cache behavior for a specific request. All hooks take in a GraphQLRequestContext:

- extraCacheKeyData: this hook can return any JSON-stringifiable object which is added to the cache key. For example, if your API includes translatable text, this hook can return a string derived from requestContext.request.http.headers.get('Accept-Language').
- shouldReadFromCache: if this hook returns false, the plugin will not read responses from the cache.
- shouldWriteToCache: if this hook returns false, the plugin will not write responses to the cache.
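The hook names above come from the plugin; the policies implemented below are our own examples (the `x-no-store` header in particular is a hypothetical convention, not part of the plugin):

```javascript
// Example hook implementations, to be passed to responseCachePlugin(cacheHooks).
const cacheHooks = {
  // Partition the cache by the client's language, so translated
  // responses don't leak across locales.
  extraCacheKeyData: (requestContext) =>
    requestContext.request.http.headers.get('accept-language') || 'en',

  // Let clients opt out of cached reads with Cache-Control: no-cache.
  shouldReadFromCache: (requestContext) =>
    requestContext.request.http.headers.get('cache-control') !== 'no-cache',

  // Skip writing when a (hypothetical) x-no-store header is present.
  shouldWriteToCache: (requestContext) =>
    !requestContext.request.http.headers.get('x-no-store'),
};
```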
In addition to the Cache-Control HTTP header, the response cache plugin will also set the Age HTTP header to the number of seconds the value has been sitting in the cache.
https://www.apollographql.com/docs/apollo-server/performance/caching/
IBM has a big article up today about working with Microformats. I know this is one of those “buzz heavy” items on the interwebs 2.0 these days. Ray Ozzie gives it a lot of play with the new Live Clipboard stuff coming out of MS — which I admit is a hella cool idea. Pat thinks they are really cool, and Calvin has grown Tails into a whole big deal now.
The problem is, however, they are stupid. They are a hack to get around a problem that is going away VERY shortly.
Take the IBM example:
Yay, you have an iCal microformat in your page. You can now use Tails to stick it right into your Google calendar. Neat.
The problem is, this is a serious abuse of HTML. The way you SHOULD have done this is:
```
<html:div>
  <vevent>
    <dtstart>20060501</dtstart><html:abbr>May 1</html:abbr>
    ...
```
Then present your iCal entry with CSS. Yes, we have waited years and years and years for Microsoft to get off their rears and implement CSS with namespaces, which everyone else has had for years. However, IE7 is around the proverbial corner, and we should finally get the option to embed actual real data into our HTML pages and style it. There is no reason to use semantically incorrect HTML and beat up on the class attribute.
And rather than create a whole new class of software for dealing with this hack — though ROME is going to add Microformat support with the 2.0 version — just use a regular parser and get the actual XML you want right out of the document. Moreover, we can get rid of the other stupid hacks we use right now, like putting RDF inside comments for machine-readable metadata, or adding tag attributes that don't conform to the spec's list of acceptable values.
Please, let's not continue to kludge our way on top of a web that is arcane at this point. It is an XML world. Live in it.
The thing that has annoyed me about microformats is that they brand some incredibly simple thing that has been used for ages, then make it out to be something spectacular and new, and go on the conference circuit making speeches on it. :)
Yup. I've wished for YEARS that I could do it the second way. Problem is, until the prevalent browsers (and you know who I'm talking about) support XHTML, you're stuck doing hacks. It's sad, really, it can't really be that hard...
Thank you for articulating this. Microformats are at best the HTML equivalent of duct tape. Sure, you can build a house out of the stuff, but really, why would you?!
If it were an XML world, we would use XML - which, admittedly, is much better at modeling this kind of data.
Sometimes the right way to do something isn't the best way. Once Vista ships, the whole Windows-using world won't instantly shift to IE 7. For better or worse, we're going to be stuck in a world where the majority of web browser users will be on IE6 or earlier for several years to come, and a large minority will be pre-IE7 for even longer.
Does anybody remember that this (the structured approach noted above) was what the XML inventors had in mind before the markup got redirected as an application integration format?
Will you please look at ? I suspect there's a side that this article is not telling, and it deserves mentioning in that article's Discussion ("Talk") page.
The most interesting part is that IE has actually supported custom tag styling for ages.
http://www.oreillynet.com/onjava/blog/2006/07/why_i_hate_microformats.html
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.