29 April 2011 20:36 [Source: ICIS news]
HOUSTON (ICIS)--BASF is seeking a price increase for US-produced acrylonitrile-butadiene-styrene (ABS).
The price increase nomination was for 4 cents/lb ($88/tonne, €59/tonne) effective 16 May, or as contracts allowed, according to a buyer.
The initiative brings BASF in line with other producers that are seeking similar increases.
SABIC Innovative Plastics announced a 5 cent/lb increase for its extruded pipe and sheet market effective 2 May, or as contracts allow.
US domestic ABS prices were assessed by ICIS at 148-168 cents/lb DEL (delivered) for extrusion material and 139-158 cents/lb for injection material.
One buyer said he expects the price nominations will go through, based on rising costs for acrylonitrile and butadiene.
"Feedstocks are so tight, and the availability as far as production is concerned is limited," the buyer said. "Until things start to ease up a little bit, they can increase their prices."
INEOS ABS also announced a price increase nomination of 5 cents/lb effective 1 June, on top of the 4 cents/lb they are seeking in May, according to a customer letter obtained by ICIS.
Major producers of ABS are BASF, INEOS ABS, SABIC Innovative Plastics and Styron.
($1 = €0.67)
You can use the following basic syntax to perform an inner join in pandas:
import pandas as pd

df1.merge(df2, on='column_name', how='inner')
The following example shows how to use this syntax in practice.
Example: How to Do Inner Join in Pandas
Suppose we have the following two pandas DataFrames that contain information about various basketball teams:
import pandas as pd

#create DataFrames
df1 = pd.DataFrame({'team': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H'],
                    'points': [18, 22, 19, 14, 14, 11, 20, 28]})

df2 = pd.DataFrame({'team': ['A', 'B', 'C', 'D', 'G', 'H'],
                    'assists': [4, 9, 14, 13, 10, 8]})

#view DataFrames
print(df1)

  team  points
0    A      18
1    B      22
2    C      19
3    D      14
4    E      14
5    F      11
6    G      20
7    H      28

print(df2)

  team  assists
0    A        4
1    B        9
2    C       14
3    D       13
4    G       10
5    H        8
We can use the following code to perform an inner join, which only keeps the rows where the team name appears in both DataFrames:
#perform inner join
df1.merge(df2, on='team', how='inner')

  team  points  assists
0    A      18        4
1    B      22        9
2    C      19       14
3    D      14       13
4    G      20       10
5    H      28        8
The only rows contained in the merged DataFrame are the ones where the team name appears in both DataFrames.
Notice that two teams were dropped (teams E and F) because they didn’t appear in both DataFrames.
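If you want to see exactly which rows an inner join would drop, one convenient option (sketched below with the same example data) is to combine an outer join with pandas' indicator=True flag:

```python
import pandas as pd

df1 = pd.DataFrame({'team': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H'],
                    'points': [18, 22, 19, 14, 14, 11, 20, 28]})
df2 = pd.DataFrame({'team': ['A', 'B', 'C', 'D', 'G', 'H'],
                    'assists': [4, 9, 14, 13, 10, 8]})

#an outer join keeps every row; indicator=True adds a _merge column that
#records whether each row came from both DataFrames or only one of them
merged = df1.merge(df2, on='team', how='outer', indicator=True)

#rows marked 'left_only' exist only in df1, so an inner join drops them
dropped = merged[merged['_merge'] == 'left_only']['team'].tolist()
print(dropped)
```

Here dropped comes back as ['E', 'F'], matching the two teams noted above.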
Note that you can also use pd.merge() with the following syntax to return the exact same result:
#perform inner join
pd.merge(df1, df2, on='team', how='inner')

  team  points  assists
0    A      18        4
1    B      22        9
2    C      19       14
3    D      14       13
4    G      20       10
5    H      28        8
Notice that this merged DataFrame matches the one from the previous example.
Note: You can find complete details on the merge function in the official pandas documentation.
Additional Resources
The following tutorials explain how to perform other common operations in pandas:
How to Do a Left Join in Pandas
How to Merge Pandas DataFrames on Multiple Columns
Pandas Join vs. Merge: What’s the Difference? | https://www.statology.org/inner-join-pandas/ | CC-MAIN-2022-40 | refinedweb | 371 | 69.55 |
Rank each Roman numeral symbol.
After you rank them, loop through the input and compare the current (ith) character with the previous ((i-1)th) character.
In particular, we compare their ranks.
Consider, for example, the case where we need to convert "IV" to 4. Initially, when we see the "I", we add 1 to the current count. However, when we immediately see the "V" right after the "I", we backtrack and subtract 2 (one for the count we incorrectly added and one for the conversion) from the current count, and then add 5. Thus, we have effectively added 4 to the current count.
The same goes for 90 ("XC"). We see "X", so we add 10 to the current count. However, when we see "C", we know that "C"'s rank is greater than "X"'s rank. Because of this, we need to correct ourselves by subtracting 20 and adding 100, effectively adding 90.
import java.util.HashMap;

public class Solution {
    public int romanToInt(String s) {
        // symbol values
        HashMap<Character, Integer> rte = new HashMap<Character, Integer>();
        rte.put('I', 1);
        rte.put('V', 5);
        rte.put('X', 10);
        rte.put('L', 50);
        rte.put('C', 100);
        rte.put('D', 500);
        rte.put('M', 1000);

        // symbol ranks, used to detect subtractive pairs like "IV"
        HashMap<Character, Integer> rank = new HashMap<Character, Integer>();
        rank.put('I', 1);
        rank.put('V', 2);
        rank.put('X', 3);
        rank.put('L', 4);
        rank.put('C', 5);
        rank.put('D', 6);
        rank.put('M', 7);

        char last = s.charAt(0);
        int count = 0;
        for (int i = 0; i < s.length(); i++) {
            char current = s.charAt(i);
            int rankA = rank.get(last);
            int rankB = rank.get(current);
            if (rankA == rankB) {
                count += rte.get(current);
            } else if (rankA < rankB) {
                // previous symbol was smaller: undo it, then apply the pair
                count -= 2 * rte.get(last);
                count += rte.get(current);
            } else {
                count += rte.get(current);
            }
            last = current;
        }
        return count;
    }
}
Dynamic Service Discovery with Java
Something for Nothing?
It is rightly said that you cannot get something for nothing. Those of you who have worked with multicast network services know that there isn't just one magic channel that all the multicast traffic goes out on—you still have to co-ordinate the server and client to use the same multicast address and port. Because this still must be kept in sync, it is indeed worth asking whether you have gained anything or merely moved the mountain (slightly). Here's where the idea pays off: even though different service applications will be running on different ports, the service discovery code only needs to run on one agreed-upon multicast channel. In other words, although you have shifted the mountain slightly, you only really need to climb up it one time. (Furthermore, the same service cannot run on the same port inside the same machine... however, any number of programs can join the same multicast channel. This lets each service instance run on any port available, again without needing to coordinate what those port numbers actually are.)
To be fair, maximum pay-off only comes if there is a system-wide service running in the machine that all applications (server- and client-side) can interact with. Each server-side application talks to this service and registers itself when it starts up, handing over its instance name and connection parameters (and unregistering when it stops running). The requests that come in over the network are actually responded to via this system-global service. (This is what Bonjour and various work-alikes are.) Your own project might not be of sufficient importance to install and run your own operating system service. (Then again, maybe it is, but my own project wasn't.) However, if you trade down a little bit, each service instance can easily run its own query responder on an agreed-upon multicast channel. This still allows for the actual service itself to run on whichever dynamic port is available (or whichever one suits the whims of whomever installs and runs it) while allowing for clients to find the service without any advanced knowledge. You really do seem to get something for nothing out of the deal.
Details
By way of a reminder, this "automagic" service discovery technique employs the use of a multicast address. The address range 224.0.0.0 through 239.255.255.255 (or alternately, 224.0.0.0/4) is a special range of IP addresses set aside for multicast use. They cannot be assigned to a specific host. (This being said, these addresses still have ports.) Furthermore, a handful of addresses in the 224.0.0.x address space are already reserved for specific use. You can get some good information and links by reading this wikipedia entry. You should also refer to the Java API documentation for the MulticastSocket class. (The Socket and ServerSocket classes are probably more familiar, but those are not used for multicast traffic.) Also, you might want to refer to the Java API documentation for the DatagramPacket class, which forms the transmission container for UDP data.
Just to give a brief example, multicast networking code looks as follows:
...
MulticastSocket socket = new MulticastSocket(9999);
InetAddress address = InetAddress.getByName("230.0.0.1");
socket.joinGroup(address);
...
DatagramPacket inboundPacket, outboundPacket;
...
socket.receive(inboundPacket);
...
socket.send(outboundPacket);
...
The sample code I will be working with uses the 230.0.0.1 address and the 4321 port. This is a rather arbitrary choice, and can easily be changed as needed. Also, instead of giving a step-by-step explanation of my particular implementation for the server-side responder code and client-side browser code, I will rather be showing a simple "time server" application and client, and then the steps needed to connect both sides to the provided implementation. It is left as an exercise to the reader to poke around in source code, but the biggest pieces involve rather vanilla networking and threading code.
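Although the article's examples are in Java, the responder idea is small enough to sketch in any socket API. Here, purely as an illustrative aside in Python, are the two helpers a per-instance responder needs: joining the agreed-upon channel, and decoding a reply. The "NAME PORT" message format is an invented example, not part of any standard:

```python
import socket
import struct

MCAST_ADDR, MCAST_PORT = "230.0.0.1", 4321   # the agreed-upon discovery channel

def make_discovery_socket():
    """Create a UDP socket bound to the discovery port and joined to the group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    # IP_ADD_MEMBERSHIP takes the group address plus the local interface
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_ADDR), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def parse_announcement(data):
    """Decode a b'NAME PORT' reply, e.g. b'TIMESERVER 9999'."""
    name, port = data.decode("ascii").rsplit(" ", 1)
    return name, int(port)

# A responder would then loop: receive a query on the socket, and
# sendto() an announcement naming the port its real service listens on.
```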
Time Server Example: Starting Point
Just to keep the demo simple, consider you want to write a TCP-based network service that, when connected to, immediately emits the machine's current time to the client and then disconnects. This is rather elementary networking code on both sides. First, the server:
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Date;

public class TimeServer {
    public static void main(String[] args) {
        ServerSocket serverSocket = null;

        /* section one */
        try {
            serverSocket = new ServerSocket();
            serverSocket.bind(
                new InetSocketAddress(InetAddress.getLocalHost(), 9999));
        } catch (IOException ioe) {
            System.err.println(
                "Could not bind a server socket to port 9999: " + ioe);
            System.exit(1);
        }

        /* section two */
        System.out.println("Server is now taking connections...");
        while (true) {
            try {
                Socket socket = serverSocket.accept();
                System.out.println("Connection from: " + socket.getInetAddress());
                OutputStreamWriter writer =
                    new OutputStreamWriter(socket.getOutputStream());
                writer.write(new Date().toString() + "\r\n");
                writer.flush();
                socket.close();
            } catch (IOException ie) {
                System.err.println("Exception: " + ie);
            }
        }
    }
}
POSIX Threads Explained, Part 3
Latest revision as of 08:09, December 31, 2014
Improve efficiency with condition variables
Condition variables explained
I ended my previous article by describing a particular dilemma: how does a thread deal with a situation where it is waiting for a specific condition to become true? It could repeatedly lock and unlock a mutex, each time checking a shared data structure for a certain value. But this is a waste of time and resources, and this form of busy polling is extremely inefficient. The best way to do this is to use the pthread_cond_wait() call to wait on a particular condition to become true.
It's important to understand what pthread_cond_wait() does -- it's the heart of the POSIX threads signalling system, and also the hardest part to understand.
First, let's consider a scenario where a thread has locked a mutex, in order to take a look at a linked list, and the list happens to be empty. This particular thread can't do anything -- it's designed to remove a node from the list, and there are no nodes available. So, this is what it does.
While still holding the mutex lock, our thread will call pthread_cond_wait(&mycond, &mymutex). The pthread_cond_wait() call is rather complex, so we'll step through each of its operations one at a time.
The first thing pthread_cond_wait() does is simultaneously unlock the mutex mymutex (so that other threads can modify the linked list) and wait on the condition mycond (so that pthread_cond_wait() will wake up when it is "signalled" by another thread). Now that the mutex is unlocked, other threads can access and modify the linked list, possibly adding items.
At this point, the pthread_cond_wait() call has not yet returned. Unlocking the mutex happens immediately, but waiting on the condition mycond is normally a blocking operation, meaning that our thread will go to sleep, consuming no CPU cycles until it is woken up. This is exactly what we want to happen. Our thread is sleeping, waiting for a particular condition to become true, without performing any kind of busy polling that would waste CPU time. From our thread's perspective, it's simply waiting for the pthread_cond_wait() call to return.
Now, to continue the explanation, let's say that another thread (call it "thread 2") locks mymutex and adds an item to our linked list. Immediately after unlocking the mutex, thread 2 calls the function pthread_cond_broadcast(&mycond). By doing so, thread 2 will cause all threads waiting on the mycond condition variable to immediately wake up. This means that our first thread (which is in the middle of a pthread_cond_wait() call) will now wake up.
Now, let's take a look at what happens to our first thread. After thread 2 calls pthread_cond_broadcast(&mycond), you might think that thread 1's pthread_cond_wait() will immediately return. Not so! Instead, pthread_cond_wait() will perform one last operation: relock mymutex. Once pthread_cond_wait() has the lock, it will then return and allow thread 1 to continue execution. At that point, it can immediately check the list for any interesting changes.
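This unlock-sleep-relock sequence is the same in every condition-variable API. As an illustrative aside (in Python rather than C, purely for compactness), here is the identical pattern using threading.Condition; note that the wait sits inside a loop so the condition is re-checked after every wakeup:

```python
import threading
from collections import deque

items = deque()                      # the shared "linked list"
cond = threading.Condition()         # bundles a mutex with a condition variable

def consumer():
    with cond:                       # lock the mutex
        while not items:             # re-check the condition after every wakeup
            cond.wait()              # atomically unlock, sleep, then relock
        return items.popleft()       # the mutex is held again at this point

def producer(value):
    with cond:                       # lock, modify the shared structure, unlock
        items.append(value)
        cond.notify_all()            # like pthread_cond_broadcast()
```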
Stop and review!
queue.h (C source code)

/* queue.h
** Copyright 2000 Daniel Robbins, Gentoo Technologies, Inc.
** Author: Daniel Robbins
** Date: 16 Jun 2000
*/

typedef struct node {
  struct node *next;
} node;

typedef struct queue {
  node *head, *tail;
} queue;

void queue_init(queue *myroot);
void queue_put(queue *myroot, node *mynode);
node *queue_get(queue *myroot);
queue.c (C source code)

/* queue.c
** Copyright 2000 Daniel Robbins, Gentoo Technologies, Inc.
** Author: Daniel Robbins
** Date: 16 Jun 2000
**
** This set of queue functions was originally thread-aware.  I
** redesigned the code to make this set of queue routines
** thread-ignorant (just a generic, boring yet very fast set of queue
** routines).  Why the change?  Because it makes more sense to have
** the thread support as an optional add-on.  Consider a situation
** where you want to add 5 nodes to the queue.  With the
** thread-enabled version, each call to queue_put() would
** automatically lock and unlock the queue mutex 5 times -- that's a
** lot of unnecessary overhead.  However, by moving the thread stuff
** out of the queue routines, the caller can lock the mutex once at
** the beginning, then insert 5 items, and then unlock at the end.
** Moving the lock/unlock code out of the queue functions allows for
** optimizations that aren't possible otherwise.  It also makes this
** code useful for non-threaded applications.
**
** We can easily thread-enable this data structure by using the
** data_control type defined in control.c and control.h.
*/

#include <stdio.h>
#include "queue.h"

void queue_init(queue *myroot) {
  myroot->head=NULL;
  myroot->tail=NULL;
}

void queue_put(queue *myroot, node *mynode) {
  mynode->next=NULL;
  if (myroot->tail!=NULL)
    myroot->tail->next=mynode;
  myroot->tail=mynode;
  if (myroot->head==NULL)
    myroot->head=mynode;
}

node *queue_get(queue *myroot) {
  /* get from root */
  node *mynode;
  mynode=myroot->head;
  if (myroot->head!=NULL)
    myroot->head=myroot->head->next;
  return mynode;
}
control.h (C source code)

#include <pthread.h>

typedef struct data_control {
  pthread_mutex_t mutex;
  pthread_cond_t cond;
  int active;
} data_control;
control.c (C source code)

/* control.c
** Copyright 2000 Daniel Robbins, Gentoo Technologies, Inc.
** Author: Daniel Robbins
** Date: 16 Jun 2000
**
** These routines provide an easy way to make any type of
** data-structure thread-aware.  Simply associate a data_control
** structure with the data structure (by creating a new struct, for
** example).  Then, simply lock and unlock the mutex, or
** wait/signal/broadcast on the condition variable in the data_control
** structure as needed.
**
** data_control structs contain an int called "active".  This int is
** intended to be used for a specific kind of multithreaded design,
** where each thread checks the state of "active" every time it locks
** the mutex.  If active is 0, the thread knows that instead of doing
** its normal routine, it should stop itself.  If active is 1, it
** should continue as normal.  So, by setting active to 0, a
** controlling thread can easily inform a thread work crew to shut
** down instead of processing new jobs.  Use the control_activate()
** and control_deactivate() functions, which will also broadcast on
** the data_control struct's condition variable, so that all threads
** stuck in pthread_cond_wait() will wake up, have an opportunity to
** notice the change, and then terminate.
*/

#include "control.h"

int control_init(data_control *mycontrol) {
  if (pthread_mutex_init(&(mycontrol->mutex),NULL))
    return 1;
  if (pthread_cond_init(&(mycontrol->cond),NULL))
    return 1;
  mycontrol->active=0;
  return 0;
}

int control_destroy(data_control *mycontrol) {
  if (pthread_cond_destroy(&(mycontrol->cond)))
    return 1;
  if (pthread_mutex_destroy(&(mycontrol->mutex)))
    return 1;
  mycontrol->active=0;
  return 0;
}

int control_activate(data_control *mycontrol) {
  if (pthread_mutex_lock(&(mycontrol->mutex)))
    return 0;
  mycontrol->active=1;
  pthread_mutex_unlock(&(mycontrol->mutex));
  pthread_cond_broadcast(&(mycontrol->cond));
  return 1;
}

int control_deactivate(data_control *mycontrol) {
  if (pthread_mutex_lock(&(mycontrol->mutex)))
    return 0;
  mycontrol->active=0;
  pthread_mutex_unlock(&(mycontrol->mutex));
  pthread_cond_broadcast(&(mycontrol->cond));
  return 1;
}
Debug time
One more miscellaneous file before we get to the biggie. Here's dbug.h:
dbug.h (C source code)

#define dabort() \
  { printf("Aborting at line %d in source file %s\n", __LINE__, __FILE__); abort(); }
We use this code to handle unrecoverable errors in our work crew code.
The work crew code
Speaking of the work crew code, here it is:
workcrew.c (C source code)

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include "control.h"
#include "queue.h"
#include "dbug.h"

/* the work_queue holds tasks for the various threads to complete. */
struct work_queue {
  data_control control;
  queue work;
} wq;

/* I added a job number to the work node.  Normally, the work node
   would contain additional data that needed to be processed. */
typedef struct work_node {
  struct node *next;
  int jobnum;
} wnode;

/* the cleanup queue holds stopped threads.  Before a thread
   terminates, it adds itself to this list.  Since the main thread is
   waiting for changes in this list, it will then wake up and clean up
   the newly terminated thread. */
struct cleanup_queue {
  data_control control;
  queue cleanup;
} cq;

/* I added a thread number (for debugging/instructional purposes) and
   a thread id to the cleanup node.  The cleanup node gets passed to
   the new thread on startup, and just before the thread stops, it
   attaches the cleanup node to the cleanup queue.  The main thread
   monitors the cleanup queue and is the one that performs the
   necessary cleanup. */
typedef struct cleanup_node {
  struct node *next;
  int threadnum;
  pthread_t tid;
} cnode;

void *threadfunc(void *myarg) {
  wnode *mywork;
  cnode *mynode;
  mynode=(cnode *) myarg;
  pthread_mutex_lock(&wq.control.mutex);
  while (wq.control.active) {
    while (wq.work.head==NULL && wq.control.active) {
      pthread_cond_wait(&wq.control.cond, &wq.control.mutex);
    }
    if (!wq.control.active)
      break;
    //we got something!
    mywork=(wnode *) queue_get(&wq.work);
    pthread_mutex_unlock(&wq.control.mutex);
    //perform processing...
    printf("Thread number %d processing job %d\n",mynode->threadnum,mywork->jobnum);
    free(mywork);
    pthread_mutex_lock(&wq.control.mutex);
  }
  pthread_mutex_unlock(&wq.control.mutex);
  pthread_mutex_lock(&cq.control.mutex);
  queue_put(&cq.cleanup,(node *) mynode);
  pthread_mutex_unlock(&cq.control.mutex);
  pthread_cond_signal(&cq.control.cond);
  printf("thread %d shutting down...\n",mynode->threadnum);
  return NULL;
}

#define NUM_WORKERS 4
int numthreads;

void join_threads(void) {
  cnode *curnode;
  printf("joining threads...\n");
  while (numthreads) {
    pthread_mutex_lock(&cq.control.mutex);
    /* below, we sleep until there really is a new cleanup node.  This
       takes care of any false wakeups... even if we break out of
       pthread_cond_wait(), we don't make any assumptions that the
       condition we were waiting for is true. */
    while (cq.cleanup.head==NULL) {
      pthread_cond_wait(&cq.control.cond,&cq.control.mutex);
    }
    /* at this point, we hold the mutex and there is an item in the
       list that we need to process.  First, we remove the node from
       the queue.  Then, we call pthread_join() on the tid stored in
       the node.  When pthread_join() returns, we have cleaned up
       after a thread.  Only then do we free() the node, decrement the
       number of additional threads we need to wait for and repeat the
       entire process, if necessary */
    curnode = (cnode *) queue_get(&cq.cleanup);
    pthread_mutex_unlock(&cq.control.mutex);
    pthread_join(curnode->tid,NULL);
    printf("joined with thread %d\n",curnode->threadnum);
    free(curnode);
    numthreads--;
  }
}

int create_threads(void) {
  int x;
  cnode *curnode;
  for (x=0; x<NUM_WORKERS; x++) {
    curnode=malloc(sizeof(cnode));
    if (!curnode)
      return 1;
    curnode->threadnum=x;
    if (pthread_create(&curnode->tid, NULL, threadfunc, (void *) curnode))
      return 1;
    printf("created thread %d\n",x);
    numthreads++;
  }
  return 0;
}

void initialize_structs(void) {
  numthreads=0;
  if (control_init(&wq.control))
    dabort();
  queue_init(&wq.work);
  if (control_init(&cq.control)) {
    control_destroy(&wq.control);
    dabort();
  }
  queue_init(&cq.cleanup);
  control_activate(&wq.control);
}

void cleanup_structs(void) {
  control_destroy(&cq.control);
  control_destroy(&wq.control);
}

int main(void) {
  int x;
  wnode *mywork;
  initialize_structs();
  /* CREATION */
  if (create_threads()) {
    printf("Error starting threads... cleaning up.\n");
    join_threads();
    dabort();
  }
  pthread_mutex_lock(&wq.control.mutex);
  for (x=0; x<16000; x++) {
    mywork=malloc(sizeof(wnode));
    if (!mywork) {
      printf("ouch! can't malloc!\n");
      break;
    }
    mywork->jobnum=x;
    queue_put(&wq.work,(node *) mywork);
  }
  pthread_mutex_unlock(&wq.control.mutex);
  pthread_cond_broadcast(&wq.control.cond);
  printf("sleeping...\n");
  sleep(2);
  printf("deactivating work queue...\n");
  control_deactivate(&wq.control);
  /* CLEANUP */
  join_threads();
  cleanup_structs();
}
Code walkthrough
Now it's time for a quick code walkthrough. The first struct defined is called wq, and contains a data_control and a queue header. The data_control structure will be used to arbitrate access to the entire queue, including the nodes in the queue. Our next job is to define the actual work nodes. To keep the code lean to fit in this article, all that's contained here is a job number.
Next, we create the cleanup queue. The comments explain how this works. OK, now let's skip the threadfunc(), join_threads(), create_threads() and initialize_structs() calls, and jump down to main(). The first thing we do is initialize our structures -- this includes initializing our data_controls and queues, as well as activating our work queue.
Cleanup special
Now it's time to initialize our threads. If you look at our create_threads() call, everything will look pretty normal -- except for one thing. Notice that we are allocating a cleanup node, and initializing its threadnum and TID components. We also pass a cleanup node to each new worker thread as an initial argument. Why do we do this?
Because when a worker thread exits, it'll attach its cleanup node to the cleanup queue, and terminate. Then, our main thread will detect this addition to the cleanup queue (by use of a condition variable) and dequeue the node. Because the TID (thread id) is stored in the cleanup node, our main thread will know exactly which thread terminated. Then, our main thread will call pthread_join(tid), and join with the appropriate worker thread. If we didn't perform such bookkeeping, our main thread would need to join with worker threads in an arbitrary order, possibly in the order that they were created. Because the threads may not necessarily terminate in this order, our main thread could be waiting to join with one thread while it could have been joining with ten others. Can you see how this design decision can really speed up our shutdown code, especially if we were to use hundreds of worker threads?
Creating work
Now that we've started our worker threads (and they're off performing their threadfunc(), which we'll get to in a bit), our main thread begins inserting items into the work queue. First, it locks wq's control mutex, and then allocates 16000 work packets, inserting them into the queue one-by-one. After this is done, pthread_cond_broadcast() is called, so that any sleeping threads are woken up and able to do the work. Then, our main thread sleeps for two seconds, and then deactivates the work queue, telling our worker threads to terminate. Then, our main thread calls the join_threads() function to clean up all the worker threads.
threadfunc()
Time to look at threadfunc(), the code that each worker thread executes. When a worker thread starts, it immediately locks the work queue mutex, gets one work node (if available) and processes it. If no work is available, pthread_cond_wait() is called. You'll notice that it's called in a very tight while() loop, and this is very important. When you wake up from a pthread_cond_wait() call, you should never assume that your condition is definitely true -- it will probably be true, but it may not. The while loop will cause pthread_cond_wait() to be called again if it so happens that the thread was mistakenly woken up and the list is empty.
If there's a work node, we simply print out its job number, free it, and exit. Real code would do something more substantial. At the end of the while() loop, we lock the mutex so we can check the active variable as well as checking for new work nodes at the top of the loop. If you follow the code through, you'll find that if wq.control.active is 0, the while loop will be terminated and the cleanup code at the end of threadfunc() will begin.
The worker thread's part of the cleanup code is pretty interesting. First, it unlocks the work_queue, since pthread_cond_wait() returns with the mutex locked. Then, it gets a lock on the cleanup queue, adds our cleanup node (containing our TID, which the main thread will use for its pthread_join() call), and then it unlocks the cleanup queue. After that, it signals any cq waiters (pthread_cond_signal(&cq.control.cond)) so that the main thread will know that there's a new node to process. We don't use pthread_cond_broadcast() because it's not necessary -- only one thread (the main thread) is waiting for new entries in the cleanup queue. Our worker thread prints a shutdown message, and then terminates, waiting to be pthread_joined() by the main thread when it calls join_threads().
join_threads()
If you want to see a simple example of how condition variables should be used, take a look at the join_threads() function. While we still have worker threads in existence, join_threads() loops, waiting for new cleanup nodes in our cleanup queue. If there is a new node, we dequeue the node, unlock the cleanup queue (so that other cleanup nodes can be added by our worker threads), join with our new thread (using the TID stored in the cleanup node), free the cleanup node, decrement the number of threads "out there", and continue.
Wrapping it up
We've reached the end of the "POSIX threads explained" series, and I hope that you're now ready to begin adding multithreaded code to your own applications. For more information, please see the Resources section, which also contains a tarball of all the sources used in this article. I'll see you next series! | http://www.funtoo.org/index.php?title=User:Akiress&diff=cur&oldid=8032 | CC-MAIN-2015-40 | refinedweb | 2,772 | 55.74 |
Subject: Re: [boost] [1.47] New libraries
From: Michael Caisse (boost_at_[hidden])
Date: 2011-03-17 12:41:26
On 03/17/2011 02:24 AM, Daniel James wrote:
> On 17 March 2011 08:52, Thomas Heller<thom.heller_at_[hidden]> wrote:
>
>> There are still some unresolved issues. Mainly how to deal with the
>> migration of Boost.Bind, Boost.Lambda and Phoenix V2. There hasn't
>> been any discussion regarding that.
>>
> I don't think it'd be a good idea to do anything to bind or lambda in
> this release. Was there a consensus on replacing them? Personally, I'd
> like to keep the existing Boost.Bind, even if in a different
> namespace. I appreciate its relative simplicity and portability.
>
> Daniel
I would like to echo this sentiment. IMHO a good plan would be to
release Phoenix V3 in 1.47 as a first class citizen. That will increase
the exposure of the library and provide more confidence to users in
selecting Phoenix over BLL and possibly Boost.Bind. As releases
continue, we can add forwarding headers and other mechanisms to
eventually deprecate BLL.
Thomas has done an excellent job at mitigating the technical issues for
migration to Phoenix; however, we should not hold up release of the
library while the logistics are being considered. | http://lists.boost.org/Archives/boost/2011/03/178707.php | CC-MAIN-2013-20 | refinedweb | 217 | 68.26 |
The objective of the first chapter is to quickly introduce you to the technology and then dive deep into understanding the fabric of the tool. The chapter also helps you set up the environment, which will be used throughout the book.
Chapter 1, Know Your Horse Before You Ride It, starts with a discussion of the various features of APEX. This gives the readers a heads-up about the features offered by the tool and informs them about some of its strengths. In order to understand the technology better, we discuss the various web server combinations possible with APEX, namely the Internal mod_plsql, External mod_plsql, and Listener configurations. While talking about the Internal mod_plsql configuration, we see the steps to enable the XML DB HTTP server. In the Internal mod_plsql configuration, Oracle uses a DAD defined in the EPG to talk to the database and the web server. So, we try to create a miniature APEX of our own by creating our own DAD and using it to talk to the database and the web server. We then move on to learn about the External mod_plsql configuration. We discuss the architecture and the roles of the configuration files, such as dads.conf and httpd.conf. We also have a look at a typical dads.conf file and draw correlations between the Internal and External mod_plsql configurations. We then move on to talk about the wwv_flow_epg_include_mod_local procedure, which can help us use the DAD of APEX to call our own stored PL/SQL procedures. Finally, we talk about APEX Listener, which is a JEE alternative to mod_plsql and is Oracle's direction for the future.
Once we are through with understanding the possible configurations, we see the steps to set up our environment. We use the APEX Listener configuration and see the steps to install the APEX engine, create a Weblogic domain, set the listener in the domain, and create an APEX workspace.
With our environment in place, we straight away get into understanding the anatomy of APEX by analyzing the various parts of its URL. This discussion includes a natter on session management, request handling, debugging, error handling, the use of TKPROF for tracing an APEX page execution, cache management, navigation, and value passing in APEX. We also try to understand the design behind the zero session ID in this section.
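As a quick reference for that discussion, a classic APEX URL packs most of these moving parts into the colon-separated p parameter of the f procedure: f?p=App:Page:Session:Request:Debug:ClearCache:itemNames:itemValues. The toy parser below simply makes the positions explicit; the host, port, and sample values are invented for illustration:

```python
from urllib.parse import urlparse, parse_qs

FIELDS = ("app", "page", "session", "request",
          "debug", "clear_cache", "item_names", "item_values")

def parse_apex_url(url):
    """Split an APEX f?p URL into its named, colon-separated positions."""
    p = parse_qs(urlparse(url).query, keep_blank_values=True)["p"][0]
    parts = p.split(":")
    # pad in case trailing positions were omitted from the URL
    parts += [""] * (len(FIELDS) - len(parts))
    return dict(zip(FIELDS, parts))

url = "http://localhost:8080/apex/f?p=100:1:13766599855150::NO:::"
print(parse_apex_url(url))
```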
Our discussions till now would have given you a brief idea about the technology, so we try to dig a little deeper and understand the mechanism used by APEX to send web requests to its PL/SQL engine in the database by decoding the APEX page submission. We see the use of the wwv_flow.accept procedure and understand the role of page submission. We try to draw an analogy between an APEX form and a simple HTML form to get a thorough understanding of the concept.
The next logical thing after page submission is to see the SQL and PL/SQL queries and blocks reaching the database. We turn on the database auditing and see the OWA web toolkit requests flowing to the database as soon as we open an APEX page.
We then broaden our vision by quickly knowing about some of the lesser known alternatives of mod_plsql.
We end the chapter with a note of caution and try to understand the most valid criticisms of the technology, by understanding the SQL injection and Cross-site Scripting (XSS).
After going through the architecture we straight away spring into action and begin the process of learning how to build the reports in APEX. The objective of this chapter is to help you understand and implement the most common reporting requirements along with introducing some interesting ways to frame the analytical queries in Oracle. The chapter also hugely focuses on the methods to implement different kinds of formatting in the APEX classic reports.
We start Chapter 2, Reports, by creating the objects that will be used throughout the book and installing the reference application that contains the supporting code for the topics discussed in the second chapter.
Next, we set up an authentication mechanism. We discuss external table authentication in this chapter. We then move on to see the mechanism of capturing the environment variables in APEX. These variables can help us set some logic related to a user’s session and environment. They also help us capture some properties of the underlying database session of an APEX session. We capture the variables using the USERENV namespace, the DBMS_SESSION package, and the owa_util package.
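A few of these lookups might look like the following sketch; the exact attributes the book captures may differ:

```sql
-- Sketch: reading session and environment values with SYS_CONTEXT
SELECT SYS_CONTEXT('USERENV', 'SESSION_USER') AS session_user,
       SYS_CONTEXT('USERENV', 'IP_ADDRESS')   AS ip_address,
       SYS_CONTEXT('USERENV', 'MODULE')       AS module
FROM   dual;

-- A CGI variable can be read through the OWA toolkit from a
-- web-invoked PL/SQL block, for example:
--   htp.p(owa_util.get_cgi_env('HTTP_USER_AGENT'));
```

Values from the USERENV namespace describe the database session behind the APEX session, while the CGI variables describe the HTTP request itself.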
After having a good idea of the ways and means to capture the environment variables, we build our understanding of developing the search functionality in a classic APEX report. While this is mostly a talk about the APEX classic report features, we also use this opportunity to see the process to enable sorting in the report columns, and to create a link that helps us download the report in the CSV format.
We then discuss various ways to implement the group reports in APEX. The discussion shows a way to implement this purely by using APEX’s features, and then talks about getting similar results by using Oracle database features. We talk about APEX’s internal grouping feature and the Oracle grouping sets. The section also shows the first use of JavaScript to manipulate the report output. It also talks about a method to use the SQL query to create the necessary HTML, which can be used to display a data column in a classic report. We take the formatting discussion further, by talking about a few advanced ways of highlighting the report data using APEX’s classic report features, and by editing the APEX templates.
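The grouping sets half of that comparison can be sketched as follows, assuming the book's oehr_employees table:

```sql
-- Sketch: per-department totals, per-job totals, and the grand total
-- in a single query using GROUPING SETS.
SELECT department_id, job_id, SUM(salary) AS total_salary
FROM   oehr_employees
GROUP  BY GROUPING SETS ((department_id), (job_id), ());
```

The empty set `()` at the end produces the grand total row, which would otherwise require a separate query or a UNION.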
After trying our hand at formatting, we try to understand the mechanism to implement matrix reports in APEX. We use matrix reports to understand the use of the database features, such as the with clause, the pivot operator, and a number of string aggregation techniques in Oracle. The discussion on the string aggregation techniques includes the talk on the LISTAGG function, the wm_concat function, and the use of the hierarchical queries for this purpose. We also see the first use of the APEX items as the substitution variables in this book. We will see more use of the APEX items as the substitution variables to solve the vexed problems in other parts of the book as well. We do justice to the frontend as well, by talking about the use of jQuery, CSS, and APEX Dynamic Actions for making the important parts of data stand out. We then see the implementation of handlers, such as this.affectedElements, and the use of jQuery functions, such as this.css(), in this section. We also see the advanced formatting methods using the APEX templates. We then end this part of the discussion, by creating a matrix report using the APEX Dynamic Query report region.
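To give a flavor of those database features, the following sketches show the pivot operator and LISTAGG against the book's oehr_employees table; the job values chosen are assumptions:

```sql
-- Sketch 1: employee counts per department, pivoted by job
SELECT *
FROM   (SELECT department_id, job_id FROM oehr_employees)
PIVOT  (COUNT(*) FOR job_id IN ('SA_REP' AS sa_rep, 'IT_PROG' AS it_prog));

-- Sketch 2: string aggregation with LISTAGG, one row per department
SELECT department_id,
       LISTAGG(last_name, ', ') WITHIN GROUP (ORDER BY last_name) AS employees
FROM   oehr_employees
GROUP  BY department_id;
```

Both techniques turn many rows into a single, report-friendly row, which is exactly what a matrix layout needs.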
The hierarchical reports are always intriguing, because of the enormous ability to present the relations among different rows of data, and because of the use of the hierarchical queries in a number of unrelated places to answer the business queries. We reserve a discussion on Oracle’s hierarchical queries for a later part of the section and start it, by understanding the implementation of the hierarchical reports by linking the values in APEX using drilldowns. We see the use of the APEX items as the substitution variables to create the dynamic messages. Since we have devised a mechanism to drill down, we should also build a ladder for the user to climb up the hierarchical chain. Now, the hierarchical chain for every person will be different depending on his position in the organization, so we build a mechanism to create dynamic breadcrumbs using the PL/SQL region in APEX. We then talk about two different methods of implementing the hierarchical queries in APEX. We talk about the connect by clause first, and then continue our discussion by learning the use of the recursive with clause for the hierarchical reporting. We end our discussion on the hierarchical reporting, by talking about creating the APEX Tree region that displays the hierarchical data in the form of a tree.
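The two querying styles can be sketched side by side against the book's oehr_employees table (column names assumed):

```sql
-- Sketch 1: the classic CONNECT BY form
SELECT LEVEL, employee_id, manager_id, last_name
FROM   oehr_employees
START WITH manager_id IS NULL
CONNECT BY PRIOR employee_id = manager_id;

-- Sketch 2: the equivalent recursive WITH form (Oracle 11gR2 and later)
WITH emp_chain (employee_id, manager_id, last_name, lvl) AS (
  SELECT employee_id, manager_id, last_name, 1
  FROM   oehr_employees
  WHERE  manager_id IS NULL
  UNION ALL
  SELECT e.employee_id, e.manager_id, e.last_name, c.lvl + 1
  FROM   oehr_employees e
  JOIN   emp_chain c ON e.manager_id = c.employee_id
)
SELECT * FROM emp_chain;
```

Both return the same rows; the recursive form is ANSI standard and more flexible, while CONNECT BY is terser and offers Oracle-specific operators such as SYS_CONNECT_BY_PATH.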
The reports are often associated with the supporting files that give more information about the business query. A typical user might want to upload a bunch of files while executing his piece of the task and the end user of his action might want to check out the files uploaded by him/her. To understand the implementation of this requirement, we check out the various ways to implement uploading and downloading the files in APEX.
We start our discussion, by devising the process of uploading the files for the employees listed in the oehr_employees table. The solution of uploading the files involves the implementation of a dynamic action to capture the ID of the employee on whose row the user has clicked. It shows the use of the APEX items as the substitution variables for creating the dynamic labels and the use of JavaScript to feed one APEX item based on another. We extensively talk about the implementation of jQuery in Dynamic Actions in this section as well. Finally, we check out the use of APEX’s file browse item along with the WWV_FLOW_FILES table to capture the file uploaded by the user.
Discussion on the methods to upload the files is immediately followed by talking about the ways to download these files in APEX. We nest functions, such as HTF.ANCHOR and APEX_UTIL.GET_BLOB_FILE_SRC, for one of the ways to download a file, and also talk about the use of dbms_lob.getlength along with the APEX format mask for downloading the files. We then engineer our own stored procedure that can download a blob stored in the database as a file. We end this discussion, by having a look at APEX’s p process, which can also be used for downloading.
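The shape of such a download procedure typically follows the OWA toolkit pattern sketched below. The table and column names are assumptions for illustration, not the book's exact code:

```sql
-- Sketch: stream a BLOB stored in a table back to the browser as a file.
CREATE OR REPLACE PROCEDURE download_file (p_id IN NUMBER) AS
  l_blob BLOB;
  l_name VARCHAR2(400);
BEGIN
  SELECT file_blob, file_name
  INTO   l_blob, l_name
  FROM   uploaded_files            -- assumed table
  WHERE  id = p_id;

  -- Emit the HTTP headers, then hand the BLOB to the gateway.
  owa_util.mime_header('application/octet-stream', FALSE);
  htp.p('Content-Length: ' || dbms_lob.getlength(l_blob));
  htp.p('Content-Disposition: attachment; filename="' || l_name || '"');
  owa_util.http_header_close;
  wpg_docload.download_file(l_blob);
END download_file;
/
```

The Content-Disposition header is what makes the browser offer a save dialog instead of rendering the bytes inline.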
AJAX is the mantra of the new age and we try our hand at it, by implementing the soft deletion in APEX. We see the mechanism of refreshing just the report and not reloading the page as soon as the user clicks to delete a row. We use JavaScript along with the APEX page process and the APEX templates to achieve this objective.
Slicing and dicing along with auditing are two of the most common requirements in the reporting world. We see the implementation of both of these using both the traditional method of JavaScript with page processes and the newer method of using Dynamic Actions. We extend our use case a little further and learn about a two-way interaction between a JavaScript function and a page process. We learn to pass the values back and forth between the two.
While most business intelligence and reporting solutions are a one way road and are focused on the data presentation, Oracle APEX can go a step further, and can give an interface to the user for the data manipulations as well. To understand this strength of APEX, we have a look at the process of creating the tabular forms in APEX. We extend our understanding of the tabular forms to see a magical use of jQuery to convert certain sections of a report from display-only to editable textboxes.
We move our focus from implementing the interesting and tricky frontend requirements to framing the queries to display the complex data types. We use the string aggregation methods to display data in a column containing a varray.
Time dimension is one of the most widely used dimensions in reporting circles, and comparing current performance with the past records is a favorite requirement of most businesses. With this in mind, we shift our focus to understand and implement the time series reports in APEX. We start our discussion by understanding the method to implement a report that shows the contribution of each business line in every quarter. We implement this by using partitioning dimensions on the fly. We also use the analytical functions, such as ratio_to_report, lead, and lag in the process of creating the time series reports. We use our understanding of the time dimension to build a report that helps a user compare one time period to another. The report gives the user the freedom to select the time segments which he wishes to compare. We then outwit a limitation of this report, by using the query partition clause for the data densification.
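As a sketch of those analytic functions, a period-over-period query might look like the following; the table and columns are assumptions:

```sql
-- Sketch: quarter-over-quarter change with LAG, plus each quarter's
-- share of the total with RATIO_TO_REPORT.
SELECT qtr,
       sales,
       sales - LAG(sales) OVER (ORDER BY qtr)         AS change_vs_prev_qtr,
       ROUND(RATIO_TO_REPORT(sales) OVER () * 100, 1) AS pct_of_total
FROM   quarterly_sales
ORDER  BY qtr;
```

LAG reaches back one row in the window ordering without a self-join, which is what makes these comparisons cheap to write and to execute.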
We bring our discussion on reports based on the time dimension to a close, by presenting a report based on the model clause. The report serves as an example to show the enormous coding possibilities that the model clause in Oracle opens up.
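A minimal sketch of the model clause's spreadsheet-like cell addressing, with an assumed table and figures:

```sql
-- Sketch: treat rows as cells and project next year's sales
-- as 10% growth over the previous year.
SELECT yr, sales
FROM   yearly_sales
MODEL
  DIMENSION BY (yr)
  MEASURES (sales)
  RULES (sales[2014] = sales[2013] * 1.1)
ORDER  BY yr;
```

The rule addresses individual cells by dimension value, much like a spreadsheet formula, which is difficult to express with plain joins and aggregates.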
Chapter 3, In the APEX Mansion – Interactive Reports, is all about Interactive Reports and dynamic reporting. While the second chapter was more about the data presentation using complex queries and presentation methods, this chapter is about taking the presentation part a step ahead, by creating more visually appealing Interactive Reports.
We start the discussion of this chapter, by talking about the ins and outs of Interactive Reports. We dissect this feature of APEX to learn about every possible way of using Interactive Reports for making more sense of data. The chapter has a reference application of its own, which shows the code in action.
We start our discussion, by exploring various features of the Actions menu in Interactive Reports. The discussion covers the search functionality, the Select Columns feature, filtering, linking and filtering Interactive Reports using URLs, customizing the Rows per page feature of an IR, using Control Break, creating computations in an IR, creating charts in an IR, using the Flashback feature and a method to see the backend flashback query, configuring the e-mail functionality for downloading and subscribing to reports, and methods to understand the download of reports in the HTML, CSV, and PDF formats.
Once we are through with understanding the Actions menu, we move on to understand the various configuration options in an IR. We talk about the Link section, the Icon View section, the Detail section, the Column Group section, and the Advanced section of the Report Attributes page of an IR. While discussing these sections, we understand the process of setting different default views of an IR for different users.
Once we are through with our dissection of an IR, we put our knowledge to action, by inserting our own item in the Actions menu of an IR using Dynamic Actions and jQuery.
We continue our quest for finding the newer formatting methods, by using different combinations of SQL, CSS, APEX templates, and jQuery to achieve unfathomable results. The objectives attained in this section include formatting a column of an IR based on another column, using CSS in the page header to format the APEX data, changing the font color of the alternate rows in APEX, using a user-defined CSS class in APEX, conditionally highlighting a column in IR using CSS and jQuery, and formatting an IR using the region query.
After going through a number of examples on the use of CSS and jQuery in APEX, we lay down a process to make any kind of change in an IR. We present this process with an example that changes one of the icons used in an IR to a different icon.
APEX also has a number of views for IRs, which can be used for intelligent programming. We talk about an example that uses the apex_application_page_ir_rpt view to show different IR reports on user request.
After a series of discussions on Interactive Reports, we move on to find a solution to a notorious problem of IRs. We see a method to put multiple IRs on the same page and link them as master-child reports.
We had seen an authentication mechanism (external table authentication) and data level authorization in Chapter 2, Reports. We use this chapter to see the object level authorization in APEX. We see the method to give different kinds of rights on the data of a column in a report to different user groups.
After solving a number of problems and learning a number of things, we create a visual treat for ourselves. We create an Interactive Report dashboard using Dynamic Actions. This dashboard presents different views of an IR as gadgets on a separate APEX page.
We conclude this chapter, by looking at advanced ways of creating the dynamic reports in APEX. We look at the use of the table function in both the native and interface approaches, and we also look at the method to use the APEX collections for creating the dynamic IRs in APEX.
Chapter 4, The Fairy Tale Begins – Advanced Reporting, is all about advanced reporting and pretty graphs. Since we are talking about advanced reporting, we see the process of setting the LDAP authentication in APEX. We also see the use of JXplorer to help us get the necessary DN for setting up the LDAP authentication. We also see the means to authenticate an LDAP user using PL/SQL in this section. This chapter also has a reference application of its own that shows the code in action.
We start the reporting building in this chapter, by creating the Sparkline reports. This report uses jQuery for producing the charts. We then move on to use another jQuery library to create a report with slider. This report lets the user set the value of the salary of any employee using a slider.
We then get into the world of the HTML charts. We start our talk, by looking at the various features of creating the HTML chart regions in APEX. We understand a method to implement the Top N and Bottom N chart reports in the HTML charts. We understand the APEX’s method to implement the HTML charts and use it to create an HTML chart on our own. We extend this technique of generating HTML from region source a little further, by using XMLTYPE to create the necessary HTML for displaying a report.
We take our spaceship into a different galaxy of the charts world and see the use of Google Visualizations for creating the charts in APEX.
We then switch back to AnyChart, which is a flash charting solution, and has been tightly integrated with APEX. It works on an XML definition and we talk about customizing this XML to produce different results. We put our knowledge to action, by creating a logarithmic chart and changing the style of a few display labels in APEX. We continue our discussion on AnyChart, and use the example of the Doughnut chart to understand advanced ways of using AnyChart in APEX. We use our knowledge to create Scatter charts, 3D stacked charts, Gauge charts, Gantt charts, Candlestick charts, Flash image maps, and SQL calendars.
We move out of the pretty heaven of AnyChart only to get into another beautiful space of understanding the methods of displaying the reports with the images in APEX. We implement the reports with the images using the APEX’s format masks and also using HTF.IMG with the APEX_UTIL.GET_BLOB_FILE_SRC function.
We then divert our attention to advanced jQuery uses, such as the creation of the Dialog box and the Context menu in APEX. We create a master-detail report using dialog boxes, where the child report is shown in the Dialog box.
We close our discussion in this chapter, by talking about creating wizards in APEX and a method to show different kinds of customized error messages for problems appearing at different points of a page process.
Chapter 5, Flight to Space Station: Advanced APEX, is all about advanced APEX. The topics discussed in this chapter fall into a niche category of reporting implementations. This chapter has two reference applications of its own that show the code in action.
We start this chapter, by creating both the client-side and server-side image maps in APEX. These maps are often used where regular shapes are involved. We then see the process of creating PL/SQL Server Pages (PSPs). PSPs are similar to JSPs and are used for putting the PL/SQL and HTML code in a single file. The loadpsp utility then converts this file into a stored procedure with the necessary calls to the OWA web toolkit functions. The next utility we check out is loadjava. This utility helps us load a Java class as a database object. This utility will be helpful if some Java classes are required for code processing.
We had seen the use of AnyChart in the previous chapter, and this chapter introduces you to FusionCharts. We create a Funnel chart using FusionCharts in this chapter. We also use our knowledge of the PL/SQL region in APEX to create a Tag Cloud.
We then stroll into the world of the APEX plugins. We understand the interface functions, which have to be created for plugins. We discuss the concepts and then use them to develop an Item type plugin and a Dynamic Action plugin. We understand the process of defining a Custom Attribute in a plugin, and then see the process of using it in APEX.
We move on to learn about Websheets and their various features. We acquaint ourselves with the interface of the Websheet applications and understand the concept of sharing Websheets. We also create a few reports in a Websheet application and see the process of importing and using an image in Websheets. We then spend some time to learn about data grids in Websheets and the method to create them. We have a look at the Administration and View dropdowns in a Websheet application.
We get into the administration mode and understand the process of configuring the sending of mails to Gmail server from APEX.
We extend our administration skills further, by understanding the ways to download an APEX application using utilities, such as oracle.apex.APEXExport.
Reporting and OLAP go hand in hand, so we see the method of using the OLAP cubes in APEX. We see the process of modeling a cube and understand the mechanism to use its powerful features.
We then have a talk about Oracle Advanced Queues that can enable us to do reliable communication among different systems with different workloads in the enterprise, and gives improved performance.
We spend a brief time to understand some of the other features of APEX, which might not be directly related to reporting, but are good to know. Some of these features include the locking and unlocking of pages in APEX, the Database Object Dependencies report, Shortcuts, the Data Loading wizard, and the APEX views.
We bring this exclusive chapter to an end, by discussing the various packages that enable us to schedule background jobs in APEX, and various other APEX and database APIs, which can help us in the development process.
Chapter 6, Using PL/SQL Reporting Packages, Jasper, and Eclipse BIRT, begins a new phase of our journey. Since we have explored most of the reporting features of APEX, we set our eyes on other technologies that can serve as extensions of APEX. We talk about creating reports in them and then using these reports in APEX. This chapter has two reference applications of its own that show the code in action.
We begin this chapter, by exploring some of the PL/SQL wrappers that can help us do the PDF printing. The chapter begins, by discussing the process of writing the code that can help us print the PDF reports using both PL/PDF and PL_FPDF. We then move on to see the less cost-intensive measures, and understand the process of coding that can help us generate the RTF, XLS, CSV, and HTML documents without any separate printing engine.
We move from no engine to explicit engines and see the process of deploying Apache Cocoon and Apache FOP on Weblogic, and using them for printing reports in most known formats in APEX.
While Cocoon and FOP were used for the report printing, the reports themselves were in APEX. We now check out technologies such as Eclipse BIRT and Jasper reports. We understand the process of installing these technologies, creating reports in them, and then using them in APEX through RESTful web services.
Chapter 7, Integrating APEX with OBIEE, takes us to a whole new world. We talk about OBIEE and BI Publisher in this chapter. These tools have been the face of Oracle’s business intelligence solution and are loaded with all kinds of the reporting features. The primary objective of introducing these technologies and showing their integration with APEX is to help you understand the strengths of each of them and to enable you to use them with APEX, whenever required. This chapter also has a reference application of its own.
We start our discussion in this chapter, by talking about the Oracle Fusion Middleware architecture since OBIEE is a part of it. We then spend some time to understand the wiring of OBIEE, its various components, and their uses. The talk includes discussion on the BI Server component and the BI Presentation Server component. After understanding the basics, we move on to see the process of creating a simple report in OBIEE. We learn about various OBIEE features, such as selection steps, filters, and hierarchies in the process.
Once there, we switch from the development mode to the observation mode and have a look at some of the features that OBIEE shows in its sample repository and web catalog. We check out a typical OBIEE dashboard, and understand KPIs (Key Performance Indicators) and KPI watchlists. We then pay some attention to create an action in OBIEE. Actions are OBIEE’s way of scheduling and delivering reports. We understand the OBIEE Mapviewer, which helps us use the external maps provided by Google and Navteq. This enables us to overlay our BI data on top of the maps to produce the best in class location intelligence solutions. We skim over OBIEE’s strategy management feature that helps us link business objectives and presents them in a number of solutions, such as Strategy Tree, Strategy Wheel, Cause and Effect Map, and Strategy Map.
We end our discussion of the features of OBIEE by looking at the process of configuring an SMTP server for OBIEE and understanding the delivery of e-mails, by scheduling an OBIEE agent, and monitoring it through OBIEE Job Manager.
Once we have a good idea about the possibilities with OBIEE, we start looking at the process of integrating OBIEE with APEX. We start, by using the web service approach and have a look at iBotService of OBIEE that can help us trigger an agent in OBIEE from APEX, and hence enable us to deliver the OBIEE reports by e-mails. We also look at OBIEE’s HTMLViewService, which enables us to fetch the OBIEE reports at APEX end using AJAX.
Finally, we have a look at a less secure method to use OBIEE in APEX. We check out the process of using OBIEE’s Go URL for linking it with APEX. The credentials and all the necessary information required by OBIEE are passed from APEX in the URL in this method. iFrames in APEX can also use Go URL and we have a discussion on this part as well.
After having a good discussion about OBIEE, we have a look at another interesting technology called BI Publisher. BI Publisher is not a new name for the APEX developers since it has historically been used for the report printing with APEX.
We have a brief talk about the tool and then dive into the process of creating a report in BI Publisher. The book ships with all kinds of code to make your life easier. We start by understanding the process of creating a data model in BI Publisher and then use the MS Word plugin of BI Publisher to create a template. We put both these pieces together to create a BI Publisher report.
Once our report is ready, we look at the process of scheduling this report, and delivering it via e-mail.
We then move into an exclusive zone in BI Publisher. We talk about the creation of barcode reports in BI Publisher. Note that we haven’t discussed these reports in any of the technologies so far, so this type of report makes BI Publisher unique. We then take our BI Publisher development skills to a whole new level, by looking at the possibilities of coding the dynamic reports in BI Publisher. This talk includes discussion on changing columns on the fly, changing groupings on the fly, changing the sort order, and much more.
Once we have seen some of the good reporting features of BI Publisher, it is time for us to get into the integration. We start this discussion with the traditional method of using the convert servlet of BI Publisher for the report printing. In this method, data and the template required by BI Publisher to print the report, are stored at the APEX end and BI Publisher just serves as a printing engine.
We then move on to see the use of BI Publisher’s web service for integration with APEX and finally check out a less secure method of using BI Publisher’s guest folder for using the BI Publisher reports in APEX.
Chapter 8, All About Web Services and Integrations, is one of my favorites. Till now, we have been talking about using web services in APEX, but this chapter teaches us about the creation of different types of web services. The spectrum ranges from PL/SQL, through the Resource Templates of the APEX Listener, to BPEL for the creation of web services. Not only this, the chapter also talks about using statistical tools, such as Oracle R, for predictive analysis in APEX and the use of the Google API to help us import data from external servers for advanced analytics.
We start the discussion of this chapter, by talking about creating the necessary ACL and assigning the right grants to schemas. We then set up the XMLDB services, which help us to project stored procedures and functions as web services. We then have a look at parsing the response of an invocation to a stored procedure using the XMLDB services. The stored function which is invoked returns different columns of a single record. We then devise a function that is capable of returning a bunch of records, parse its response, and display the result in a report region. We also see the process of passing the APEX items as inputs to the XMLDB services. We see a report region that uses XMLTable for parsing the web service response. Until now, we had been handling the stored procedures and functions through the XMLDB web services, but now we fire queries on tables using native web services, and parse their responses.
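Parsing such a response typically hinges on XMLTable. The following self-contained sketch uses inline sample XML in place of a real web service response:

```sql
-- Sketch: shredding an XML document into relational rows with XMLTable.
SELECT x.emp_name, x.salary
FROM   XMLTABLE('/employees/employee'
         PASSING XMLTYPE('<employees>
                            <employee><name>King</name><sal>24000</sal></employee>
                            <employee><name>Kochhar</name><sal>17000</sal></employee>
                          </employees>')
         COLUMNS emp_name VARCHAR2(30) PATH 'name',
                 salary   NUMBER       PATH 'sal') x;
```

In the real report region, the PASSING clause would receive the XMLTYPE built from the web service response instead of a literal.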
We then move on to see the process of configuring and creating the RESTful web services using Resource Templates. Resource Templates return JSON objects as the response, and we use a method to parse the response in our report regions.
Finally, we look at a process of creating a RESTful PL/SQL web service using DADs. We see the process of passing the arguments to this web service, by using the RESTful web service reference in APEX. We also have a look at the method to parse the response of this web service.
BPEL is the glue that can hold a variety of technologies together in a heterogeneous system. It also has a unique feature called Human Workflows and its associated worklist, which can help us code complex business decision processing systems and can help us transfer a job from the workstack of one user to the workstack of another. It is these wide applications of the technology that made us look into the various possibilities in BPEL and find ways to use them in APEX. We use this part of the text to install, configure, and create both the synchronous and asynchronous processes in BPEL, which can then be used in APEX just like any other SOAP web service. We also have a detailed discussion on using BPEL Human Workflows and present an example of using them.
Once we are through with BPEL, we have a short section dedicated to understanding the creation of the reports in the SAP Crystal reports, and using them in APEX. We also have short sections dedicated to the migration of data and code from MS Access to APEX and from Oracle Forms and Reports to APEX.
We bring our discussion of this chapter to an end, by elaborately discussing the use of Oracle R and the Google API in APEX. The talk on Oracle R implements a use case of finding the best places to put the ATM machines in an area of interest and of getting a PNG image that shows the plot of the points suggested by Oracle R for placing the ATM machines in the area of interest.
The discussion on the Google API, on the other hand, is about using the Google Places API for finding the latitude/longitude information of important landmarks, such as bus stops, airports, and so on. The process described in this section fetches information from the API, parses it, and presents the extracted information as the output of a SQL query. The SQL query that triggers the request to the Google API also sends the desired landmark type (for example, bus stop or airport), and the desired city. The parsed information from the response can then be combined with our BI information to find the correlation of landmarks with the location of our retail store. This is, however, just one use case, and the process can be applied to get data from a number of other Google APIs. The data can then be used in APEX for a variety of analyses.
Chapter 9, Performance Analysis, is all about tuning. The chapter informs you about the best development practices, and also introduces you to some of the database tools that can be used for the purpose of tuning the code.
The Tuning pointers for development section discusses the v() function, page and region caching, pagination schemes, the tuning of like comparisons, and many more topics.
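One of those pointers, the per-row cost of calling v(), can be sketched like this; the item name is an example:

```sql
-- Sketch: v() is a PL/SQL function call, so Oracle may evaluate it
-- once per candidate row in the query below.
SELECT * FROM oehr_employees WHERE department_id = v('P1_DEPARTMENT');

-- Binding the APEX item instead lets the engine pass the value once
-- as a bind variable, and the plan can be shared across sessions.
SELECT * FROM oehr_employees WHERE department_id = :P1_DEPARTMENT;
```

The same query, written with a bind, avoids the SQL-to-PL/SQL context switches that v() can introduce.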
Similarly, the talk on the database tools includes a natter about most of the features of the database that can help you hunt a bottleneck and eliminate it.
In a nutshell, the book is intended for all those who believe that making the technologies work in harmony and using their strengths to meet the objectives is a potent challenge. This book is for you, if you wish to spring into the action of APEX development from the time you hold this book in your hand. The book is designed for innovative architects and enthusiastic developers.
Point Multiple Subdomains To The Same Frontend

01 Sep 2020
Back in 2019 I built an online ticketshop for sports clubs. At its core, the shop was a webapp that processes payments and sends PDFs via email. When it came to customization, things got tricky: Each club had a different name, different pictures, and sometimes even different questions they wanted to ask their customers. To give each of the clubs a customized experience, we provided each of them with their own subdomain. Eventually there were six different frontend deployments, multiple branches, and the code bases started to diverge. Recently I learned that you can use DNS ARecords to route all requests under a certain domain to the same frontend. Thanks to DongGyun!
This article explains how you can point multiple subdomains to the same frontend deployment by creating DNS records and a static website with the AWS Cloud Development Kit (CDK). That will enable you to give each of your customers a customized experience, while having just one frontend deployment.
Shortcut: If you don’t need Infrastructure as Code (IaC), then an ARecord in Route 53 with
*.yourdomain.com that points to your existing CloudFront distribution gets you the same result.
The magic is in the chapter “Wildcard Routing”. Check out the full source code on GitHub.
Prerequisites
To deploy the solution of this article, you should have an AWS account and some experience with the AWS CDK. It’s also good to have an unused domain registered in Amazon Route 53, but we will learn how to use other providers and used domains as well.
This article uses CDK version 1.60.0. Let me know if anything breaks in newer versions!
Please bootstrap your account for CDK by running
cdk bootstrap. We will need this for the
DnsValidatedCertificate.
Optional: Understanding how DNS and especially nameservers work will help you a lot with troubleshooting potential routing issues.
The Solution
Let’s find a solution by putting us in the customers shoes. As a customer I want to go to
bear.picture.bahr.dev or
forest.picture.bahr.dev or any other address in the format
*.picture.bahr.dev and then see a picture matching the word at the beginning of the address. As a developer I want the least amount of complexity possible. Multiple frontend deployments increase complexity.
The request flow would look like this:
In this flow only the subdomain changes; everything else stays the same. At the core of the solution are wildcard ARecords which let us route traffic for any subdomain to a particular target. The website can then take the URL, extract the subdomain and ask for the right picture. In the next chapter we will take a look at each part in detail.
1. Create A Hosted Zone
To register DNS records in AWS, we need to create a Hosted Zone in Route 53. Each Hosted Zone costs $0.50 per month.
The Hosted Zone is easiest to set up if you have a domain that is managed by Route 53 and that you don’t use for anything else yet.
We will also look at how you can set up your Hosted Zone if you are already using your Route 53 domain for another purpose (e.g. your blog) or if that domain is managed by a different provider than Route 53.
Depending on who manages your domain (e.g. Route 53 or GoDaddy) and if you already use your apex domain for other websites, you have to tweak the solution a bit. In my example, I already use my apex domain
bahr.dev for my blog, and have the domain managed by GoDaddy. We will see how to specify the right records there in the following chapters.
Warning: Before deleting hosted zones, please make sure you delete all related records in the root hosted zone or third party provider. Dangling CNAME and NS records might allow an attacker to serve content in your name.
1.1. Fresh Domain That Is Managed By Route 53
This is the easiest path. All we need is a Hosted Zone for our domain.
import { HostedZone } from '@aws-cdk/aws-route53';
...
const domain = `bahr.dev`;
const hostedZone = new HostedZone(this, "HostedZone", {
  zoneName: domain
});
Route 53 can now serve DNS records for that domain.
1.2. Used Domain That Is Managed By Route 53
This assumes that you already have a Hosted Zone for your apex domain, use your apex domain for something different and want to use a subdomain instead. An apex domain is your top level domain, e.g.
bahr.dev or
google.com.
We need to tell the DNS servers that information about the subdomain is in another Hosted Zone and do this by creating a
ZoneDelegationRecord.
import { Duration } from '@aws-cdk/core';
import { HostedZone, ZoneDelegationRecord } from '@aws-cdk/aws-route53';
...
// bahr.dev is already in use, so we'll start
// at the subdomain picture.bahr.dev
const apexDomain = 'bahr.dev';
const domain = `picture.${apexDomain}`;

// as above we create a hosted zone for the subdomain
const hostedZone = new HostedZone(this, "HostedZone", {
  zoneName: domain
});

// add a ZoneDelegationRecord so that requests for *.picture.bahr.dev
// and picture.bahr.dev are handled by our newly created HostedZone
const nameServers: string[] = hostedZone.hostedZoneNameServers!;
const rootZone = HostedZone.fromLookup(this, 'Zone', {
  domainName: apexDomain
});
new ZoneDelegationRecord(this, "Delegation", {
  recordName: domain,
  nameServers,
  zone: rootZone,
  ttl: Duration.minutes(1)
});
A low time to live (TTL) allows for faster trial and error, as DNS caches expire quicker. You should increase it as you get ready for production.
We will later add ARecords, so that requests to
picture.bahr.dev and
*.picture.bahr.dev go to the same CloudFront distribution.
bahr.dev will not be affected.
1.3. Domain Is Managed By A Provider Other Than AWS
Again we will create a Hosted Zone in Route 53, but this time we need manual work to register the nameservers of our Hosted Zone with our DNS provider. To get started, first create a Hosted Zone through the AWS console.
This will give us a Hosted Zone with two entries for Nameservers (NS) and Start Of Authority (SOA). We will copy the authoritative nameserver, and tell our DNS provider to delegate requests to our Hosted Zone in AWS.
Copy the authoritative nameserver from the SOA record, go to your DNS provider and create a nameserver record, where you replace the values for
Name and
Value:
Type: NS
Name: picture
Value: ns-1332.awsdns-38.org
Use a specific value like
picture if you want to start at a subdomain like
*.picture.bahr.dev or use
@ if you want to use your apex domain like
*.bahr.dev.
Then use the following CDK snippet to import the Hosted Zone that you created manually.
import { HostedZone } from '@aws-cdk/aws-route53';
...
const domain = `picture.bahr.dev`;
const hostedZone = HostedZone.fromLookup(this, 'HostedZone', {
  domainName: domain
});
2. Certificate
Now that we have DNS routing set up, we can request and validate a certificate. We need this certificate to serve our website with https.
With the CDK we can create and validate a certificate in one command:
import { DnsValidatedCertificate, ValidationMethod } from "@aws-cdk/aws-certificatemanager";
...
const certificate = new DnsValidatedCertificate(this, "Certificate", {
  region: 'us-east-1',
  hostedZone: hostedZone,
  domainName: this.domain,
  subjectAlternativeNames: [`*.${this.domain}`],
  validationDomains: {
    [this.domain]: this.domain,
    [`*.${this.domain}`]: this.domain
  },
  validationMethod: ValidationMethod.DNS,
});
There’s a lot going on here, so let’s break it down.
First we set the region to
us-east-1, because CloudFront requires certificates to be in
us-east-1.
We then use the CDK construct
DnsValidatedCertificate which spawns a certificate request and a lambda function to register the CNAME record in Route 53. That record is used for validating that we actually own the domain.
The parameter
hostedZone specifies which Hosted Zone the certificate shall connect with. This is the Hosted Zone we created before.
domainName and
subjectAlternativeNames specify which domains the certificate should be valid for. The remaining parameters configure the validation process.
3. Frontend Deployment
With the certificate in place, we can create a Single Page Application (SPA) deployment via S3 and CloudFront. We’re using the npm package cdk-spa-deploy to shorten the amount of code required for configuring the S3 bucket and attaching a CloudFront distribution.
import { SPADeploy } from 'cdk-spa-deploy';
...
const deployment = new SPADeploy(this, 'spaDeployment')
  .createSiteWithCloudfront({
    indexDoc: 'index.html',
    websiteFolder: './website',
    certificateARN: certificate.certificateArn,
    cfAliases: [this.domain, `*.${this.domain}`]
  });
The
index.html can be an HTML file as short as
<p>Hello world!</p> and should be stored in the folder
./website.
In the browser we can use JavaScript to get the subdomain. The line of code below splits the URL
ice.picture.bahr.dev into an array
['ice', 'picture', 'bahr', 'dev'] and then picks the first element
'ice'.
const subdomain = window.location.host.split('.')[0];
With that information, the website can then contact the CMS to get the right assets for your customer.
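As a sketch, the client-side lookup might look like the following. The asset map and the fallback entry are hypothetical stand-ins for a real CMS call; in the browser you would pass `window.location.host` as the argument.

```javascript
// Hypothetical per-customer asset map; a real app would fetch this from a CMS.
const CUSTOMER_ASSETS = {
  bear: { title: "Bear", image: "/img/bear.jpg" },
  forest: { title: "Forest", image: "/img/forest.jpg" },
};

// Derive the customer key from the host name and look up their assets,
// falling back to a default entry for unknown subdomains.
function customerFor(host) {
  const subdomain = host.split(".")[0];
  return CUSTOMER_ASSETS[subdomain] || { title: "Default", image: "/img/default.jpg" };
}

console.log(customerFor("bear.picture.bahr.dev").image); // "/img/bear.jpg"
console.log(customerFor("unknown.picture.bahr.dev").title); // "Default"
```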
4. Wildcard Routing
And finally it’s time for the wildcard routing. With the CDK code below, all requests to
*.picture.bahr.dev and
picture.bahr.dev will be routed to the frontend deployment we set up above.
import { CloudFrontTarget } from "@aws-cdk/aws-route53-targets";
import { ARecord, RecordTarget } from '@aws-cdk/aws-route53';
...
// Route both the domain and its wildcard subdomains to the
// CloudFront distribution of the frontend deployment.
new ARecord(this, "ARecord", {
  zone: hostedZone,
  recordName: this.domain,
  target: RecordTarget.fromAlias(new CloudFrontTarget(deployment.distribution))
});

new ARecord(this, "WildcardARecord", {
  zone: hostedZone,
  recordName: `*.${this.domain}`,
  target: RecordTarget.fromAlias(new CloudFrontTarget(deployment.distribution))
});
Once all the DNS records have propagated, we can test our setup. Please note that deploying the whole solution sometimes takes 10 to 15 minutes.
Try It Yourself
Here’s the full CDK code that you can copy into your existing CDK codebase.
I suggest that you start with checking out the source code and adjust the domain and Hosted Zone to your needs. Add a
ZoneDelegationRecord if you need it. Make sure to run
cdk bootstrap if you haven’t done that yet.
import * as cdk from '@aws-cdk/core';
import { SPADeploy } from 'cdk-spa-deploy';
import { DnsValidatedCertificate, ValidationMethod } from "@aws-cdk/aws-certificatemanager";
import { CloudFrontTarget } from "@aws-cdk/aws-route53-targets";
import { HostedZone, ARecord, RecordTarget } from '@aws-cdk/aws-route53';

export class WildcardSubdomainsStack extends cdk.Stack {

  private readonly domain: string;

  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    this.domain = `picture.bahr.dev`;

    const hostedZone = new HostedZone(this, "HostedZone", {
      zoneName: this.domain
    });

    const certificate = new DnsValidatedCertificate(this, "Certificate", {
      region: 'us-east-1',
      hostedZone,
      domainName: this.domain,
      subjectAlternativeNames: [`*.${this.domain}`],
      validationDomains: {
        [this.domain]: this.domain,
        [`*.${this.domain}`]: this.domain
      },
      validationMethod: ValidationMethod.DNS
    });

    const deployment = new SPADeploy(this, 'spaDeployment')
      .createSiteWithCloudfront({
        indexDoc: 'index.html',
        websiteFolder: './website',
        certificateARN: certificate.certificateArn,
        cfAliases: [this.domain, `*.${this.domain}`]
      });

    // Route both picture.bahr.dev and *.picture.bahr.dev to the
    // CloudFront distribution created by the deployment above.
    new ARecord(this, "ARecord", {
      zone: hostedZone,
      recordName: this.domain,
      target: RecordTarget.fromAlias(new CloudFrontTarget(deployment.distribution))
    });

    new ARecord(this, "WildcardARecord", {
      zone: hostedZone,
      recordName: `*.${this.domain}`,
      target: RecordTarget.fromAlias(new CloudFrontTarget(deployment.distribution))
    });
  }
}
Now run
AWS_PROFILE=myProfile npm run deploy to deploy the solution. Replace
myProfile with whatever profile you’re using for AWS. Here’s more about AWS profiles.
The deployment may take somewhere between 10 and 15 minutes. Grab a coffee and let CDK do its thing. If you run into problems, check the troubleshooting section below.
Once the deployment is done, you should be able to visit any subdomain of the domain you specified (e.g.
bear.picture.bahr.dev for the domain
picture.bahr.dev) and see your website.
Troubleshooting
The DNS routing doesn’t work.
A high time to live (TTL) on DNS records can make changes difficult to test. Try to lower the TTL as far as possible.
If your domain is not managed by Route 53, make sure that the DNS routing from your DNS provider is set up correctly.
If you use your apex domain for something else, make sure to set up a
ZoneDelegationRecord that redirects traffic for your subdomain to your new Hosted Zone.
The deployment failed to clean up.
Depending on which step the deployment fails, not all resources can be cleaned up. This is most likely due to the CNAME record that the lambda function of the
DnsValidatedCertificate created. Go to the Hosted Zone, remove the CNAME record and delete the stack by running
cdk destroy or deleting it through the AWS console’s CloudFormation service.
Failed to create resource. Cannot read property ‘Name’ of undefined
Clean up the stack, remove and redeploy it. I’m not sure where that error comes from, but retrying fixed it for me.
The certificate validation times out.
Make sure you are using the right approach so that the required CNAME record will be visible to the DNS servers. If you’ve used your domain before, set up the right
ZoneDelegationRecord. This can be a bit tricky so feel free to reach out to me on Twitter.
Next Steps
Check out the full source code and try it yourself! If you’d like to contribute, a PR to cdk patterns is probably a good idea.
Further Reading
Enjoyed this article? I publish a new article every month. Connect with me on Twitter and sign up for new articles to your inbox! | https://bahr.dev/2020/09/01/multiple-frontends/ | CC-MAIN-2020-50 | refinedweb | 2,109 | 59.19 |
A few hours earlier, a developer named Azer Koçulu unpublished (or, depending on whom you believe, "liberated") more than 250 of his modules from the npm registry. The move was sudden, and stemmed from a disagreement between him and a lawyer representing a messaging app called Kik over one of his modules, which was, incidentally, also named
kik.
The main problem is: one of these packages was essential for a lot of other npm packages. The package in question was called
left-pad. Soon after, a lot of users reported that they can't install a lot of npm modules, most notably babeljs, due to this missing dependency. And so, the drama ensues.
While I choose to take no sides on this issue, I believe that Azer was entitled to his own opinion (well, a lot of annoyed programmers started to give him beef when it happened, anyways). In fact, the main problem actually lies with how the npm system works. There had also been posts criticising npm's packaging system long before the
left-pad disaster even occurred, most notably this article by Jonathan Ong.
Full disclosure: I’m not really an active node user. I’ve never really used it other than for running a few Grunt/Gulp scripts. While I agree with some of that post’s sentiments (in fact, I’ve grown an irrational hatred of nested modules, thanks to npm), I’m not going to beat around the bush. There are, in fact, a handful of problems that could pose a danger to the Node ecosystem when someone has unpublished a package.
In npm, packages share a global namespace, with names registered on a first-come, first-served basis. This could be an unfortunate case for some aspiring developer who wanted to publish his own package called
my-awesome-package, but couldn't because that name has been squatted by someone else with half-arsed code that hasn't been updated in forever.
However, it's a completely different story when you unpublish something from the npm registry. As soon as you unpublished a package, that namespace is disaccosiated from that package, and the name is up for grabs for everyone to use again.
Remember that one time, when someone published a malicious npm package which had a
preinstall hook that runs the
rm -rf /* command on your shell? If someone unpublished a fairly popular npm package, that package name is up for grabs again. Someone else could then take control of that package name, and publish his own version of the package, which bits of its code swapped out to include malicious code.
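For context, install-time hooks are declared in a package's package.json; a harmless illustration (the package name is made up) looks like this:

```json
{
  "name": "totally-harmless-package",
  "version": "1.0.0",
  "scripts": {
    "preinstall": "echo this runs automatically during npm install"
  }
}
```

Anything in that `preinstall` script runs on the machine of whoever installs the package, which is exactly why a hijacked package name is so dangerous.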
A more general problem that arises is that people just trust the developer of a package. Packages aren't approved by anyone, and they're often updated without checking the code. When npm "un-un-published"
left-pad, they transferred the ownership of it to someone who said they'd maintain it. This also means that people have to trust npm to pick someone trustworthy. And given that we still have no way to verify packages on npm, that code could be secretly swapped out to include malicious code pretty easily.
The fact that unpublishing a small npm package with only 17 lines of code led to packages breaking all over the place is a serious single point of failure, which really needs to be addressed properly. But unfortunately, nothing has been done, even after the whole
rimrafall fiasco. In fact, there has been an attempt to add package verification using GPG into npm, but unfortunately, the pull request was rejected.
You could literally write leftPad in the time it takes 'npm install left-pad' to complete.— Krøstian (@cjno) March 23, 2016
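The tweet has a point: the function is tiny. A minimal sketch of a left-pad (not the code of the original npm module) could look like this:

```javascript
// Pad `str` on the left with `ch` (default " ") until it is `len` characters long.
function leftPad(str, len, ch) {
  str = String(str);
  ch = ch === undefined ? " " : String(ch);
  while (str.length < len) {
    str = ch + str;
  }
  return str;
}

console.log(leftPad(7, 3, "0")); // "007"
console.log(leftPad("abc", 5)); // "  abc"
```

Which makes it all the more remarkable that so much of the ecosystem transitively depended on it.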
The first thing that we've learned, is that npm should really find a way to do away with its global package naming system. Something like
username/package would be a proper package naming scheme to use, because it's important that we could verify the the author of said package, while avoiding name squatting at the same time.
And finally, it's interesting that we could just blindly trust some globally-namespaced package without any form of verification. Just like how we trust secure websites with a green padlock in our browsers, I silently wait for the day that we could safely install npm packages, verified through a network of trust that would transparently verify any npm package, be it by package signing (like what Android does), or what have you.
Until then, just be careful about what you
npm install.
Thanks to @hexdefined for his help on this post! | https://resir014.xyz/posts/2016/03/23/npms-single-point-of-failure | CC-MAIN-2021-17 | refinedweb | 777 | 58.32 |
This walkthrough is a transcript of the Identifying Rows video available on the DevExpress YouTube Channel.
In this tutorial, you'll learn about the ways grid Views identify their rows.
Data source indexes refer to records in the bound list. You will use them for data editing. As you can expect each data row has a unique index, while group rows simply refer to the first available data row, and service rows return negative values.
Row handles are used by the grid View to identify rows of any type. Group rows have consecutive negative indexes, service rows have pre-defined values and data rows have positive indexes.
Finally, visible indexes enumerate all rows in the order they are visible on screen. These identifiers are mainly used to implement row navigation.
Now take a closer look at when to use each type of row identifier and how they differ from one another.
If you have plain data displayed by the grid, then these three identifiers are usually the same in each row. All of them are row indexes that start with 0.
Sorting Data
Sorting data is one way to see the difference between these identifiers. The order of records has changed, and the data source indexes followed. Same rows are identified by the same data source indexes but the order is now different. On the other hand, row handles and visible indexes are still consecutive integers starting with 0 and they match each other in each row.
Filtering Data
A similar effect is achieved when you filter rows. Data is reloaded, row structure is being re-built, and so visible indexes and row handles are updated to reflect the new structure, while data source indexes follow their corresponding rows.
Incorrect Using Row Handles
An important takeaway is that row handles and visible indexes change in response to user actions. Create a simple example to illustrate this point. The Save Index button in the Ribbon control will save the currently focused row's handle. For this purpose, declare an integer savedRowIndex field and assign the grid View's ColumnView.FocusedRowHandle property value to it.
int savedRowIndex;
private void barButtonSaveIndex_ItemClick(object sender, ItemClickEventArgs e) {
savedRowIndex = gridView1.FocusedRowHandle;
}
There's also the Change Value button. Its Click event handler uses the ColumnView.SetRowCellValue method to set the Name column cell to an empty string within the saved row.
private void barButtonChangeValue_ItemClick(object sender, ItemClickEventArgs e) {
gridView1.SetRowCellValue(savedRowIndex, colName, string.Empty);
}
Run the application and first focus the row with Audi A6. Click the Save Index button, then move focus away and finally click the Change Value button. As expected, the cell in the saved row has been changed.
Re-start the application. Now, first sort the Name column, then locate the row showing Audi A6. Save the row's handle, which is now 2, using the Save Index button. Then, clear sorting and notice how the row's handle has changed. So, if you press Change Value, the value will not change in the Audi A6 row that you saved.
Using Data Source Indexes Instead of Row Handles
To fix this, you need to modify the code so that it stores data source indexes instead of row handles. Then, in the Change Value handler, convert the stored index into a row handle and only then apply the change.
int savedRowIndex;
private void barButtonSaveIndex_ItemClick(object sender, ItemClickEventArgs e) {
savedRowIndex = gridView1.GetDataSourceRowIndex(gridView1.FocusedRowHandle);
}
private void barButtonChangeValue_ItemClick(object sender, ItemClickEventArgs e) {
int rowHandle = gridView1.GetRowHandle(savedRowIndex);
gridView1.SetRowCellValue(rowHandle, colName, string.Empty);
}
Run the application to see that the code works as expected now even when using data shaping operations such as sorting or filtering.
Differences Between Row Handles and Data Source Indexes
Next see what happens when you group data. One of the key differences between row handles and data source indexes is that row handles for group rows are negative integers. There are obviously no data source indexes for group rows, because they don't exist in the data source. So the values shown in group rows are indexes of the first data row within the group. One more thing worth noting is that row handles for data rows are always non-negative integers.
Iterating Through Rows Using Row Handles
If you want to iterate through all rows in the grid control's memory, you can simply enumerate row handles from 0 to the View's BaseView.DataRowCount property.
Take a look at the Clear Name button's Click event handler that does exactly this to clear values in the Name column for all currently loaded rows. The handler code is wrapped in BaseView.BeginUpdate and BaseView.EndUpdate method calls to avoid multiple updates to the View. It starts with a row handle equal to 0 and then enumerates all integers up to the BaseView.DataRowCount property value. The loop body calls the ColumnView.SetRowCellValue method to clear the value in the Name column.
private void barButtonClearName_ItemClick(object sender, ItemClickEventArgs e) {
gridView1.BeginUpdate();
int rowHandle = 0;
while (rowHandle < gridView1.DataRowCount) {
gridView1.SetRowCellValue(rowHandle, colName, string.Empty);
rowHandle++;
}
gridView1.EndUpdate();
}
Run the application. First filter the records to display only Audis. Click the button and see the names cleared. Now remove filtering and group data by Make. You'll see the Name column has been cleared in the Audi group, but other makes still have the data.
So only those rows that matched the filter criteria were loaded into the memory. If you press the Clear Name button now, the change will affect all rows – in expanded or collapsed groups.
Differences Between Row Handles and Visible Indexes
The grouped View also reveals an important difference between row handles and visible indexes. First, visible indexes still start with 0 and the value is incremented with each visible row – be it a group row or a data row. Secondly, you'll notice that row handles are already assigned to all rows loaded into the memory, including those in collapsed groups. Expand and collapse operations on group rows don't affect row handles. Visible indexes, on the other hand, will be recalculated with each expanded state change to account for rows that have become visible or hidden.
Using Visible Indexes
To illustrate the usage of visible indexes, implement a button that navigates to the next visible row in the View – an alternative to pressing the DOWN key. The handler first determines the focused row's visible index using the GridView.GetVisibleIndex method that takes a row handle as a parameter. Next the code increments the obtained visible index, and, finally, converts it back to a row handle value using the GridView.GetVisibleRowHandle method and sets the focus using this newly obtained handle.
private void barButtonNextRow_ItemClick(object sender, ItemClickEventArgs e) {
int visibleIndex = gridView1.GetVisibleIndex(gridView1.FocusedRowHandle);
visibleIndex++;
gridView1.FocusedRowHandle = gridView1.GetVisibleRowHandle(visibleIndex);
}
Run the application. You'll see that the button works in both plain and grouped views.
One last thing worth mentioning in this tutorial is that special types of rows, such as the New Item Row, are assigned pre-defined row handle values.
To see how you can use these predefined values, handle the ColumnView.BeforeLeaveRow event.
The grid control has static fields specifying them. This also includes the GridControl.InvalidRowHandle value that is returned by some methods if a row handle cannot be obtained. In the code, check whether the current row is the New Item Row, and if so, display a confirmation message box.
private void gridView1_BeforeLeaveRow(object sender, DevExpress.XtraGrid.Views.Base.RowAllowEventArgs e) {
if (e.RowHandle == DevExpress.XtraGrid.GridControl.NewItemRowHandle) {
DialogResult result = MessageBox.Show("Are you done editing the new record?", "Confirmation", MessageBoxButtons.YesNo);
e.Allow = (result == System.Windows.Forms.DialogResult.Yes);
}
}
Run the application. Focus the New Item Row and then try to change focus back to one of the data rows. If you click No, the focus will stay unchanged.
Grid Views provide methods allowing you to convert row identifiers into one another. To see how this works, analyze the handler that displays row index information in this application.
There are three columns: one displays visible indexes, another row handles, and the third data source indexes. They are unbound and are populated using the ColumnView.CustomUnboundColumnData event. The code first obtains the row handle using the data source index passed as a parameter. Then, the visible index is determined using the row handle. After that, all the values are displayed in their corresponding columns.
using DevExpress.XtraGrid.Views.Grid;
//...
private void gridView1_CustomUnboundColumnData(object sender, DevExpress.XtraGrid.Views.Base.CustomColumnDataEventArgs e) {
GridView view = sender as GridView;
if(view == null) return;
int rowHandle = view.GetRowHandle(e.ListSourceRowIndex);
int visibleIndex = view.GetVisibleIndex(rowHandle);
if (e.IsSetData) return;
if (e.Column.FieldName == "gridColumnRowHandle")
e.Value = rowHandle;
if (e.Column.FieldName == "gridColumnVisibleIndex")
e.Value = visibleIndex;
if (e.Column.FieldName == "gridColumnListSourceIndex")
e.Value = e.ListSourceRowIndex;
}
Now go over the key points once again.

Data source indexes refer to records in the bound list. They follow their rows through sorting, filtering and grouping, which makes them the right identifiers for storing row identity and for data editing.

Row handles identify rows of any type loaded into the View. Data rows get non-negative values, group rows get consecutive negative values, and special rows use pre-defined constants. Row handles change when data is sorted, filtered or grouped, but not when groups are expanded or collapsed.

Visible indexes enumerate rows in the order they appear on screen. They are recalculated on every expand or collapse operation, which makes them suitable for row navigation.
The Author mode of <oXygen/> XML Editor demonstrates a new productive way of authoring XML documents, similar to a word processor.
<oXygen/> makes XML document authoring easier than editing with an unstructured word processing application. The author's focus is on the semantics of the XML content he/she enters, while the formatting and layout are performed automatically by <oXygen/> XML Editor.
As always, <oXygen/> does not try to reinvent the wheel and does not lock users into custom formats. The WYSIWYG-like rendering is driven by CSS stylesheets conforming to the W3C CSS 2.1 specification. Some enhancements introduced by the W3C CSS 3 working draft, like CSS XML namespaces and the attr function, are also supported.
The simplest way to edit an XML document visually is to associate it with a CSS stylesheet that defines the styles for the XML elements the document is using. The association is done by inserting the standard xml-stylesheet processing instruction in the document header.
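As an illustration (the file name is hypothetical), the processing instruction sits right at the top of the document:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/css" href="article.css"?>
<article>
    <title>My visually edited document</title>
    <para>This text is rendered according to the rules in article.css.</para>
</article>
```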
The same file, in the Author visual editing mode:
The tagless editor comes with ready-to-use support for widely used XML frameworks: DITA, DocBook 4 / DocBook 5, TEI P4 / TEI.
You can create your own editing framework, similar to the ones that are preconfigured in <oXygen/>. See this section: Extensible XML Editor .
The CALS Table Model is a standard for representing tables in SGML/XML. The editor supports the CALS table model for DocBook and DITA. The HTML table model is supported for XHTML and DocBook.
In case you are customizing the editor for an XML framework that uses other types of tables, you can write a Java extension describing the number of rows and columns a cell may span. To describe which elements form the tables, rows and cells, you must use the standard CSS display property with the values "table", "table-row", "table-cell", etc. For more information, see the User Manual.
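For instance, a custom table vocabulary could be mapped onto the CSS table model like this (the element names are hypothetical):

```css
/* Map a custom table vocabulary onto the CSS table model */
customtable { display: table; }
row         { display: table-row; }
cell        { display: table-cell; }
```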
Using a form control is the most convenient way of presenting and entering data into a document. This allows less technical users to interact with the content of a document without being intimidated by the XML structure. oXygen XML Editor introduces form controls as a way of inputting/entering content and attribute values while working in the Author mode.
The oxy_editor CSS extension function allows you to edit XML attribute and element text values directly in the Author mode using form controls. Various implementations are available out of the box: text fields, pop-ups, comboboxes, checkboxes, buttons which invoke custom Author actions, date pickers, and URL choosers. You can also implement custom Java form controls.
Just press the Enter key and you will have the list of all XML elements that can be inserted at the caret position.
In the following figure, the word W3C is selected. By pressing ENTER, the word can be enclosed in XML markup; in our case, the acronym element.
The Author mode allows drag and drop editing. You can select the XML content, then drag and drop it in the desired location to move or copy that content.
<oXygen/> combines the easy way of working in a visual WYSIWYG-like XML editor with the full power of XML source editing. Also, the Outline view, which is synchronized with the edited XML document, shows the entire markup structure.
With the navigation links support it becomes easy to go from a DITA conref to the referred content, from a DocBook link to the target element, or from an XInclude reference to the included content, with just a single click.
The links are specified in the associated CSS using the custom property link. <oXygen/> can open the referred document and show the linked element each time you click on a link in the Author mode. The DITA, TEI and DocBook CSS stylesheets were updated for this new support. Links are automatically created for the included/referenced XML content: for XInclude, for DITA conref, and for DocBook link and xref elements.
<oXygen/> identifies the target element using its ID attribute or using an XPointer element scheme expression. You may read more about adding links and how to use them for your type of XML documents in the User Manual.
In the above figure, clicking the blue text makes the editor place the caret at the position of the referred element. Navigating through a large XML document has never been so easy!
The Find All Elements action is available in all the XML editor modes: Text, Author and Grid. It represents an easy way to search for XML elements by their tag names, attribute names and values.
Table of Contents
One of the most frequently asked questions that any Java developer needs to have the answer to is to how to write a prime number program in Java. It is one of the basic concepts concerning the leading high-level, general purpose programming language.
There are several ways of writing a program in Java that checks whether a number is prime on not. However, the basic logic remains the same i.e. you need to check whether the entered number (or already defined in the program) has some divisor other than 1 and itself or not.
The prime number program is an indispensable part of learning Java. Hence, most of the great books on Java covers it. Before moving forward to discuss the prime number program in Java, let’s first understand the concept of prime numbers and their importance.
Prime Numbers – The Definition and Importance
Any number that is only divisible by 1 other than itself is known as a primary number. 3, 5, 23, 47, 241, 1009 are all examples of prime numbers. While 0 and 1 can’t qualify for being a prime number, 2 is the only even prime number in the entire infinitely long set of prime numbers.
Prime numbers exhibit a number of odd mathematical properties that make them desirable for a wide variety of applications, many of which belongs to the world of information technology. For example, primes find use in pseudorandom number generators and computer hash tables.
There are several instances in the history of using encryption for hiding information in plain sight. Amazingly, it is the process of using prime numbers to encode information.
With the introduction of computers, modern cryptography was introduced. It became feasible to generate complex and longer codes that were much, much difficult to crack.
Most of the modern computer cryptography is dependent on making use of prime factors of large numbers. As prime numbers are the building blocks of whole numbers, they are of the highest importance to number theorists as well.
Check out more about the importance of prime number in IT security.
Prime Number Program in Java
As already mentioned, there are several ways of implementing a prime number program in Java. In this section, we’ll look at three separate ways of doing so as well as 2 additional programs for printing primes.
Type 1 – A Simple Program With No Provision for Input
This is one of the simplest ways of implementing a program for checking whether a number is a prime number or not in Java. It doesn’t require any input and simply tells whether the defined number (by the integer variable n) is a prime number or not. Here goes the code:
public class PrimeCheck{ public static void main(String args[]){ int i,m=0,flag=0; int n."); } } } }
Output:
3 is a prime number.
Type 2 – A Program in Java Using Method (No User Input Required)
This Java code demonstrates implementation of a prime number program that uses a method. Like the program mentioned before, it doesn’t ask for any user input and works only on the numbers entered to the defined method (named checkPrime) in the program. Here is the code:
public class PrimeCheckUsingMethod{ static void checkPrime(int n){ int i,m=0,flag."); } } } public static void main(String args[]){ checkPrime(1); checkPrime(3); checkPrime(17); checkPrime(20); } }
Output:
1 is not a prime number.
3 is a prime number.
17 is a prime number.
20 is not a prime number.
Type 3 – A Program in Java Using Method (Requires User Input)
This Java program is similar to the aforementioned program. However, this program prompts for user input. Here goes the code:
import java.util.Scanner; import java.util.Scanner; public class PrimeCheckUsingMethod2 { public static void main(String[] args) { Scanner s = new Scanner(System.in); System.out.print("Enter a number: "); int n = s.nextInt(); if (isPrime(n)) { System.out.println(n + " is a prime number."); } else { System.out.println(n + " is not a prime number."); } } public static boolean isPrime(int n) { if (n <= 1) { return false; } for (int i = 2; i < Math.sqrt(n); i++) { if (n % i == 0) { return false; } } return true; } } )
Sample Output:
Enter a number: 22
22 is not a prime number.
[Bonus Program] Type 4 – A Program in Java to Print Prime Numbers from 1 to 100
This code will demonstrate a Java program capable of printing all the prime numbers existing between 1 and 100. The code for the program is:
class PrimeNumbers { public static void main (String[] args) { int i =0; int num =0; String primeNumbers = ""; for (i = 1; i <= 100; i++) { int counter=0; for(num =i; num>=1; num--) { if(i%num==0) { counter = counter + 1; } } if (counter ==2) { primeNumbers = primeNumbers + i + " "; } } System.out.println("Prime numbers between 1 and 100 are :"\n); System.out.println(primeNumbers); } }
Output:
Prime numbers between 1 and 100 are:
2 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97
[Bonus Program] Type 5 – A Program in Java to Print Prime Numbers from 1 to n (User Input)
This Java program prints all the prime numbers existing between 1 and n, where n is the number entered by the user. Here is the code:
import java.util.Scanner; class PrimeNumbers2 { public static void main (String[] args) { Scanner scanner = new Scanner(System.in); int i =0; int num =0; String primeNumbers = ""; System.out.println("Enter a number:"); int n = scanner.nextInt(); scanner.close(); for (i = 1; i <= n; i++) { int counter=0; for(num =i; num>=1; num--) { if(i%num==0) { counter = counter + 1; } } if (counter ==2) { primeNumbers = primeNumbers + i + " "; } } System.out.println("Prime numbers between 1 and n are:"/n); System.out.println(primeNumbers); } }
Sample Output:
Enter a number: 22
Prime numbers between 1 and 22 are:
2 3 5 7 11 13 17 19
It’s Done!
That was all about the prime number program in Java. No matter at what skill level a Java developer is, it is very important to be able to write a program concerning prime numbers, at least for checking whether a given number (or set of numbers) is a prime or not.
Continued learning is very important for advancing in coding. A program that you are able to write now might be better when you write it after gaining new knowledge. If you’re looking to enhance your Java skill further, consider checking out some of the best Java tutorials.
Do you know some fascinating way of implementing a prime number program in Java? Care to share with us? Then you can do so by the comment window below. Thanks in advance. | https://hackr.io/blog/prime-number-program-in-java | CC-MAIN-2019-18 | refinedweb | 1,125 | 63.29 |
go to bug id or search bugs for
Description:
------------
with PHP, several methods are available to produce output:
echo "hello world\n";
print "hello world\n";
print_r("hello world\n");
var_export("hello world\n");
However all these methods have something in common: they do not produce their
own newline. With each method, the user is required to provide a newline with
"\n", PHP_EOL or similar. This is bothersome because many other programming
languages offer such a method. For example Python:
print('hello world')
Perl:
use feature say;
say 'hello world';
Ruby:
puts 'hello world'
Lua:
print 'hello world'
Even C:
#include <stdio.h>
int main() {
puts("hello world");
}
Out of the above examples, I would say Perl and Ruby are most similar to PHP, in
that they also have the "print" method:
$ perl -e 'print 2; print 3;'
23
$ ruby -e 'print 2; print 3;'
23
However even in this case, "print" can be made to produce a newline by default:
$ perl -e '$\ = "\n"; print 2; print 3;'
2
3
$ ruby -e '$\ = "\n"; print 2; print 3;'
2
3
My request would be, in order of preference:
1. the "echo" method be modified such that it produces newline by default
2. new method method be added, perhaps "say", that produces newline by default
3. a variable be introduced, "$\" or similar, that controls output record
separator
I understand that some of these methods are decades old and unlikely to change,
but if I do nothing then I have only myself to blame.
Add a Patch
Add a Pull Request
1 would break *lots* of backwards compatibility so it isn't even remotely an option.
2 is reasonable, though "say" doesn't really fit the PHP naming style.
3 seems to be the result of confusing this language with Perl.
Adding a new function/construct like "println" or "echoln" or whatever isn't too drastic a change, but you should still hop on the internals mailing list and see what people think about this idea.
@requinix that page has literally 20 mailing lists
perhaps you could name a specific one so that your suggestion will be constructive?
it has not that many ctrl+f "internals" while I must say that only the idea change the default here is insane to say it polite
@spam2 under the Internals heading 11 mailing list are shown, so thats not exactly helpful
further - that pages doesnt actually show what the email address is
so even if i knew the correct list how would i even send a message
WTF -
* there is a form
* there is "Internals list"
* there you can select which lists you want to subscribe
* below you enter your e-mail
* then you click "subscribe"
* below you see " You will be sent a confirmation mail at the address you wish to be subscribed or unsubscribed"
* as for any other mailing list you get a message
* that message explains how to post and unsubscribe
guess what: internals@lists.php.net
if that barrier is too high than for good reasons
@spam2 i think the "barrier" is too high for you
else you would realize that people can post to a mailing list without subscribing
however to do that, one would need to know what the email is
which is my point, that page in question *does not* list the email address
so by simple reading of the page and by your statement it is not readily possible to post without subscribing, as it should be.
> people can post to a mailing list without subscribing
and you think you gain what with that when you not receive answers expect from idiots with borken mail coiuents not knowing "reply to list button" and press "reply all" to flood people with duplicate swhile you not see ny response from people knowing how to use a mailing list?
and NO post from a rndom unsibsucribed email to a list is not how it works typically because very random spambot would spread it#s shit to hundrets our thounds of subscribers unfiltered
Feel free to ignore spam2.
The list email is internals@lists.php.net.
@requinix thank you
the emails have now been added:
finally is it possible to ban spam2? at best he has a tenuous grasp on english, and at worst he is actively and purposefully harming the PHP community
> actively and purposefully harming the PHP community
by expect common sense while finding out the address of internals takes 10 seconds? while ideas like "1. the echo method be modified such that it produces newline by default", even the thinking about it proves lack of common sense?
what would you do with "echo $a, $b, $c"?
echo is *not* a method at all to begin with
@spam2 you are a professional idiot.
but as the page this bug is hosted on might exist for many years into future, i feel it my duty to give a proper refutation. results of the example you gave:
echo $a, $b, $c;
are well defined in other programming languages. Python will print the objects separated by "sep" (default empty) and finally by "end" (default newline):
Perl will print the objects separated by "$," (default empty) and finally by "$\" (default empty):
Ruby will print the objects separated by "$," (default empty) and finally by "$\" (default empty):
And even Awk will print the objects separated by "OFS" (default space) and finally by "ORS" (default newline):
As you have proven your poor understanding of programming languages (and the english language), i would strongly suggest that you simply stop posting. However i know this warning will fall on deaf ears. dumb has no off button.
when I want the behavior of other programming languages I use them, that's why there are so many | https://bugs.php.net/bug.php?id=77684 | CC-MAIN-2022-27 | refinedweb | 956 | 56.83 |
Brian Jones - New Office file formats announced
- Posted: Jun 01, 2005 at 8:59 PM
- 243,087 Views
-uff said
Really Cool! I just got home and it was right when this got posted. I can't wait to see the Reference Schemas. You said it would be on the Microsoft XML page...when will it be up? Can you post a link to Brian's blog so we can all subscribe?
The other pages will be up soon.
Backward compatible: There will be updates to Office 2000, XP, and 2003 that will allow those versions to read and write this new format. You don’t have to use the new version of Office to take advantage of these formats. (I think this is really cool. I was a big proponent of doing this work)
nice
1) There are simpler file formats.
2) Even though the current suite can write the new file formats, the suite is still buggy & has high maintenance costs.
3) Some maverick within MS proposes writing a new Office suite to deal directly w/ the simpler file formats
4) The newly proposed Office suite is inherently simpler & therefore, more robust and has lower cost, the maverick says.
5) The maverick got the OK -- as long as he writes the new Office suite in .NET -- lower dev cost & all.
6) Right around Longhorn release, the maverick finishes Office.net and there is finally a reason to upgrade to Longhorn.
OK, here's why I want a new Office. When I open Word, there are 1,000 UI elements staring me in the face when all I want to do is type a note. Since Longhorn is all about visualization, how about some adaptive UI for the new Office? I'll thank you personally!
Hurrah!! Really, this is awesome news. Hopefully, people will take full advantage of this! At any rate, this is very very cool!
Yes, I still remember that Office 97 is able to open the new file formats of Office 2003 (it's shocking). You know, our company was only able to make a 2003-version app open up the file format of the 97 version.
Office 12 will still be able to open up Office 97 files and will still be able to save as the binary format that Office 97 will be able to open up.
By the way, why haven't you upgraded? Office 2003 is a LOT better than Office 97.
This means that we as programmers need a conversion tool for compatibility between Office 97 and Office 12.
I knew it. I use a legal Office 97 at the company, and a pirated Office 2003 at home.
No, you just need to stick with the old binary file format if you need Office 97 compatibility. No converter needed.
But it does make the given reasoning for MS not participating in the ODF design process seem all the more shallow.
Why not use .xml (or .msoffice) as a unified file extension for Office (including Word, Excel, PowerPoint, etc.)?
Then we would be able to open and edit the .xml in a unified IDE (Office, just like the Visual Studio IDE).
That would defeat the point of filename extensions under Windows. VS is an IDE, yet there's absolutely no need for all the files it handles to have the same filename extension. It would be a nightmare for your average users to understand.
Just like the .sln (XML) file: users don't care about what it is (VC, VC#, Word, Excel); they just need to double-click to open it up.
You can also use different icons to distinguish the different files: the icon gets the Word marking when Word exports a .xml file, and the Excel marking when Excel exports a .xml file.
Could you guys provide a plug-in (or Office service pack) for the older Office versions (97, 2000, XP, 2003) to directly open the new file format?
What happens when you save the document as docx? Does the excel object become a binary ole file inside the zip container, or does the object become an Excel xml referenced from the word xml file?
How does Word know which embedded objects can be persisted as xml files rather than binary ole objects?
Too long for an extension and too confusing for customers. Adding an 'X' is probably the best thing to do at this time.
Scoble already said, Office 2000, Office XP, Office 2003. That's it. No Office 97.
One default extension for all files??? Hmmm...
It's a great idea: all the documents are "objects", and they are independent serialized objects in XML format... I like this a lot.
Bye from Spain
El Bruno
Edit:
Why are the whitepapers in Word format? Surely they should be in PDF format?
Q. Are the formats going to be licence encumbered like the current (old) XML formats? I mean we have been able to look at XML from office for a while now, but to use the format you had to agree to all sorts of stuff that wasn't terribly attractive for the ISV.
Tom
Hey Scoble,
why the link to Mary Jo Clueless?
for example:
she says that
"Developers say there's a dirty little secret about Longhorn that few Softies are discussing publicly: Longhorn won't be based on the .Net Framework."
and later in the same trash:
"(Maybe Microsoft's revelation on Wednesday that the .Net Framework 2.0 beta is breaking applications has something to do with this? We're waiting for official word back from Microsoft.)"
this is just one of many times where her facts and information are false, wrong, misquotes or in some way missing key information.
I posted a reply to her "talkback" and I see it's gone. I guess she did not care for my telling her she was wrong and that I could make up junk like that easy and that perhaps I should work there ....
I don't have a problem with stories that ask questions or reveal facts .... but when it's a mess of bad quotes, half truths and guesswork it gets me angry!
I don't know - and really, no client that I know of has used open office, so I don't really care.
Would be nice to think this new openness will extend to whatever cool UI they put on Office 12. Be nice to be able to give our apps the same look and feel and save us all from reinventing the wheel.
Also, even though I'd imagine Office 12 is a little way off yet would MS consider releasing the updates for Office 2000, XP and 2003 earlier so that we can start building apps for it before Office 12 is released?
It is an evolution (therefore mostly the same). Clearly, it will be updated to incorporate new "Office 12" functionality. Another type of change is structural - some of the features, such as document properties, that are common across all the applications or are a type of content that is likely to be manipulated on its own have been broken out into their own separate "part." This'll make it easier to manipulate this type of information in a programmatic way in solutions. The other change is that things like images and OLE objects are stored in their native binary form rather than as encoded XML, since we've heard from customers that this is the preferred way to work with these types of things.
I actually don't know what's going on regarding namespaces - we'll have to hear from Brian on this. Regardless, "Office 12" will be able to read and write the Office 2003 XML as well, so solutions built today will continue to run..
Yeah, he's got a point, what about PST XML files?
But you have to keep reading.....
Um... please, no... XML is many things, but space-efficient it is not.
How about ditching .pst files and using a SQL backend for local Outlook storage instead?
Let me make sure I'm interpreting this correctly.
If I want to write a utility that converts (say) PowerPoint .pptx files into a series of static HTML pages... then do I have to license the patent or not?
What if I want to sell the utility?
That page is the license; by reading it you have the license. So yes, you can freely write (and sell) that utility without asking Microsoft or signing anything anywhere. You can even do that today with the Word and Excel 2003 XML formats.
What about Access and Publisher? No .mdbx or .pubx?
Access and Outlook's PST are database formats (i.e. the whole point is that you can add and remove data from them and search them quickly) so it wouldn't make any sense to turn them into XML-zips.
As for outlook using SQL Server locally, that would be cool from a geek point of view, but the fact that it doesn't exposes a real flaw with Microsoft's SQL Server embedded strategy. Although SSExpress is pretty lightweight, it's still an out-of-process server and another dependency that has to be installed for a given app to work. If MS made a version of SQL Server that was literally just a DLL you could link with your app, then applications like Outlook could use it. They should take a look at VistaDB.
Good Question!
In fact, in a "standard" ZIP file, the end of the file is where the "directory" is stored, so if you have half a file then you have no list of files to expand... There are tools that can scan a ZIP and recover some of the data, though... It's been a while since I had to fix a bad zip.
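To illustrate the point about the directory living at the tail of the archive, here is a small Python sketch. It builds a throwaway ZIP in memory (a stand-in for an Office package; the part name is made up), locates the End of Central Directory record, and shows that a half-downloaded file is unreadable:

```python
import io
import zipfile

# A stand-in for an Office package: a tiny ZIP built in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("document.xml", "<doc/>")
data = buf.getvalue()

# The End of Central Directory record (signature PK\x05\x06) sits at the
# very end of the archive; with no archive comment it is the last 22 bytes.
# Lose the tail and you lose the whole file listing.
eocd = data.rfind(b"PK\x05\x06")
print("EOCD at offset", eocd, "of", len(data), "bytes")

# Truncating the file to its first half makes it unreadable as a ZIP.
try:
    zipfile.ZipFile(io.BytesIO(data[: len(data) // 2]))
except zipfile.BadZipFile as e:
    print("truncated archive rejected:", e)
```

Any conforming ZIP reader behaves the same way, which is exactly why recovery tools have to rescan the local file headers instead.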
I *still* can't add ExcelML to my apps because of clients and even employees not yet using Office 2003. Giving outdated Office users a path for backwards compatibility will remove developer worries about moving forward with these formats.
A few questions:
1. Can we still load *uncompressed* XML files if there is no need for an archive? When they save, will they retain their uncompressed state if nothing is embedded that requires enclosure?
2. Why Zip files? Obviously decent compression, but why not one of the XML-based schemas for XML and binary enclosures, (e.g., Echo/RSS)?
I just RTA, found the answer...
3. Will Office VBA developers have direct access to the XML DOM? That would be *sweet*.
4. Will the Excel ODBC driver be updated to work with these XLSX files natively? If so, will the 255-character ROWSCAN truncation issue be resolved?
5. Will an entire Excel workbook be considered a single XML file as before, or will worksheets be stored as separate XML files? (My vote is for the same single-file format, FWIW.) Never mind, found the answer in the article...
6. For PowerPoint, I would love to see the opposite: separate files for each slide, allowing easy programmatic means of compiling presentations from libraries of "best practice" slides. PPT is actually the most exciting prospect of the three to be able to access via ZIP code.
7. For Word forms, where will current form data be stored? In a separate file like Acrobat's XFDF?
8. For secure files, which ZIP encryption standard will you be employing, if any? The WinZip method or the PKZip method? I seem to remember that there was some patent frap about this a few years ago...
9. While you are at it, any built-in support for SVG in any of the applications?
You guys been learning from people who choose not to patent?
This is PATENTLY UNTRUE. Read the page again.
You need to include the following language in your products (prominantly displayed in both the license and documentation):
"This product may incorporate intellectual property owned by Microsoft Corporation. The terms and conditions upon which Microsoft is licensing such intellectual property may be found at."Simply reading the notices is NOT sufficient.
Absolutely - I stand corrected. Sorry for the confusion! I am not a lawyer and should not be trying to interpret legalese.
But can't that be said of any application that is written in .NET and runs on Windows anything?
Not sure what your point is. For those wanting more clarity around the licensing, the following page is very good:
Hey, I was just feeling trollish this morning
Btw, the /. crowd is complaining that this license is too restrictive (since it apparently includes language that allows Microsoft to revoke the license if you sue Microsoft).
So, if you have an Excel file embedded in a Word document, you could crack open that Word document and you would find a .xslx file. You could then crack that Excel file open too if you wanted and see what it has embedded.
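As a rough sketch of what this layered structure looks like on disk, the following Python snippet builds a toy container with `zipfile`. The part names are illustrative stand-ins, not the real Office package layout (which also carries content types and relationship parts):

```python
import io
import zipfile

# A toy ".xlsx": itself just a ZIP of XML parts.
inner = io.BytesIO()
with zipfile.ZipFile(inner, "w") as xlsx:
    xlsx.writestr("xl/worksheets/sheet1.xml", "<worksheet/>")

# A toy ".docx" embedding that spreadsheet as a binary part.
outer = io.BytesIO()
with zipfile.ZipFile(outer, "w") as docx:
    docx.writestr("word/document.xml", "<document/>")
    docx.writestr("word/embeddings/workbook.xlsx", inner.getvalue())

# "Cracking open" the Word document is just reading a ZIP...
with zipfile.ZipFile(io.BytesIO(outer.getvalue())) as docx:
    parts = docx.namelist()
    embedded = docx.read("word/embeddings/workbook.xlsx")

# ...and the embedded spreadsheet opens the same way.
with zipfile.ZipFile(io.BytesIO(embedded)) as xlsx:
    inner_parts = xlsx.namelist()

print(parts)
print(inner_parts)
```

The nesting can continue as deep as the embeddings go, with ordinary ZIP tooling at every level.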
Metro is closer to that all-in-one file format, as it is a PDL, a persistence format (Metro Reach), and a printer spool all in one.
The Document API takes into account a container (Metro uses ZIP also) and document manipulations, although it does appear to lose some of the fidelity (print to Metro from PPT appears to lose the round trip without massive XSLT work).
Since Metro will work with the new font engine in Avalon, any chance that O12 will have access to the new OpenType features that Avalon documents will support?
Douglas.
This is what Open Office does.
My reaction is more: "Finally, you goddam M#()$ F_#@ers..."
I've always wondered how we delude ourselves that we live in the Information Age when that information is locked up by some rich c&$*s in America. F#$& you.
Go ahead and censor this.
However, there were no answers about Microsoft providing XSL transforms into DocBook or XHTML - third parties have to write their own (and the Office schemas "keep changing").
It would be nice to support the format that was designed for technical documents natively in Office, though. It would also be nice to see the Book format in Office completely overhauled so that it works, especially with distributed writing responsibilities.
Perhaps a .bookx format, with a top-level container that holds the outline of the book and references the actual .docx files that make up the chapters.
Although I haven't tried the compound document platform in 2k3, I know in 97, 2k, and XP it was almost unbearable to work with at times, especially when multiple writers were involved.
douglas
Isn't that similar to IE? Just an MSHTML DLL that apps link to?
If it is, I'm getting a spooky sense of déjà vu.
Got to love the Slashdotters.
Also, you make me sick.
Exactly! Though I won't bring myself to express it in this manner.
I agree with this sentiment, once a document is loaded into Word, for example, I'd rather manipulte the XML, much more powerful than using the old API because frankly the API doesn't do stuff that I would like it too (for example inserting a subdocument between two other subdocuments).....
It will make the document's content technology proof given that Office has such a wide customer base.
Long-term storage of documents/data is becoming a problem as they are tied to the version of an application. The change from O97 to O2000 is a case in point. Documents may have to be kept for 30 years or more.
A rule of thumb has always been to save the info in ASCII format. However, given that the new docs will be compressed it seems that bit 8 is still alive.
In this respect one assumes that the ZIP format in use is not proprietary and will still be available in 30 years' time.
Who cares, I hear you say!
Well, it could be information relating to your mortgage; the one that you pass down to your children.
The Office Open XML Formats use the same ZIP/XML conventions that Metro uses. So, you can use System.IO.Packaging in the WinFX SDK to open and manipulate the format. In fact, we'll be showing this at our TechEd session next week.
Now, the content is clearly different, as Metro is a fixed file format whereas the Office Open XML Formats are for the rich document information needed for manipulating Office documents in a collaborative environment, which includes display, metadata, change revisions, comments, etc...
We will clearly provide tools and help to developers who want to work with files in these formats. It's just a bit too early to be able to give specifics. With the Office 2003 schemas we have already shipped a transform to HTML for Word in the Word Viewer. Simply install it, and you can find the XSL in the Office directory in the program files tree.
DocBook is an interesting thing, being a combination of data elements and display XML. A straight transform wouldn't be possible unless you gave the user a way to define the data-aspects of it as well.
What's the story with VBA and VSTO customisations?
For development, will we be able to store the xml and VBA inside it in plain text, so we can easily use source-code control, merging changes etc?
For production, will we be able to protect the file, such that the VBA code can't be viewed?
Will we be able to store VSTO assemblies in the file, so they can be distributed much more easily than at present?
Will I be able to digitally sign parts of the xml file, use DRM on bits of it etc?
The new stuff will abide by the same terms as the old stuff. We worked closely with a number of customers and governments to ensure the terms met their bars for openness; we don't see a reason to change them.
What do you mean by 'preview code?' We will release patches that allow versions back to and including Office 2000 to read and write files in the new format.
Hey Stephen, I answered a few of these questions back on my blog.
You have access to each part since it's just ZIP, so if you wanted you could sign and/or encrypt each part. We aren't going to have any built in functionality though for that level of granularity from directly within Office though. It would need to be part of a seperate solution.
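A minimal sketch of that kind of per-part processing in Python, here computing a digest for each part as a stand-in for signing. The part names are invented for illustration, and a real signing solution would also cover the package relationships rather than just hashing raw bytes:

```python
import hashlib
import io
import zipfile

# Build a small package to work against (illustrative part names only).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("word/document.xml", "<document/>")
    zf.writestr("docProps/core.xml", "<coreProperties/>")

# Because the container is plain ZIP, every part is individually
# addressable: here each one gets its own SHA-256 digest.
digests = {}
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    for name in zf.namelist():
        digests[name] = hashlib.sha256(zf.read(name)).hexdigest()

for name, digest in sorted(digests.items()):
    print(name, digest[:16])
```

Signing or encrypting would then operate on these per-part byte streams, exactly the granularity the ZIP layout exposes.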
-Brian
People called it a preview, and referred to a Microsoft URL containing "preview". That's why I said preview.
So, are you releasing these patches on Monday? Or is that just rumors someone spawned?
to open the new Office 12 file formats up directly, with no .NET Framework and WinFX needed.
Comment removed at user's request.
Presumably the interior of the document is encrypted.
Probably a misinterpretation. We have a 'preview Web site' we just put up: It doesn't have much on it now, but it does give you a way to sign up to receive notices of future information. We wouldn't release any patches outside of a formal beta because it all needs to work together.
There are a number of types of protections that can be applied; we will do the appropriate thing for each case. When you really want to control what people can do with content, you would use something like Rights Management. When you do this, the document is encrypted - it needs to be that way in order to enforce the rights. Sorry, no easy XML access. If you're using something lightweight like range locking in Word 2003, where the purpose is to create a more robust template or solution in order to help protect honest end users from messing up, the password is encrypted but everything else is in XML. This could be opened up and abused through XML, but then that feature is not intended for high security. There's a range of options between these, depending on the intended use of the feature. (I can see that we should document all these types of cases somewhere ... thanks.)
So a CompoundFileContainer generates basically a ZIP file?
--edit: Nevermind, in the Beta1 RC it has been renamed to ZipPackage. Question answered.
Windows is getting inspired by Linux....weeee...lol
Interesting question. The ideal case would be the latter.
The answer is to not use OLE but .NET and XML where possible. When you have the ability to see Office as a single program, with Word, Excel, etc. only plug-ins, it should not be real rocket science (but still a lot of work).
-- Ok, I truly have no idea how the next office will implement this.
But I think there is a lack of innovation, because some of the ideas from OLE aren't realized even today.
In an ideal case you have only one program (Windows) and one file format (XML). According to the Unix idea of everything-is-a-file, there should be an everything-is-an-XML-element mentality. And so you can begin to treat the whole filesystem as a (virtual) single XML file. Instead of file types, the XML namespace could be used.
-- I think something like this is the idea behind WinFS.
If the "Main Program" Windows opens an File (XML-Section with an special file-namespace in the Analogy) it should look in the registry with program is registered for the given Filetype (namespace). The Program is then started with is then responsible for loading and handling the file (section).
When the program (e.g. Word) is up and the file (e.g. .docx) contains a namespace the program can't handle by itself (e.g. an .xlsx subsection), the program which is registered for it is loaded and used to handle it.
-- Instead of inline XML data, there could also be an XLink and/or XInclude reference.
The presentation and handling of the section works mostly like OLE in the current approach.
A more advanced approach would be to use a common rendering environment - something like Avalon. Instead of returning a rectangular bitmap representation of the subsection, the handling program could return a stream of Avalon objects.
In the example given above, Word could host the whole file and Excel would handle the Excel subsection. The data of the Excel subsection is handled by Excel and gets transformed into Avalon objects, which are sent back to Word. Word itself also transforms the Word part of the file into Avalon objects and then integrates the result from Excel. The graphics subsystem of Avalon then presents the data to the user and receives input actions based on triggers (I think of a kind of dynamically created event handlers) embedded in the Avalon data.
So if the user now edits the text of the Word part, the mouse and keyboard events are handled by Word. But if the user clicks on a cell of the Excel table (given that the subsection described above is a table), the events call Excel methods.
Because this works transparently for the user, it may seem that Word has the same capabilities as Excel - Excel appears as a kind of ad-hoc plugin for Word. Because the common interface is Avalon, the same works in the other direction too, when embedding a Word document into an Excel sheet. The use of Avalon would also make it possible to draw the data in non-rectangular, auto-line-breaking bounds, and possibly even over the content of the host document.
-- A bit of radical thinking so far, and I'm not a professional writer - but I hope you can understand what I mean. Please correct me if not.
Equation Editor was one of my favorite Word extras. IIRC it was actually developed by a third party (not Microsoft) who still sells a pro version? Is that true?
The "Equation Editor" is the small version of MathType, which is a typesetting-only version of Mathematica. But because it is just an OLE plugin and not native to Word, it has many drawbacks.
The idea I've described above would allow inserting MathML directly into WordML and letting the MathML program do the translation from MathML (to SVG) to Avalon, which would be rendered by Word. So even if Word had no MathML support of its own, with a program like an Avalon-based version of Equation Editor that could work as a plugin for Word (you'd need special interfaces which don't exist today), you would get proper vertical alignment, no resizing artifacts, and proper font-style selection and rendering, by applying the Word document's style to the Avalon code generated from the MathML code.
The only problem that remains is the "Equation Editor"'s lack of shortcut support.
But currently this is just an idea of mine - not a product of the near future like Office 12.
In fact, this was the reason I stopped using Word for my main work - even though there are lots of good but expensive programs which do that job really well.
(I'm currently writing scientific documents in HTML and/or TeX)
I just hope that Word will natively support MathML. There will still be room for applications like MathType (for better editing, etc.), but there are many problems with equations nowadays in both Equation Editor and MathType (mainly spacing problems for inline equations and alignment of multiple equations).
There are more and more people in scientific and academic fields writing theses in Word because Word documents are easier to edit and share on screen than TeX documents (both my roommates and I were required by our advisors to write our documents in Word even though the main platform we worked on was UNIX).
There is still some time before Office 12 is released, so I hope this will make it in.
Freney - We've heard this request from a number of different customers. I'll have more information over the next several months around the schemas and what our formats are going to look like. In addition, I'm trying to find out what types of things people would want to get out of our files, so we can look at providing good documentation and tools for transforming from our formats into other formats. MathML is definitely a common request.
-Brian
If you are able to go through Outlook's OM, though, then there are things you could do to output Outlook data as XML. Here's an article showing how to take Outlook tasks, output them as XML, and then import that into Word:
-Brian
Hi,
Wish to see the evolution soon !
Thanks
-Arun
You can get your first preview here. I posted example files in 3 formats:
-Brian
However, in this interview I haven't heard about some really important issues, like protection, security, and document authenticity. What will happen if I set a password on a Word document? Will it just use this password for the ZIP compression? How do I know whether a document came from a reliable source and wasn't tampered with? Ok, it is easy and exciting to replace a JPG image, but how would I know that it was replaced? It is easy to change the author's name in the XML, but it's not secure. I would really like Microsoft to make it possible to detect WHICH PART OF THE DOCUMENT was tampered with (if any). Security is a very important issue here, as Word and Excel move to open standards.
I hope in the next interview you will ask these questions!
Step 1: Parts Needed
For this project you'll need the following...
1. An Arduino. (I used a Nano, but any Arduino should work.)
2. A tilt sensor. (I went back to the Memsic 2125, mainly because I had some handy.)
3. A Nokia 5110 LCD display. (Inexpensive and easy to find!)
4. Wires. (Many colorful wires!)
5. Breadboard or perfboard.
Step 2: Assembly Is Simple!
Assembly is super easy because it's all point to point. No separate components like resistors, capacitors and so on.
I included a simple pinout for the Memsic 2125 Tilt sensor.
The Nokia 5110 display units have the pin descriptions silk screened on them.
Here's a pin-to-pin listing for each part to the Arduino (a Nano in my case)...
--------------------------------------
Arduino <------> Nokia LCD
GND ------------------- GND
5V -------------------- VCC
D4 --------------------- Light
D5 --------------------- CLK
D6 --------------------- DIN
D7 --------------------- DC
D8 --------------------- CE
D9 --------------------- RST
For the Memsic 2125
GND ------------------GND
5V ------------------VCC
A4 ------------------ XOut
A5 ------------------- YOut
Step 3: How Does It Work?
What makes it go?
The short answer is heat. The tilt sensor is actually a little chamber of heated gas with four tiny temperature sensors arrayed around it. As you tilt the sensor, the heated gas shifts and the sensors measure the difference in temperature. It's that simple! The electronics in the chip convert the temperature differences into an X and Y measurement and send that data out the X and Y pins.
Step 4: The Code to Make It Go!
The code for this little project is pretty simple as well.
Since the sensor outputs its signals as X and Y pulse widths, all we really need to do is read the values and convert them into something "real world" that we can display on our 5110 LCD.
The code sets up the 5110 display, draws a little bullseye, and then starts reading the X,Y data from the Memsic.
It then does a couple of mappings to convert the roughly 3700 to 6400 output into two values.
Stage one maps the Memsic output to a scale for the display in both X and Y - (0-48) and (0-84) - so we can display and animate the bubble around the screen.
There's also a serial output that sends the raw data over USB. You don't have to use it, but it's there if needed.
The second-stage mapping then scales the display values to -90 to 90 for the X and Y angle labels on the display. (This is an approximate angle display - we're not worrying about accuracy with our level, we just want a general idea!)
Here's the code......
// Arduino micro electronic visual bubble level
#include <SPI.h>
#include <Adafruit_GFX.h>
#include <Adafruit_PCD8544.h>

Adafruit_PCD8544 display = Adafruit_PCD8544(5, 6, 7, 8, 9);
// pin 2 - Serial clock out (SCLK) -> 5
// pin 3 - Serial data out (DIN)   -> 6
// pin 4 - Data/Command select (D/C) -> 7
// pin 12 - LCD chip select (CS)   -> 8
// pin 11 - LCD reset (RST)        -> 9

const int X = A4;  // X pin on the Memsic 2125
const int Y = A5;  // Y pin on the Memsic 2125
int i = 0;         // (unused leftovers)
int dist, inv = 0;
boolean stan = 0;

void setup()
{
  // set up serial
  Serial.begin(9600);
  pinMode(X, INPUT);
  pinMode(Y, INPUT);
  display.begin();
  display.setContrast(50);
  display.clearDisplay();
}

void loop()
{
  // read in the pulse data
  int pulseX, pulseY;
  int angleX, angleY;
  int accelerationX, accelerationY;
  pulseX = pulseIn(X, HIGH);
  pulseY = pulseIn(Y, HIGH);

  // map the data for the Nokia display
  accelerationX = map(pulseX, 3740, 6286, 48, 0);
  accelerationY = map(pulseY, 3740, 6370, 84, 0);

  // map data to crude angles
  angleX = map(accelerationX, 48, 0, -90, 90);
  angleY = map(accelerationY, 0, 84, -90, 90);

  display.drawRect(0, 0, 84, 48, BLACK);
  display.drawLine(42, 0, 42, 48, BLACK);
  display.drawLine(0, 24, 84, 24, BLACK);
  display.drawCircle(42, 24, 10, BLACK);

  // display the bubble
  display.fillCircle(accelerationY, accelerationX, 4, BLACK);
  display.setCursor(4, 4);
  display.println("X: " + String(angleX));
  display.setCursor(4, 38);
  display.println("Y: " + String(angleY));
  display.display();
  display.clearDisplay();

  // Send the data to the serial port in case we'd like to see what's
  // being reported, and for possible PC use later
  Serial.print("X");
  Serial.print(pulseX);
  Serial.print("Y");
  Serial.print(pulseY);
  Serial.println("");

  // delay the data feed so we don't overrun the serial
  delay(90);
}
Step 5: Lets See If It Works Like We Expect?
After all our hard work, let's see if it does what we expect.
Looks like it works!
Once installed in a battery-powered case, it'll be ready for action!
IGOR Pro 6 New Feature Details
Below are the details of Igor Pro 6 relative to Igor Pro 5.
System Requirements
On Macintosh, Igor Pro 6 runs on Mac OS X 10.3.9 or later. Igor Pro 6 runs natively on both PowerPC and Intel-based Macintoshes. Igor Pro 6 does not run on Mac OS 9.
On Windows, Igor Pro 6 runs under Windows 2000, Windows XP, and Windows Vista. Igor Pro 6 does not run on Windows NT, Windows 95, Windows 98 or Windows ME.
Igor License Activation
The installers for previous versions of Igor required your serial number and activation key before the installation would succeed. Now these are requested by the Igor application itself.
When Igor Pro 6 is first launched, it prompts you for your serial number and activation key.
These are printed on a card you received with your copy of IGOR Pro (look for a card with "IMPORTANT!" written on it) or in an email you received from WaveMetrics.
NOTE: The activation key for Igor 6 is different than for previous versions of Igor.
If you do not enter the serial number and activation key, Igor runs in fully-functional 30-day evaluation mode. After 30 days it becomes a limited functionality demo and Igor will no longer save experiments, procedures, data, or graphics until you enter a valid serial number and activation key.
You can enter a license activation key at a later time by selecting "About Igor Pro..." from the "Igor Pro" menu and clicking the "License..." button.
Igor Registration
When you activate your license, Igor asks if you want to register it. If you say yes, Igor opens the Igor Pro Registration web page in your web browser.
If you initially skip registration and later want to register or if you want to change your registration information, choose About Igor Pro from the Igor Pro menu (Macintosh) or from the Help menu (Windows), click the License button, and click the Register button in the resulting dialog.
WaveMetrics uses registration information only to contact you regarding support issues and to inform you of major upgrades. We do not share your information with other companies.
If you are upgrading and have previously registered, you do not need to register again.
Version Compatibility
Igor Pro 6 can read files created by all earlier versions of Igor.
If you don't use features new in Igor Pro 6, then experiment files that it writes are readable by earlier versions.
Once you use features added in Igor Pro 6, experiment files that it writes may no longer be readable by earlier versions.
Some obsolete features of earlier versions of Igor Pro are no longer supported. See Features Removed From Igor Pro 6.
Some behaviors have changed slightly in Igor Pro 6. These changes may affect some existing Igor experiments. See Behavior Changes In Igor Pro 6 for details.
Updating Your Igor Preferences
Igor will automatically create a new "Igor Pro 6" preferences directory and will import older preferences if possible.
On Macintosh PowerPC, if you are upgrading from a previous version of Igor, Igor will copy your old preferences into the new preferences directory.
On Macintosh Intel, Igor will not copy old preferences because of byte-order issues. You will start with factory-default preferences.
On Windows, Igor Pro 5 preferences will be copied into the new preferences file but older preferences will not be copied.
User-Interface Changes
Graphs, tables and layouts as well as subwindows within graphs and panels may now be hidden. This, in combination with the new Graph Browser package, should greatly ease the difficulty of working with large numbers of graph windows. Programmers should be aware of the new /HIDE flag for Display, Edit, Layout, NewLayout, NewWaterfall, NewImage, NewPanel and DoWindow along with the hide keyword for SetWindow and GetWindow.
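A quick sketch of the new hiding hooks for programmers (the window name is an example; check exact syntax against the Igor reference):

```
Display/HIDE=1 wave0        // create the graph hidden
DoWindow/HIDE=0 Graph0      // show it
SetWindow Graph0, hide=1    // hide it again via the hide keyword
```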
Prior to Igor Pro 6, when viewing a 3D or 4D wave in a table, command-arrow-key (Macintosh) or Ctrl+Arrow-Key (Windows) changed the currently viewed layer. This has changed to option-arrow-key or Alt+Arrow-Key.
You can hide a graph, table, layout or panel by shift-clicking the close icon. You can also specify that clicking the minimize icon hides instead of minimizing, via a checkbox in the Miscellaneous Settings dialog (Misc menu). When this is turned on, option-clicking the minimize icon hides all windows of whatever type you clicked.
The command window can now be hidden, although for normal use it is recommended that you not hide it.
The Windows menu now includes a Hide submenu for hiding multiple windows at once and a Show menu for showing multiple windows at once. Pressing the shift key while clicking the Windows menu changes the sense of the show or hide, for example, changing "Hide->All Graphs" to "Hide->All Except Graphs".
The Windows menu now includes a Procedure Windows submenu to group procedure windows. Previously procedure windows appeared in the Other Windows submenu.
The Windows menu now includes a Recent Windows submenu to make it easier to re-activate windows that you were recently working on. The contents of the Recent Windows submenu is saved when you save an experiment and restored when you re-open the experiment.
If you press the Cmd key (Macintosh) or Ctrl key (Windows) while clicking the menu bar, a temporary Recent Windows menu will be accessible from the main menu bar. This shortcut is intended to save you the trouble of navigating through the Windows menu to the permanent Recent Windows submenu.
The Windows->Graphs, Windows->Tables, Windows->Layouts and Windows->Other Windows submenus now show both the window name and the window title for windows that have both.
The Send Behind menu item in the Windows->Control submenu was moved to the Windows menu and renamed Send To Back. A new Bring To Front item was added.
If you control-click (Macintosh) or right-click (Windows) in a text window, and if the text you clicked was not already selected, Igor now moves the caret to the point at which you clicked before displaying the contextual menu. This allows you to get help for an operation or function in a procedure window by just control-clicking or right-clicking the name of the routine. Previously you had to first select the name of the routine and then control-click or right-click.
New per-window marquee: Now, each window or subwindow can have its own marquee. Marquees are no longer killed when a different window is made active. Only the marquee in the topmost window's active subwindow is animated. The GetMarquee and SetMarquee operations now take the standard /W=hcSpec flag. Also, GetMarquee stores the full hcSpec in S_marqueeWin (previously just the base window name.)
External tool palette: The tool palette for graphs and panels is now created external to the host window. This ensures all the tools are available even when the host window is small. For compatibility, ShowTools with no flags (as used in recreation macros) will produce the old-style tools while ShowTools/A will create the new external style. You can force the old style using:
SetIgorOption useNewToolPalette=0
External info panel: The info panel for graphs is now created external to the host window. This ensures all the cursor information is visible even when the host window is small. You can force the old style using:
SetIgorOption useNewInfoPanel=0
Note about external info panels and tools: When a window is zoomed to full screen, external tools and info panels are killed and made internal. The reverse happens when the window is restored. On Windows, you can get into a state where external windows are not brought to the front if you exit zoom mode by some means other than clicking the restore icon. If this happens, zoom and then restore the window using the icons.
Info panel change: Now, to get popup menus in the info panel, you need to right click.
Windows: Igor Pro now uses updated Open File and Save File dialogs which include a navigation bar.
Fling mode for graphs: You can now fling a graph using the hand tool. Press option (Alt) and drag in the interior of a graph, releasing the mouse button while still moving. Click to stop; use shift to force horizontal or vertical motion. While in fling mode (hand cursor showing), you don't need to press option to adjust the motion.
Macintosh: Added support for horizontal scrolling using the mouse wheel for tables, page layouts, procedure windows and notebooks.
Previously, holding shift would disable entering trace offset mode when clicking and holding on a trace. Now, to disable, use caps lock.
Guide To Igor Pro 6 Improvements
Here are the Igor Pro 6 improvements discussed on this page.
Behavior Changes In Igor Pro 6
Graphics Export Improvements
Procedure Window Improvements
Control Panel Improvements
Curve Fitting Improvements
Image Processing Improvements
Data Import And Export Improvements
File Command Improvements
Miscellaneous Improvements
New And Improved Example Experiments
New And Improved WaveMetrics Procedure Files
New Operations And Functions
Changed Operations And Functions
Features Removed From Igor Pro 6
This section discusses features removed from Igor Pro 6 because they are obsolete or have been superseded by newer features.
The Graphical Slicer XOP is now obsolete. It is completely replaced by Gizmo. If you are not familiar with Gizmo try the VolumeSlicerDemo experiment in Examples:Visualization:Gizmo Examples. Also see the Gizmo Overview.
The Surface Plotter XOP is now obsolete (though not removed). It is completely replaced by Gizmo. Old experiments containing surface plots should function as usual but as of Igor Pro 6 there is no menu item to open a Surface Plotter window. You can still open a new Surface Plotter window using the command CreateSurfer.
The Wavelet XOP is obsolete. It is replaced by the built-in operation DWT.
NeuralNetworker XOP is obsolete. It is replaced by the built-in operations NeuralNetworkTrain and NeuralNetworkRun. Both operations have almost identical syntax to the one used in the old XOP.
Speak XOP is obsolete.
MDInterpolator is obsolete. It is replaced by the built-in Interp3D() and Interp3DPath.
StatsFuncs and SpecialFuncs XOPs have been replaced by built-in functions. See Statistics Improvements.
The ANOVA package (Analysis->Packages->ANOVA) is now obsolete. It is still being shipped, but you should move to the built-in statistics functionality (see Statistics). The old ANOVA package depended on the ANOVASupport XOP to work; if you really want to continue using this package you must install the XOP in the Igor Extensions folder. You will find the ANOVASupport xop in your Igor Pro folder, in More Extensions:Data Analysis:Obsolete.
Windows Support
Our early testing of Igor 6 on Windows Vista has not turned up any problems (so far).
Mac OS X Support
When you create and save a new plain text file (notebook or procedure file) on Mac OS X, Igor now uses linefeed as the line terminator. Previously it used carriage-return on Macintosh and carriage-return/linefeed on Windows. Now it uses linefeed on Mac OS X and carriage-return/linefeed on Windows.
Intel Macintosh
With the release of Intel-based Macintoshes in January of 2006, Mac OS X now runs on two processor "architectures": PowerPC and Intel. The Igor Pro 6 application is a "universal" Mac OS X package containing both PowerPC and Intel code and thus runs native on both processors.
The Igor Pro package is inside the Igor Pro Folder. If you have enabled the Show All File Extensions setting in the Finder preferences then the Igor Pro package will appear as "Igor Pro.app".
When you run Igor Pro 6 on a PowerPC Macintosh, the PowerPC code in the Igor Pro package executes. When you run Igor Pro 6 on an Intel Macintosh, the Intel code executes.
Older versions of Macintosh Igor Pro are PowerPC-only. They can run on Intel Macintosh under Apple's "Rosetta" emulation system. Execution speed is roughly one-third of the speed running on a similar PowerPC Macintosh.
XOPs
In order to use an XOP with the Intel Macintosh version of Igor, the XOP must be recompiled as an Intel executable or as a "universal" executable which contains both PowerPC and Intel code. The Igor XOP Toolkit includes instructions for converting PowerPC XOPs to universal XOPs.
The following WaveMetrics XOPs have not yet been ported because third-party libraries are not yet available. Consequently these XOPs can not run with the Intel version of Igor:
HDF Loader
MLLoadWave
NetCDF Loader
NIGPIB and NIGPIB2
If you run the Intel Macintosh version of Igor while a PowerPC-only XOP is installed, Igor will not display an error message but will simply ignore the PowerPC-only XOP. This allows you to easily switch to the PowerPC version of Igor to run PowerPC-only XOPs, as explained in the next section. Similarly, if you run the PowerPC version of Igor while an Intel-only XOP is installed, Igor will not display an error message but will simply ignore the Intel-only XOP.
The KillWindow operation now works with XOP target windows.
Executing PowerPC XOPs on Intel Macintosh
If you need to run an XOP that has not been ported to Intel Macintosh, you must run the PowerPC version of Igor. You can do this on an Intel Macintosh by checking the Open Using Rosetta checkbox in the Finder's Info window for the Igor Pro application package.
With the Open Using Rosetta checkbox checked, the PowerPC code in the Igor Pro 6 application package will run and will call the PowerPC code in any XOPs that you invoke.
Backward Compatibility
If you save any kind of Igor file that contains "high-ASCII" characters (characters other than the standard digits, letters and punctuation) from the Intel Macintosh version of Igor and then load that experiment into an older version of Igor (Igor Pro 5 or earlier), the high-ASCII characters will be garbled.
Behavior Changes In Igor Pro 6
Windows: When resolving shortcuts, Igor now instructs the operating system to refrain from doing an exhaustive search if the target of the shortcut has been moved or renamed. This should prevent the very long delays that some users experienced when starting Igor. However, it also means that shortcuts will not work if the target file has been moved or renamed.
As of Igor Pro 6, it is an error to apply the table date or date&time formats to a wave displayed in a table unless the wave is double-precision. Date and date/time values can not be accurately stored in waves of lesser precision. If you get an error relating to this issue, use the Redimension operation to change the wave to double-precision.
The behavior of LoadWave/J (Load Delimited Text) and LoadWave/F (Load Fixed Field Text) operations are slightly changed in the presence of columns that are all blank. If you tell LoadWave to ignore blanks at the end of the column (via the /V flag) and if a column is all blank, previously LoadWave would not create a wave for that column. Now it will create a zero point wave. If you want to completely skip a column, use the /B flag.
Relaxed type checking for WAVE variables: Most operations that create WAVE variables and that required a preexisting WAVE to have the exact same type as would be created if there were no preexisting WAVE now have less exacting compatibility requirements. Most now just check for numeric vs text and real vs complex. Make, however, is still strict.
Igor Pro 6 is more strict with regard to #"" syntax used in programming certain controls. Previously, the quotes were optional and now they are required.
The way Igor maps false-color image values to colors is subtly different.
When saving an unpacked experiment, Igor would create file names for waves that included the .ibw extension. If a wave name exceeded 27 characters in length, Igor would truncate the wave name to add the file name extension. This caused a problem if you had two waves whose names matched in the first 27 characters: the same file name would be used for both waves. Now Igor will omit the file name extension in this case.
When using the /R flag with the Duplicate operation, the action with just a single value in brackets was not specified. The actual action was such that [a] acted like [a,*]. Now, as of Igor Pro 6, [a] acts like [a,a], i.e., a single row, column, etc. is specified.
Tick labels that include a prefix and units now have a space between the number and the prefix.
Dialog Improvements
Added support for matrix results in the Operation Result Displayer. Now for operations that can produce a matrix result, such as FFT or Convolve, you can choose to display the result as an image or contour plot.
Dialog Wave Browsers now support "type selection". When the wave browser list has keyboard focus, if you quickly type a few characters the first item that starts with those characters is found. If you type nothing for some short waiting period (currently one second), it starts over again accumulating the characters.
New dialogs: Resample, Filter Design and Application.
Some dialogs have been modified to support new features. Some highlights:
Curve Fit Dialog supports more cursors, a textbox of curve fit results and multi-threaded fitting.
Histogram dialog supports new binning modes (Histogram/B=3 and 4) and options (/X and /P); added Operation Result Displayer.
Modify Axis dialog, Range tab supports the ability to set one end of an axis to manual range and the other end to auto range.
New Annotation and Modify Annotation dialogs support relative font size settings.
Smooth dialog now supports Loess and Median smoothing.
Trace Offset dialog (from the Modify Trace Appearance dialog) supports new trace multiplier feature.
Graphing Improvements
Graphs can now have up to 10 cursors (ABCDEFGHIJ). To view and select cursors, in info panel, right click and select desired pair from menu.
Now, in addition to crosshair cursors, you can choose vertical only and horizontal only hairlines. The Cursor operation's /H flag now takes values of 2 for vertical and 3 for horizontal.
You can now cause a y axis to be autoscaled to the subset of data defined by the current x axis range. Use SetAxis/A=2.
You can now autoscale just one end of an axis by providing to SetAxis a '*' as a range value to denote that that value be autoscaled.
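In command form, these two autoscaling additions look something like this (axis names are the usual defaults):

```
SetAxis/A=2 left      // autoscale the left axis to only the data in the current x range
SetAxis bottom 0, *   // hold the bottom axis minimum at 0 and autoscale the maximum
```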
You can now scale a trace using the new muloffset keyword with ModifyGraph. See ModifyGraph for Traces. See Trace Multipliers for information on interactively scaling a trace.
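For example, to display a trace's y values scaled by 1/1000 without touching the wave itself (the trace name is hypothetical; a multiplier of 0 means "not set", per the TagVal note below):

```
ModifyGraph muloffset(wave0)={0,0.001}
```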
TagVal now has two additional selector codes: TagVal(6) returns the x muloffset (with the not set value of zero translated to 1) while TagVal(7) returns the y muloffset.
You can now mask individual points for display using the new mask keyword with ModifyGraph. See the example in the ModifyGraph for Traces documentation.
New styles are now available for arrowMarker mode and you can now draw arrows between points. See ModifyGraph's arrowMarker keyword and the example in the ModifyGraph for Traces documentation. These styles are also available for draw tools. See Drawing Improvements.
ModifyGraph zColor now works with color index waves.
ModifyGraph zColorMax and zColorMin keywords set the color (or invisibility) of the trace when the z value exceeds the zMin, zMax range.
ModifyGraph useLongMinus=1 causes the normal minus sign (hyphen) to be replaced with a longer dash.
Table Improvements
To avoid moire patterns, grid lines are now drawn using a gray color instead of a dotted pattern.
You can now choose the decimal symbol (dot or comma) used when entering data in tables. Choose Table->Table Misc Settings. The selected decimal symbol is used for entering data and copying and pasting. See Decimal Symbol and Thousands Separator in Tables for details.
The target cell ID area in the top-left corner of the table now identifies the wave element corresponding to the target cell. For example, if a table displays a 2D wave, the ID area might show "R13 C22", meaning that the target cell is on row 13, column 22 of the 2D wave. For 3D waves the target cell ID includes the layer ("L") and for a 4D wave it includes the chunk ("Ch").
File->Save Table Copy to a text file now includes the horizontal row of dimension labels if they are displayed.
The SaveTableCopy /N flag has been redefined to support saving horizontal row of dimension labels if they are displayed.
The ModifyTable operation allows you to specify a range of column numbers when using column number syntax. This is useful in style macros and not much else:
ModifyTable rgb[0,3] = (0, 0, 65535) // Set columns 0 through 3 to blue
Prior to Igor Pro 6, if you created a table displaying a wave with a very large number of columns (roughly 10,000 or more) it took a long time to display the table. It took essentially forever with a huge number of columns (100,000 or more). Igor Pro 6 can handle virtually any number of columns quickly.
A side-effect of the improvement in handling very large numbers of columns is the loss of the ability to set properties independently for the real and imaginary columns of a complex wave. The following ModifyTable command which used to set the real columns to red and the imaginary columns to blue now sets both to blue:
ModifyTable rgb(wave0.real)=(50000,0,0), rgb(wave0.imag)=(0,0,50000)
When adjusting the width of a column for a multidimensional wave, by default Igor sets all columns to the new width. You can set the width of one specific column by pressing the command key (Macintosh) or Ctrl key (Windows) before you start dragging the edge of the column.
Normally Igor sets the properties of all data columns from a given wave the same. You can override this behavior to set the properties of specific columns only by selecting the columns to be set, pressing the command key (Macintosh) or Ctrl key (Windows), and then making a selection from the Table menu or the table popup menu.
You can autosize the columns of a wave by double-clicking the vertical boundary to the right of the wave name or using the Autosize Columns menu item in the Table menu. See Autosizing Columns By Double-Clicking for details. You can also use the autosize keyword of the ModifyTable operation.
You can now use the arrow keys to move the selection in the entry area instead of finalizing the edit and changing the selected cell. To enable use of arrow keys in the entry area, choose Misc->Miscellaneous Settings, select the Table category, and click the Apply Arrow Keys To Entry Area checkbox.
Shift-clicking the close box of a table now hides the table. Clicking the minimize button of a table now hides the table if you check the Minimize Icon Is Hide checkbox in the Misc Settings dialog. Option-shift-clicking the minimize icon hides all tables if you check the Minimize Icon Is Hide checkbox in the Misc Settings dialog.
You can show and hide various parts of a table using the Show submenu in the Table menu or using ModifyTable showParts.
Changed default for the Point column to integer format instead of general format.
Page Layout Improvements
Shift-clicking the close box of a layout now hides the layout. Clicking the minimize button of a layout now hides the layout if you check the Minimize Icon Is Hide checkbox in the Misc Settings dialog. Option-shift-clicking the minimize icon hides all layouts if you check the Minimize Icon Is Hide checkbox in the Misc Settings dialog.
You can show or hide the windows corresponding to layout graph or table objects by right-clicking the object or by selecting one or more objects and right-clicking in an empty area of the page.
Notebook Improvements
Special characters (e.g., pictures, date, time) now have names. See Special Character Names for details. The names are of use only to advanced programmers.
Added functions SpecialCharacterList and SpecialCharacterInfo.
The Notebook operation has a findSpecialCharacter keyword.
Added a new kind of special character called an "action". An action is a special character which, when clicked, executes Igor commands. See Notebook Special Action Characters for details. A new operation, NotebookAction, can create and modify actions.
Help Improvements
Graphics Export Improvements
SavePict can now save to a string variable rather than to a file. This is for use with a new feature of ListBoxes.
SavePict now accepts a /SNAP flag to save a snapshot of a window including controls.
Procedure Window Improvements
New #if style compiler directives, as well as the existing #include and #pragma keywords, are colorized. Also, the default colors for some keywords were changed to make them more distinctive.
Igor Procedures now are reloaded after an Adopt All.
Gizmo Improvements
ModifyObject commands can now handle constants in the form GL_SRC_ALPHA.
Data object scaling is now extended to group objects.
You can now duplicate objects in the object list.
Drawing Improvements
New arrow styles for Draw tools using SetDrawEnv with keywords arrowSharp, arrowFrame and astyle.
New coordinate transformations for Draw tools using SetDrawEnv with keywords translate, rotate, scale, origin, push, pop and rsabout. An example is provided.
New named groups for Draw tools using SetDrawEnv with keywords gname and gedit.
New operation DrawAction provides support for deleting, inserting and reading back a named group or an entire draw layer. An example is provided. The extractOutline keyword is useful for region of interest (ROI) readback for image processing. Also see the new ToolsGrid operation.
Can now rotate draw objects by clicking just outside a selection's handles. When the arrow tool is selected, the cursor changes shape when over the handles and when over the invisible rotation handles. Note: at the present time, rotation of text and pictures is not supported.
When dragging, resizing or rotating, a live outline of the object(s) is presented.
You can now resize multiple selected objects. Previously, resize worked only when a single object was selected.
Rectangles, Rounded Rectangles and ovals now use polygons and/or Bezier curves. Consequently, the outline is centered on the coordinates rather than being inside the enclosing rectangle. Also, line dash and arrow properties are supported.
Annotation Improvements
ColorScales can frame up to 98 individual colors, the frame color can be different than the annotation's foreground color, the axis range can be manually set, and date axis tick labels are supported.
New flag: TextBox/LS=l provides a line-spacing tweak. The value is points of extra (plus or minus) line spacing. With negative values, you may need to add an extra blank line to avoid clipping the bottom of the last line.
Control Improvements
The Control Procedure dialog generates structure-based procedures if you select the relevant checkbox.
ListBoxes now support graph and picture images. See ListBox's new keyword, special.
New features for ListBox: autoscroll. You can now use a value of -1 with the row keyword to cause the listbox to scroll to the first selected cell (if any). Combine this with selRow to both select a row and ensure it is visible (modes 1 and 2).
ListBoxes and other controls now support the mouse wheel. On Macintosh, both vertical and horizontal scrolling work. On Windows, horizontal scrolling should work when Vista is released.
You can now use ListBox modes 5 and 6 without a selWave. Use the new keyword selCol to set the column along with the existing keyword selRow. You can read back the selection using ControlInfo, with the row going in V_Value and the column going in a new variable named V_selCol.
Extended syntax for PopupMenu and ValDisplay value keyword: Previously, using the # prefix meant that the following text needed to be a literal quoted string. Now, the following text can be a string expression that will be evaluated at run time to obtain the actual string expression that needs to be executed each time the popup menu is used. In other words, there is a level of indirection.
SetVariable now supports a matrix wave and accepts a wave column spec: wave[row][col]
The action function's structure now includes:
Int32 colIndex (column index for a wave, unless colLabel is not null)
char colLabel[MAX_OBJ_NAME+1] (wave column label)
Listboxes and Buttons now honor text justification escape codes (\JL, \JC, and \JR) embedded in the text for a listbox cell or the text for the button title.
Control Panel Improvements
New noEdit keyword for ModifyPanel. Use to prevent users from modifying a control panel.
New no-activate mode for panels: NewPanel/NA.
You can now create panels that act like subwindows but live in their own windows attached to a host graph window. The host graph and its exterior subwindows move together and, in general, act as single window. Exterior subwindows have the advantage of not disturbing the host graph and, unlike normal subwindows, are not limited in size by the host graph.
You can now create panels that float above all other windows (except dialogs). Because floaters cover up other windows, you should use them sparingly, if at all. To create a floating panel, use NewPanel with the new /FLT flag.
You can now right click in a base panel window or external subwindow to get a popup menu that includes Show or Hide tools. This menu is not provided if ModifyPanel noEdit=1 is in effect. However, programmers can temporarily turn the menu back on by executing
SetIgorOption IndependentModuleDev=1
Alternatively, you can temporarily turn noEdit off to get tools.
Analysis Improvements
The Analysis menu was reorganized. The Hanning item was removed (but you can still invoke the Hanning operation from the command line). The Misc Operations submenu was removed with its items (Rotate and Unwrap) moved to the Data menu. The Filter and Resample items were added. A new Statistics submenu was added.
Histogram now supports two new modes (Sturges and Scott) for automatic determination of the number of bins. A new centered-X option (/C flag) creates an output wave with X values centered on the bins. A new sqrt(N) wave option (/N flag) creates a wave containing the square roots of the bin counts. These options are useful when curve fitting to a histogram. The new /P flag can be used to normalize the histogram as a probability density centered on the bins.
Added new operation FPClustering which implements the farthest-point clustering algorithm.
Added two new flags to the PCA operation.
The FindLevel, PulseStats, and EdgeStats operations now work with waves of any numeric data type, not just single- and double-precision floating point. FindLevel and FindLevels now have an /EDGE flag.
Added the Loess operation, which smooths srcWaveName using locally-weighted regression smoothing. Loess can be used to interpolate over and replace NaNs. See the Loess Demo.pxp example experiment.
Added the FilterFIR operation (which replaces the now-obsolete-but-still-supported SmoothCustom operation). FilterFIR can filter along any wave dimension, and can also generate simple digital filters, including a very steep notch filter.
Added the FilterIIR operation, which can filter along any wave dimension and generate simple IIR filters. The automatically-designed filter coefficients are bilinear transforms of the Butterworth analog prototype with an optional variable-width notch filter, using Direct Form 1 or Direct Form II cascade implementations.
Added median smoothing (/M) to the Smooth operation. The median and boxcar (/B) smoothing modes now detect NaNs in the input data and adjust the averaging appropriately. The Savitzky-Golay (/S) smoothing coefficient limit has been raised from 25 to 32767.
Added the WindowFunction operation, a generalization of the now-obsolete (though still supported) Hanning operation.
CWT: changed the FFT method so that it now accounts for wave scaling in the transform scales.
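The two new automatic-binning modes added to Histogram are presumably the standard Sturges and Scott rules from the statistics literature; a small Python sketch of those rules (an illustration only, not Igor code and not taken from Igor's implementation) looks like this:

```python
import math

def sturges_bins(n):
    # Sturges' rule: bin count from the sample size alone
    return int(math.ceil(math.log2(n))) + 1

def scott_bin_width(data):
    # Scott's rule: bin width from the sample standard deviation
    n = len(data)
    mean = sum(data) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    return 3.49 * sd * n ** (-1 / 3)

print(sturges_bins(1000))  # 11 bins for 1000 points
```

Sturges returns a bin count directly, while Scott returns a bin width that still has to be divided into the data range to get a count.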
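The NaN handling added to the median and boxcar smoothing modes can be illustrated with a short sketch (Python rather than Igor, with edge handling chosen for simplicity; Igor's exact edge behavior is not modeled here): NaN points are dropped from each window and the average is renormalized over the points that remain.

```python
import math

def boxcar_smooth_nan(data, width):
    # NaN-aware moving average: NaNs are excluded from each window and
    # the average is taken over the values that remain.
    half = width // 2
    out = []
    for i in range(len(data)):
        window = data[max(0, i - half):i + half + 1]
        vals = [v for v in window if not math.isnan(v)]
        out.append(sum(vals) / len(vals) if vals else math.nan)
    return out

print(boxcar_smooth_nan([1.0, float('nan'), 3.0], 3))  # [1.0, 2.0, 3.0]
```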
Statistics Improvements
Added an extensive collection of operations and functions for statistical analysis. They include 32 new statistical test operations, 38 new cumulative distribution functions, 30 new probability distribution functions, 34 new inverse cumulative distribution functions, 10 random noise distributions and 17 general statistical utility functions. You can find more information in the Statistics help file.
Matrix Improvements
Added new functions to MatrixOP: diagRC(wave,rows,cols), Rec(wave), Sum(wave), MagSqr(wave), SumSqr(wave), TriDiag(w1,w2,w3), Convolve(w1,w2,flag), Correlate(w1,w2,flag), SubtractMean(w,opt), SyncCorrelation(), AsyncCorrelation(), ChirpZ(src,A,W,M), ChirpZf(src,startF,endF,df), beam(w,x,y), WaveMap(w1,w2) and the Frobenius norm. MatrixOP now supports a /C flag to create a complex wave reference for the output wave. For 3D operations you can use the /NTHR flag.

Curve Fitting Improvements
It is now possible to use wave subrange notation like that used with the Display operation for almost any wave that is used by the CurveFit, FuncFit or FuncFitMD operations.
Curve Fitting now offers Orthogonal Distance Regression in addition to ordinary least squares. This is useful for situations in which there are measurement errors in the independent variables as well as in the dependent variable. This goes under the name Errors in Variables, Measurement Error Models, or Random Regressor Models. See the CurveFit operation.
Curve Fitting now supports fitting to implicit functions.
You can now specify a list of fitting functions to FuncFit. The result will be a fit to the sum of the functions. See the section Fitting Sums of Fit Functions in the FuncFit reference.
All-at-once fit functions can now implement multivariate fit functions.
A new user-defined fit function format has been added that takes a single structure parameter. This allows you to transmit arbitrary data to a user-defined fit function. See User-Defined Fitting Function: Detailed Description for a discussion of all formats for user-defined fitting functions.
Gauss1D(wave, x) and Gauss2D(wave, x, y) functions to complement the built-in fit functions gauss and Gauss2D.
Curve fits now add the information previously found only in the history report to the wave note of the auto-destination wave (the one requested by /D with no wave).
Curve fits can now add a textbox to the graph displaying the Y data. This textbox contains information similar to what is printed in the history report. The textbox is controled by the /TBOX flag.
The Curve Fit Textbox Preferences dialog allows you to select what information is included in the textbox. This dialog is accessible from the Quick Fit menu (Analysis menu) or from the Output Options tab of the Curve Fit dialog.
The Curve Fit dialog supports all graph cursors A-J.
See Igor Pro 6-Compatible Color Tables or the ColorsMarkersLinesPatterns.pxp example experiment.
Image Plot Improvements
Earlier Igors mapped the image value by rounding to the nearest color table index:
index= round((z-zmin)/(zmax-zmin)*(numColors-1))
which meant only half of the first and last colors were used.
Igor 6 maps the image value by truncating to the nearest color table index:
index= floor((z-zmin)/(zmax-zmin)*numColors)
which uses the first and last colors for as many values as the other colors.
You can revert to the old way of scaling image colors by executing this command each time the experiment starts up:
SetIgorOption preIgor6ColorScaling=1
(You can put the command at the top of the main procedure window to have it executed when the experiment is opened.)
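The difference between the two mappings is easy to see numerically. The Python sketch below (an illustration, not Igor code) counts how many evenly spaced values land in each color bin under each scheme:

```python
from collections import Counter

def old_index(z, zmin, zmax, num_colors):
    # pre-Igor-6: round to the nearest color-table index
    return round((z - zmin) / (zmax - zmin) * (num_colors - 1))

def new_index(z, zmin, zmax, num_colors):
    # Igor 6: truncate, clipping so that z == zmax stays in range
    i = int((z - zmin) / (zmax - zmin) * num_colors)
    return min(i, num_colors - 1)

zs = [i / 1000 for i in range(1000)]
old_counts = Counter(old_index(z, 0.0, 1.0, 4) for z in zs)
new_counts = Counter(new_index(z, 0.0, 1.0, 4) for z in zs)
# Old scheme: the first and last colors cover only about half as many
# values as the interior colors; new scheme: all four bins are equal.
```

With four colors, the old mapping gives the edge colors roughly half the range of the interior ones, while the new mapping splits the range into four equal parts.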
Image Processing Improvements
New operation ImageFocus for processing stacks of multi-focal images.
ImageAnalyzeParticles can now provide minimum, maximum and average intensity level for each particle.
Added /U flag to ImageAnalyzeParticles to save the marker wave as unsigned byte.
ImageSave with the flag /WT now saves tag information even if the image is 8 bits. The operation is more restrictive now; it actually tests that none of the tags you are adding conflicts with any of the primary tags.
Implemented the Zhang and Suen thinning algorithm in MatrixFilter (see the /T flag).
ImageSeedFill has three new keywords for specifying the seed pixel/voxel in terms of rows, columns and layers. This solves some roundoff difficulties when the source wave has wave scaling.
Previous versions of the SndLoadWave XOP worked only on Macintosh and loaded System 7 resource sound files; this version works on both Macintosh and Windows, but no longer loads resource sound files. Consequently, the resource-related flags to SndLoadWave command are no longer supported.
This version was revamped for Igor Pro 5 or later to permit calling SndLoadWave directly from a user function. This required changing the /S flag part of SndLoadWave.
SndLoadWave XOP implements mostly-backwards-compatible LoadWAVfile and SaveWAVfile commands that were provided only on Windows through the LoadWAVfile XOP. See LoadWAVfile and SaveWAVfile as documented in this help file about the compatibility.
You can now specify comma as the decimal symbol when saving waves in text files using the Save operation or the Save Waves dialog.
File Command Improvements
The Grep operation can find key words (regular expressions, actually) in files.
File operations are "thread-safe". See ThreadSafe Functions for more info.
Programming Improvements
Added support for conditional compile using #if and #ifdef style compiler directives.
To provide support for computers with multiple processors and to allow creation of preemptive multitasking background tasks, a new class of user functions called ThreadSafe has been created. See ThreadSafe Functions for more info.
You can now create multiple background tasks. See CtrlNamedBackground.
Windows (and subwindows) may now be hidden. See the new /HIDE flag for Display, Edit, Layout, NewLayout, NewWaterfall, NewImage, NewPanel and DoWindow along with the hide keyword for SetWindow and GetWindow.
New options for DoWindow:
Can use /B=bname to send window behind window bname.
Can use /W=targWin for explicit designation of target.
The HideIgorMenus and ShowIgorMenus operations hide or show most of the built-in main menus. The trace popup contextual menu can be extended by User-Defined Menus. See Built-in Menus That Can Be Extended.
User-Defined Menus can now specify Multiple Menu Items on one line, and Specialized Menu Item Definitions provide user-defined color, line style, marker, pattern, character and font menus. A dynamic user-defined menu item disappears from the menu if the menu item string expression evaluates to ""; the remainder of the menu definition line is then ignored; see Optional Menu Items. The new GetLastUserMenuInfo operation sets variables in the local scope to indicate the value of the last selected user-defined menu item. See The IndependentModule Pragma.
WinList, FunctionInfo, FunctionList, ProcedureText, and DisplayProcedure have additional features to work with independent modules.
The DebuggerOptions operation can programmatically enable or disable the debugger's settings.
The internal method Igor uses to deallocate memory when waves are killed has been improved. The new method uses reference counting to determine when a wave is no longer referenced anywhere and can be safely deallocated. The old method never fully killed waves but kept a list for reuse when making new waves. The new method should reduce the likelihood of out of memory errors, especially on Mac. See Wave Reference Counting and WAVEClear.
The #include statement can now include a procedure file relative to the procedure file containing the #include statement. Previously syntax like this:
#include ":procfile"
caused Igor to look for a file named "procfile" relative to the Igor Pro Folder. This is still the case. However now, if no file is found relative to the Igor Pro Folder, Igor looks again, but this time relative to the procedure file containing the #include statement. This allows you to distribute a master procedure file that includes helper procedure files stored next to the master.
This second step occurs only if the procedure file containing the #include statement is stored as a standalone file, not if it is a packed file stored in an experiment file.
The name of the file being included must end with the standard ".ipf" extension but the extension is not used in the include statement.
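The two-step lookup described above can be sketched as follows (a Python illustration of the search order only; the function name and path handling are simplified assumptions, not WaveMetrics code):

```python
import os

def resolve_include(spec, igor_pro_folder, including_file):
    # A ':'-prefixed spec is relative; the ".ipf" extension is implied
    # and must not be written in the #include statement itself.
    rel = spec.lstrip(':').replace(':', os.sep) + '.ipf'
    # Step 1: look relative to the Igor Pro Folder (the old behavior).
    candidate = os.path.join(igor_pro_folder, rel)
    if os.path.isfile(candidate):
        return candidate
    # Step 2 (new in Igor 6): look relative to the procedure file that
    # contains the #include, but only if that file is stored stand-alone.
    if including_file is not None:
        candidate = os.path.join(os.path.dirname(including_file), rel)
        if os.path.isfile(candidate):
            return candidate
    raise FileNotFoundError(spec)
```

Note that a file found relative to the Igor Pro Folder always wins; the second step is tried only when the first fails.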
New and Improved XOPs
Added VISA XOP to "More Extensions\Data Acquisition". Supports instrument control using the VISA protocol. See VISA XOP for details.
Miscellaneous Improvements
Added "regular expression" commands.
A regular expression is a pattern that is matched against a subject string from left to right. Most characters stand for themselves in a pattern, and match the corresponding characters in the "subject". See Regular Expressions in Igor.
In the case of Grep, the "subject" is each line of the input file, or for a text wave input, one or more columns of each row, considered one line or row at a time.
For GrepList the subject is each string list item, considered one item at a time.
For GrepString and SplitString, the "subject" is the (only) input string.
The regular expression syntax supported in Igor is based on the PCRE -- Perl-Compatible Regular Expression Library. This syntax is similar to regular expressions supported by various UNIX and POSIX egrep(1) commands. Igor's implementation does not support the Unicode (UTF-8) portion of PCRE. See Regular Expressions References.
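As an illustration of the one-item-at-a-time matching that GrepList performs, here is a rough Python analogue (an approximation for illustration only; Igor's actual option flags and defaults are not modeled):

```python
import re

def grep_list(pattern, list_str, sep=';'):
    # Keep only the items of a separator-delimited string list whose
    # text matches the regular expression (each item is one "subject").
    items = [item for item in list_str.split(sep) if item]
    return sep.join(item for item in items if re.search(pattern, item))

print(grep_list(r'^wave[0-9]+$', 'wave0;wave1;temp;wave10'))  # wave0;wave1;wave10
```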
New flags for Print: /C causes all numeric expressions to be treated as complex and /SR enables wave subrange printing.
Igor Pro 6 attempts to detect and repair corrupted page setup records that can result when pre-Carbon Macintosh experiments are loaded on Mac OS X. See Pre-Carbon Page Setup Records for details.
Windows: When sending a print job, the document name which appears in the print queue is now taken from the window title. Previously it was "IGOR Document" for all print jobs.
New And Improved Example Experiments
GizmoColorScaleDemo.pxp
3DRotationsPanel.pxp
DimensionReductionDemo.pxp
NumericalIntegrationDemo.pxp
SphericalHarmonicsDemo.pxp
FFTSwappingDemo.pxp
Clustering.pxp replaces KMeans.
Loess Demo.pxp
DepthSortingDemo.pxp
quaternaryDemo.pxp
fuzzyClassifyDemo.pxp
MultipleFitsInThreads.pxp
Spectral ConfidenceInterval.pxp
33 new example experiments in Statistics folder
New And Improved WaveMetrics Procedure Files
New Graph Browser.ipf is located in the Igor Procedures folder and is therefore always available via the Misc menu. The graph browser in combination with the new ability to hide graphs makes it much easier to deal with large numbers of graph windows.
The #include <New Polar Graphs> procedures have been revised to be compatible with Independent Modules. Indeed, they are used to implement the new Filter... dialog. See The IndependentModule Pragma.
GizmoLabels.ipf
Added command printing option to IP (Image Processing) procedures: Image Line Profile.ipf, Image Histogram Panel.ipf, Image Grayscale Filters.ipf, Image Rotate.ipf.
New IP Procedure: zeroPencil.ipf for drawing zero values in grayscale images.
New IP Procedure: ImageCurvature.ipf
New IP Procedure: LoadFolderImages.ipf
Updated ImageSlider.ipf changing layer display to a setVariable.
Updated AppendImageToContour to provide GUI to sliced (simulated filled) contours.
Changed Image Line Profile procedure to use Gizmo instead of Surface Plotter.
Decimate.ipf includes options to generate standard deviation and standard error waves. The Decimate menu item is now in the Analysis menu.
New Statistics procedures: StatsPlots.ipf, Ancilla1D.ipf, AllStatsProcedures.ipf, ANOVA1PowerCalc.ipf, and StatsProcs.ipf
New procedure file: PopupWaveSelector.ipf uses WaveSelectorWidget.ipf to provide a popup control panel containing a WaveSelectorWidget. See the procedure file comments for documentation.
Updated Technical Notes
TN003, which explains how to read and write an Igor binary wave file, and PTN003, which explains how to read and write an Igor packed experiment file, have been modified to include information about the platform (Macintosh or Windows) that wrote the file. Currently the only significance of this information is that Igor uses it to translate from the Macintosh character set to the Windows character set or vice versa if the file contains "high-ASCII" characters (characters other than digits, letters or common punctuation symbols). If you have written software based on TN003 or PTN003 and you care about high-ASCII characters, see the updated versions of those tech notes.
XOP Toolkit Improvements
The XOP Toolkit now supports development of "universal" XOPs for Mac OS X - that is, XOPs that run on Intel Macintosh as well as PowerPC Macintosh.
When looking for help for an external operation or function, Igor looks for the XOP help file in the same directory as the XOP. Now, if the XOP help file is not found there, Igor will also check open help files to see if there is help for the external operation or function.
New Operations And Functions
New Programming Operations And Functions
CtrlNamedBackground
DebuggerOptions
GetIndependentModuleName
GetLastUserMenuInfo
HideIgorMenus
ShowIgorMenus
ThreadProcessorCount
ThreadGroupCreate
ThreadGroupPutDF
ThreadGroupGetDF
ThreadGroupWait
ThreadReturnValue
ThreadGroupRelease
WAVEClear
RatioFromNumber
New File-related Operations and Functions
Grep
New Analysis Operations And Functions
FilterFIR
FilterIIR
Loess
WindowFunction
Resample
erf and erfc now support complex arguments.
New Notebook Operations And Functions
NotebookAction
SpecialCharacterList
SpecialCharacterInfo
New Miscellaneous Operations And Functions
ChebyshevU returns the Chebyshev polynomial of the second kind.
beta returns the beta function.
cequal compares two complex numbers.
Gauss1D(wave, x) and Gauss2D(wave, x, y) functions to complement the built-in fit functions gauss and Gauss2D.
LinearFeedbackShiftRegister generates bit streams with custom feedback taps, with defaults that generate maximal-length sequences. The resulting bit patterns are poor random sequences but have useful spectral properties.
Hypergeometric functions are implemented in HyperG0F1, HyperG1F1, HyperG2F1 and HyperGPFQ.
SphericalHarmonics is now implemented as a built-in function.
DrawAction
ToolsGrid
Changed Operations and Functions
The AnnotationInfo function returns the text of the annotation as escaped text. That is, backslashes are returned as \\ and quotes are returned as \". This is useful if you subsequently use the text in an Execute statement or print the output to the history. However, if you programmatically use the returned text in a subsequent Textbox command, it does the wrong thing. AnnotationInfo now has a third parameter that allows you to turn the escaping of the text off.
Added a /E flag to the Save operation. Using /E=0 when saving as general or delimited text suppresses the use of escape codes when writing text containing carriage return, linefeed, tab or backslash characters.
The CsrInfo string function now includes a RECREATION key containing the Cursor command to regenerate the current settings. The command includes the /W flag.
StringByKey and related string functions can be made case-sensitive through an optional matchCase parameter.
The ReplaceString function now works much faster when doing a large number of replacements.
New keyword in AxisInfo: SETAXISCMD provides the full SetAxis command.
The DisplayProcedure command's function or macro name is optional, allowing one to simply display (show and bring to front) the named procedure window.
Added LOCK keyword to WaveInfo to read back the wave's SetWaveLock value.
Last updated: Saturday, July 7, 2007 | http://www.wavemetrics.com/products/igorpro/newfeatures/whatsnew6details.htm | crawl-001 | refinedweb | 7,747 | 55.64 |
Expedia xml api wordpress
I need some help to fix my script for connecting to the OPC server, reading a data sample and writing another one.
We are interested in creating a website for a HOTEL that integrates several functions. 1) An OTA (similar to Booking or Expedia), hotel only; no rental cars, excursions or flights. 2) A CHANNEL MANAGER; it is better if the channel manager also includes revenue management, hotel management software, a PMS and a search engine (similar to WuBook, Octorate, RateGain, etc.). 3) A comparator of...
Hello, I need someone who can use MIME redirection, or a similar technique, so that it appears the user's device has accessed the raw link, and who can scrape uploaded links from movie and TV-show websites... such as Openload, Streamango, RapidVideo and all other iframes, so that they can be implemented into a hybrid application. It needs to work as an API.
...project that accesses remote API calls that in turn generate XML. This XML needs to be written into a series of ColdFusion queries that can generate data for a website. Formatting of this data from the final queries will be done here. All you have to do is access the API and generate the SOAP data into XML (sample API calls are available to help each
Experienced Android developer needed; around 20 screens to be designed in XML.
Import XML data into Tally; Tally developer needed.
Need to convert 2 PSD/SVG to XML | https://www.freelancer.com/work/expedia-xml-api-wordpress/ | CC-MAIN-2018-22 | refinedweb | 237 | 61.56 |
How do I wait for some time while writing to a file?
I am appending some text to a log file, and it gives the error "Some other process is using this file".
How do I wait until it is free?
What logging framework are you using that's giving you these problems?
Use log4net or NLog or some other framework for your logging.
What logging framework are you using that's giving you these problems?
Here, how i am writing log to a file.
Its giving error only for some time. It may or may not give error as "Some other process using this file".
How to rectify this. Is it possible to wait untill other process release this file.
public void WriteLog(LogType type, LogStatus status, string message, params object[] args)
{
    DateTime now = DateTime.Now;
    string path = GetPath(type, now);
    string appname;
    string usrname;

    if (AppService.Provider.CurrentAppType == null)
        appname = "-";
    else
        appname = AppService.Provider.CurrentAppType.ToString();

    if (UserService.Provider.CurrentUser == null)
        usrname = "-";
    else
        usrname = UserService.Provider.CurrentUser.Name;

    appname = (appname + new string(' ', 10)).Substring(0, 10);
    usrname = (usrname + new string(' ', 16)).Substring(0, 16);

    string line = now.ToString("HH:mm:ss.fff", CultureInfo.InvariantCulture) + " "
        + status.ToString().Substring(0, 1) + " "
        + appname + " " + usrname + " "
        + string.Format(CultureInfo.InvariantCulture, message, args)
        + Environment.NewLine;

    lock (_lck)
    {
        File.AppendAllText(path, line);
#if DEBUG
        Console.Write(line);
#endif
    }
}
Use log4net or NLog or some other framework for your logging.
I don't know how to use it. Can you tell me how?
Use a while loop around the code that opens and writes to the file.
Suggestion: put that code into another thread, so that if it can't write out yet, it just sleeps for one second and tries again. Something like this could do a dirty trick:
bool writtenOut = false;
while (!writtenOut)
{
    try
    {
        // Try to open and append; this throws if another
        // process still holds the file.
        File.AppendAllText(path, line);
        writtenOut = true;
    }
    catch (IOException)
    {
        Thread.Sleep(1000);   // file is busy; wait a second and retry
    }
}
So, as others have said: open it and don't close it until you're done with it.
I'd not recommend this in practice. If you open it and just hold on to it, what other process is now failing because you're holding onto resources? How about doing some research to find out what other process is using the file? If it's your own code doing it, then there's probably a bug in there somewhere of not releasing it properly that needs to be corrected. I'd look at something like HandleEx from sysinternals to determine what is holding onto the file.
-Nel. | https://www.daniweb.com/programming/software-development/threads/170182/wait-while-file-write-in-c | CC-MAIN-2022-33 | refinedweb | 444 | 68.97 |
NAME
Mail [v14.9.9] — send and receive Internet mail
SYNOPSIS
DESCRIPTION
Compatibility note: in S-nail (Mail), -d or -v enables obsoletion warnings. Mail divides incoming mail into its constituent messages and allows the user to deal with them in any order. It offers many COMMANDS and INTERNAL VARIABLES for manipulating messages and sending mail. It provides the user simple editing capabilities to ease the composition of outgoing messages, and increasingly powerful and reliable non-interactive scripting capabilities.
Options
- -: spec
- Explicitly control which of the Resource files shall be sourced (loaded): if the letter ‘s’ is (case-insensitively) part of the spec then the system wide mail.rc is sourced; likewise the letter ‘u’ controls sourcing of the user's personal ~/.mailrc file, whereas the letters ‘-’ and ‘/’ explicitly forbid sourcing of any resource files. Scripts should use this option: to avoid environmental noise they should “detach” from any configuration and create a script-specific environment, setting any of the desired INTERNAL VARIABLES via -S and running configuring commands via -X. This option overrides -n.
- -A account
- Executes an account command for the given user email account after program startup is complete (all resource files are loaded, any -S setting is being established; only -X commands have not been evaluated yet). Being a special incarnation of defined macros for the purpose of bundling longer-lived settings, activating such an email account also switches to the accounts primary system mailbox (most likely the inbox).
- -a.
- -B
- ([Obsolete]: Mail will always use line-buffered output, to gain line-buffered input even in batch mode enable batch mode via -#.)
- -b addr
- Send a blind carbon copy to address, if the setting of expandaddr, one of the INTERNAL VARIABLES, allows. The option may be used multiple times. Also see the section On sending mail, and non-interactive mode.
- -C .
- -c addr
- Send carbon copies to the given receiver, if so allowed by expandaddr. May be used multiple times.
- -d
- set the internal variable debug which enables debug messages and disables message delivery, among others; effectively turns almost any operation into a dry-run.
- -E
- set skipemptybody and thus discard messages with an empty message part body. This command line option is [Obsolete].
- -e
-.
- -F
- Save the message to send in a file named after the local part of the first recipient's address (instead of in record).
- -f
- Read in the contents of the user's secondary mailbox MBOX (or the specified file) for processing; when Mail is quit, it writes undeleted messages back to this file (but be aware of the hold option). The optional file argument will undergo some special Filename transformations (also see file).
- -H
- Display a summary of headers and exit; a configurable summary view is available via the -L option.
- -h
- Show a short usage summary.
- -i
- set ignore to ignore tty interrupt signals.
- -L spec
- Display a summary of headers of all messages that match the given spec, then exit. See the section Specifying messages for the format of spec. If the -e option has been given in addition, no header summary is produced, but Mail will instead indicate via its exit status whether spec matched any messages (‘0’) or not (‘1’); note that any verbose output is suppressed in this mode and must instead be enabled explicitly (e.g., by using the option -v).
- -M type
- Special send mode that will flag standard input with the MIME ‘Content-Type:’ set to the given type and use it as the main message body. [v15 behaviour may differ] Using this option will bypass processing of message-inject-head and message-inject-tail. Also see -q, -m, -t.
- -m file
- Special send mode that will MIME classify the specified file and use it as the main message body. [v15 behaviour may differ] Using this option will bypass processing of message-inject-head and message-inject-tail. Also see -q, -M, -t.
- -N
- Inhibit the initial display of message headers when reading mail or editing a mailbox folder by calling unset for the internal variable header.
- -n
- Standard flag that inhibits reading the system wide mail.rc upon startup. The option -: allows more control over the startup sequence; also see Resource files.
- -q file
- Special send mode that will initialize the message body with the contents of the specified file, which may be standard input ‘-’ only in non-interactive context. Also see -M, -m, -t.
- -R
- Any mailbox folder opened will be in read-only mode.
- -r from-addr
- The given from-addr will be used as the envelope sender address, and the command line option -f from-addr will be passed to a file-based mta whenever a message is sent. Shall from-addr include a user name, the address components will be separated and the name part will be passed to a file-based mta individually via -F name. If an empty string is passed as from-addr then the content of the variable from (or, if that contains multiple addresses, sender) will be evaluated and used for this purpose whenever the file-based mta is contacted. By default, without -r that is, neither -f nor -F command line options are used when contacting a file-based MTA, unless this automatic deduction is enforced by setting the respective internal variable.
- -S var[=value]
- Sets the internal variable var to value, or, if no value is given, simply sets it. Settings established via -S cannot be changed from within Resource files or an account switch initiated by -A. They will become mutable again before commands registered via -X are executed.
- -s subject
- Specify the subject of the message to be sent. Newline (NL) and carriage-return (CR) bytes are invalid and will be normalized to space (SP) characters.
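The byte normalization described for -s can be mimicked with standard tools; the following is an illustrative equivalent, not Mail's actual implementation:

```shell
#!/bin/sh
# Sketch: NL and CR bytes in a subject collapse to space characters,
# the same effect as filtering the string through tr(1):
printf 'Hello\nworld\r!' | tr '\r\n' '  '   # -> Hello world !
echo
```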
- -t
- The message given (on standard input) is expected to contain, separated from the message body by an empty line, a message header with ‘To:’, ‘Cc:’, or ‘Bcc:’ fields giving its recipients, which will be added to any recipients specified on the command line. If a message subject is specified via ‘Subject:’ then it will be used in favour of one given on the command line. Further well-known header fields are also understood.
- -u user
- Initially read the primary system mailbox of user, appropriate privileges presumed; effectively identical to ‘-f %user’.
- -V
- Show Mail's version and exit. The command version will also show the list of features: ‘$ mail -Xversion -Xx’.
- -v
- Setting the internal variable verbose enables display of some informational context messages. Using it twice increases the level of verbosity.
- -X cmd
- Add the given cmd to a list of commands which will be evaluated before normal operation starts; correlates with -# and errexit.
- -~
- Enable COMMAND ESCAPES in compose mode even in non-interactive use cases. This can be used to, e.g., automatically format the composed message text before sending the message:
$ ( echo 'line one. Word. Word2.';\ echo '~| /usr/bin/fmt -tuw66' ) |\ LC_ALL=C mail -d~:/ -Sttycharset=utf-8 bob@exam.ple
- -#
- Enables batch mode: standard input is made line buffered, the complete set of (interactive) commands is available, and several variable settings are adjusted for batch necessities, among them the variables MAIL, MBOX and inbox, which are pointed to /dev/null. The following prepares an email message in a batched dry run:
$ LC_ALL=C printf 'm bob\n~s ubject\nText\n~.\nx\n' |\ LC_ALL=C mail -d#:/ -X'alias bob bob@exam.ple'
- -.
- This flag forces termination of option processing in order to prevent “option injection” (attacks). It also forcefully puts Mail into send mode, see On sending mail, and non-interactive mode.
- -- arguments
- Any arguments given after a ‘--’ separator will be passed through to a file-based mta (Mail-Transfer-Agent) and persist for the entire session. expandargv constraints do not apply to the content of these mta-arguments.
A starter
Mail is a direct descendant of BSD Mail, itself a successor of the Research UNIX mail program; given a suitably configured mta the system side is not a mandatory precondition for mail delivery. Because Mail strives for compliance with POSIX mailx(1) it is likely that some configuration settings have to be adjusted before using it is a smooth experience. (Rather complete configuration examples can be found in the section EXAMPLES.) The default global mail.rc, one of the Resource files, bends those standard imposed settings of the INTERNAL VARIABLES a bit towards more user friendliness and safety already. For example, it sets hold and keepsave in order to suppress the automatic moving of messages to the secondary mailbox MBOX that would otherwise occur (see Message states), and it enables sendwait in order to synchronize Mail with the exit status report of the used mta when sending mails. It sets emptystart to enter interactive startup even if the initial mailbox is empty; by default messages are handed over to a local sendmail(1)-compatible mta:
$ mail -s ubject -a ttach.txt bill@exam.ple
# But... try it in an isolated dry-run mode (-d) first
$ LC_ALL=C mail -d -:/ -Ssendwait -Sttycharset=utf8 \
    -b bcc@exam.ple -c cc@exam.ple \
    -Sfullnames -. \
    '(Lovely) Bob <bob@exam.ple>' eric@exam.ple
# With SMTP
$ LC_ALL=C mail -d -:/ -Sv15-compat -Ssendwait -Sttycharset=utf8 \
    -S mta=smtps://mylogin@exam.ple:465 -Ssmtp-auth=none \
    -S from=scriptreply@exam.ple \
    -a /etc/mail.rc \
    -. eric@exam.ple
In compose mode, COMMAND ESCAPES introduced by the escape character tilde ‘~’ are available (custom headers may be created with -C and customhdr). ~? gives an overview of most other available command escapes. The command escape ~. will leave compose mode and send the message once it is completed. Alternatively typing ‘control-D’ (‘^D’) at the beginning of an empty line has the same effect, whereas typing ‘control-C’ (‘^C’) twice will abort the current letter (saving its contents in the file denoted by DEAD unless nosave is set). A number of ENVIRONMENT and INTERNAL VARIABLES can be used to alter default behavior. E.g., messages are sent asynchronously, without supervision, unless the internal variable sendwait is set, therefore send errors will not be recognizable until then. By setting (also via -S) debug (-d), sandboxed dry-run tests will prove correctness. Message recipients (as specified on the command line or defined in ‘
To:’, ‘Cc:’ or ‘Bcc:’) may not only be email addresses, but, if the variable expandaddr allows, also file targets and command pipes:
$ echo bla | mail -Sexpandaddr -s test ./mbox.mbox
$ echo bla | mail -Sexpandaddr -s test '|cat >> ./mbox.mbox'
$ echo safe | LC_ALL=C \
    mail -:/ -Sv15-compat -Ssendwait -Sttycharset=utf8 \
    -Sexpandaddr=fail,-all,+addr,failinvaddr -s test \
    -. bob@exam.ple
Personal aliases can be defined so that, e.g., one can send mail to ‘cohorts’ and have it go to a group of people. These aliases have nothing in common with the system wide aliases that may be used by the MTA, which are subject to the ‘name’ constraint of expandaddr and are often tracked in a file /etc/aliases (and documented in aliases(5) and sendmail(1)). Personal aliases will be expanded by Mail before the message is sent, and are thus a convenient alternative to specifying each addressee by itself; they correlate with the active set of alternates and are subject to metoo filtering.
alias cohorts bill jkf mark kridle@ucbcory ~/mail/cohorts.mbox
Mail supports ‘USER@HOST’ and ‘HOST’ context-dependent variable variants: for example addressing ‘File pop3://yaa@exam.ple’ would find pop3-no-apop-yaa@exam.ple, pop3-no-apop-exam.ple and pop3-no-apop in that order. See On URL syntax and credential lookup and INTERNAL VARIABLES. To avoid environmental noise scripts should “detach” Mail from any configuration files and create a script-local environment, ideally with the command line options -: to disable any configuration file in conjunction with repetitions of -S to specify variables:
$ env LC_ALL=C mail -:/ \ -Sv15-compat -Ssendwait -Sttycharset=utf-8 \
This runs in the standard locale LC_ALL “C”, but will nonetheless take and send UTF-8 in the message text by using ttycharset. In interactive mode, which is introduced in the next section, messages can be sent by calling the mail command with a list of recipient addresses:
$ mail -d -Squiet -Semptystart "/var/spool/mail/user": 0 messages ? mail "Recipient 1 <rec1@exam.ple>", rec2@exam.ple ... ? # Will do the right thing (tm) ? m rec1@exam.ple rec2@exam.ple
On reading mail, and interactive mode
When invoked without addressees Mail enters interactive mode in which mails may be read (an overview of commands is available via help; messages are displayed with type a.k.a. print). Here the variable crt controls whether and when Mail will use the configured PAGER for display instead of writing to the terminal directly. The single-character commands ‘-’ and ‘+’ display the preceding and the next message, respectively. The command search (a more substantial alias for from) will display a header summary of the given message specification list instead of their content, e.g., the following will search for subjects:
? from '@Some subject to search for'
In the default setup all header fields of a message will be typed, but fields can be white- or blacklisted for a variety of applications by using the command headerpick, e.g., to restrict their display to a very restricted set for type: ‘
headerpick type retain from to cc subject’. In order to display all header fields of a message regardless of currently active ignore or retain lists, use the commands Type and Top; Show will show the raw message content. If a mailbox was opened via -f (or file) with a name explicitly prefixed with the special ‘%:’ modifier (propagating the mailbox to a primary system mailbox), then messages which have been read will be automatically moved to a secondary mailbox, the user's MBOX file, when the mailbox is left, either by changing the active mailbox or by quitting Mail (also see Message states). After examining a message the user can reply ‘
r’ to the sender and all recipients (which will also be placed in ‘
To:’ unless recipients-in-cc is set) or Reply ‘
R’ exclusively to the sender(s). To end a mail processing session one may either issue quit ‘q’ to cause a full program exit, which possibly includes automatic moving of read messages to the secondary mailbox MBOX as well as updating the [Option]al (see features) line editor history-file, or use the command exit ‘x’ instead in order to prevent any of these actions.
HTML mail and MIME attachments
Messages which are HTML-only become more and more common, and many messages come bundled with a bouquet of MIME (Multipurpose Internet Mail Extensions) parts for, e.g., attachments. To get a notion of MIME types Mail consults mime.types files; setting mime-counter-evidence will make Mail verify a given type assertion and possibly provide an alternative MIME type. For example, to teach Mail about MathML documents and make it display them as plain text, and to display PDF parts with an external viewer via a temporary file:
? mimetype @ application/mathml+xml mathml
? wysh set pipe-application/pdf='@&=@ \
    trap "rm -f \"${MAILX_FILENAME_TEMPORARY}\"" EXIT;\
    trap "trap \"\" INT QUIT TERM; exit 1" INT QUIT TERM;\
    mupdf "${MAILX_FILENAME_TEMPORARY}"'
Mailing lists
Mail offers some support to ease the handling of mailing lists; for known and subscribed lists the “magical” ‘
Mail-Followup-To:’ header is honoured when the message is being replied to (via reply and Lreply) and followup-to controls whether this header is created when sending mails; it will be created automatically for a couple of reasons, too, like when the special “mailing list specific” respond command Lreply is used, when reply is used to respond to a message with its ‘
Mail-Followup-To:’ being honoured etc. A difference in between the handling of known and subscribed lists is that the address of the sender is usually not part of a generated ‘
Mail-Followup-To:’ when addressing the latter, whereas it is for the former kind of lists. Usually because there are exceptions: say, if multiple lists are addressed and not all of them are subscribed lists. For convenience Mail will, temporarily, automatically add a list address that is presented in the ‘
List-Post:’ header of a message that is being responded to to the list of known mailing lists. Shall that header have existed Mail will instead, dependent on the variable reply-to-honour, use an also set ‘
Reply-To:’ for this purpose in order to accept a list administrators' wish that is supposed to have been manifested like that (but only if it provides a single address which resides on the same domain as what is stated in ‘
List-Post:’).
Signed and encrypted messages with S/MIME
Set smime-ca-no-defaults to avoid using the default certificates, and point smime-ca-file and/or smime-ca-dir to a trusted pool of certificates. For example:
? set smime-sign-cert=ME@HERE.com.paired \
    smime-sign-message-digest=SHA256 \
    smime-sign
On URL syntax and credential lookup
[v15-compat] For accessing protocol-specific resources the usage of Uniform Resource Locators (URL, RFC 1738) has become omnipresent. If a ‘USER’ and ‘PASSWORD’ are specified as part of a URL they must be given in URL percent encoded form (RFC 3986; the command urlcodec may be helpful):
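For illustration, RFC 3986 percent encoding can be sketched in portable shell; within Mail itself the urlcodec command provides a comparable encoding, and the helper function below is hypothetical, not part of Mail:

```shell
#!/bin/sh
# Illustrative RFC 3986 percent encoder (ASCII input assumed): unreserved
# characters pass through, everything else becomes %XX.
url_encode() {
    s=$1 out=
    while [ -n "$s" ]; do
        c=${s%"${s#?}"}; s=${s#?}                   # split off first character
        case $c in
            [A-Za-z0-9._~-]) out=$out$c ;;          # unreserved: keep as-is
            *) out=$out$(printf '%%%02X' "'$c") ;;  # otherwise: %XX form
        esac
    done
    printf '%s\n' "$out"
}

url_encode 'my:sec ret'   # -> my%3Asec%20ret
```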
PROTOCOL://[USER[:PASSWORD]@]server[:port][/path]
Note that many of Mail's internal variables exist in multiple versions, called variable chains for the rest of this document: the plain ‘variable’ as well as ‘variable-HOST’ and ‘variable-USER@HOST’. Here ‘HOST’ indeed means ‘server:port’ if a ‘port’ had been specified in the respective URL, otherwise it refers to the plain ‘server’. Also, ‘USER’ is not truly the ‘USER’ that had been found when doing the user chain lookup as is described below, i.e., this ‘USER’ will never be in URL percent encoded form, whether it came from an URL or not; i.e., variable chain name extensions of INTERNAL VARIABLES must not be URL percent encoded. For example, whether a hypothetical URL ‘smtp://hey%3Ayou@our.house’ had been given that includes a user, or whether the URL was ‘smtp://our.house’ and the user had been found differently, to look up the variable chain smtp-use-starttls Mail first checks whether ‘smtp-use-starttls-hey:you@our.house’ is defined, then whether ‘smtp-use-starttls-our.house’ exists, before finally ending up looking at the plain variable itself. Mail obeys the following logic scheme when dealing with the necessary credential information of an account:
- If no ‘
USER’ has been given in the URL the variables user-HOST and user are looked up; if no such variable(s) can be found then Mail will, when enforced by the [Option]al variables netrc-lookup-HOST or netrc-lookup, search the users .netrc file for a ‘
HOST’ specific entry which provides a ‘
login’ name: this lookup will only succeed if unambiguous (one possible matching entry for ‘
HOST’). It is possible to load encrypted .netrc files via netrc-pipe. If there is still no ‘
USER’ then Mail will fall back to the user who is supposed to run Mail, the identity of which has been fixated during Mail startup and is known to be a valid user on the current host.
- Authentication: unless otherwise noted this will lookup the PROTOCOL-auth-USER@HOST, PROTOCOL-auth-HOST, PROTOCOL-auth variable chain, falling back to a protocol-specific default should this have no success.
- If no ‘PASSWORD’ has been given in the URL, it is looked up dependent on the ‘USER’ and ‘HOST’ in question (e.g., via the corresponding variable chain or a .netrc entry); as a last resort Mail will ask for a password on the user's terminal.
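The chain lookup order described above can be sketched in shell. This is an illustrative model only, not Mail's implementation: shell identifiers cannot contain ‘@’ or ‘.’, so the chain keys are flattened with underscores here.

```shell
#!/bin/sh
# Illustrative sketch of the documented lookup order:
# variable-USER@HOST, then variable-HOST, then the plain variable.
chain_lookup() {
    base=$1 user=$2 host=$3
    for key in "${base}_${user}_${host}" "${base}_${host}" "$base"; do
        eval "val=\${$key-}"            # read the candidate variable, if set
        if [ -n "$val" ]; then
            printf '%s\n' "$val"
            return 0
        fi
    done
    return 1                            # nothing in the chain was set
}

# Only the HOST-specific variant is set, so it wins over the plain variable:
smtp_use_starttls_our_house=yes
chain_lookup smtp_use_starttls hey our_house   # prints "yes"
```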
From:’ (or ‘
Sender:’) header field(s), which means that the values of smime-sign, smime-sign-cert, smime-sign-include-certs and smime-sign-message-digest will not be looked up using the ‘
USER’ and ‘
HOST’ chains from above but instead use the corresponding values from the message that is being worked
Encrypted network communication
[Option] Encrypted network communication is supported; for POP3 the command ‘STLS’, which upgrades a cleartext connection, will be used if the variable (chain) pop3-use-starttls is set.
Character sets
[Option] The variable ttycharset makes it possible to use a “faked” locale environment, an option which can be used to generate and send, e.g., 8-bit UTF-8 input data in a pure 7-bit US-ASCII ‘
LC_ALL=C’ environment (an example of this can be found in the section On sending mail, and non-interactive mode). Changing the value does not mean much beside that, because several aspects of the real character set are implied by the locale environment of the system, which stays unaffected by ttycharset. Messages and attachments which consist of 7-bit clean data will be classified as consisting of charset-7bit character data. This is a problem if the ttycharset character set is a multibyte character set that is also 7-bit clean. For example, the Japanese character set ISO-2022-JP is 7-bit clean! In order to classify such data correctly, the variable charset-7bit must be set to ISO-2022-JP. (Today a better approach regarding email is the usage of UTF-8, which uses 8-bit bytes for non-US-ASCII data.) If the [Option]al character set conversion capabilities are not available (features does not include the term ‘
+iconv’), then ttycharset will be the only supported character set, it is simply assumed that it can be used to exchange 8-bit messages (over the wire an intermediate, configurable mime-encoding may be applied), and the rest of this section does not apply; it may however still be necessary to explicitly set it if automatic detection fails, since in that case it defaults to LATIN1 used in outgoing messages can be declared using the sendcharsets variable, Mail work even more closely related to the current locale setting automatically by using the variable sendcharsets-else-ttycharset, optionally be saved in
DEAD.TYPElocale and/or the variable ttycharset. The best results are usually achieved when Mail is run in a UTF-8 locale on Mail, but not if the special command exit is used) – however, because this may be irritating to users which are used to “more modern” mail-user-agents, the default global mail.rc, next, pipe, Print, print, top, Type, type, undelete. The commands dp and dt will always try to automatically “step” and type the “next” logical message, and may thus mark multiple messages as read, the delete command will do so if the internal variable autoprint is set. Except when the exit command can be used to access such messages.
- ‘
preserved’
- The message has been processed by a preserve command and it will be retained in its current location.
- ‘
saved’
- The message has been processed by one of the following commands: save or write. Unless when the exit command is used, messages that are in a primary system mailbox and are in ‘
saved’ state when the mailbox is left will be deleted; they will be saved in the secondary mailbox
MBOXwhen the internal variable keepsave is set.
- answered
- Mark messages as having been answered.
- draft
- Mark messages as being a draft.
- flag
- Mark messages which need special attention.
Specifying messages
Commands which take Message list arguments, such as from a.k.a. search, type and delete, can be given a list of message numbers as arguments to apply to a number of messages at once; thus ‘delete 1 2’ deletes messages 1 and 2. The following special message names exist:
- .
- The current message, the so-called “dot”.
- ;
- The message that was previously the current message.
- ,
- The parent message of the current message, that is the message with the Message-ID given in the ‘In-Reply-To:’ field or the last entry of the ‘References:’ field of the current message.
- `
- (Backquote) The message arguments of the previous command.
-
@’ search expression.
- The search value will be interpreted as an extended regular expression if any of the “magical” regular expression characters ‘^[]*+?|$’ is seen (see re_format(7)). If the optional @name-list part is missing the search is restricted to the subject field body, but otherwise name-list specifies a comma-separated list of header fields to search, e.g.:
'@to,from,cc@Someone i ought to know'
In order to search for a string that includes a ‘@’ (commercial at) character the name-list is effectively non-optional, but may be given as the empty string. Also, specifying an empty search expression will effectively test for existence of the given header fields. Some special header fields may be abbreviated. The special names ‘
header’ or ‘<’ can be used to search in (all of) the header(s) of the message, and the special names ‘body’ or ‘>’ and ‘text’ or ‘=’ will perform full text searches – whereas the former searches only the body, the latter also searches the message header ([v15 behaviour may differ] this mode yet brute force searches over the entire decoded content of messages, including administrativa strings). An address match against a specific domain may look like:
'@@a\.safe\.domain\.match$'
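Since any of the characters ^[]*+?|$ turns the search value into an extended regular expression, such a pattern behaves like an ERE as used by grep -E. Shown here with a hypothetical address list, outside of Mail:

```shell
#!/bin/sh
# The domain-anchored pattern applied with grep -E, which implements the
# same extended regular expression syntax (re_format(7)); -c counts matches.
printf '%s\n' 'user@a.safe.domain.match' 'user@not.safe' |
    grep -Ec '@a\.safe\.domain\.match$'
# -> 1 (only the first address matches)
```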
- :c
- All messages of state or with matching condition ‘
c’, where ‘
c’ is one or multiple of the following colon modifiers:
- a
- answered messages (cf. the variable markanswered).
- d
- ‘
deleted’ messages (for the undelete and from commands, e.g., ‘
28-Dec-2012’.
- specification cannot be used as part of another criterion. If the previous command line contained more than one independent criterion then the last of those criteria is used.
On terminal control and line editor
[Option] Terminal control will be realized through one of the standard UNIX libraries, either the Termcap Access Library (libtermcap, -ltermcap) or, alternatively, the Terminal Information Library (libterminfo, -lterminfo). Prevent usage of a line editor in interactive mode by setting the internal variable line-editor-disable. Especially if the [Option]al terminal control support is missing, setting entries in the internal variable termcap may be helpful. The following key bindings are available in the built-in line editor:
- ‘\cD’
- Forward delete the character under the cursor; this will quit Mail if used on the empty line unless the internal variable ignoreeof is set (mle-del-fwd).
- ‘
\cE’
- Go to the end of the line (mle-go-end).
- ‘
\cF’
- Move the cursor forward one character (mle-go-fwd).
- ‘
\cG’
- Cancel current operation, full reset. If there is an active history search or tabulator expansion then this command will first reset that, reverting to the former line content; thus a second reset is needed for a full reset in this case (mle-reset).
- ‘
\cH’
- Backspace: backward delete one character (mle-del-bwd).
- ‘
\cI’
- [Only new quoting rules] Horizontal tabulator: try to expand the word before the cursor, supporting the usual Filename transformations (mle-complete; this is affected by mle-quote-rndtrip).
- ‘
\cJ’
- Newline: commit the current line (mle-commit).
- ‘
\cK’
- Cut all characters from the cursor to the end of the line (mle-snarf-end).
- ‘
\cL’
- Repaint the line (mle-repaint).
- ‘
\cN’
- [Option] Go to the next history entry (mle-hist-fwd).
- ‘
\cO’
- ([Option]ally context-dependent) Invokes the command dt.
- ‘
\cP’
- [Option] Go to the previous history entry (mle-hist-bwd).
- ‘
\cQ’
- Toggle roundtrip mode shell quotes, where produced, on and off (mle-quote-rndtrip). This setting is temporary, and will be forgotten once the command line is committed; also see shcodec.
- ‘
\cR’
- [Option] Complete the current line from (the remaining) older history entries (mle-hist-srch-bwd).
- ‘
\cS’
- [Option] Complete the current line from (the remaining) newer history entries (mle-hist-srch-fwd).
- ‘
\cT’
- Paste the snarf buffer (mle-paste).
- ‘
\cU’
- The same as ‘
\cA’ followed by ‘
\cK’ (mle-snarf-line).
- ‘
special-treated and cannot be part of any other sequence, because any occurrence will perform the mle-prompt-char function immediately.
- ‘
\cW’
- Cut the characters from the one preceding the cursor to the preceding word boundary (mle-snarf-word-bwd).
- ‘
\cX’
- Move the cursor forward one word boundary (mle-go-word-fwd).
- ‘
\cY’
- Move the cursor backward one word boundary (mle-go-word-bwd).
- it will first be consumed by the active sequence.
-] Mailand which can be fine-tuned by the user via the internal variable termcap. On top of what Mail. Mail
is-spam’ state can be prompted: the ‘
:s’ and ‘
:S’ message specifications will address respective messages and their attrlist entries will be used when displaying the headline in the header display.
- spamrate rates the given messages and sets their ‘
is-spam’ flag accordingly. If the spam interface offers spam scores those can also be displayed in the header display by including the ‘
%$’ format in the headline variable.
- spamham, spamspam and spamforget will interact with the spam interface, whereas spamclear and spamset will simply set and clear, respectively, the mentioned volatile ‘is-spam’ message flag, without any interface interaction.
COMMANDS
Whitespace characters as well as those defined by the variable ifs are removed from the beginning and end of a command line. Placing any whitespace characters at the beginning of a line will prevent a possible addition of the command line to the [Option]al history. [Option]ally the command help (or ?), when given an argument, will show a documentation string for the command matching the expanded argument, as in ‘
?t’, which should be a shorthand of ‘
?type’; with these documentation strings both commands support a more verbose listing mode which includes the argument type of the command and other information which applies; a handy suggestion might thus be:
? define __xv { # Before v15: need to enable sh(1)ell-style on _entire_ line! localopts 1;wysh set verbose;ignerr eval "${@}";return ${?} } ? commandalias xv '\call __xv' ? xv help set
Command modifiersCommands may be prefixed by one or multiple command modifiers. Some command modifiers can be used with a restricted set of commands only, the verbose version of list will ([Option]ally) show which modifiers apply.
- The modifier ignerr indicates that any error generated by the prefixed command shall be ignored.
- The modifier scope does not yet implement any functionality.
- The modifier u does not yet implement any functionality.
- Some commands support the vput modifier: it makes them store their result in a variable instead of displaying it.
- The modifier wysh, e.g., can be used for some old and established commands to choose the new Shell-style argument quoting rules over the traditional Old-style argument quoting.
Message list arguments
Some commands expect Message list arguments, which are explained in Specifying messages.
Shell-style argument quoting
Commands which don't expect message-list arguments use sh(1)ell-style, and therefore POSIX standardized, argument parsing and quoting rules. The metacharacters which need quoting to strip their special meaning are vertical bar |, ampersand &, semicolon ;, as well as all characters from the variable ifs, by default space, tabulator, newline. The additional metacharacters left and right parenthesis (, ) and less-than and greater-than signs <, > that the sh(1) supports are not used, and are treated as ordinary characters: for one these characters are a vivid part of email addresses, and it seems highly unlikely that their function will become meaningful to Mail.
Compatibility note: [v15 behaviour may differ] Please note that even many new-style commands do not yet honour ifs to parse their arguments: whereas the sh(1)ell is a language with syntactic elements of clearly defined semantics, Mail parses entire input lines and decides on a per-command basis what to do with the rest of the line. This also means that whenever an unknown command is seen all that Mail can do is cancel processing of the remains of the line. Variable expansion is supported (also for vput): INTERNAL VARIABLES as well as ENVIRONMENT (shell) variables can be accessed through this mechanism; brace enclosing the name is supported (i.e., to subdivide a token).
Quoting is a mechanism that will remove the special meaning of metacharacters and reserved words, and will prevent expansion. There are four quoting mechanisms: the escape character, single-quotes, double-quotes and dollar-single-quotes:
? echo one; wysh set verbose; echo verbose=$verbose.
- The escape character reverse solidus (\) preserves the literal value of the character that follows it. Arguments enclosed in 'single-quotes' retain their literal value; within "double-quotes" variable expansion remains active.
- Arguments enclosed in ‘
$'dollar-single-quotes'’ extend normal single quotes in that reverse solidus escape sequences are expanded as follows:
- ‘
\a’
- bell control character (ASCII and ISO-10646 BEL).
- ‘
\b’
- backspace control character (ASCII and ISO-10646 BS).
- ‘
\E’
- escape control character (ASCII and ISO-10646 ESC).
- ‘
\e’
- the same.
- ‘
\f’
- form feed control character (ASCII and ISO-10646 FF).
- ‘
\n’
- line feed control character (ASCII and ISO-10646 LF).
- ‘
\r’
- carriage return control character (ASCII and ISO-10646 CR).
- ‘
\t’
- horizontal tabulator control character (ASCII and ISO-10646 HT).
- ‘
\v’
- vertical tabulator control character (ASCII and ISO-10646 VT).
- ‘
\\’
- emits a reverse solidus character.
- ‘
\'’
- single quote.
- ‘
\"’
- double quote (escaping is optional).
- ‘
\NNN’
- eight-bit byte with the octal value ‘NNN’ (one to three octal digits).
- ‘
\UHHHHHHHH’
- the Unicode / ISO-10646 character with the given hexadecimal codepoint value (one to eight hexadecimal digits).
- ‘
\uHHHH’
- Identical to ‘
\UHHHHHHHH’ except it takes only one to four hexadecimal characters.
- ‘
\cX’
- Emits the non-printable (ASCII and compatible) C0 control codes 0 (NUL) to 31 (US), and 127 (DEL). Printable representations of ASCII control codes can be created by mapping them to a different part of the ASCII character set, which is possible by adding the number 64 for the codes 0 to 31, e.g., 7 (BEL) is, e.g., ‘
^G’, the reverse solidus notation has been standardized: ‘
\cG’. Some control codes also have standardized (ISO-10646, ISO C) aliases, as shown above (e.g., ‘\a’, ‘\n’, ‘\t’).
- ‘
\$NAME’
- Non-standard extension: expand the given variable name, as above. Brace enclosing the name is supported.
- ‘
\`{command}’
- Not yet supported, just to raise awareness: Non-standard extension.
? echo 'Quotes '${HOME}' and 'tokens" differ!"# no comment
? echo Quotes ${HOME} and tokens differ! # comment
? echo Don"'"t you worry$'\x21' The sun shines on us. $'\u263A'
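The ‘add 64’ caret mapping described above for ‘\cX’ can be verified with standard tools; this demonstrates plain ASCII arithmetic, not a Mail feature:

```shell
#!/bin/sh
# BEL is C0 control code 7; 7 + 64 = 71, the ASCII code of 'G',
# hence the notations ^G and \cG for the bell character.
printf '%d\n' "'G"                        # ASCII value of 'G' -> 71
printf '\a' | od -An -tu1 | tr -d ' \n'   # byte value of BEL  -> 7
echo
```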
Raw data arguments for codec commands
A special set of commands, which all have the string “codec” in their name, e.g., addrcodec, shcodec and urlcodec, take raw string data as their arguments.
Filename transformations
Filenames, where expected, and unless documented otherwise, are subsequently subject to the following filename transformations, in sequence:
- If the given name is a registered shortcut, it will be replaced with the expanded shortcut.
- The filename is matched against the following patterns or strings:
- #
- (Number sign) is expanded to the previous file.
- %
- (Percent sign) is replaced by the invoking user's primary system mailbox, which either is the (itself expandable) inbox if that is set, or the standardized absolute pathname indicated by the MAIL environment variable.
- ~
- A leading ‘~’ character will be replaced by the expansion of the user's HOME directory; ‘~user’ expands to the home directory of the given user.
- +
- Goes to the next message in sequence and types it (like “ENTER”).
- -
- Display the preceding message, or the n'th previous message if given a numeric argument n.
- =
- Show the current message number (the “dot”).
- ?
-
\’ in order to remove the special meaning; this might change interpretation of the entire argument from what has been desired, however! Specify one plus sign to remark that parenthesis shall be left alone, two for not turning double quotation marks into quoted-pairs, and three for also leaving any user-specified reverse solidus alone. The result will always be valid, if a successful exit status is reported ([v15 behaviour may differ] the current parser fails this assertion for some constructs). [v15 behaviour may differ] Addresses need to be specified in between angle brackets ‘
<’, ‘
>’ if the construct becomes more difficult, otherwise the current parser will fail; it is not smart enough to guess right.
? “any
*’, so that, e.g., ‘
un “keys” which form the binding, and any remaining arguments form the expansion. To indicate that a binding shall not be auto-committed, but that the expansion shall instead be furtherly editable by the user, a commercial at ‘
@’ (that will be removed) can be placed last in the expansion, from which leading and trailing whitespace will finally be removed. Reverse solidus cannot be used as the last character of expansion.
? bind default $'\cA',:khome,w 'echo An editable binding@'
? bind default a,b,c rm -irf / @ # Another editable binding
? bind default :kf1 File %
? bind compose :kf1 ~e
TERM or the given terminal type.
- cd
- Change the working directory to HOME or the given argument. Synonym for chdir.
- certsave
- [Option] Only applicable to S/MIME signed messages. Takes a message list and a filename and saves the certificates contained within the message signatures.
- chdir
- Change the working directory to HOME or the given argument. Synonym for cd.
- collapse, uncollapse
- Only applicable to threaded mode.
- colour
- [Option] Manage colour mappings for the Coloured display. The type argument is one of ‘256’ for 256-colour terminals, ‘
8’, ‘
ansi’ or ‘
iso’ for the standard 8-colour ANSI / ISO 6429 color palette and ‘
1’ or ‘
mono’ for monochrome terminals. Monochrome terminals cannot deal with colours, but only (some) font attributes. Without further arguments the list of all currently defined mappings for the given colour type is shown (as a special case giving ‘
all’ or ‘
*’ will show the mappings of all types). Otherwise the second argument defines the mappable slot, the third argument a (comma-separated list of) colour and font attribute specification(s), and the optional fourth argument a precondition; which preconditions are available depends on the mappable slot (see Coloured display for some examples).
Mappings prefixed with ‘view-’ are used when displaying messages.
- view-from_
- This mapping is used for so-called ‘
From_’ lines, which are MBOX file format specific header lines.
- view-header
- For header lines. A comma-separated list of headers to which the mapping applies may be given as a precondition; if the [Option]al regular expression support is available then if any of the “magical” (extended) regular expression characters is seen the precondition will be evaluated as (an extended) one.
- view-msginfo
- For the introductory message info line.
- view-partinfo
- For MIME part info lines.
- ft=
- a font attribute: ‘
bold’, ‘
reverse’ or ‘
underline’. It is possible (and often applicable) to specify multiple font attributes for a single mapping.
- fg=
- foreground colour attribute: ‘
black’, ‘
blue’, ‘
green’, ‘
red’, ‘
brown’, ‘
magenta’, ‘
cyan’ or ‘
white’. To specify a 256-color mode a decimal number colour specification in the range 0 to 255, inclusive, is supported, and interpreted as follows:
- 0 - 7
- the standard ISO 6429 colors, as above.
- 8 - 15
- high intensity variants of the standard colors.
- 16 - 231
- 216 colours, arranged as a 6 × 6 × 6 colour cube.
- 232 - 255
- 24 grey shades, from (almost) black to (almost) white.
*’ selects all) the given mapping; if the optional precondition argument is given only the exact tuple of mapping and precondition is removed. The special name ‘
*’ will remove all mappings (no precondition allowed), thus ‘
*’ will remove all existing aliases. When used without arguments the former shows a list of all currently known aliases, with one argument only the expansion of the given one. \, one of the Command modifiers.
? commandalias xx mail: `commandalias':
*’ will discard all existing macros. Deletion of (a) macro(s) can be performed from within running (a) macro(s), including self-deletion. Without arguments the former command prints the current list of macros, including their content, otherwise it prints the given macro(s), including the context they are used in (“as what”: normal macro, folder hook, hook, account switch).
- discard
- (di) Identical to ignore. Superseded by the multiplexer headerpick.
- dp, dt
- Delete the given messages and automatically type the thereafter resulting new “dot”, if one exists.
- environ
- Mail has a strict notion about which variables are INTERNAL VARIABLES and which are managed in the program ENVIRONMENT. Since some of the latter are a vivid part of Mails functioning, however, they are transparently integrated into the normal handling of internal variables via set and unset. To integrate other environment variables of choice into this transparent handling, and also to export internal variables into the process environment where they normally are not, a ‘
link’ needs to become established with this command, as in, e.g.
environ link PERL5LIB TZ
link’ed will be lost. Note that this implies that localopts may cause loss of such links. The command ‘
unlink’ will remove an existing link, but leaves the variables as such intact. Additionally the subcommands ‘
set’ and ‘
unset’ are provided, which work exactly the same as the documented commands set and unset, but (additionally un)link the variable(s) with the program environment and thus immediately export them to, or remove them from (if possible), respectively, the program environment.
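As a sketch of the remaining subcommands (the variable names are only illustrative):

```
? environ set TZ=UTC
? environ unset LC_TIME
? environ unlink PERL5LIB
```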
- errors
- [Option] Since Mail
protocol://’ prefixes are, i.e., URL syntax is understood, e.g., ‘
ma Mail will use the found hook to load and save data into and from a temporary file, respectively. Changing hooks will not affect already opened mailboxes. For example, the following creates hooks for the gzip(1) compression tool and a combined compressed and encrypted format:
[no v15-compat] protocol://[user@]host[:port][/path]
? filetype \ gzip 'gzip -dc' 'gzip -c' \ zst.pgp 'gpg -d | zstd -dc' 'zstd -19 -zc | gpg -e'
a’ a lock file ‘a.lock’ will be created. Possible dotlock creation errors can be caught by setting dotlock-ignore-error. The hooks cause Mail to load and save MBOX files from and to files with the registered file extensions; it will use an intermediate temporary file to store the plain data. The latter command removes the hooks for all given extensions; ‘*’ will remove all existing handlers. The load and save commands are run by the shell, so that a ‘!’ prefix to load and save commands may mean to bypass this shell instance: placing a leading space will avoid any possible misinterpretations.
? set record=+sent.zst.pgp
- flag, unflag
- Take message lists and mark the messages as being flagged, or not being flagged, respectively, for urgent/special attention. See the section Message states.
- folder
- (fold)
forward’ (via, e.g., type), ‘
save’ for selecting which headers shall be stored persistently when save, copy, move
*’) will establish a (fast) shorthand setting which covers all fields. The latter command always takes three or more arguments and can be used to remove selections, i.e., from the given context, the given type of list, all the given headers will be removed, the special argument ‘
*’ will remove all headers.
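For example, restricting the displayed (‘type’) headers to a minimal set, and later lifting the restriction again, might look as follows (a sketch; the header list is only illustrative):

```
? headerpick type retain from to cc subject date
? unheaderpick type retain *
```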
- headers
- (h) Show the current group of headers, the size of which depends on the variable screen, and the style of which can be adjusted with the variable headline.
r’eceive and ‘
s’end, all remaining conditions are non-portable extensions. [v15 behaviour may differ] These commands do not yet use Shell-style argument quoting and therefore do not know about input tokens, so that syntax elements have to be surrounded by whitespace; in v15 Mail will inspect all conditions bracket group wise and consider the tokens, representing values and operators, therein, which also means that variables will already have been expanded at that time (just like in the shell).
if receive commands ... else commands ... endif
t’erminal will evaluate to true if the standard input is a terminal, i.e., in interactive sessions. Another condition can be any boolean value (see the section INTERNAL VARIABLES for textual boolean representations) to mark an enwrapped block as “never execute” or “always execute”.
i’, which turns the comparison into a case-insensitive one: this is implied if no modifier follows the trigger. Available string operators are ‘
<’ (less than), ‘
<=’ (less than or equal to), ‘
==’ (equal), ‘
!=’ (not equal), ‘
>=’ (greater than or equal to), ‘
>’ (greater than), ‘
=%’ (is substring of) and ‘
!%’ (is not substring of). By default these operators work on bytes and (therefore) do not take into account character set specifics. If the case-insensitivity modifier has been used, case is ignored according to the rules of the US-ASCII encoding, i.e., bytes are still compared. When the [Option]al regular expression support is available, the additional string operators ‘
=~’ and ‘
!~’ can be used. They treat the right hand side as an extended regular expression that is matched according to the active locale (see Character sets), i.e., character sets should be honoured’
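The string operators can, e.g., be combined with the (read-only) features variable; a minimal sketch:

```
if "$features" =% +regex
  echo regular expression support is available
else
  echo no regular expression support
endif
```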
- command needs an active mailbox, a file.
- ‘
ok: batch/interactive’
- command may only be used in interactive or -# batch mode.
- ‘
ok: send mode’
- command can be used in send mode.
- ‘
not ok: compose mode’
- command is not available when in compose mode.
- ‘
not ok: startup’
- command cannot be used during program startup, e.g., while loading Resource files.
- ‘
ok: subprocess’
- command is allowed to be used when running in a subprocess instance, e.g., from within a macro that is called via on-compose-splice.
- ‘
gabby’
- The command produces history-gabby history entries.
- localopts
- This command can be used to localize changes to which sets some variables that are already covered by localizations, their scope will be extended, and in fact leaving the account will reply
-..
- Similar to mail, but saves the message in a file named after the local part of the first recipient's address (instead of in record).
- (m) Takes a (list of) recipient address(es) as (an) argument(s), or asks on standard input if none were given; then collects the remaining mail content and sends it out. Unless the internal variable fullnames is set recipient addresses will be stripped from comments, names etc..
-
*’ will discard all existing MIME types, just as will ‘
reset’, “magical” Mails Mail will try to load the file only once, use ‘
netrc clear’ to unlock further attempts.
- Page
- Like page, but also displays header fields which would not pass the headerpick selection, and all MIME parts. Identical to More.
- page
- Invokes the
PAGERon the given messages, even in non-interactive mode and as long as the standard output is a terminal. Identical to more.
- Pipe
- Like pipe but also pipes header fields which would not pass the headerpick selection, and all parts of MIME ‘
multipart/alternative’ messages.
- pipe
- (pi) Takes a message list and a shell command and pipes the messages through the command. Without an argument the current message is piped through the command given by the cmd variable.
- preserve
- (pre) Identical to hold.
$ LC_ALL=C printf 'echon "hey, "\nread a\necho $a' |\
    LC_ALL=C 6<<< 'you' mail -R#
hey, you
Subject:’ etc. flipr will exchange this command with reply. Unless the internal variable fullnames is set the recipient address will be stripped from comments, names etc. ‘
Reply.
- Resend
- Like resend, but does not add any header lines. This is not a way to hide the sender's identity, but useful for sending a message again to the same recipients.
- resend
- Takes a list of messages and a user name and sends each message to the named user. ‘
Resent-From:’ and related header fields are prepended to the new copy of the message.
- save
- (s) Takes a message list and a filename and appends each message in turn to the end of the file. Filename transformations will be applied. To filter the saved header fields to the desired subset use the ‘
save’ slot of the white- and blacklisting command headerpick.
- commandably. command, e
|’ then the argument will instead be interpreted as a shell command and Mail will read the output generated by it. Dependent on the settings of posix and errexit, and also dependent on whether the command modifier ignerr had been used, encountering errors will stop sourcing of the given input. [v15 behaviour may differ] Note that source cannot).
- Top
- Like top but always uses the headerpick ‘
type’ slot for white- and blacklisting header fields.
- top
- (to) Takes a message list and types out the first toplines lines of each message on the user's terminal. Unless a special selection has been established for the ‘
top’ slot of the headerpick command, the only header fields that are displayed are ‘
From:’, ‘
To:’, ‘
CC:’, and ‘
Subject:’. Top will always use the ‘
type’
multipart/alternative’ messages.
- type
- (t) Takes a message list and types out each message on the user's terminal.
~’, and will neither accept hyphen-minus ‘
-’ nor dot ‘
.’ as an initial character. The remains of the line form the URL data which is to be converted.
- version
- Shows version and feature information, as well as the build and running system environment. Supports vput (see Command modifiers).
- “soft” errors, e.g., when a search operation failed; e.g., ‘
16#AFFE’ is a different way of specifying a hexadecimal number. Unsigned interpretation of a number can be enforced by prefixing a ‘
u’ (case-insensitively). One integer is expected by assignment (equals sign ‘
=’), which does nothing but parsing the argument, thus detecting validity and possible overflow conditions, and unary not (tilde ‘
~’), which creates the bitwise complement. Two integers are used by addition (plus sign ‘
+’), subtraction (hyphen-minus ‘
-’), multiplication (asterisk ‘
*’), division (solidus ‘
/’) and modulo (percent sign ‘
%’), as well as for the bitwise operators logical or (vertical bar ‘
|’, to be quoted) , bitwise and (ampersand ‘
&’, to be quoted) , bitwise xor (circumflex ‘
^’), the bitwise signed left- and right shifts (‘
<<’, ‘
>>’), as well as for the unsigned right shift ‘
>>>’. Another numeric operation is pbase, which takes a number base in between 2 and 36, inclusive, and will act on the second number given just the same as what equals sign ‘
=’ does, but the number result will be formatted in the base given. All numeric operators can be prefixed with a commercial at ‘
@’, e.g., ‘
@*’: this will turn the operation into a saturated one, which means that the result will not overflow but instead linger at the minimum or maximum possible value, without such errors being reported.
? vexpr @- +1 -9223372036854775808 ? echo $?/$!/$^ERRNAME
- file-expand
- Performs the usual Filename transformations on its argument.
- random
- Generates a random string of the given length, or of
PATH_MAX bytes (a constant from /usr/include) if the value 0 is given; the random string will be base64url encoded according to RFC 4648, and thus be usable as a (portable) filename.
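For example (a sketch; vput captures the result into a variable):

```
? vput vexpr r random 16
? echo $r
? vput vexpr f file-expand ~/mbox
? echo $f
```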
- character display editor on each message. Modified contents are discarded unless the writebackedited variable is set, and are not used unless the mailbox can be written to and the editor returns a successful exit status.
- undergoes the usual Filename transformations, including shell pathname wildcard pattern expansions (glob(7)) and shell variable expansion for the message as such, not the individual parts, and contents of the destination file are overwritten if the file previously existed.
#’ to the name until file creation succeeds (or fails due to other reasons).
- xcall
- [Only new quoting rules] The sole difference to call is that the new macro is executed in place of the current one, which terminates immediately; comparable to a tail call, this avoids stack growth for deeply recursive macros.
- ~~ string
- Insert the string of text in the message prefaced by a single ‘
~’. (If the escape character has been changed, that character must be doubled instead.)
- ~! command
- Execute the given shell command, then return to the message.
- ~: Mail-command or ~_ Mail-command
- Execute the given Mail command.
- ~@ [filename...]
- Append the given files to the attachment list; without arguments edit the attachment list interactively. In non-interactive mode (-#), the list of attachments is effectively not edited but instead recreated; again, an empty input ends list creation. For all modes, if a given filename solely consists of the number sign ‘
#’ followed by a valid message number of the currently active mailbox, then the given message is attached as a ‘
message/rfc822’ MIME part. Compose mode provides access to a number of pseudo headers in the Mail private namespace, which may not exist (except for the first):
- ‘
Mailx-Command:’
- The name of the command that generates the message, one of ‘
forward’, ‘
Lreply’, ‘
Reply’, ‘
reply’, ‘
resend’.
- ‘
Mailx-Raw-To:’
-
- ‘
Mailx-Raw-Cc:’
-
- ‘
Mailx-Raw-Bcc:’
- Represent the frozen initial state of these headers before any transformation (e.g., alias, alternates, recipients-in-cc etc.) took place.
- ‘
Mailx-Orig-From:’
-
- ‘
Mailx-Orig-To:’
-
- ‘
Mailx-Orig-Cc:’
-
- ‘
Mailx-Orig-Bcc:’
- The values of said headers of the original message which has been addressed by any of reply, forward, resend.
- ‘
210’
- Status ok; the remains of the line are the result.
- ‘
211’
- Status ok; the rest of the line is optionally used for more status. What follows are lines of result addresses, terminated by an empty line. The address lines consist of two fields, the first of which is the plain address, e.g., ‘
bob@exam.ple’, and the second of which is the unstripped address, even if both are identical, e.g., ‘(Lovely) Bob <bob@exam.ple>’.
- ‘
212’
- Status ok; the rest of the line is optionally used for more status. What follows are lines of furtherly unspecified string content, terminated by an empty line.
- ‘
505’
- Error: an argument is not allowed in the given context, or an attempt was made to modify anything in Mail's own namespace.
- ‘
506’
- Error: an otherwise valid argument is rendered invalid due to context. For example, a second address is added to a header which may consist of a single address only.
500’
210’; this command is the default command of header if no second argument has been given. A third argument restricts output to the given header only, which may fail with ‘
501’ if no such field is defined.
- show
- Shows the content of the header given as the third argument. Dependent on the header type this may respond with ‘
211’ or ‘
212’; any failure results in ‘
501’.
- remove
- This will remove all instances of the header given as the third argument, reporting ‘
210’ upon success, ‘
501’ if no such header can be found, and Q.
- insert
- Create a new or an additional instance of the header given in the third argument, with the header body content as given in the fourth argument (the remains of the line).
210’ is reported upon success.
- list
- List all attachments via ‘
212’, or report ‘
501’ if no attachments exist. This command is the default command of attachment if.
- attribute
- This uses the same search mechanism as described for remove and prints any known attributes of the first found attachment via ‘
212’ upon success or ‘
501’ otherwise.
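A minimal on-compose-splice macro speaking this protocol might look as follows (a sketch; the macro name is arbitrary, and the exact responses depend on the version in use):

```
define ocs {
  # First line on standard input: the protocol version, e.g., “0 0 1”
  read ver
  # Add a custom header, then check the status response
  echo '~^header insert X-Hello world'
  read hi
  if "$hi" !% 210
    echoerr 'header insert failed'
  end
}
set on-compose-splice=ocs
```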
- ~f messages
- Read the named messages into the message being sent. If no messages are specified, read in the current message, the “dot”. Strips down the list of header fields according to the ‘
type’ white- and blacklist selection of headerpick. For MIME multipart messages, only the first displayable part is included.
- ~H
- Edit the message header fields ‘
From:’, ‘
Reply-To:’ and ‘
Sender:’ by typing each one in turn and allowing the user to edit the field. The default values for these fields originate from the from, reply-to and sender variables.
- ~h
- Edit the message header fields ‘
To:’, ‘
Cc:’, ‘
Bcc:’ and ‘
Subject:’ by typing each one in turn and allowing the user to edit the field.
- ~I variable
- Insert the value of the specified variable.
- ~i variable
- Insert the value of the specified variable followed by a newline character.
- ~M messages
- Read the named messages into the message being sent, indented by indentprefix. If no messages are specified, read the current message, the “dot”.
- ~m messages
- Read the named messages into the message being sent, indented by indentprefix. If no messages are specified, read the current message, the “dot”. Strips down the list of header fields according to the ‘
type’variable. Only in this latter mode HERE-delimiter may be given: if it is data will be read in until the given HERE-delimiter is seen on a line by itself, and encountering EOF is an error; the HERE-delimiter is a required argument in non-interactive mode; if it is single-quote quoted then the pasted content will not be expanded, [v15 behaviour may differ] otherwise a future version of MailenvironmentInternal. Creation or editing of variables can be performed in the
EDITORwith the command varedit.”, which is a boolean string that can optionally be prefixed with the (case-insensitive) term ‘
ask-’, as in ‘
ask-yes’, which causes prompting of the user in interactive mode, with the given boolean as the default value. Variable chains extend a plain ‘
variable’ with ‘
variable-HOST’ and ‘
variable-USER@HOST’ variants. Here ‘
HOST’. The standard POSIX.1-2008/Cor 2-2016 mandates the following initial variable settings: noallnet, noappend, asksub, noaskbcc, noautoprint, nobang, nocmd, nocrt, nodebug, nodot, escape set to ‘
~’,
5’. Notes: Mail does not support the noonehop variable – use command line options or mta-arguments to pass options through to a mta. And the default global mail.rc file, which is loaded unless the -: (with according argument) or -n command line options have been used, or the
MAILX_NO_SYSTEM_RCenvironment this expands to the entire matching expression. It represents the program name in global context.
- 1
- (Read-only) Access of the positional parameter stack. All further parameters can be accessed with this syntax, too, e.g., ‘
2’, ‘
3’ etc.; positional parameters can be shifted off the stack by calling shift. The parameter stack contains, e.g., Mail to prompt
+’
- start of a collapsed thread.
- ‘
-’
- an uncollapsed thread (TODO ignored for now).
- ‘
$’
- classified as spam.
- ‘
~’
- threaded mode is entered (see the collapse command).
- autoprint
- (Boolean) Enable automatic typing of a(n existing) “successive” message after delete and undelete commands, e.g., the message that becomes the new “dot” is shown automatically, as via dp or dt.
- autosort
- Causes sorted mode (see the sort command) to be entered automatically with the value of this variable as sorting method when a folder is opened, e.g., ‘
set autosort=thread’.
- bang
- (Boolean) Enables the substitution of all not (reverse-solidus) escaped exclamation mark ‘
!’ characters by the contents of the last executed command for the ! shell escape command and ~!, one of the compose mode COMMAND ESCAPES. If this variable is not set no reverse solidus stripping is performed.
- bind-timeout
- [Option] Terminals generate multi-byte sequences for certain forms of input, for example for function and other special keys. Some terminals however do not write these multi-byte sequences as a whole, but byte-by-byte, and the latter is what Mail actually reads. This variable specifies the timeout in milliseconds that the MLE (see On terminal control and line editor) waits for more bytes to arrive unless it considers a sequence “complete”. The default is 200.
- COMMAND ESCAPES.
- build-os, build-osenv
- (Read-only) The operating system Mail has been built for, usually taken from uname(1) via ‘
uname -s’ and ‘
uname -srm’, respectively, the former being lowercased.
-, in which case the only supported character set is ttycharset and this variable is effectively ignored. Refer to the section Character sets for the complete picture of character set conversion in Mail.
- colour-pager
- [Option](Boolean) Whether colour shall be used for output that is paged through the pager. Note that pagers may need special command line options, e.g., less(1) requires the option -R and lv(1) the option -c in order to support colours. Often doing manual adjustments is unnecessary since Mail may perform adjustments dependent on the value of the environment variable
PAGER (see there for more).
- contact-mail, contact-web
- (Read-only) Addresses for contact per email and web, respectively, e.g., for bug reports, suggestions, or help regarding Mail.
:’ and the field content body. Standard header field names cannot be overwritten by a custom header. Different to the command line option -C the variable value is interpreted as a comma-separated list of custom headers: to include commas in header bodies they need to become escaped with reverse solidus ‘
\’. Headers can be managed more freely in compose mode via ~^.
- dotlock-ignore-error
- (Boolean)[Option] Synchronization of mailboxes which Mail treats as primary system mailboxes (see, e.g., the notes on Filename transformations, as well as the documentation of file) will be protected with so-called dotlock files—the traditional mail spool file locking method—in addition to system file locking. Because Mail.
- escape
- The first character of this value defines the escape character for COMMAND ESCAPES in compose mode. The default value is the character tilde ‘
~’. If set to the empty string, command escapes are disabled.
- expandaddr
- If -~ or -#, set this to the (case-insensitive) value ‘
restrict’ (it actually acts like ‘
restrict,-all,+name,+addr’, so that care for ordering issues must be taken) . In fact the value is interpreted as a comma-separated list of values. If it contains ‘
fail’
all’ addresses all possible address specifications, ‘
file’ file targets, ‘
pipe’ command pipeline targets, ‘
name’ plain user names and (MTA) aliases and ‘
addr’ network addresses. These kind of values are interpreted in the given order, so that ‘
restrict,fail,+file,-all,+addr’ will cause hard errors for any non-network address recipient address unless Mail is in interactive mode or has been started with the -~ or -# command line option; in the latter case(s) any address may be used, then. Historically invalid network addressees are silently stripped off. To change this so that any encountered invalid email address causes a hard error it must be ensured that ‘
failinvaddr’ is an entry in the above list. Setting this automatically enables network addressees (it actually acts like ‘
failinvaddr,+addr’, so that care for ordering issues must be taken).
- features
- (Read-only) String listing the optional features of this build. The output of the command version will include this information.
- flipr
- (Boolean) This setting reverses the meanings of the set of reply commands: the lowercase variants, which by default address all recipients, then address only the sender, and vice versa. The commands replysender, respondsender, followupsender as well as replyall, respondall, followupall are not affected by the current setting of flipr.
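A restrictive expandaddr configuration along these lines (an rc-file sketch):

```
set expandaddr=fail,-all,+addr,failinvaddr
```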
- folder
- The default path under which mailboxes are to be saved: filenames that begin with the plus sign ‘
+’ will have the plus sign replaced with the value of this variable if set, otherwise the plus sign will remain unchanged when doing Filename transformations; also see file for more on this topic.
- followup-to
- (Boolean) Controls whether a ‘Mail-Followup-To:’ header is generated when sending messages to known mailing lists. Also see followup-to-honour and the commands mlist, mlsubscribe, reply and Lreply.
- followup-to-honour
- Controls whether a ‘
Mail-Followup-To:’ header is honoured when group-replying to a message via reply or Lreply. This is a quadoption; if set without a value it defaults to “yes”.
- forward-as-attachment
- (Boolean) Original messages are normally sent as inline text with the forward command; when this setting is enabled they are instead sent as unmodified ‘
message/rfc822’ attachments with all of their parts included.
- forward-inject-head
- The string to put before the text of a message with the forward command instead of the default “-------- Original Message --------”. No heading is put if it is set to the empty string. This variable is ignored if the forward-as-attachment variable is set.
- -r command line option (with an empty argument; see there for the complete picture on this topic), or by setting the internal variable).
- ‘
%$’
- [Option] The spam score of the message, as has been classified via the command spamrate. Shows only a replacement character if there is no spam support.
- ‘
%a’
- Message attribute character (status flag); the actual content can be adjusted by setting attrlist.
- ‘
%d’
- The date found in the ‘Date:’ header of the message when datefield is set, otherwise the date when the message was received (honours headline-plain.)
-] Mail
From:’ (also see On sending mail, and non-interactive mode).
- ignore
- (Boolean) Ignore interrupt signals from the terminal while entering messages; instead echo them as ‘@’ characters and discard the current line.
- ignoreeof
- (Boolean) Ignore end-of-file conditions (‘
control-D’) in compose mode on message input and in interactive command input. If set an interactive command input session can only be left by explicitly using one of the commands exit and quit, and message input in compose mode can only be terminated by entering a period ‘
.’ on a line by itself or by using the ~. command escape (see COMMAND ESCAPES). Setting this implies the behaviour that dot describes in posix mode.
- inbox
- If this is set to a non-empty string it will specify the user's primary system mailbox, overriding
%’ when doing Filename transformations; also see file for more on this topic. The value supports a subset of transformations itself.
- indentprefix
- String used by the ~m, ~M and ~R COMMAND
mail:’).
- mailbox-display
- (Read-only) The name of the current mailbox (file), possibly abbreviated for display purposes.
- mailbox-resolved
- (Read-only) The fully resolved path of the current mailbox.
- mbox-rfc4155
- (Boolean) When opening MBOX mailbox databases Mail by default uses tolerant POSIX rules for detecting message boundaries (so-called ‘
From_’ lines) due to compatibility reasons, instead of the stricter rules that have been standardized in RFC 4155. This behaviour can be switched to the stricter RFC 4155 rules by setting this variable. (This is never necessary for any message newly generated by Mail, it only applies to messages generated by buggy or malicious MUAs, or may occur in old MBOX databases: Mail itself will choose a proper mime-encoding to avoid false interpretation of ‘
From_’ content lines in the MBOX database.). (E.g., at the time of this writing some newsletters ship their full content only in the rich HTML part, whereas the plain text part only contains topic subjects.)
-
0b1111’.
- sent as-is, without a transfer encoding.) Valid values are:
- ‘.
- ‘, like, e.g., ISO-8859-1. The encoding will cause a large overhead for messages in other character sets: e.g., it will require up to twelve (12) bytes to encode a single UTF-8 character of four (4) bytes. It is the default encoding.
- ‘
=’ then it is instead parsed as a comma-separated list of the described letters plus
- To choose an alternate Mail-Transfer-Agent, set this option to either the full pathname of an executable (optionally prefixed with the protocol ‘
file://’), or [Option]ally a SMTP a.k.a. SUBMISSION protocol URL, e.g., [v15-compat]([no v15-compat]: ‘
smtps?://[user[:password]@]server[:port]
command line option Mail will also (not) pass -f as well as possibly -F. [Option]ally Mail can send mail over SMTP a.k.a.. Mail also supports forwarding of all network traffic over a specified socks-proxy. The following SMTP variants may be used:
- The plain SMTP protocol (RFC 5321) that normally lives on the server port 25 and requires setting the smtp-use-starttls variable to enter an SSL/TLS secured session state; e.g., [v15-compat] ‘
smtp://[user[:password]@]server[:port]’.
- The SUBMISSIONS protocol (RFC 8314) that lives on server port 465 and is always SSL/TLS secured; e.g., ‘submissions://[user[:password]@]server[:port]’.
- mta-arguments
- Arguments to pass through to a file-based mta can be given via this variable, which is parsed according to Shell-style argument quoting into an array of arguments, and which will be joined onto MTA options from other sources, and then passed individually to the MTA: ‘
? wysh set mta-arguments='-t -X "/tmp/my log"'’.
- mta-no-default-arguments
- (Boolean) Unless this variable is set Mail will pass a few standard command line arguments to a file-based mta by default; setting it suppresses these defaults.
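Putting these together, rc-file sketches for a file-based and for an SMTP-based transport (paths, host and user are placeholders):

```
# file-based MTA, with pass-through arguments
set mta=file:///usr/sbin/sendmail
wysh set mta-arguments='-t -X "/tmp/my log"'

# [v15-compat] SUBMISSIONS (always TLS) to a mail hub
set v15-compat
set mta=submissions://user@mail.example.com:465
```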
- netrc-lookup-USER@HOST, netrc-lookup-HOST, netrc-lookup
- (Boolean)[v15-compat][Option] Used to control usage of the user's .netrc file for credential lookup (see The .netrc file).
- mailx-orig-from
- When replying, forwarding or resending, this will be set to the ‘
From:’ of the given message.
- mailx-orig-to, mailx-orig-cc, mailx-orig-bcc
- When replying, forwarding or resending, this will be set to the receivers of the given message.
define t_ocl {
  vput ! i cat ~/.mysig
  if [ $? -eq 0 ]
    vput vexpr message-inject-tail trim-end $i
  end
  # Alternatively
  readctl create ~/.mysig
  if [ $? -eq 0 ]
    readall i
    if [ $? -eq 0 ]
      vput vexpr message-inject-tail trim-end $i
    end
    readctl remove ~/.mysig
  end
}
set on-compose-leave=t_ocl
Afterwards, the message-inject-tail is injected into the message. The command escape ~^ has been especially designed for scriptability (via these hooks). The first line the hook will read on its standard input is the protocol version of said command escape, currently “0 0 1”: backward incompatible protocol changes have to be expected.
- piperaw
- (Boolean) Send messages to the pipe command without performing MIME and character set conversions.
- pipe-TYPE/SUBTYPE
- When a MIME message part of type ‘
TYPE/SUBTYPE’ (case-insensitive) is displayed or quoted, its text is filtered through the value of this variable interpreted as a shell command. Note that only parts which can be displayed inline as plain text (see copiousoutput) are displayed unless otherwise noted, other MIME parts will only be considered by and for the command mimeview. The special value commercial at ‘
@’ forces interpretation of the message part as plain text, e.g., ‘
set pipe-application/xml=@’ will henceforth display XML data as plain text.
- ‘
!’
- The command must be run on an interactive terminal, Mail will temporarily release the terminal to it: needsterminal.
- ‘
+’
- to forcefully terminate interpretation of remaining characters. (Any character not in this list will have the same effect.).
- pipe-EXTENSION
- This is identical to pipe-TYPE/SUBTYPE except that ‘
EXTENSION’ (normalized to lowercase using character mappings of the ASCII charset) names a file extension, e.g., ‘
xhtml’. Handlers registered using this method take precedence.
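For example, HTML parts could be filtered through an external dump tool; the lynx(1) invocation is only an illustration, any filter that reads the part on standard input and writes plain text will do:

```
? set pipe-text/html='lynx -stdin -dump -force_html'
```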
- pop3-auth-USER@HOST, pop3-auth-HOST, pop3-auth
- [Option][v15-compat] Variable chain that sets the POP3 authentication method. The only possible value as of now is ‘
plain’, which is thus the default.
- ‘
APOP’ authentication method will be used when connecting to a POP3 server that advertises support. The advantage of ‘
APOP’ is that the password is not sent in clear text. The internal variable posix will be set implicitly before the Resource files are loaded if the environment variable
POSIXLY_CORRECT is set, and adjusting any of those two will be reflected by the other one implicitly. In posix mode, e.g., the alternates command will replace the list of alternate addresses instead of appending to it.
- The variable inserting COMMAND ESCAPES ~A, ~a, ~I and ~i will expand embedded character sequences ‘
\t’ horizontal tabulator and ‘
\n’ line feed. [v15 behaviour may differ] For compatibility reasons this step will always be performed.
- Upon changing the active file no summary of headers will be displayed.
- prompt
- The string used as the interactive mode prompt; it may be set to the empty string to disable prompting (a.k.a. ‘
set noprompt’).
- prompt2
- This string is used for secondary prompts, but is otherwise identical to prompt. The default is ‘
..’.
- quiet
- (Boolean) Suppresses the printing of the version when first invoked.
- quote
- If set, Mail starts a replying message with the original message prefixed by the value of the variable indentprefix. Normally, a heading derived from the original ‘From:’ header field (“... wrote:”) is put before the quotation. If the string ‘
noheading’ is assigned to the quote variable, this heading is omitted. If the string ‘
headers’ is assigned, only the headers selected by the ‘
type’ headerpick selection are put above the message body, thus quote acts like an automatic `~m' COMMAND ESCAPES command, then. If the string ‘
allheaders’ is assigned, all headers are put above the message body and all MIME parts are included, making quote act like an automatic `~M' command; also see quote-as-attachment.
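The possible values thus give (rc-file sketches):

```
set quote            # quote original message below a heading
set quote=noheading  # quote without the heading
set quote=headers    # put the ‘type’-selected headers above the body
set quote=allheaders # all headers, all MIME parts (like ~M)
```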
- -r option (empty argument case).
- recipients-in-cc
- (Boolean) When doing a reply, the original ‘
From:’ and ‘
To:’ are by default merged into the new ‘
To:’. If this variable is set, only the original ‘
From:’ ends in the new ‘
To:’, the rest is merged into ‘Cc:’.
- record-resent
- (Boolean) If both this variable and record are set, the messages sent by the resend and Resend commands will also be recorded.
- reply-in-same-charset
- (Boolean) If this variable is set Mail first tries to use the same character set of the original message for replies. If this fails, the mechanism described in Character sets is evaluated as usual.
- reply-strings
- Can be set to a comma-separated list of (case-insensitive according to ASCII rules) strings which shall be recognized, in addition to the built-in ‘Re:’, as ‘Subject:’ reply message indicators; the built-in list is extended, not replaced, by this list.
- replyto
- [Obsolete] Variant of reply-to.
- reply-to-honour
- Controls whether a ‘
Reply-To:’ header is honoured when replying to a message via reply or Lreply. This is a quadoption; if set without a value it defaults to “yes”.
- rfc822-body-from_
- (Boolean) This variable can be used to force displaying a so-called ‘
From_’ line for messages that are embedded into an envelope mail via the ‘
message/rfc822’ MIME mechanism, for more visual convenience.
- save
- (Boolean) Enable saving of (partial) messages in
DEAD upon interrupt or delivery error.
- screen
- The number of lines that represents a “screenful” of lines, used in headers summary: Character sets). This might be a problem for scripts which use the suggested ‘
LC_ALL=C’ setting, since in this case the character set is US-ASCII by definition, so that it is better to also override ttycharset, Mail will also be non-zero.
- showlast
- (Boolean) This setting causes Mail to start at the last message instead of the first one when opening a mail folder, as well as with from.
- skipemptybody
- (Boolean) If an outgoing message does not contain any text in its first or only message part, do not send it but discard it silently (see also the command line option -E).
- smime-force-encryption
- (Boolean)[Option] Causes Mail to refuse sending unencrypted messages.
- smime-sign
- (Boolean)
USER@HOST.smime-cert-key’ for the private key (and ‘
USER@HOST.smime-cert-cert’ for the certificate stored in the same file) will be used for performing any necessary password lookup, therefore the lookup can be automated via the mechanisms described in On URL syntax and credential lookup. For example, the hypothetical address ‘
bob@exam.ple’ could be driven with a private key / certificate pair path defined in smime-sign-cert-bob@exam.ple, and needed passwords would then be looked up via the pseudo hosts ‘
bob@exam.ple.smime-cert-key’ (and ‘
bob@exam.ple.smime-cert-cert’). To include intermediate certificates, use smime-sign-include-certs.
- smime-sign-message-digest
- [Option] Specifies the message digest to use when signing S/MIME messages. RFC 5751 mandates a default of ‘
sha1’. Possible values are (case-insensitive and) in decreasing cipher strength: ‘
sha512’, ‘
sha384’, ‘
sha256’, ‘
sha224’ and ‘
md5’. The actually available message digest algorithms depend on the cryptographic library that Mail uses. [Option] Support for more message digest algorithms may be available through dynamic loading via, e.g., EVP_get_digestbyname(3) (OpenSSL) if Mail has been compiled to support this. Remember that for this ‘
USER@HOST’ refers to the variable from (or, if that contains multiple addresses, sender).
-’ as well as the [Option]al methods ‘
cram-md5’ and ‘
gssapi’. The ‘
none’ method does not need any user credentials, ‘
g ‘
HOST’ SSL/TLS encrypted, i.e., to enable transport layer security.
- socks-proxy-USER@HOST, socks-proxy-HOST, socks-proxy
- [Option] If this is set to the hostname (SOCKS URL) of a SOCKS5 server then Mail will proxy all of its network activities through it. This can be used to proxy SMTP, POP3 etc. network traffic through the Tor anonymizer, for example. The following would create a local SOCKS proxy on port 10000 that forwards to the machine ‘
HOST’, and from which the network traffic is actually instantiated:
# Create local proxy server in terminal 1 forwarding to HOST $ ssh -D 10000 USER@HOST # Then, start a client that uses it in terminal 2 $ mail , e.g.,
;’ and an extended regular expression. Then the latter is used to parse the first output line of the spamfilter-rate hook, and, in case the evaluation is successful, the group that has been specified via the number is interpreted as a floating point scan score.
- ssl-ca-dir-USER@HOST, ssl-ca-dir-HOST, ssl-ca-dir, ssl-ca-file-USER@HOST, ssl-ca-file-HOST, ssl-ca-file
- . Mail SSL mail, e.g.:
# Register a configuration section for mail mail =
SSLv3’, ‘
TLSv1’, ‘
TLSv1.1’, ‘
TLSv1.2’,
ctx-set-maxmin-proto’ then using MaxProtocol and MinProtocol is preferable. Fallback is SSL_CTX_set_options(3), driven via an internal parser which understands the strings ‘
SSLv3’, ‘
TLSv1’, ‘
TLSv1.1’, ‘
TLSv1.2’, and the special value ‘
ALL’. Multiple protocols may be given as a comma-separated list, any whitespace is ignored, an optional plus sign ‘
+’ prefix enables, a hyphen-minus ‘
-’ prefix disables a protocol, so that ‘
-ALL, TLSv1.2’ enables only the TLSv1.2 protocol.
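The enable/disable list semantics can be modelled in a few lines of Python. This is only a toy illustration, not Mail's actual parser; the protocol names are taken from the list above:

```python
ALL_PROTOCOLS = ("SSLv3", "TLSv1", "TLSv1.1", "TLSv1.2")   # from the list above

def parse_protocol_list(spec):
    # Toy model of the semantics described above (not Mail's parser):
    # comma-separated names, whitespace ignored, an optional '+' prefix
    # enables and '-' disables, and 'ALL' stands for every known protocol.
    enabled = set()
    for token in spec.split(","):
        token = token.strip()
        if not token:
            continue
        op = "+"
        if token[0] in "+-":
            op, token = token[0], token[1:].strip()
        names = ALL_PROTOCOLS if token == "ALL" else (token,)
        if op == "+":
            enabled.update(names)
        else:
            enabled.difference_update(names)
    return enabled

print(parse_protocol_list("-ALL, TLSv1.2"))   # {'TLSv1.2'}
```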
-
libressl’ (LibreSSL) ,: ‘
modules-load-file’ (ssl-config-file), ‘
conf-ctx’ (ssl-config-pairs), ‘
ctx-config’ (ssl-config-module), ‘
ctx-set-maxmin-proto’ (ssl-config-pairs) and ‘
rand
strict’ (fail and close connection immediately), ‘
ask’ (ask whether to continue on standard input), ‘
warn’ (show a warning and continue), ‘
ignore’ (do not perform validation). The default is ‘
ask’.
-.
- termcap
- ([Option]) This specifies a comma-separated list of Terminal Information Library (libterminfo, -lterminfo) and/or Termcap Access Library (libtermcap, -ltermcap) capabilities (see On terminal control and line editor, escape commas with reverse solidus) to be used to overwrite or define entries. Note this variable will only be queried once at program startup and can thus only be specified in resource files or on the command line.
^[’ could lead to misreadings when a left bracket follows, which it does for the standard CSI sequence); finally three letter octal sequences, as in ‘
\061’, are supported. To specify that a terminal supports 256-colours, and to define sequences that home the cursor and produce an audible bell, one might write:
? set termcap='Co#256,home=\E[H,bel=^G'
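The escape conventions used in such capability values (‘\E’, ‘^G’, three-letter octal sequences) can be illustrated with a toy decoder. This is not Mail's parser, just a sketch of the rules described above:

```python
ESC = "\x1b"

def decode_cap(value):
    # Toy decoder for the conventions described above (not Mail's parser):
    #   \E   -> the escape character
    #   ^X   -> a control character (e.g. ^G is the audible bell)
    #   \NNN -> a three-digit octal sequence, e.g. \061 is '1'
    #   \\   -> a literal backslash
    out = []
    i = 0
    while i < len(value):
        ch = value[i]
        if ch == "\\" and i + 1 < len(value):
            nxt = value[i + 1]
            if nxt == "E":
                out.append(ESC)
                i += 2
                continue
            if nxt.isdigit():
                out.append(chr(int(value[i + 1:i + 4], 8)))
                i += 4
                continue
            out.append(nxt)
            i += 2
            continue
        if ch == "^" and i + 1 < len(value):
            out.append(chr(ord(value[i + 1].upper()) ^ 0x40))
            i += 2
            continue
        out.append(ch)
        i += 1
    return "".join(out)
```

With this, `decode_cap(r"\E[H")` yields the cursor-home sequence and `decode_cap("^G")` the bell character.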
- colors or Co
- max_colors: numeric capability specifying the maximum number of colours. Note that Mail does not actually care about the terminal beside that, but always emits ANSI / ISO 6429 escape sequences.
- rmcup or te / smcup or ti
- exit_ca_mode and enter_ca_mode, respectively: exit and enter the alternative screen ca-mode, effectively turning Mail into a fullscreen application. This must be enabled explicitly by setting termcap-ca-mode.
- smkx or ks / rmkx or ke
- keypad_xmit’.
- termcap-ca-mode
- [Option] Allow usage of the exit_ca_mode and enter_ca_mode terminal capabilities, effectively turning Mail into a fullscreen application, as documented for termcap.
- v15-compat
- (Boolean) Setting this enables upward compatibility with Mail -v, causes Mail will include this information.
- writebackedited
- If this variable is set messages modified using the edit or visual that, e.g., they can be managed via set and unset, causing automatic program environment updates (to be inherited by newly created child processes). In order to transparently integrate other environment variables equally they need to be imported (linked) with the command environ. This command can also be used to set and unset non-integrated environment variables from scratch, sufficient system support provided. The following example, applicable to a POSIX shell, sets the
COLUMNS environment variable for Mail only, and beforehand exports the
EDITOR in order to affect any further processing in the running shell:
$ EDITOR="vim -u ${HOME}/.vimrc" $ export EDITOR $ COLUMNS=80 mail
HOME directory. If the variable debug is set no output will be generated, otherwise the contents of the file will be replaced.
EDITOR
- Pathname of the text editor to use in the edit command and ~e COMMAND ESCAPES. A default editor is used if this value is not the variable settings this directory is a default write target, e.g. for’. (Mail at startup is inhibited, i.e., the same effect is achieved as if Mail had been started up with the option -: (and according argument) or -n. This variable is only used when it resides in the process environment.
MBOX
- The name of the users secondary mailbox file. A logical subset of the special Filename transformations (also see file) ‘
Ri’, likewise for “lv”
LVwill in the visual command and ~v COMMAND ESCAPES.
FILES
- ~/.mailrc
- File giving initial commands, one of the Resource files.
- mailUpon startup Mail reads in several resource files:
- mail.rc
- System wide initialization file. Reading of this file can be suppressed, either by using the -: (and according argument) or -n command!)
# This line is a comment command.  And y\
es, it is really continued here.
set debug \
    verbose
set editheaders
The mime.types files
As stated in HTML mail and MIME attachments, Mail needs to learn about MIME (Multipurpose Internet Mail Extensions) media types in order to classify message and attachment content. One source for them are mime.types files, the loading of which can be controlled by setting the variable mimetypes-load-control. Another is the command mimetype, which also offers access to Mail's MIME type cache. mime.types files have the following syntax:
type/subtype extension [extension ...]
# E.g., text/html html htm
The following type markers are supported:
[type-marker ]type/subtype extension [extension ...]
- @
-.
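For experimenting with this basic ‘type/subtype extension ...’ format, Python's standard mimetypes module parses the same kind of file. Note that it knows nothing about the type-marker extension described above:

```python
import mimetypes
import os
import tempfile

def load_mime_types(text):
    # Write the mime.types content to a temporary file and let the stdlib
    # parse it; lines look like "type/subtype ext [ext ...]" and '#'
    # starts a comment.  (The type-marker extension described above is
    # NOT understood by this module.)
    with tempfile.NamedTemporaryFile("w", suffix=".types", delete=False) as fp:
        fp.write(text)
        path = fp.name
    db = mimetypes.MimeTypes()
    db.read(path)
    os.unlink(path)
    return db

db = load_mime_types("text/html html htm\n")
print(db.guess_type("page.htm"))   # ('text/html', None)
```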
The Mailcap files
This feature is not available in v14.9.0, sorry! RFC 1524 defines a “User Agent Configuration Mechanism” which Mail [Option]ally supports (see HTML mail and MIME attachments). It defines a file format for “mailcap” files and the
MAILCAPS environment variable that can be used to overwrite that (repeating here that it is not a search path, but instead a path search specification). Any existing files will be loaded in sequence, appending any content to the list of MIME type handler directives. “Mail “escaped” by preceding them with the reverse solidus character ‘
\’. The standard does not specify how leading whitespace of follow lines is to be treated, therefore Mail retains it. “Mail
TYPE
audio/*’ would match any audio type. The second field defines the shell command which shall be used to “display” MIME parts of the given type; it is implicitly called the view command. For data “consuming” “producing” Mail “Mail
Content Mail's normal visual display. It is mutually exclusive with needsterminal.
- textualnewlines
- A flag field which indicates that this type of data is line-oriented and that, if encoded in ‘
base
nametemplate=%s.gif’. Note that Mail Mail.
- Mail..
x-’. Flag fields apply to the entire “Mail Mail will show information about handler evaluation):
application/postscript; ps-to-terminal %s; needsterminal
application/postscript; ps-to-terminal %s; compose=idraw %s
%t’ will be replaced by the ‘
TYPE/SUBTYPE’ specification. Named parameters from the ‘
Content-type:’ field may be placed in the command execution line using ‘
%{’ followed by the parameter name and a closing ‘
}’ character. The entire parameter should appear as a single command line argument, regardless of embedded spaces; thus:
# Message Content-type: multipart/mixed; boundary=42

# Mailcap file
multipart/*; /usr/local/bin/showmulti \
    %t %{boundary} ; composetyped = /usr/local/bin/makemulti

# Executed shell command
/usr/local/bin/showmulti multipart/mixed 42
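The %-substitutions in this example can be illustrated with a toy expander. This is plain Python, not Mail's implementation, and it ignores the quoting rules real mailcap handling involves:

```python
def expand_mailcap(template, mime_type, params, filename="/tmp/part"):
    # Toy expansion of the %-escapes described above (not Mail's code):
    #   %s      -> name of the file holding the MIME part's body
    #   %t      -> the TYPE/SUBTYPE specification
    #   %{name} -> the named Content-Type parameter
    out = []
    i = 0
    while i < len(template):
        ch = template[i]
        if ch == "%" and i + 1 < len(template):
            nxt = template[i + 1]
            if nxt == "s":
                out.append(filename)
                i += 2
                continue
            if nxt == "t":
                out.append(mime_type)
                i += 2
                continue
            if nxt == "{":
                end = template.index("}", i)
                out.append(params.get(template[i + 2:end], ""))
                i = end + 1
                continue
        out.append(ch)
        i += 1
    return "".join(out)

cmd = expand_mailcap("/usr/local/bin/showmulti %t %{boundary}",
                     "multipart/mixed", {"boundary": "42"})
print(cmd)   # /usr/local/bin/showmulti multipart/mixed 42
```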
MAILCAPS, mime-counter-evidence, pipe-TYPE/SUBTYPE, pipe-EXTENSION.
The .netrc file
The .netrc file contains user credentials for machine accounts. The default location in the user's
HOME directory may be overridden by the
NETRC environment variable. The file consists of space, tabulator or newline separated tokens. Mail implements a parser that supports a superset of the original BSD syntax, but users should nonetheless be aware of portability glitches of that file format, shall their .netrc be usable across multiple programs and platforms:
- BSD does not support single, but only double quotation marks, e.g., ‘
password="pass with spaces"’.
- BSD (only?) supports escaping of single characters via a reverse solidus (e.g., a space can be escaped via ‘
\’), in- as well as outside of a quoted string.
-, Mail token for any other login than “anonymous”, Mail will always require these strict permissions.
- machine name
- The hostname of the entries' machine, lowercase-normalized by Mail before use. Any further file content, until either end-of-file or the occurrence of another machine or a default first-class token is bound (only related) to the machine name. As an extension that should not be the cause of any worries Mail supports a single wildcard prefix for name:
machine *.example.com login USER password PASS
machine pop3.example.com login USER password PASS
machine smtp.example.com login USER password PASS
xy.example.com’ as well as ‘
pop3.example.com’, but neither ‘
example.com’ nor ‘
local.smtp.example.com’. Note that in the example neither ‘
pop3.example.com’ nor ‘
sm.1 if [ "$ssl-features" =% +ctx-set-maxmin-proto ] wysh set ssl-config-pairs='\ CipherList=TLSv1.2:!aNULL:!eNULL:@STRENGTH,\ Curves=P-521:P-384:P-256,\ MinProtocol=TLSv1.1' else wysh set ssl-config-pairs='\ CipherList=TLSv1.2:!aNULL:!eNULL:@STRENGTH,\ Curves=P-521:P-384:P-256,\ Protocol=-ALL\,+TLSv1.1 \, +TLSv1.2' #set record="+[Gmail]/Sent Mail" # Select: File imaps://imap.gmXil.com/[Gmail]/Sent\ Mail_COLOR_FLAG} -aFlrS' commandalias llS '!ls ${LS_COLOR }
machine *.yXXXXx.ru login USER password PASS
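Python's standard netrc module parses the plain BSD form of this file, which makes it easy to sanity-check entries. It does not implement the wildcard ‘machine’ extension described above:

```python
import netrc
import os
import tempfile

def read_netrc(text):
    # Parse .netrc content with the stdlib parser.  Plain BSD syntax
    # only -- the wildcard "machine *.example.com" extension described
    # above is NOT supported by this module.
    with tempfile.NamedTemporaryFile("w", delete=False) as fp:
        fp.write(text)
        path = fp.name
    os.chmod(path, 0o600)   # credential files should not be world-readable
    try:
        return netrc.netrc(path)
    finally:
        os.unlink(path)

rc = read_netrc("machine pop3.example.com login USER password PASS\n")
login, _account, password = rc.authenticators("pop3.example.com")
print(login, password)   # USER PASS
```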
$ echo text | mail :
$ openssl req -nodes -newkey rsa:4096 -keyout key.pem -out creq.pem
This is the file Mail will work with. If you have created your private key with a passphrase then Mail will ask you for it whenever a message is signed or decrypted. Set the following variables to henceforth use S/MIME (setting smime-ca-file is of interest for verification only):
$ cat key.pem pub.crt > ME@HERE.com.paired
? set smime-ca-file=ALL-TRUSTED-ROOT-CERTS-HERE \ smime-sign-cert=ME@HERE.com.paired \ smime-sign-message-digest=SHA256 \ smime-sign
Using CRLs with S/MIME or SSL/TLS
[Option] Mail currently offers no mechanism to fetch CRLs, nor to access them on the Internet, so they have to be retrieved by some external mechanism. Mail accepts CRLs in PEM format only; CRLs in DER format must be converted, like, e.g.:

$ openssl crl -inform DER -in crl.der -out crl.pem

To tell Mail about the CRLs, a directory that contains all CRL files (and no other files) must be created. The smime-crl-dir or ssl-crl-dir variables, respectively, must then be set to point to that directory. After that, Mail requires a CRL to be present for each CA that is used to verify a certificate.
Not "defunctional", but the editor key does not workIt can happen that the terminal library (see On terminal control and line editor, bind, termcap) reports different codes than the terminal really sends, in which case Mail -v, if available, to see the byte sequences which are actually produced by keypresses, and use the variable termcap to make Mail aware of them. E.g., the terminal this is typed on produces some false sequences, here an example showing/s-mailx, e.g., the following lists
should be used (the last character is the server's hierarchy delimiter). The following IMAP-specific commands exist:
imaps://mylogin@imap.myisp.example/INBOX.
-. Valid values are `login' for the usual password-based authentication (the default), `cram-md5', which is a password-based authentication that does not send the password over the network in clear text, and `gssapi' for GSS-API based authentication.
-
- IMAP servers may close the connection after a period of inactivity; folders command stops after it has reached a certain depth to avoid possible infinite loops. The value of this variable sets the maximum depth allowed. The default is 2. If the folder separator on the current IMAP server is a slash `/', this variable has no effect and the folders command does not descend to subfolders.
- imap-use-starttls-USER@HOST, imap-use-starttls-HOST, imap-use-starttls
- Causes Mail to issue a `STARTTLS' command to make an unencrypted IMAP session SSLM. Douglas McIlroy writes in his article “A “The Mail Reference Manual” that was originally written by Kurt Shoens.
AUTHORS
Kurt Shoens, Edward Wang, Keith Bostic, Christos Zoulas, Gunnar Ritter. Mail is developed by Steffen Nurpmeso ⟨steffen@sdaoden.eu⟩.
CAVEATS
[v15 behaviour may differ] Interrupting an operation via
SIGINT aka ^C. After mail: ‘
? eval mail $contact-mail’. Including the output of the command version may be helpful, e.g.,
? vput version xy; wysh set escape=!; eval mail $contact-mail Bug subject !I xy !.
$ mail -X 'echo $contact-web' -Xx’. | https://jlk.fjfi.cvut.cz/arch/manpages/man/mail.1 | CC-MAIN-2018-13 | refinedweb | 13,444 | 54.73 |
This article is a brief introduction to Neo4j, one of the most popular graph databases, and its integration with Python.
Graph Databases
Graph databases are a family of NoSQL databases, based on the concept of modelling your data as a graph, i.e. a collection of nodes (representing entities) and edges (representing relationships).
The motivation behind the use of a graph database is the need to model small records which are deeply interconnected, forming a complex web that is difficult to represent in a relational fashion. Graph databases are particularly good at supporting queries that actually make use of such connections, i.e. by traversing the graph. Examples of suitable applications include social networks, recommendation engines (e.g. “show me movies that my best friends like”) and many other cases of link-rich domains.
Quick Installation
From the Neo4j web site, we can download the community edition of Neo4j. At the time of this writing, the latest version is 2.2.0, which provides improved performance and a redesign of the UI. To install the software, simply unpack the tarball:
tar zxf neo4j-community-2.2.0-unix.tar.gz ln -s neo4j-community-2.2.0 neo4j
We can immediately run the server:
cd neo4j ./bin/neo4j start
and now we can point the browser to for a nice web GUI. The first time you open the interface, you’ll be asked to set a password for the user “neo4j”.
If you want to stop the server, you can type:
./bin/neo4j stop
Interfacing with Python
There is no shortage of Neo4j clients available for several programming languages, including Python. An interesting project, which makes use of the Neo4j REST interface, is Neo4jRestClient. Quick installation:
pip install neo4jrestclient
All the features of this client are listed in the docs.
Creating a sample graph
Let’s start with a simple social-network-like application, where users know each others and like different “things”. In this example, users and things will be nodes in our database. Each node can be associated with labels, used to describe the type of node. The following code will create two nodes labelled as User and two nodes labelled as Beer:
from neo4jrestclient.client import GraphDatabase db = GraphDatabase("", username="neo4j", password="mypassword") # Create some nodes with labels user = db.labels.create("User") u1 = db.nodes.create(name="Marco") user.add(u1) u2 = db.nodes.create(name="Daniela") user.add(u2) beer = db.labels.create("Beer") b1 = db.nodes.create(name="Punk IPA") b2 = db.nodes.create(name="Hoegaarden Rosee") # You can associate a label with many nodes in one go beer.add(b1, b2)
The second step is all about connecting the dots, which in graph DB terminology means creating the relationships.
# User-likes->Beer relationships u1.relationships.create("likes", b1) u1.relationships.create("likes", b2) u2.relationships.create("likes", b1) # Bi-directional relationship? u1.relationships.create("friends", u2)
We notice that relationships have a direction, so we can easily model subject-predicate-object kind of relationships. In case we need to model bi-directional relationship, like in a friend-of link in a social network, there are essentially two options:
- Add two edges per relationship, one for each direction
- Add one edge per relationship, with an arbitrary direction, and then ignoring the direction in the query
In this example, we’re following the second option.
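To see why the second option still answers "are these two users friends?", here is the same idea in plain Python with an in-memory edge set. This is an illustration only; in Neo4j you would simply drop the arrow head and write a pattern like `MATCH (u1:User)-[:friends]-(u2:User)`:

```python
def add_friendship(edges, a, b):
    # Option 2: store ONE directed edge per friendship, in an arbitrary
    # direction (here: a -> b).
    edges.add((a, b))

def are_friends(edges, a, b):
    # ...and ignore the direction when querying -- the equivalent of a
    # Cypher pattern with no arrow head: (a)-[:friends]-(b)
    return (a, b) in edges or (b, a) in edges

edges = set()
add_friendship(edges, "Marco", "Daniela")
print(are_friends(edges, "Daniela", "Marco"))   # True
```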
Querying the graph
The Neo4j Browser available at provides a nice way to query the DB and visualise the results, both as a list of record and in a visual form.
The query language for Neo4j is called Cypher. It allows you to describe patterns in graphs in a declarative fashion, i.e. just like SQL, you describe what you want, rather than how to retrieve it. Cypher uses some sort of ASCII art to describe nodes, relationships and their direction.
For example, we can retrieve our whole graph using the following Cypher query:
MATCH (n)-[r]->(m) RETURN n, r, m;
And the outcome in the browser:
In plain English, what the query is trying to match is “any node n, linked to a node m via a relationship r“. Suggestion: with a huge graph, use a LIMIT clause.
Of course we can also embed Cypher in our Python app, for example:
from neo4jrestclient import client q = 'MATCH (u:User)-[r:likes]->(m:Beer) WHERE u.name="Marco" RETURN u, type(r), m' # "db" as defined above results = db.query(q, returns=(client.Node, str, client.Node)) for r in results: print("(%s)-[%s]->(%s)" % (r[0]["name"], r[1], r[2]["name"])) # The output: # (Marco)-[likes]->(Punk IPA) # (Marco)-[likes]->(Hoegaarden Rosee)
The above query will retrieve all the triplets User-likes-Beer for the user Marco. The results variable will be a list of tuples, matching the format that we gave in Cypher with the RETURN keyword.
Summary
Graph databases, one of the NoSQL flavours, provide an interesting way to model data with rich interconnections. Examples of applications that are particularly suitable for graph databases are social networks and recommendation systems. This article has introduced Neo4j, one of the main examples of Graph DB, and its use with Python using the Neo4j REST client. We have seen how to create nodes and relationships, and how to query the graph using Cypher, the Neo4j query language.
9 thoughts on “Getting started with Neo4j and Python”
Reblogged this on Dinesh Ram Kali..
from neo4jrestclient import GraphDatabase -> Unresolved?
I have in my virtualenv neo4jrestclient version 2.1.0 so I am not sure why it is unresolved?
Thanks
Seb
Hi, I fixed the correct line for the import:
from neo4jrestclient.client import GraphDatabase
Thanks,
Marco
Thanks Marco, I look forward to using neo4j!
port = gdb.labels.create("Port")
buenos_aires = gdb.nodes.create(name="Buenos Aires")
new_york = gdb.nodes.create(name="New York")
liverpool = gdb.nodes.create(name="Liverpool")
casablanca = gdb.nodes.create(name="Casablanca")
cape_town = gdb.nodes.create(name="Cape Town")
port.add(buenos_aires, new_york, liverpool, casablanca, cape_town)
buenos_aires.relationships.create("6 days", new_york)
buenos_aires.relationships.create("5 days", casablanca)
buenos_aires.relationships.create("4 days", cape_town)
then how do I do the query, as each relationship is not the same 'likes' but a different number of days?
q = 'MATCH (u:Port)-[r:likes]->(m:Beer) WHERE u.name="Marco" RETURN u, type(r), m'
say I like to know number of days from Buenos Aires → New York → Liverpool
From following your example, I am not sure how to resolve this:
raise StatusException(401, “Authorization Required”)
neo4jrestclient.exceptions.StatusException: Code [401]: Unauthorized. No permission — see authorization schemes.
Authorization Required
Excellent post. Able to execute with very few changes
If I have a log4j.xml file in the class path and don't have any DOM
implementation, it would be nice to catch the java.lang.NoClassDefFoundError and not load the
xml file at all.
As it is now, the program just won't initialize at all. And the Java error message is not
as descriptive as usual, so until I caught the exception myself I could not
figure out why and what class is missing.
This is how this looks:
Exception in thread "main" java.lang.NoClassDefFoundError
at uut.TastCase01.method1(TastCase01.java:61)
at uut.TastCase01.main(TastCase01.java:45)
I'm using: log4j-1.2.9.jar, jdk1.3.1_11, Windows XP,
Also I have -Dlog4j.configuration=log4jJour.xml
and log4jJour.xml is in the class path.
If I have:
} catch (NoClassDefFoundError e) {
e.printStackTrace();
}
then I could see:
java.lang.NoClassDefFoundError: org/w3c/dom/Node
at java.lang.Class.newInstance0(Native Method)
at java.lang.Class.newInstance(Class.java:232)
at org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:319)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:449)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:113)
at org.apache.log4j.Logger.getLogger(Logger.java:107)
at net.sf.jour.util.PropertiesBase.<clinit>(PropertiesBase.java:47)
--
Vlad
ps
public class PropertiesBase {
/*line 47*/ protected static final Logger log = Logger.getLogger
(PropertiesBase.class);
I can sort of see your point here that Log4j should do something more useful
than allow a NoClassDefFoundError or the like to propagate to the user and kill
the app. Or, at a minimum, provide a clear stacktrace showing the reason for
the error as you have below. However, I would also argue that you have made a
conscious choice to use an XML config file and ought to also be expected to make
sure that the standard XML libraries are in the classpath as well. What else is
going to parse an XML file?
You can avoid all this by simply using a Properties file.
In any case, this is not likely to be addressed in Log4j-1.2.x since Log4j-1.3
is now the actively developed version. I don't see this as super high priority,
but it's not a bad idea. If you provide a patch, this will get addressed much
more quickly. And keep in mind that the JoranConfigurator is now the default
XML config file parser for Log4j, not DOMConfigurator which is deprecated. Just
check out the HEAD branch of the logging-log4j module and create a patch against
that.
Jake
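A defensive check along the lines being discussed might look like the following. This is a hypothetical sketch, not log4j's actual code; the class name and messages are invented:

```java
public class XmlSupportCheck {
    /** Hypothetical guard in the spirit of the report above: probe for the
     *  DOM classes before trying to load an XML configuration file. */
    static boolean xmlSupportAvailable() {
        try {
            // org.w3c.dom.Node is the class the stack trace above misses
            Class.forName("org.w3c.dom.Node");
            return true;
        } catch (ClassNotFoundException | NoClassDefFoundError e) {
            return false;
        }
    }

    public static void main(String[] args) {
        if (xmlSupportAvailable()) {
            System.out.println("DOM present: safe to parse log4j.xml");
        } else {
            System.err.println("DOM missing: fall back to a Properties file");
        }
    }
}
```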
The case was simple: I forgot to add a DOM implementation to the class path because I
was testing on 1.4 initially.
Then I started to test on 1.3 and was really disappointed because my app
was really complex and its classes were instrumented using javassist under Java 1.4
and then run under 1.3.
So I was thinking that something wrong happened during instrumentation and so on.
I spent a hell of a lot of time debugging the instrumentation and removing all 1.4 classes
from my app that were not really used in the application, but I did not know that
until I fixed it all.
And finally I decided to catch the exception in one class and then call my main
class.
Anyway, I submitted the bug because this is: "not a bad idea"
--
Vlad
Added try/catch block in LogManager's default initialization code in rev 568761. | https://bz.apache.org/bugzilla/show_bug.cgi?id=32527 | CC-MAIN-2017-26 | refinedweb | 566 | 52.36 |
Hey all. This is my first day of Python programming (been programming Java for almost a year tho) and I was trying to put together the classic "shout" method. So far it is like this:
import time

def shout(string):
    for c in string:
        print("Gimme a " + c)
        print(c + "!")
        time.sleep(1)
    print("\nWhat's that spell... " + string + "!")

shout("COUGARS")
Basically you can see what it does... Spells out the word in cheerleader fashion. Anyway the thing I was wondering is if I could get the program to pause for a second in between the "What does that spell..." and the string, so it would go like this:
What does that spell... <one second pause> COUGARS!
Any help? Seems trivial but I haven't been able to figure it out :confused: | https://www.daniweb.com/programming/software-development/threads/208921/pause-and-print-same-line | CC-MAIN-2017-17 | refinedweb | 131 | 84.47 |
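One way to get that pause is to print the question with `end=""` so the cursor stays on the same line, flush stdout so the text actually appears before sleeping, then print the word. A sketch (the `delay` parameter and the returned list of lines are additions for convenience, not part of the original code):

```python
import sys
import time

def shout(word, delay=1.0):
    lines = []                      # keep what we print (handy for testing)
    for c in word:
        lines.append("Gimme a " + c)
        lines.append(c + "!")
        print(lines[-2])
        print(lines[-1])
        time.sleep(delay)
    # end="" keeps the cursor on the same line, and flush() pushes the text
    # out before the pause -- otherwise line buffering could hold it back.
    print("\nWhat's that spell... ", end="")
    sys.stdout.flush()
    time.sleep(delay)
    print(word + "!")
    lines.append("What's that spell... " + word + "!")
    return lines
```

Calling `shout("COUGARS")` then pauses exactly where the question trails off; pass a small `delay` to try the effect without waiting.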
Written by Chris Lattner,Dinakar Dhurjati, and Joel Stanley.
Here are some useful links:
You are also encouraged to take a look at the LLVM Coding Standards guide which focuses on how to write maintainable code more than where to put your curly braces.
if (AllocationInst *AI = dyn_cast<AllocationInst>(Val)) {
...
}
This form of the if statement effectively combines together a call to isa<> and a call to cast<> into one statement, which is very convenient.
Another common example is:
// Loop over all of the phi nodes in a basic block
BasicBlock::iterator BBI = BB->begin();
for (; PHINode *PN = dyn_cast<PHINode>(BBI); ++BBI)
cerr << *PN;
Naturally, because of this, you don't want to delete the debug printouts, but you don't want them to always be noisy. A standard compromise is to comment them out, allowing you to enable them if you need them in the future.
The :
...
DEBUG.
Because this is a "how-to" section, you should also read about the main classes that you will be working with. The Core LLVM Class Hierarchy Reference contains details and descriptions of the main classes that you should know about..
// func is a pointer to a Function instance
for (Function::iterator i = func->begin(), e = func->end(); i != e; ++i) {
  // print out the name of the basic block if it has one, and then the
  // number of instructions that it contains
  cerr << "Basic block (name=" << i->getName() << ") has "
       << i->size() << " instructions.\n";
}
// blk is a pointer to a BasicBlock instance
for (BasicBlock::iterator i = blk->begin(), e = blk->end(); i != e; ++i)
  // the next statement works since operator<<(ostream&,...)
  // is overloaded for Instruction&
  cerr << *i << "\n";

However, this isn't really the best way to print out the contents of a BasicBlock! Since the ostream operators are overloaded for virtually anything you'll care about, you could have just invoked the print routine on the basic block itself: cerr << *blk << "\n";
Note that currently operator<< is implemented for Value*, so it will print out the contents of the pointer, instead of the pointer value you might expect. This is a deprecated interface that will be removed in the future, so it's best not to depend on it. To print out the pointer value for now, you must cast to void*.
#include "llvm/Support/InstIterator.h";
std::set<Instruction*> worklist;
worklist.insert(inst_begin(F), inst_end(F));

The STL set worklist would now contain all instructions in the Function pointed to by F.
Instruction& inst = *i;    // grab reference to instruction reference
Instruction* pinst = &*i;  // grab pointer to instruction reference
const Instruction& inst = *j;
Instruction* pinst = &*i;

is semantically equivalent to

Instruction* pinst = i;

It's also possible to turn a class pointer into the corresponding iterator. Usually, this conversion is quite inexpensive. The following code snippet illustrates use of the conversion constructors provided by LLVM iterators. By using these, you can explicitly grab the iterator of something without actually obtaining it via iteration over some structure:
void printNextInstruction(Instruction* inst) {
  BasicBlock::iterator it(inst);
  ++it; // after this line, it refers to the instruction after *inst.
  if (it != inst->getParent()->end()) cerr << *it << "\n";
}

Of course, this example is strictly pedagogical, because it'd be much better to explicitly grab the next instruction directly from inst.
initialize callCounter to zero
for each Function f in the Module
  for each BasicBlock b in f
    for each Instruction i in b
      if (i is a CallInst and calls the given function)
        increment callCounter

And the actual code is (remember, since we're writing a FunctionPass, our FunctionPass-derived class simply has to override the runOnFunction method...). The helper class used there is supposed to have "value semantics": it should be passed by value, not by reference; it should not be dynamically allocated or deallocated using operator new or operator delete. It is efficiently copyable, assignable and constructable, with costs equivalent to that of a bare pointer. (You will notice, if you look at its definition, that it has only a single data member.)
Function*):
for (Value::use_iterator i = F->use_begin(), e = F->use_end(); i != e; ++i) {
if (Instruction *Inst = dyn_cast<Instruction>(*i)) {
cerr << "F is used in instruction:\n";
cerr << *Inst << "\n";
}
}
Instruction* pi = ...;
for (User::op_iterator i = pi->op_begin(), e = pi->op_end(); i != e; ++i) {
Value* v = *i;
...
}
Creation of Instructions is straightforward: simply call the constructor for the kind of instruction to instantiate and provide the necessary parameters. Each Instruction subclass is likely to have varying default parameters which change the semantics of the instruction, so refer to the doxygen documentation for the subclass you are interested in instantiating. Here, indexLoc is the logical name of the instruction's execution value, which is a pointer to an integer on the runtime stack.
Instruction instances that are already in BasicBlocks are implicitly associated with an existing instruction list: the instruction list of the enclosing basic block. Thus, we could have accomplished the same thing as the above code without being given a BasicBlock by doing:
Instruction *pi = ...;
Instruction *newInst = new Instruction(...);
pi->getParent()->getInstList().insert(pi, newInst);
Instruction* pi = ...;
Instruction* newInst = new Instruction(..., pi);

which is much cleaner, especially if you're creating a lot of instructions and adding them to BasicBlocks.

You can use Value::replaceAllUsesWith and User::replaceUsesOfWith to change more than one use at a time. See the doxygen documentation for the Value Class and User Class, respectively, for more information.
These two methods expose the operands of the User in a convenient form for direct access.
Together, these methods make up the iterator-based interface to the operands of a User.
Constructor used when you need to create new Functions to add to the program. The constructor must specify the type of the function to create, and whether or not it should start out with internal or external linkage.
Global variables are represented with the (surprise, surprise) GlobalVariable class. Like functions, GlobalVariables are also subclasses of GlobalValue, and as such are always referenced by their address (global values must live in memory, so their "name" refers to their constant address).
this java class

public class Foo {
    public void com(java.lang.Comparable c) {
    }
}

in jython I want to pass it a jython string:
Jython 2.1 on java1.4.1_01 (JIT: null)
Type "copyright", "credits" or "license" for more information.
>>> import Foo
>>> f=Foo()
>>> f.com('as')
Traceback (innermost last):
File "<console>", line 1, in ?
TypeError: com(): 1st arg can't be coerced to java.lang.Comparable
>>>

I would expect jython to know that the Java String class implements the java.lang.Comparable interface, and does some coercion so that it would work, just as if I typed this:

>>> from java.lang import String
...
>>> f.com(String('as'))

Am I missing something? sorry if this is an old question.

thanks
ma0hai
&P will be replaced by the port number you are connecting to on the target host.
This option allows you to choose whether to include a header line with the date and time when the log file is opened. It may be useful to disable this if the log file is being used as realtime input to other programs that don't expect the header line.

…CR, and so the newly written line is overwritten by the following line. This option causes a line feed so that all lines are displayed (see section 4.15.3.8).

…(sometimes called ‘passthrough printing’); see section 4.15.3 for details. If none of the settings here seems to help, you may find question A.7.23.2 useful.
PuTTY has the ability to clear the terminal's scrollback buffer in response to a command from the server. If you find PuTTY is doing this unexpectedly or inconveniently, you can tell PuTTY not to respond to that server command.
The Window configuration panel allows you to control aspects of the PuTTY window.
The ‘Columns’ and ‘Rows’ boxes let you set the terminal to a precise size.
If PuTTY is configured to treat data from the server as encoded in UTF-8, then by default it disables the older VT100-style system of control sequences that cause the lower-case letters to be temporarily replaced by line drawing characters.
The rationale is that in UTF-8 mode you don't need those control sequences anyway, because all the line-drawing characters they access are available as Unicode characters already, so there's no need for applications to put the terminal into a special state to get at them.
Also, it removes a risk of the terminal accidentally getting into that state: if you accidentally write uncontrolled binary data to a non-UTF-8 terminal, it can be surprisingly common to find that your next shell prompt appears as a sequence of line-drawing characters and then you have to remember or look up how to get out of that mode. So by default, UTF-8 mode simply doesn't have a confusing mode like that to get into, accidentally or on purpose.
However, not all applications will see it that way. Even UTF-8 terminal users will still sometimes have to run software that tries to print line-drawing characters in the old-fashioned way. So the configuration option ‘Enable VT100 line drawing even in UTF-8 mode’ puts PuTTY into a hybrid mode in which it understands the VT100-style control sequences that change the meaning of the ASCII lower case letters, and understands UTF-8.
The Selection panel allows you to control the way copy and paste work in the PuTTY window.
PuTTY's copy and paste mechanism is by default modelled on the Unix
xterm application. The X Window System uses a three-button mouse, and the convention in that system is that the left button selects, the right button extends an existing selection, and the middle button pastes.
Windows often only has two mouse buttons, so when run on Windows, PuTTY is configurable.
(When PuTTY itself is running on Unix, it follows the X Window System convention.)
Here you can configure which clipboard(s) are written or read by PuTTY's various copy and paste actions.
Most platforms, including Windows, have a single system clipboard. On these platforms, PuTTY provides a second clipboard-like facility by permitting you to paste the text you last selected in this window, whether or not it is currently also in the system clipboard. This is not enabled by default.
The X Window System (which underlies most Unix graphical interfaces) provides multiple clipboards (or ‘selections’), and many applications support more than one of them by a different user interface mechanism. When PuTTY itself is running on Unix, it has more configurability relating to these selections.
The two most commonly used selections are called ‘
PRIMARY’ and ‘
CLIPBOARD’; in applications supporting both, the usual behaviour is that
PRIMARY is used by mouse-only actions (selecting text automatically copies it to
PRIMARY, and middle-clicking pastes from
PRIMARY), whereas
CLIPBOARD is used by explicit Copy and Paste menu items or keypresses such as Ctrl-C and Ctrl-V.
The checkbox ‘Auto-copy selected text to system clipboard’ controls whether or not selecting text in the PuTTY terminal window automatically has the side effect of copying it to the system clipboard, without requiring a separate user interface action.
On X, the wording of this option is changed slightly so that ‘
CLIPBOARD’ is mentioned in place of the ‘system clipboard’. Text selected in the terminal window will always be automatically placed in the
PRIMARY selection, as is conventional, but if you tick this box, it will also be placed in ‘
CLIPBOARD’ at the same time.
PuTTY has three user-interface actions which can be configured to paste into the terminal (not counting menu items). You can click whichever mouse button (if any) is configured to paste (see section 4.11.1); you can press Shift-Ins; or you can press Ctrl-Shift-V, although that action is not enabled by default.
You can configure which of the available clipboards each of these actions pastes from (including turning the paste action off completely). On platforms with a single system clipboard (such as Windows), the available options are to paste from that clipboard or to paste from PuTTY's internal memory of the last selected text within that window. On X, the standard options are
CLIPBOARD or
PRIMARY.
(
PRIMARY is conceptually similar in that it also refers to the last selected text – just across all applications instead of just this window.)
The two keyboard options each come with a corresponding key to copy to the same clipboard. Whatever you configure Shift-Ins to paste from, Ctrl-Ins will copy to the same location; similarly, Ctrl-Shift-C will copy to whatever Ctrl-Shift-V pastes from.
On X, you can also enter a selection name of your choice. For example, there is a rarely-used standard selection called ‘
SECONDARY’, which Emacs (for example) can work with if you hold down the Meta key while dragging to select or clicking to paste; if you configure a PuTTY keyboard action to access this clipboard, then you can interoperate with other applications' use of it. Another thing you could do would be to invent a clipboard name yourself, to create a special clipboard shared only between instances of PuTTY, or between just instances configured in that particular way.
It is possible for the clipboard to contain not just text (with newlines and tabs) but also control characters such as ESC which could have surprising effects if pasted into a terminal session, depending on what program is running on the server side. Copying text from a mischievous web page could put such characters onto the clipboard.
By default, PuTTY filters out the more unusual control characters, only letting through the more obvious text-formatting characters (newlines, tab, backspace, and DEL).
Setting this option stops this filtering; on paste, any character on the clipboard is sent to the session uncensored. This might be useful if you are deliberately using control character pasting as a simple form of scripting, for instance.
The Copy configuration panel controls behaviour specifically related to copying from the terminal window to the clipboard.
PuTTY will select a word at a time in the terminal window if you double-click to begin the drag. This section allows you to control precisely what is considered to be a word.
If you enable ‘Copy to clipboard in RTF as well as plain text’, PuTTY will write formatting information to the clipboard as well as the actual text you copy.

This option is enabled by default. If it is disabled, PuTTY will ignore any control sequences sent by the server which try to specify arbitrary 24-bit RGB colour values (a facility supported by modern terminals).
When the server sends a control sequence indicating that some text should be displayed in bold, PuTTY can handle this either by changing the font or by changing the colour.

Enabling ‘Use system colours’ will cause PuTTY to ignore the configured colours (see section 4.13.7), instead going with the system-wide defaults.

Note that non-bold and bold text will be the same colour if this option is enabled. You might want to change to indicating bold text by font changes (see section 4.13) if you had previously chosen to indicate it by colour (see section 4.13.4).
Therefore, you might find that keepalives help connection loss, or you might find they make it worse, depending on what kind of network problems you have between you and the server.
Keepalives are only supported in Telnet and SSH; the Rlogin, SUPDUP, and Raw protocols offer no way of implementing them. (For an alternative, see section 4.14.3.)
Note that if you are using SSH-1 and the server has a bug that makes it unable to deal with SSH-1 ignore messages (see section 4.26.11), keepalives will not work.

You can also enable this mode on the command line; see section 3.11.3.26.
Often the proxy interaction has its own diagnostic output; this is particularly the case for local proxy commands.
The setting ‘Print proxy diagnostics in the terminal window’ lets you control how much of the proxy's diagnostics are printed to the main terminal window, along with output from your main session.
By default (‘No’), proxy diagnostics are only sent to the Event Log; with ‘Yes’ they are also printed to the terminal, where they may get mixed up with your main session. ‘Only until session starts’ is a compromise; proxy messages will go to the terminal window until the main session is deemed to have started (in a protocol-dependent way), which is when they're most likely to be interesting; any further proxy-related messages during the session will only go to the Event Log.

This allows you to select whether to use SSH protocol version 2 or the older version 1.
You should normally leave this at the default of ‘2’. As well as having fewer features, the older SSH-1 protocol is no longer developed, has many known cryptographic weaknesses, and is generally not considered to be secure. PuTTY's protocol 1 implementation is provided mainly for compatibility, and is no longer being enhanced.
If a server offers both versions, prefer ‘2’. If you have some server or piece of equipment that only talks SSH-1, select ‘1’ here, and do not treat the resulting connection as secure.
PuTTY will not automatically fall back to the other version of the protocol if the server turns out not to match your selection here; instead, it will put up an error message and abort the connection. This prevents an active attacker downgrading an intended SSH-2 connection to SSH-1..
It is possible to test programmatically for the existence of a live upstream using Plink. See section 7.2.3.4.
PuTTY currently supports the following key exchange methods:
If the first algorithm PuTTY finds is below the ‘warn below here’ line, you will see a warning box when you make the connection, similar to that for cipher selection (see section 4.20).
PuTTY supports a set of key exchange methods that also incorporates GSSAPI-based authentication. They are enabled with the ‘Attempt GSSAPI key exchange’ checkbox (which also appears on the ‘GSSAPI’ panel).
PuTTY can only perform the GSSAPI-authenticated key exchange methods when using Kerberos V5, and not other GSSAPI mechanisms. If the user running PuTTY has current Kerberos V5 credentials, then PuTTY will select the GSSAPI key exchange methods in preference to any of the ordinary SSH key exchange methods configured in the preference list.
The advantage of doing GSSAPI authentication as part of the SSH key exchange is apparent when you are using credential delegation (see section 4.22.1). The SSH key exchange can be repeated later in the session, and this allows your Kerberos V5 credentials (which are typically short-lived) to be automatically re-delegated to the server when they are refreshed on the client. (This feature is commonly referred to as ‘cascading credentials’.)
If your server doesn't support GSSAPI key exchange, it may still support GSSAPI in the SSH user authentication phase. This will still let you log in using your Kerberos credentials, but will only allow you to delegate the credentials that are active at the beginning of the session; they can't be refreshed automatically later, in a long-running session.
Another effect of GSSAPI key exchange is that it replaces the usual SSH mechanism of permanent host keys described in section 2.2. So if you use this method, then you won't be asked any interactive questions about whether to accept the server's host key. Instead, the Kerberos exchange will verify the identity of the host you connect to, at the same time as verifying your identity to it.

The Host Keys panel allows you to configure options related to SSH-2 host key management.
Host keys are used to prove the server's identity, and assure you that the server is not being spoofed (either by a man-in-the-middle attack or by completely replacing it on the network). See section 2.2 for a basic introduction to host keys.
This entire panel is only relevant to SSH protocol version 2; none of these settings affect SSH-1 at all.
PuTTY supports a variety of SSH-2 host key types, and allows you to choose which one you prefer to use to identify the server. Configuration is similar to cipher selection (see section 4.20).
PuTTY currently supports the following host key types:
If PuTTY already has one or more host keys stored for the server, it will by default prefer to use one of those, even if the server has a key type that is higher in the preference order. You can add such a key to PuTTY's cache from within an existing session using the ‘Special Commands’ menu; see section 3.1.3.2.
Otherwise, PuTTY will choose a key type based purely on the preference order you specify in the configuration.
If the first key type PuTTY finds is below the ‘warn below here’ line, you will see a warning box when you make the connection, similar to that for cipher selection (see section 4.20).
By default, PuTTY will adjust the preference order for host key algorithms so that any host keys it already knows are moved to the top of the list.
This prevents you from having to check and confirm a new host key for a server you already had one for (e.g. because the server has generated an alternative key of a type higher in PuTTY's preference order, or because you changed the preference order itself).
However, on the other hand, it can leak information to a listener in the network about whether you already know a host key for this server.
For this reason, this policy is configurable. By turning this checkbox off, you can reset PuTTY to always use the exact order of host key algorithms configured in the preference list described in section 4.19.1, so that a listener will find out nothing about what keys you had stored.
For situations where PuTTY's automated host key management simply picks the wrong host name to store a key under, you may want to consider setting a ‘logical host name’ instead (the ‘Logical name of remote host’ setting on the Connection panel).
‘SHA256:’ followed by 43 case-sensitive characters.
‘MD5:’. (The case of the characters does not matter.)
In SSH-2, it is in principle possible to establish a connection without using SSH's mechanisms to identify or prove who you are to the server. An SSH server could prefer to handle authentication in the data channel, for instance, or simply require no user authentication whatsoever.
By default, PuTTY assumes the server requires authentication (we've never heard of one that doesn't), and thus must start this process with a username. If you find you are getting username prompts that you cannot answer, you could try enabling this option. However, most SSH servers will reject this.
This is not the option you want if you have a username and just want PuTTY to remember it; for that see section 4.15.1. It's also probably not what you want if you're trying to set up passwordless login to a mainstream SSH server; depending on the server, you probably wanted public-key authentication (chapter 8) or perhaps GSSAPI authentication (section 4.22). (These are still forms of authentication, even if you don't have to interact with them.)
This option only affects SSH-2 connections. SSH-1 connections always require an authentication step.
This option causes PuTTY to abandon an SSH session and disconnect from the server, if the server accepted authentication without ever having asked for any kind of password or signature or token.
This might be used as a security measure. There are some forms of attack against an SSH client user which work by terminating the SSH authentication stage early, and then doing something in the main part of the SSH session which looks like part of the authentication, but isn't really.
For example, instead of demanding a signature from your public key, for which PuTTY would ask for your key's passphrase, a compromised or malicious server might allow you to log in with no signature or password at all, and then print a message that imitates PuTTY's request for your passphrase, in the hope that you would type it in. (In fact, the passphrase for your public key should not be sent to any server.)
PuTTY's main defence against attacks of this type is the ‘trust sigil’ system: messages in the PuTTY window that are truly originated by PuTTY itself are shown next to a small copy of the PuTTY icon, which the server cannot fake when it tries to imitate the same message using terminal output.
However, if you think you might be at risk of this kind of thing anyway (if you don't watch closely for the trust sigils, or if you think you're at extra risk of one of your servers being malicious), then you could enable this option as an extra defence. Then, if the server tries any of these attacks involving letting you through the authentication stage, PuTTY will disconnect from the server before it can send a follow-up fake prompt or other type of attack.
On the other hand, some servers legitimately let you through the SSH authentication phase trivially, either because they are genuinely public, or because the important authentication step happens during the terminal session. (An example might be an SSH server that connects you directly to the terminal login prompt of a legacy mainframe.) So enabling this option might cause some kinds of session to stop working. It's up to you..11. They can even be used to prompt for simple passwords.
With this switch enabled, PuTTY will attempt these forms of authentication if the server is willing to try them. You will be presented with a challenge string (which may be different every time) and must supply the correct response in order to log in.
You can use the authentication agent Pageant so that you do not need to explicitly configure a key here; see chapter 9.
If a private key file is specified here with Pageant running, PuTTY will first try asking Pageant to authenticate with that key, and ignore any other keys Pageant may have. If that fails, PuTTY will ask for a passphrase as normal. You can also specify a public key file in this case (in RFC 4716 or OpenSSH format), as that's sufficient to identify the key to Pageant, but of course if Pageant isn't present PuTTY can't fall back to using this file itself.

GSSAPI authentication is usually used with the Kerberos single sign-on protocol to implement passwordless login.
GSSAPI authentication is only available in the SSH-2 protocol.
PuTTY supports two forms of GSSAPI-based authentication. In one of them, the SSH key exchange happens in the normal way, and GSSAPI is only involved in authenticating the user. The checkbox labelled ‘Attempt GSSAPI authentication’ controls this form.
In the other method, GSSAPI-based authentication is combined with the SSH key exchange phase. If this succeeds, then the SSH authentication step has nothing left to do. See section 4.18.1.1 for more information about this method. The checkbox labelled ‘Attempt GSSAPI key exchange’ controls this form. (The same checkbox appears on the ‘Kex’ panel.)
If one or both of these controls is enabled, then GSSAPI authentication will be attempted in one form or the other, and (typically) if your client machine has valid Kerberos credentials loaded, then PuTTY should be able to authenticate automatically to servers that support Kerberos logins.
If both of those checkboxes are disabled, PuTTY will not try any form of GSSAPI at all, and the rest of this panel will be unused..
If your connection is not using GSSAPI key exchange, it is possible for the delegation to expire during your session. See section 4.18.1.1 for more information.

PuTTY supports a choice of GSSAPI libraries (including Windows' SSPI).
On Windows, such libraries are files with a
.dll extension, and must have been built in the same way as the PuTTY executable you're running; if you have a 32-bit DLL, you must run a 32-bit version of PuTTY, and the same with 64-bit (see question A.6.10). On Unix, shared libraries generally have a
.so extension., although the server is at liberty to ignore your changes. If you don't understand any of this, it's safe to leave these settings alone.
(None of these settings will have any effect if no pseudo-terminal is requested or allocated.)
You can change what happens for a particular mode by selecting it in the list, choosing one of the options and specifying the exact value if necessary, and hitting ‘Set’. The effect of the options is as follows:
PuTTY proper will send modes that it has an opinion on (currently only the code for the Backspace key,
ERASE, and whether the character set is UTF-8,
IUTF8). (Explicitly specifying a value of ‘no’ is different from not sending the mode at all.)
IUTF8 signals to the server whether the terminal character set is UTF-8 or not, for purposes such as basic line editing; if this is set incorrectly, the backspace key may erase the wrong amount of text, for instance. However, simply setting this is not usually sufficient for the server to use UTF-8; POSIX servers will generally also require the locale to be set (by some server-dependent means), although many newer installations default to UTF-8. Also, since this mode was added to the SSH protocol much later than the others, many servers (particularly older servers) do not honour this mode sent over SSH; indeed, a few poorly-written servers object to its mere presence, so you may find you need to set it to not be sent at all. When set to ‘Auto’, this follows the local configured character set (see section 4.10.1).
The X11 panel allows you to configure forwarding of X11 over an SSH connection.
If your server lets you run X Window System applications, X11 forwarding allows you to securely give those applications access to a local X display on your PC. To enable X11 forwarding, check the ‘Enable X11 forwarding’ box in this panel.
This overrides the general Internet protocol version preference on the Connection panel (see section 4.14.4).

The Bugs and More Bugs panels (there are two because we have so many bug compatibility modes) allow you to configure workarounds for known bugs in SSH server implementations.

PuTTY uses ignore messages in SSH-2 to confuse the encrypted data stream and make it harder to cryptanalyse. It also uses ignore messages for connection keepalives (see section 4.14.1).
Chokes on PuTTY's SSH-2 ‘winadj’ requests
PuTTY sometimes sends a special request to SSH servers in the middle of channel data, with the name
winadj@putty.projects.tartarus.org; it's possible that when connecting to such a server it might receive a reply to a request after it thinks the channel has entirely closed, and terminate with an error along the lines of ‘Received
SSH2_MSG_CHANNEL_FAILURE for nonexistent channel 256’.
An ignore message is an SSH protocol message which either side is required to ignore whenever it receives it; PuTTY uses such messages for connection keepalives (see section 4.14.1).
If this bug is detected, PuTTY will stop using ignore messages. This means that keepalives will stop working, and PuTTY will have to fall back to a secondary defence against SSH-1 password-length eavesdropping. See section 4.26.12.
The ‘Bare ssh-connection’ protocol
In addition to SSH itself, PuTTY also supports a second protocol that is derived from SSH. It's listed in the PuTTY GUI under the name ‘Bare
ssh-connection’.
This protocol consists of just the innermost of SSH-2's three layers: it leaves out the cryptography layer providing network security, and it leaves out the authentication layer where you provide a username and prove you're allowed to log in as that user.
It is therefore completely unsuited to any network connection. Don't try to use it over a network!
The purpose of this protocol is for various specialist circumstances in which the ‘connection’ is not over a real network, but is a pipe or IPC channel between different processes running on the same computer. In these contexts, the operating system will already have guaranteed that each of the two communicating processes is owned by the expected user (so that no authentication is necessary), and that the communications channel cannot be tapped by a hostile user on the same machine (so that no cryptography is necessary either). Examples of possible uses involve communicating with a strongly separated context such as the inside of a container, or a VM, or a different network namespace.
Explicit support for this protocol is new in PuTTY 0.75. As of 2021-04, the only known server for the bare
ssh-connection protocol is the Unix program ‘
psusan’ that is also part of the PuTTY tool suite.
(However, this protocol is also the same one used between instances of PuTTY to implement connection sharing: see section 4.17.5. In fact, in the Unix version of PuTTY, when a sharing upstream records ‘Sharing this connection at [pathname]’ in the Event Log, it's possible to connect another instance of PuTTY directly to that Unix socket, by entering its pathname in the host name box and selecting ‘Bare
ssh-connection’ as the protocol!)
Many of the options under the SSH panel also affect this protocol, although options to do with cryptography and authentication do not, for obvious reasons.
I repeat, DON'T TRY TO USE THIS PROTOCOL FOR NETWORK CONNECTIONS! That's not what it's for, and it's not at all safe to do it.

The SUPDUP panel allows you to configure options that only apply to SUPDUP sessions. See section 3.10 for more about the SUPDUP protocol.
In SUPDUP, the client sends a piece of text of its choice to the server giving the user's location. This is typically displayed in lists of logged-in users.
By default, PuTTY just defaults this to "The Internet". If you want your location to show up as something more specific, you can configure it here.
This declares what kind of character set extension your terminal supports. If the server supports it, it will send text using that character set. ‘None’ means the standard 95 printable ASCII characters. ‘ITS’ means ASCII extended with printable characters in the control character range. This character set is documented in the SUPDUP protocol definition. ‘WAITS’ is similar to ‘ITS’ but uses some alternative characters in the extended set: most prominently, it will display arrows instead of
^ and
_, and
} instead of
~. ‘ITS’ extended ASCII is used by ITS and Lisp machines, whilst ‘WAITS’ is only used by the WAITS operating system from the Stanford AI Laboratory.
When **MORE** processing is enabled, the server causes output to pause at the bottom of the screen, until a space is typed.
This controls whether the terminal will perform scrolling when the cursor goes below the last line, or whether the cursor will return to the first line.
If you want to provide feedback on this manual or on the PuTTY tools themselves, see the Feedback page. | https://tartarus.org/~simon/putty-snapshots/htmldoc/Chapter4.html | CC-MAIN-2021-31 | refinedweb | 4,698 | 59.64 |
In this assignment you will implement an algorithm that solves the Stable Marriage problem. In this problem we are given n men and n women, and our goal is to arrange stable marriages between the men and the women. Obviously each man can be married to only one woman, and each woman to only one man. Each individual, man or woman, ranks all the individuals of the opposite sex in an order of preference.

The n men and n women can easily be formed into n pairs between them. The problem is how stable the resulting marriages are. Suppose that among the married couples there are two pairs (a, g) and (a', g'), where a and a' are men and g and g' are women, such that a prefers g' to g, and g' prefers a to a'. It is then very likely that a and g' will abandon their partners and become a pair. Consequently, the goal is to find n pairs of men and women that constitute stable marriages, that is to say, no situation like the one described above can occur. The algorithm that solves the Stable Marriage problem is:
Initially all the men and women are free.
While there is a man "a" who is free and has not proposed to every woman:
    Choose such a man "a".
    Let "g" be the woman ranked highest in the preferences of "a" to whom "a" has not yet proposed.
    If "g" is free then
        "a" and "g" become a pair.
    Otherwise, if "g" is already paired with a man "a'" then
        If "g" prefers "a'" to "a" then
            "a" remains free.
        Otherwise "g" abandons "a'" and becomes paired with "a";
            "a'" is henceforth free.
        End If
    End If
End While
You are asked to implement the algorithm, adopting suitable data structures. Concretely, the structures you use must ensure that each iteration of the While loop takes O(1) time, that is to say, the number of operations executed in one iteration is a constant, independent of the problem size n. The structures you use must also occupy at most O(n^2) space.

Finally, before the start of the main loop there may be an initialization phase for the structures you will use. The time cost of this phase must be at most O(n^2).
I have done this so far:
#include <stdio.h>
#include <stdlib.h>

// Structure and Global Variables
//-------------------------------------------------
// VARIABLES AND NAMES THAT YOU CAN CHANGE
// Number of Men and Women you need to match
#define MWTOTAL 4
// Add Women and Men names below to match the MWTOTAL
char *wnames[50] = { "Sarah", "Michelle", "Liz", "Estella" };
char *mnames[50] = { "John", "Travis", "Don", "Launnie" };
// VARIABLES AND NAMES CHANGE ENDS HERE
//--------------------------------------------------

struct women {
    int free;
    int womenPreference[MWTOTAL];
} wset[MWTOTAL];

struct men {
    int free;
    int menPreference[MWTOTAL];
    int proposeToW;
} mset[MWTOTAL];

int engageSet[MWTOTAL][MWTOTAL];
int totalEngaged = 0;

// Function Definitions and Declarations

void initializeSets()
{
    int i;
    for (i = 0; i < MWTOTAL; i++) {
        mset[i].free = -1;       // Initially all men are free to propose to every woman
        mset[i].proposeToW = 0;  // Index of the next woman a man can propose to
        wset[i].free = -1;
    }
}

void startStableMatch()
{
    int i, j, k, l, temp, temp1 = 0, cnt = 0;

    while (1) {
        for (i = 0; i < MWTOTAL; i++) {
            if (mset[i].free == -1) {
                // Man i is free to propose, in decreasing order of preference
                if (wset[mset[i].menPreference[mset[i].proposeToW]].free == -1) {
                    // The woman is free: engage the man/woman
                    engageSet[totalEngaged][0] = i;
                    engageSet[totalEngaged][1] = mset[i].menPreference[mset[i].proposeToW];
                    totalEngaged++;
                    wset[mset[i].menPreference[mset[i].proposeToW]].free = 0; // Woman is not free any more
                    mset[i].free = 0;
                    mset[i].proposeToW++;
                } else {
                    // Find the rank j of man i in the woman's preference list
                    for (j = 0; j < MWTOTAL; j++) {
                        if (i == wset[mset[i].menPreference[mset[i].proposeToW]].womenPreference[j])
                            break;
                    }
                    // Find the woman's current engagement, and her partner's rank l
                    for (k = 0; k < totalEngaged; k++) {
                        if (engageSet[k][1] == mset[i].menPreference[mset[i].proposeToW]) {
                            for (l = 0; l < MWTOTAL; l++) {
                                if (engageSet[k][0] ==
                                    wset[mset[i].menPreference[mset[i].proposeToW]].womenPreference[l]) {
                                    if (j < l) {
                                        temp = engageSet[k][0];
                                        engageSet[k][0] = i;  // Woman is engaged to the man that just proposed
                                        mset[temp].free = -1; // Free the previously engaged man of W
                                        mset[i].free = 0;
                                        mset[i].proposeToW++;
                                    } else {
                                        mset[i].free = -1;    // The woman rejects the proposal; the man stays free
                                        mset[i].proposeToW++;
                                    }
                                    break;
                                }
                            }
                            break;
                        }
                    }
                }
            }
        }
        for (temp1 = 0; temp1 < MWTOTAL; temp1++) {
            if (mset[temp1].free == 0)
                cnt++;
        }
        if (cnt == MWTOTAL)  // No man is free: the algorithm terminates here
            break;
        cnt = 0;
    }
}

void printEngageSet()
{
    int i;
    printf("\n\n\tThe stable pairs are listed below\n\n");
    printf("----------------------------------------------------------");
    for (i = 0; i < totalEngaged; i++) {
        printf("\n\t( %s , %s )\n", mnames[engageSet[i][0]], wnames[engageSet[i][1]]);
    }
    printf("----------------------------------------------------------\n\n");
}

int main(void)
{
    printf("\n\nPlease Wait while the Algorithm computes the Stable Set\n\n");
    initializeSets();
    startStableMatch();
    printEngageSet();
    return 0;
}
Any helpful tips are welcome, and if you find any mistakes, please tell me.
In this project I made an application program for text-to-speech conversion. To build this application, we must install the Microsoft Speech SDK on our computer. You can download the Speech SDK (it's free) from
The SAPI API provides a high-level interface between an application and speech engines. SAPI implements all the low-level details needed to control and manage the real-time operations of various speech engines.
Applications can control text-to-speech (TTS) using the ISpVoice Component Object Model (COM) interface. Once an application has created an ISpVoice object, it only needs to call the Speak method to generate speech output from text.
The project is divided into five steps:
First you will create the initial ATL project using the MFC AppWizard.
Your dialog box should look like this:
Figure 1: New Project
Click OK, and the MFC AppWizard presents a dialog box offering several choices to configure the type of MFC project (figure 2); choose Dialog based. After that, click the Finish button.
Figure 2: MFC AppWizard Step 1, choose Dialog based
To use SAPI (the Speech Application Programming Interface) in our application, we must set up our project. In the file StdAfx.h, add code like this (after the "#include <stdio.h>" line but before the "#endif" statement):
#include <atlbase.h>
extern CComModule _Module;
#include <atlcom.h>
Change the project settings to reflect the paths. Using the Project->Settings menu item, set the SAPI.h path. Click the C/C++ tab and select Preprocessor from the Category drop-down list. In "Additional include directories", enter the directory where the Speech SDK is installed, such as D:\Program Files\Microsoft Speech SDK 5.1\Include (see figure 3).
Figure 3: Setting path
To set the SAPI.lib path (see figure 4):
Figure 4: Add library module Sapi.lib and set path
The GUI for this project is modeled as in figure 5:
Figure 5: GUI project
In the GUI, double-click the button and type OnSpeak as the name of the handler method. Add this code:
UpdateData();
ISpVoice *pVoice = NULL;
if (FAILED(CoInitialize(NULL)))
{
    AfxMessageBox("Error to initialize COM");
    return;
}
HRESULT hr = CoCreateInstance(CLSID_SpVoice, NULL, CLSCTX_ALL,
                              IID_ISpVoice, (void **)&pVoice);
if (SUCCEEDED(hr))
{
    hr = pVoice->Speak(m_sText.AllocSysString(), 0, NULL);
    pVoice->Release();
    pVoice = NULL;
}
CoUninitialize();
Note: m_sText is the member variable bound to the Edit Box.
After that, you can compile and run this project.
Speech SDK 5.1
Variable sometimes not logged in OpenSesame
I have this inline_script:
def my_form_validator():
    """Checks whether both the gender and age fields have been filled out;
    also checks for the number of characters."""
    return (var.sex != u'no' and var.byear != u'' and var.bmonth != u''
            and var.bday != u'' and len(str(var.byear)) == 4
            and len(str(var.bmonth)) in [1, 2] and len(str(var.bday)) in [1, 2])

def filter_digits(ch):
    """Allows only digit characters as input"""
    return ch in ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'backspace']

# Define all widgets
button_ok = Button(text=u'Ok')
#label_gender = Label(u'Your gender')
checkbox_male = Checkbox(text=u'Maennlich', group=u'sex', var=u'sex')
checkbox_female = Checkbox(text=u'Weiblich', group=u'sex', var=u'sex')
label_age = Label(u'Bitte geben Sie Ihr Geburtsdatum ein')
label_year = Label(u'Jahr')
label_month = Label(u'Monat')
label_day = Label(u'Tag')

# Specify a key filter so that only digits are accepted as text input
input_year = TextInput(stub=u'JJJJ', var=u'byear', key_filter=filter_digits)
# actually a selection thing would be nice
input_month = TextInput(stub=u'MM', var=u'bmonth', key_filter=filter_digits)
input_day = TextInput(stub=u'TT', var=u'bday', key_filter=filter_digits)

# Build the form. Specify a validator function to make sure that the form is
# completed.
my_form = Form(
    validator=my_form_validator, cols=3,
    rows=[3, 1, 1, 1, 1, 1, 1, 1, 1, 3],
    spacing=10, margins=(100, 100, 100, 100),
)
#my_form.set_widget(label_gender, (0, 0))
my_form.set_widget(checkbox_male, (1, 2))
my_form.set_widget(checkbox_female, (2, 2))
my_form.set_widget(label_age, (1, 3))
my_form.set_widget(label_day, (0, 4))
my_form.set_widget(label_month, (0, 5))
my_form.set_widget(label_year, (0, 6))
my_form.set_widget(input_day, (1, 4), colspan=2)
my_form.set_widget(input_month, (1, 5), colspan=2)
my_form.set_widget(input_year, (1, 6), colspan=2)
my_form.set_widget(button_ok, (1, 7))
my_form._exec()

# Calculate age from current time and birthdate
import shutil, os, datetime
now = datetime.datetime.now()
current_year = now.strftime("%Y")
current_month = now.strftime("%m")
current_day = now.strftime("%d")
#current_time = now.strftime("%Y-%m-%d")
years = int(current_year) - var.byear
months = int(current_month) - var.bmonth
days = int(current_day) - var.bday
var.age = years + float(months)/12 + float(days)/365
var.Bday = str(var.byear) + '.' + str(var.bmonth) + '.' + str(var.bday)
var.Sex = str(var.sex)
print var.age, var.Bday
log.write_vars()
Mostly the variable sex is logged, but sometimes it is not. I have no clue why it is sometimes not logged, and I cannot reproduce the behavior. It seems random.
Could it have to do with garbage collection? Other ideas?
I tried full screen and quick run. In both it usually logs. But sometimes not.
Hi Stephan,
What do you mean with sometimes? Some trials in one session, or some session? So, if you hit the RUN button will it either log everything (usually) or nothing (sometimes), or will it log some trials but not others?
Eduard
Hi Eduard,
It is just in some sessions. If I repeat the session, the variable usually appears in the inspector (and the logfile).
The variable 'sex' is only coded once, so there is only one trial. It is logged at the same time as the birthday variables (which always appear in the inspector/output).
You see that I tried a workaround with var.Sex = str(var.sex), which unfortunately does not really help.
What I noticed is the following: Sex is always in the variable inspector. However, sometimes its values remain empty (and in those cases sex does not appear in the variable inspector).
What's the meaning of the 'u' in var=u'sex'? Could it have to do with that?
It is so strange, because if the variable is not filled the experiment should not go on (see my_form_validator), however it always does. So there must be a value for sex, but the variable gets lost on the way (somewhere)...
How often does that happen? I just tried it 10 times and it always worked. I used the expyriment backend and the quickrun button on a linux machine.
Hi eduard,
I just replicated the error on the second run (new day, same problem). Yesterday i also made like 10 runs, where it worked. I cannot say how often it occurs.
I am using Win7, expyriment backend. The error also occured on Mac.
Maybe it has to do with the rest of my experiment? Can I send you the whole experiment privately somewhere?
I shortened the script a little:
What i know for now:
-the form_validator is not the problem
-it is definitely the checkbox widget (if it occurs in one checkbox widget, the problem appears in another checkbox widget too)
-most variables appear at the start of the experiment in the inspector, however the checkbox variables appear on presentation of the widget (which takes some miliseconds to appear in the inspector). But in some cases (don't know why nor when), the variables do not appear. Even if they do not appear in the inspector, they are somewhere, because they do work in the form_validator --> the experiment goes on.
Update:
Actually, my workaround with the copy of the variable helped. Although not visible in the inspector, the values of the new variable Sex are logged in the logfile. However, the old variable sex is not in the logfile. It seems like a temporary variable, which surprises me, because in the form_validator I draw on it with var.sex and not with sex.
Maybe it has to do with the group definition?
Honestly, I have no idea. I tried it now close to 20 times and the variable is always present. Maybe it has to do with the Operating System?
Anyway, if this work around works for you, that that's good I guess. If you feel like it, It'd be cool if you could do some more testing, try to further hone in on the problem and eventually submit a bug report on github.
Thanks,
Eduard
Possibly, it had to do with the operating system. Actually, it was never present on a certain MacBook, and there it never appeared in the logfile. However, I want to be independent of operating systems (one of the strengths of OpenSesame!), since I use participants' notebooks.
Now I solved it by changing the OpenSesame var to a Python var, and then back again.
Context: you need to check that the values you have in a List of Strings are the same as the contents of a file. An element in the List will correspond to an entire line from the file. How can you achieve this easily?
Well, first, let’s see what the data to compare is. Let’s go with an example, and assume that your List of Strings contains the following values: “apples”, ”pears”, ”cherries”, ”peaches”, ”plums”. The file that you need to compare the List to has the following structure:
apples pears cherries peaches plums
You can easily see that the contents of the List and the contents of the file are the same, as they all contain the same names of some very much-loved fruit. Testing this can be done in several ways of course, but below you will find a suggestion using the FileUtils class from the Apache Commons library. Specifically, we will use the ‘readLines’ method from this class.
Writing the test that will check the equality of the two will start with the import of the ‘readLines’, which I will do as a static import, so I can shorten the call to the method from the test:
import static org.apache.commons.io.FileUtils.readLines;
Let us assume the name of the List of Strings is ‘fruitList’. I will not go over how you create a List of Strings. Also, let’s assume that the path of the file containing the fruit names is ‘filePath’. I will not go over how the fruit names were added to the file either. The comparison between the List and the contents of the file (remember, we are checking an element of the list to a line of the file, based on position – first element with first line, second element with second line, etc.), will be done using an ‘assertEquals’, as follows:
assertEquals(readLines(new File("filePath")), fruitList);
The ‘readLines’ method returns a List of Strings, so the comparison will be done, in fact, between two Lists. Hence, the order is also checked to be the same, when using assertEquals (the order of the elements in the List needs to be the same as the order of the lines in the file, for the assertEquals to return true).
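If you would rather avoid the Apache Commons dependency, the standard library's java.nio.file.Files.readAllLines gives you the same List-of-Strings shape, so the comparison works the same way. Here is a minimal self-contained sketch; the file is created inside the example so it can run anywhere, and the helper name sameLines is mine, not from the original post:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.List;

public class FruitFileCheck {

    // Compare a List of Strings to the lines of a file, element by line, in order
    static boolean sameLines(Path filePath, List<String> expected) throws IOException {
        return Files.readAllLines(filePath).equals(expected);
    }

    public static void main(String[] args) throws IOException {
        List<String> fruitList =
                Arrays.asList("apples", "pears", "cherries", "peaches", "plums");

        // Create the sample file so the example is self-contained
        Path filePath = Files.createTempFile("fruit", ".txt");
        Files.write(filePath, fruitList);

        System.out.println(sameLines(filePath, fruitList)); // prints true
        Files.delete(filePath);
    }
}
```

As with readLines, readAllLines preserves file order, so the order of List elements is checked too.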
Object-oriented or functional? Why not both with Scala?
Just as there are pundits proclaiming the death of the Java programming language specifically, there are pundits proclaiming the death of object-oriented programming in general.
There are problems with object-oriented programming, but maybe a complete discard of object-oriented programming in favor of functional programming is not the answer. Maybe the answer is the hybrid approach of Scala.
A week ago, before Thanksgiving, I gave a talk at Detroit Labs on this very topic, titled “Dipping a Toe in Functional Programming with Scala.” I got a lot of good feedback.
For one thing I should have mentioned closer to the beginning that I am not a Scala expert. Part of the beauty of Scala is that you can start out in it using only its object-oriented features and very gradually start using its functional capabilities.
I misjudged the proper balance of certain topics for the audience; some things I should have said more about, others less. Also, some of the things I put in speaker notes should have gone on the slides themselves, and vice versa.
I am thankful to Detroit Labs for giving me the opportunity to give this talk. If I give this talk somewhere else, it’ll be much better because presenting it at Detroit Labs allowed me to see very clearly what works, what doesn’t work, what needs to be tweaked, what needs to stay the same, etc.
Because this article covers material for an hour-long talk, it’s not a quick read. If you want to install Scala on your system to follow along, you might want to figure that into the reading time.
What now follows is not a transcript of my talk, but perhaps a template to give this talk again in a similar context to the talks at Detroit Labs (there won’t be a talk at Detroit Labs in December but they’ll resume in January with a slate of new speakers).
On the plus side, with this article you can breeze through the familiar and slow down for the unfamiliar, whereas for the talk I had to keep going even if I worried that a small minority was not understanding what I was saying.
On the minus side, I might take hours or even days to answer questions posted in the comments, whereas at the talk people could raise their hands to ask me to clarify something before I moved on to the next slide.
I do think that I was right not to go too in depth on the history of Scala. The most salient point is that Martin Odersky, the inventor of Scala, is also the co-author of an early Java compiler, and he was a driving force to add generics to Java.
So if you’ve ever written a line like this one in Java:
List<MyCustomObject> myCustObjList = new ArrayList<>();
thank Martin Odersky.
A couple of years ago, our own Charles Scalfani here on Medium wrote “Goodbye, Object Oriented Programming.” He listed several problems with object-oriented programming, like the banana monkey jungle problem, the diamond problem, the fragile base class problem, etc.
Scalfani’s most important point is, I think, that object-oriented programming, because of its emphasis on inheritance hierarchies rather than containment hierarchies, does not really reflect the real world.
“But the real world is filled with Containment Hierarchies. A great example of a Containment Hierarchy is your socks. They are in a sock drawer which is contained in one drawer in your dresser which is contained in your bedroom which is contained in your house, etc.” — Charles Scalfani
In this sock example, it doesn't quite matter if a Sock is subclassed as a TubeSock or an AnkleSock or whatever, but that it is in a Drawer in a Dresser. And it doesn't matter so much if the Dresser is some kind of Cabinet.
When you first learned about object-oriented programming, you might have been given an exercise in which you had to create an inheritance hierarchy of bicycles.
You might have created an elaborate hierarchy of several levels of inheritance with Bicycle, MountainBike, DownhillBike, TrailBike, CityBike, PennyfarthingBicycle, etc.
Even in a shallow inheritance hierarchy, having to cast objects up or down the hierarchy can quickly grow tiresome. It’s something you might not have to deal with if you just do the exercise and then move on.
If you took the basic inheritance exercise further, though, you might run into annoying situations that make you question the logic of your inheritance hierarchy design.
For example, you know that a particular bike is a TrailBike, but because you initialized it as a MountainBike, you have to cast it to a TrailBike just to access one little property or subroutine unique to TrailBike.

To give another example, suppose bike is initialized as a TrailBike, and then you run into problems when you need the computer to see it as a MountainBike.
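To make the casting annoyance concrete, here is a minimal Java sketch; the nested classes and the kneePadCount member are invented for illustration, not taken from any real bicycle API:

```java
public class CastAnnoyance {

    static class MountainBike {
    }

    static class TrailBike extends MountainBike {
        // A member that only TrailBike has
        int kneePadCount() {
            return 2;
        }
    }

    public static void main(String[] args) {
        // We know this is a TrailBike, but it was declared as the supertype...
        MountainBike bike = new TrailBike();
        // ...so reaching the subtype-only member forces a downcast
        int pads = ((TrailBike) bike).kneePadCount();
        System.out.println(pads); // prints 2
    }
}
```

Multiply that downcast by every place in a codebase that needs the subtype-only member, and the hierarchy starts to feel like a tax.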
Another inheritance hierarchy you may have seen given as an exercise is one of stringed, fretted instruments. Like Guitar, Ukulele, Banjo, Mandolin, etc. You might have created an ElectricalInstrument interface that any subclass of StringedInstrument can implement.

And you might have also created a string class (but called it something other than String), so that when you construct a StringedInstrument you have to pass it an array of strings (six for a Guitar, four for a Ukulele, four pairs for a Mandolin, etc.).

To change the tuning of a StringedInstrument object, you'd probably have to access the strings' retune() procedure through StringedInstrument. That's a containment hierarchy, so this goes to Scalfani's point.

In a practical situation, you might not really care what the inheritance is on your ElectricDescantBalalaika, only that you're able to tune it and play it without unnecessary hassle.
Maybe object-oriented programming doesn’t always reflect mathematical objects either.
For example, in a program that I'm working on, an algebraic integer calculator (source and tests are available from GitHub), I need to distinguish between numbers of the form a + b√2 and numbers of the form a + b√3, to give just two examples of sets of numbers the program deals with.
Here a and b are from the familiar set of integers which mathematicians often denote by the symbol Z. The set Z is infinite, and so are the sets Z[√2] (all numbers of the form a + b√2) and Z[√3] (all numbers of the form a + b√3).
In my program, there is the class QuadraticRing, which implements the IntegerRing interface, and the classes RealQuadraticRing and ImaginaryQuadraticRing, which extend QuadraticRing.
Since these objects represent infinite sets, it’s obvious that your computer can’t hold every number in any of those. This is of course a purely philosophical problem which we overcome by accepting that there are practical limitations on how closely an object in our computer can model something in reality or in our minds.
In my program you will find a far more serious problem that you might not think is as easy to dismiss as being purely philosophical. In the aforementioned QuadraticRing, you will find:
public abstract class QuadraticRing implements IntegerRing {

    // ...several lines omitted...

    public abstract double getRadSqrt();

    // ...several lines omitted...

}
Seems reasonable enough.
public class RealQuadraticRing extends QuadraticRing {

    // ...several lines omitted...

    private double realRadSqrt;

    @Override
    public double getRadSqrt() {
        return this.realRadSqrt;
    }

    // ...several lines omitted...

}
So far so good. But then:
public class ImaginaryQuadraticRing extends QuadraticRing {

    // ...several lines omitted...

    @Override
    public double getRadSqrt() {
        String exceptionMessage = "Since the radicand " + this.radicand
                + " is negative, this operation requires an object that"
                + " can represent a purely imaginary number.";
        throw new UnsupportedOperationException(exceptionMessage);
    }

    // ...several lines omitted...

}
So this function that supposedly returns a double never actually does so when called on an instance of ImaginaryQuadraticRing, because it always throws an exception.
Does my practical need to access the number √d/i when d is negative outweigh the inelegance of a function overridden to always throw an exception instead of giving a result?
I suppose I could redesign this so that getRadSqrt() can only be called on instances of RealQuadraticRing. And that becomes another headache in an inheritance hierarchy that is not very deep.
This might say more about my ability to design inheritance hierarchies than it does about the soundness of object-oriented programming as a concept. But if it’s not intuitive, maybe it’s not a very useful paradigm.
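One possible redesign, sketched here in Java with invented names (this is not the calculator project's actual API): model the absence of a real square root explicitly, instead of overriding the getter to throw.

```java
import java.util.OptionalDouble;

public class RadSqrtSketch {

    // Returns the real square root of the radicand when one exists,
    // and an empty OptionalDouble for negative radicands
    static OptionalDouble radSqrt(int radicand) {
        if (radicand < 0) {
            return OptionalDouble.empty(); // no real square root
        }
        return OptionalDouble.of(Math.sqrt(radicand));
    }

    public static void main(String[] args) {
        System.out.println(radSqrt(2));  // a value close to 1.4142
        System.out.println(radSqrt(-7)); // OptionalDouble.empty
    }
}
```

Whether the extra unwrapping at every call site is better than one overridden exception is exactly the kind of trade-off that makes hierarchy design feel less than intuitive.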
Should we just completely get rid of object-oriented programming and switch over to functional programming? I say no, because of the great investment we have all made in object-oriented programming.
At my Detroit Labs talk, I asked the attendees how many of them work with object-oriented programming for their jobs. Almost all of them raised their hands. For better or worse, object-oriented programming is here to stay.
One way to leverage our years of expertise on object-oriented programming to learn functional programming is to use a programming language with a hybrid approach, like Scala.
The nice things about Scala is that it can run on the Java Virtual Machine (JVM), it can use everything in the Java Development Kit (JDK), and it can probably use any Java third-party library. For the most part, Scala classes and traits interoperate smoothly with Java classes and interfaces.
There are a couple of small caveats: unlike the Groovy compiler, the Scala compiler can’t compile Java source code, but it still needs to see it if it’s not already compiled to a class file or a JAR; and IntelliJ apparently can’t auto-generate Scala, so it will auto-generate a Java test class for a Scala source class.
To follow along with the rest of this article, I strongly recommend that you install the Scala binaries, including the Scala REPL, on your system, if you don’t have them already. Go to and scroll down to “Other ways to install Scala.”
You can also follow along in IntelliJ by having it download the Scala plugin. I haven’t yet figured out how to enable Scala in NetBeans.
With the Scala binaries on your system and the path environment variable adjusted, you can run the Scala REPL from the command line with the command scala. For much more detail about the Scala REPL, see my article from a few months ago.

The Scala REPL will work if you don't have scala\bin in your system path, but the Scala compiler will not (you'll get a not very helpful error message about an unexpected toolcp).
The foregoing doesn’t apply if IntelliJ takes care of compiling Scala for you, or maybe if you’re using the Scala Build Tool.
Scala’s functionality is paradoxically enabled by the fact that everything in Scala is an object. Objects are objects. Primitives are objects. Functions are objects. In the Scala REPL, or in Scastie in worksheet mode, you can try this out:
scala> 1729.getClass
res0: Class[Int] = int

scala> "Hello, World!".getClass
res1: Class[_ <: String] = class java.lang.String

scala> (Math.sqrt _).getClass
res2: Class[_ <: Double => Double] = class $$Lambda$1097/1998603857
I admit I don’t fully understand that last one, but the important point here is that the function
Math.sqrt(), which comes from Java, has a class in Scala and is therefore an object (the underscore character clarifies to Scala that we’re referring to the function, not to one of its possible outputs, like
Math.sqrt(2), which is then of course of class
[Double] = double).
Let’s take a step back from functional programming and talk about operator overloading. If you’re a C# programmer (there was precisely one at my talk), this is no big deal.
Like Java, C# is object-oriented and it runs on a virtual machine, but like C++, C# has always had operator overloading.
To illustrate operator overloading at my talk, I used the example of Fraction, which, if you compile it, you can load into the Scala REPL with the command line option -cp.
package fractions;

public class Fraction {

    private final long fractNumer;
    private final long fractDenom;

    public Fraction plus(Fraction addend) {
        long interNumerA = this.fractNumer * addend.fractDenom;
        long interNumerB = addend.fractNumer * this.fractDenom;
        long newNumer = interNumerA + interNumerB;
        long newDenom = this.fractDenom * addend.fractDenom;
        return new Fraction(newNumer, newDenom);
    }

    // ...other arithmetic functions omitted...

    // ...constructor omitted...

}
That’s a Java class I wrote, had NetBeans compile into a JAR, then loaded that into the Scala REPL. Then I could instantiate objects of type
Fraction.
scala> val oneHalf = new fractions.Fraction(1, 2)
oneHalf: fractions.Fraction = 1/2

scala> val twoThirds = new fractions.Fraction(2, 3)
twoThirds: fractions.Fraction = 2/3

scala> oneHalf.plus(twoThirds)
res3: fractions.Fraction = 7/6

scala> oneHalf.times(twoThirds)
res5: fractions.Fraction = 1/3
It sure would be nice to be able to use the plus, minus, times and divides operators directly.
scala> oneHalf + twoThirds
<console>:14: error: type mismatch;
 found   : fractions.Fraction
 required: String
       oneHalf + twoThirds
               ^

scala> oneHalf * twoThirds
<console>:14: error: value * is not a member of fractions.Fraction
       oneHalf * twoThirds
               ^
So let’s rewrite this as a Scala class. We could keep all the semicolons, but to help differentiate between Java and Scala, I will omit them.
package fractions

class Fraction(numerator: Long, denominator: Long = 1L) {

  if (denominator == 0) {
    throw new IllegalArgumentException("Denominator 0 invalid.")
  }

  private var gcdNumDen = euclideanGCD(numerator, denominator)
  if (denominator < 0) {
    gcdNumDen = gcdNumDen * -1
  }

  val fractNumer: Long = numerator / gcdNumDen
  val fractDenom: Long = denominator / gcdNumDen

  // ...euclideanGCD() and toString() omitted...

  def +(addend: Fraction): Fraction = {
    val interNumerA = this.fractNumer * addend.fractDenom
    val interNumerB = addend.fractNumer * this.fractDenom
    val newNumer = interNumerA + interNumerB
    val newDenom = this.fractDenom * addend.fractDenom
    new Fraction(newNumer, newDenom)
  }

  // ...several lines omitted...

}
Note that the default constructor has to be close to the top. Also note that the "= 1L" bit allows us to declare a Fraction equal to an integer by leaving out the denominator when we instantiate, instead of explicitly having to give a denominator of 1.

Functions can also have default parameters defined in this manner. One more thing to take note of before moving on: return at the end of + is not needed. The value at the end of a block is the value of the block as a whole.
With that compiled and loaded into the Scala REPL, we can now do this:
scala> val oneHalf = new fractions.Fraction(1, 2)
oneHalf: fractions.Fraction = 1/2

scala> val twoThirds = new fractions.Fraction(2, 3)
twoThirds: fractions.Fraction = 2/3

scala> oneHalf + twoThirds
res0: fractions.Fraction = 7/6

scala> oneHalf - twoThirds
res1: fractions.Fraction = -1/6

scala> oneHalf * twoThirds
res2: fractions.Fraction = 1/3

scala> oneHalf / twoThirds
res3: fractions.Fraction = 3/4
Technically, though, this is not actually operator overloading, but rather the result of the combination of Scala allowing operator characters as function names and Scala allowing infix notation.
With the JAR compiled from the Java sources loaded into the Scala REPL, we can very well do things like this:
scala> oneHalf plus twoThirds
res6: fractions.Fraction = 7/6
Likewise with the JAR compiled from the Scala sources we can do:
scala> oneHalf.+(twoThirds)
res4: fractions.Fraction = 7/6
Though then again, in C++ it would be valid to write oneHalf.operator+(twoThirds) (according to the Microsoft page on the topic). So I guess Scala does technically have operator overloading after all.
Operator overloading is not specifically a functional concept, as our colleagues with C++ or C# experience can tell us, but it can certainly help functional programming.
At this point in the talk I should have gone on to talk about passing functions to standard Scala functions. A really good one to start with is map(), which will probably be familiar to JavaScript programmers.

In his talk at Detroit Labs about JavaScript anti-patterns last month, James York mentioned map() as a generally preferable alternative to the foreach loop, both of which are also available in Scala.
Of course in Scala there is a wider variety of what you can use map() on. Let's get some kind of array or collection to hold the first one hundred positive integers in the Scala REPL:
scala> 1 to 100
res5: scala.collection.immutable.Range.Inclusive = Range 1 to 100

scala> res5.mkString(", ")
res6: String = 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100
Now, let’s convert that to numbers of the form 4n + 1.
scala> res5.map(4 _ + 1)
<console>:13: error: _ must follow method; cannot follow Int(4)
       res5.map(4 _ + 1)
                  ^
Oops, I forgot this is not Wolfram Mathematica, I can’t use an implied multiplication operator, it has to be made explicit.
scala> res5.map(4 * _ + 1)
res8: scala.collection.immutable.IndexedSeq[Int] = Vector(5, 9, 13, 17, 21, 25, 29, 33, 37, 41, 45, 49, 53, 57, 61, 65, 69, 73, 77, 81, 85, 89, 93, 97, 101, 105, 109, 113, 117, 121, 125, 129, 133, 137, 141, 145, 149, 153, 157, 161, 165, 169, 173, 177, 181, 185, 189, 193, 197, 201, 205, 209, 213, 217, 221, 225, 229, 233, 237, 241, 245, 249, 253, 257, 261, 265, 269, 273, 277, 281, 285, 289, 293, 297, 301, 305, 309, 313, 317, 321, 325, 329, 333, 337, 341, 345, 349, 353, 357, 361, 365, 369, 373, 377, 381, 385, 389, 393, 397, 401)
That’s more like it. Now let’s do a more elaborate example in which we have a collection of fractions, multiply by them each by 4 and add 1. For this one we’ll need to take care of some overhead.
scala> def recip(n: Int): fractions.Fraction = new fractions.Fraction(1, n)
recip: (n: Int)fractions.Fraction

scala> recip(3) // Just checking that it works
res9: fractions.Fraction = 1/3

scala> res8.map(recip)
res10: scala.collection.immutable.IndexedSeq[fractions.Fraction] = Vector(1/5, 1/9, 1/13, 1/17, 1/21, 1/25, 1/29, 1/33, 1/37, 1/41, 1/45, 1/49, 1/53, 1/57, 1/61, 1/65, 1/69, 1/73, 1/77, 1/81, 1/85, 1/89, 1/93, 1/97, 1/101, 1/105, 1/109, 1/113, 1/117, 1/121, 1/125, 1/129, 1/133, 1/137, 1/141, 1/145, 1/149, 1/153, 1/157, 1/161, 1/165, 1/169, 1/173, 1/177, 1/181, 1/185, 1/189, 1/193, 1/197, 1/201, 1/205, 1/209, 1/213, 1/217, 1/221, 1/225, 1/229, 1/233, 1/237, 1/241, 1/245, 1/249, 1/253, 1/257, 1/261, 1/265, 1/269, 1/273, 1/277, 1/281, 1/285, 1/289, 1/293, 1/297, 1/301, 1/305, 1/309, 1/313, 1/317, 1/321, 1/325, 1/329, 1/333, 1/337, 1/341, 1/345, 1/349, 1/353, 1/357, 1/361, 1/365, 1/369, 1/373, 1/377, 1/381, 1/385, 1/389, 1/393, 1/397, 1/401)

scala> val four = new fractions.Fraction(4)
four: fractions.Fraction = 4

scala> val one = new fractions.Fraction(1)
one: fractions.Fraction = 1

scala> res10.map(four * _ + one)
res11: scala.collection.immutable.IndexedSeq[fractions.Fraction] = Vector(9/5, 13/9, 17/13, 21/17, 25/21, 29/25, 33/29, 37/33, 41/37, 45/41, 49/45, 53/49, 57/53, 61/57, 65/61, 69/65, 73/69, 77/73, 81/77, 85/81, 89/85, 93/89, 97/93, 101/97, 105/101, 109/105, 113/109, 117/113, 121/117, 125/121, 129/125, 133/129, 137/133, 141/137, 145/141, 149/145, 153/149, 157/153, 161/157, 165/161, 169/165, 173/169, 177/173, 181/177, 185/181, 189/185, 193/189, 197/193, 201/197, 205/201, 209/205, 213/209, 217/213, 221/217, 225/221, 229/225, 233/229, 237/233, 241/237, 245/241, 249/245, 253/249, 257/253, 261/257, 265/261, 269/265, 273/269, 277/273, 281/277, 285/281, 289/285, 293/289, 297/293, 301/297, 305/301, 309/305, 313/309, 317/313, 321/317, 325/321, 329/325, 333/329, 337/333, 341/337, 345/341, 349/345, ...
I defined four and one of type Fraction so as to not deal with operator overloading between Int and Fraction; I haven't figured out how to do it yet.
I’m not quite sure what happened with
res11 there towards the end, but I’m satisfied this has given the right results and that it would hold up under the scrutiny of unit testing.
Now I return to the content of my talk last week. How do we write our own functions that take functions as parameters? The best example I can think of for that is implementing the Euclidean GCD algorithm.
At this point in the talk, I asked for someone in the audience to tell me what gcd(−27, 18) is. William Rusnack, who is very knowledgeable about functional programming in Haskell, said 9, which is the right answer.
I’m sure almost everyone else, or maybe all of them, also thought of the correct answer. I’m no neurologist, but I think they came up with the answer by noticing that both −27 and 18 are divisible by 9.
A computer, on the other hand, would probably have to use the Euclidean algorithm even for this very simple example: −27 = −2 × 18 + 9, then 18 = 2 × 9 + 0, and the remainder 0 lets the computer know it has gotten the answer.
In programming this, we almost always take it for granted that the Euclidean function for Z is usually the absolute value function: |n| = n if n is 0 or positive, |n| = −n if n is negative.
It’s simply the most practical choice, from the heyday of FORTRAN and COBOL to the heyday of Pascal, C and C++, to today with Kotlin, Haskell, and maybe even Malbolge… just kidding on that last one.
Although −27 < 18, as far as the Euclidean algorithm is concerned, it matters more that 18 is closer to 0 than −27 is. The absolute value function tells us that |−27| > |18|.
The square function also works, since (−27)² > 18². The problem with the square function is that we could run into overflow issues, as in, for example, euclideanGCD(Integer.MIN_VALUE, Integer.MAX_VALUE).
These are the mathematical requirements for f(n) to be a valid Euclidean function in a given domain of numbers R such as Z:
- f(n) maps all numbers of R to N⁰ (meaning the positive integers and 0, so Z without the negative integers).
- If d is a divisor of n, then f(d) ≤ f(n).
- f(n) = 0 if and only if n = 0.
It would be nice if whenever d is a divisor of n we likewise have that f(d) is a divisor of f(n). And though that is often the case, it is actually not a mathematical requirement. I’ll come back to this point later on.
For the third requirement, we may add, as a practical matter, the caveat that numbers like Integer.MIN_VALUE could lead to incorrect zeroes with the square function. Though the absolute value function could also be problematic when applied to int values.
scala> Integer.MIN_VALUE * Integer.MIN_VALUE
res12: Int = 0
scala> Math.abs(Integer.MIN_VALUE)
res13: Int = -2147483648
For my purpose here, however, I’m dealing with numbers closer to 0, so I’m not too bothered by these potential errors in my Euclidean GCD implementation; I don’t plan to write tests for potential overflows.
In Java, if we want to program the Euclidean GCD algorithm with a different Euclidean function, we have a lot of rewriting to do. But in Scala, we can simply pass it the function we want to use as needed, with just a little rewriting.
This stub illustrates the syntax:
// STUB TO FAIL THE TEST
def euclideanGCD(a: Int, b: Int, eucFn: Int => Int): Int = {
-1 // Clearly wrong value to fail the test
}
So now we can pass euclideanGCD() any function that takes in an Int and returns an Int, though of course we also have to flesh out the stub to actually use that function.
For this part, it might be better to follow along in IntelliJ rather than the Scala REPL, since it’s a lot easier to import JUnit into IntelliJ than into the Scala REPL.
There is such a thing as ScalaTest, but I haven’t figured out how to use it yet. Maybe as I get more proficient with functional programming, I will find that JUnit doesn’t quite cut it for testing Scala. But for now, JUnit is just fine.
Like I mentioned before, IntelliJ can’t auto-generate Scala code for use with JUnit. But you can just right-click on the test folder as shown in IntelliJ and create a new Scala file there. I’m guessing it’s similar in NetBeans if Scala is properly set up for that integrated development environment (IDE).
Okay, so our first test should be, by the tenets of test-driven development, rather simple, and easy to make it go from failing to passing.
@Test def testEuclideanGCD18N27(): Unit = {
println("gcd(-27, 18)")
val expected = 9
val actual = euclideanGCD(-27, 18, Math.abs)
assertEquals(expected, actual)
}
The void in Java is Unit in Scala. I don't like it; I think it should be called Void instead, but it's not something worth making a fuss about. Anyway, IntelliJ will fill it in for you if you neglect it.
To make this first test pass, it's enough to change -1 to 9 in euclideanGCD(). For this test, it wouldn't matter if we defined

def square(n: Int): Int = n * n

and then used that instead of Math.abs with the same numbers, since (−27)² = 729 is a small enough number that we don't have to worry about overflows.
Next we could do a slightly more elaborate test that pseudorandomly chooses two consecutive integers, so val expected = 1, and also uses Math.abs as the Euclidean function. I leave this one as an exercise if you're so inclined.
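A sketch of that exercise might look something like this (my own sketch, not code from the project; the bound of 8192 is an arbitrary choice to keep the numbers small):

```scala
@Test def testEuclideanGCDConsecutiveIntegers(): Unit = {
  // Two consecutive integers are always coprime, so the GCD must be 1
  val n = new scala.util.Random().nextInt(8192) + 1
  println("gcd(" + n + ", " + (n + 1) + ")")
  val expected = 1
  val actual = euclideanGCD(n, n + 1, Math.abs)
  assertEquals(expected, actual)
}
```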
Those two tests are perhaps enough to motivate us to actually write a proper euclideanGCD(). Something like this:
def euclideanGCD(a: Int, b: Int, eucFn: Int => Int): Int = {
  var currA = a
  var currB = b
  var tempMultiple = 0
  var currRemainder = 0
  while (eucFn(currB) != 0) {
    // Floored division keeps the remainder's absolute value below |currB|;
    // toDouble matters here, since Int division would truncate toward zero
    tempMultiple = (Math.floor(currA.toDouble / currB) * currB).toInt
    currRemainder = currA - tempMultiple
    if (eucFn(currRemainder) >= eucFn(currB)) {
      val excMsg = "Z is not Euclidean for the function " +
        eucFn.getClass.getName
      throw new NonEuclideanDomainException(excMsg, a, b, eucFn)
    }
    currA = currB
    currB = currRemainder
  }
  currA
}
The more dogmatic adherents of test-driven development would probably point out that our two tests so far don't really require euclideanGCD() to actually use eucFn().
And they’d be right. We should probably edit
euclideanGCD() so that it still takes
eucFn() but doesn’t use it. Then we write a test to check that
euclideanGCD() does use the
eucFn() we gave it in order to pass.
We do that by defining functions that are invalid by the mathematical requirements of the Euclidean GCD algorithm, but valid by the rules of Scala syntax, and pass them to
euclideanGCD().
def invalidFunctionF(n: Int): Int = -3

def invalidFunctionG(n: Int): Int = 3

@Test(expected = classOf[IllegalArgumentException])
def testEuclideanGCDThrowsIAE(): Unit = {
  euclideanGCD(-27, 18, invalidFunctionF)
}

@Test(expected = classOf[NonEuclideanDomainException])
def testEuclideanGCDThrowsNEDE(): Unit = {
  euclideanGCD(-27, 18, invalidFunctionG)
}
It’s perfectly possible to define exceptions in the Scala REPL. It’s only because of JUnit that I say this part is better followed along with IntelliJ.
scala> class NonEuclideanDomainException(exceptionMessage: String, a: Int, b: Int, eucFn: Int => Int) extends Exception(exceptionMessage: String) { }
defined class NonEuclideanDomainException
I admit this does feel kind of silly, though. But it does help illustrate several important points about Scala.
You might notice that the Scala REPL is smart enough to notice if you define a custom exception subclassing java.lang.Exception in the REPL but it's nothing more than a rename of java.lang.Exception.
Even though the NonEuclideanDomainException presented here does not explicitly define any new "methods," the fact that it requires two integers and an Int to Int function to be constructed means that it actually enriches the inheritance hierarchy in a small but important way.
Back when this project was Java-only, I was going back and forth on whether the custom exceptions I was defining needed to be checked exceptions or runtime exceptions.
On the Scala side of the project, it might not matter: there are no checked exceptions in Scala. At least as long as we're not calling from Java a Scala subroutine that might throw a checked exception, in which case we might need either an @throws annotation or to make the exception subclass RuntimeException instead of Exception.
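As an illustration of the @throws route (my own sketch, not code from the project), the annotation goes right above the declaration of the function that Java callers will invoke:

```scala
// Tells the Scala compiler to emit a "throws" clause in the bytecode,
// so Java callers are forced to handle or declare the checked exception
@throws(classOf[NonEuclideanDomainException])
def checkedGCD(a: Int, b: Int): Int = euclideanGCD(a, b, Math.abs)
```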
For this next sample output, I actually wrote all the source files in Windows Notepad, compiled them on the command line with scalac, packaged them with jar and then loaded them into the Scala REPL.
scala> calculators.NTFC.euclideanGCD(-27, 18, invalidFunctionF)
java.lang.IllegalArgumentException: The function $$Lambda$1081/1587485260 is not a valid Euclidean function because it sometimes returns negative values.
at calculators.NTFC$.euclideanGCD(NTFC.scala:41)
... 28 elided
scala> calculators.NTFC.euclideanGCD(-27, 18, invalidFunctionG)
exceptions.NonEuclideanDomainException: Z is not Euclidean for the function f = $$Lambda$1087/1004308853 since f(-9) = 3 but f(18) = 3.
at calculators.NTFC$.euclideanGCD(NTFC.scala:48)
... 28 elided
I mentioned earlier that f(d) being a divisor of f(n) is not mathematically required when d is a divisor of n. With Scala we now have a framework to help us explore that question.
Consider the function f(n) = 0 if n = 0, f(n) = 1 if n = 1 or −1, and f(n) = |n| + 1 for all other values of n.
scala> def alternateFunctionF(n: Int): Int = Math.abs(n) + (if (n < -1 || n > 1) 1 else 0)
alternateFunctionF: (n: Int)Int
By the way, in Java, don’t try to use an if statement as a summand. Or do try it to check how quickly your IDE gives you a red flag.
I suppose that in Scala it would be more proper to use a match statement for this purpose. But using the familiar if statement in this perhaps unfamiliar way helps highlight how thoroughly functional Scala is.
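For what it's worth, a match-based version of the same function might look like this (my own sketch, under a different name to avoid clashing with the definition above):

```scala
def alternateFunctionFMatch(n: Int): Int = n match {
  case 0      => 0              // f(0) = 0
  case 1 | -1 => 1              // f(1) = f(-1) = 1
  case _      => Math.abs(n) + 1 // all other values get |n| + 1
}
```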
Now that we have alternateFunctionF() defined, we can use it in euclideanGCD().
scala> euclideanGCD(-27, 18, alternateFunctionF)
res14: Int = 9
So f(9) = 10 and f(18) = 19, and 9 is a divisor of 18, but 10 is clearly not a divisor of 19. I also tried it out with a few different pairs of numbers, just to make sure alternateFunctionF doesn't trigger an exception in some case I overlooked.
This is of course a toy example to illustrate this nuance, which can be a lot more useful in certain other domains of numbers than it is in Z.
Trying things out in the REPL can be a very helpful complement to automated testing. You can come up with a scenario, try it out in the REPL and either confirm that your tests cover that scenario… or that you need to write a new test.
This next example demonstrates very clearly why f(n) = n² is not a good Euclidean function from a practical standpoint even though it is good from a mathematical standpoint:
scala> calculators.NTFC.euclideanGCD(46341, 46342, square)
java.lang.IllegalArgumentException: The function $$Lambda$1107/1806440863 is not a valid Euclidean function because it sometimes returns negative values.
at calculators.NTFC$.euclideanGCD(NTFC.scala:41)
... 28 elided
Of course 46341² is not negative, but in a signed 32-bit integer, the computation overflows and gets erroneously turned into a negative integer.
Such technical issues are of some interest to me, but not as interesting as the idea of applying the Euclidean algorithm to number domains other than Z.
For example, in Z[√14], what is the function that would allow the Euclidean algorithm to resolve gcd(2, 1 + √14)? Would that function work for any pair of numbers in Z[√14]?
To investigate questions like that, I will still be using objects to represent the various number domains, and also objects to represent numbers in those domains.
And NonEuclideanDomainException will be a Java class with the instance function tryEuclideanGCDAnyway(), which can only use the absolute value of the norm as the Euclidean function for its attempt.
But I will also include in the project a Scala function that allows me to experiment with various different valid and invalid Euclidean functions in the Scala REPL.
Before I get to that point, I still have a lot of testing and refactoring to do on Java objects.
Although Java itself is making moves towards functional programming, for now I’m finding it quite satisfactory for my project to leave the functional side of it to Scala.
As it turns out, JavaScript is also sort of functional. Today it’s looking a hell of a lot more functional than it did fifteen years ago, but it’s also looking a lot more object-oriented.
Given its rather haphazard development, it has been easy for JavaScript to embrace both paradigms. This reminds me of Herman Melville’s description of the penguin:
"Though dabbling in all three elements, and indeed possessing some rudimental claims to all, the penguin is at home in none." — Herman Melville
Indeed JavaScript possesses “rudimental” claims to both the object-oriented and functional paradigms. Licensing the “Java” name is of course not a strong claim to object-oriented programming. I’ll probably catch some flak in the comments for the penguin comparison.
Like Kotlin, Scala can also compile to JavaScript. It might be interesting to see if JavaScript compiled from Scala really does look functional, or object-oriented, or if it looks more procedural.
A couple of weeks ago, Rainer Hahnekamp published “An introduction to Object-Oriented Programming in JavaScript” here on Medium. It was even more recently that he read Scalfani’s 2016 farewell to object-oriented programming.
Hahnekamp responded that it’s possible to misuse object-oriented programming. He’s right about that. And of course it’s also possible to misuse functional programming.
With its functional capabilities and static typing, Scala is robust, but it also allows the programmer the flexibility to choose between the object-oriented and functional paradigms according to the situation.
And by enabling functionality by making everything an object, Scala demonstrates that, for all its flaws, the object-oriented paradigm is still valid and very useful in a wide variety of situations. | https://alonso-delarte.medium.com/object-oriented-or-functional-why-not-both-with-scala-8d1f949302fd | CC-MAIN-2022-33 | refinedweb | 5,704 | 62.07 |
humm. you don't need to "break" the transitive trust, then try to set up a one-way trust to keep net2.local users out of net1.local. I think the term "trust" tends to be a misnomer, or gets mistaken for allowing access to resources lickety-split. It doesn't. The trust established between domains within the same domain namespace can be configured to allow only those with permissions to access shared resources. That's what makes AD pretty neat. You can have a bunch of different "sites," each its own separate domain [within the root domain namespace] with its own DCs and whatnot, but each one can't access the others' shared resources without explicit permission to do so. Even the Admin accounts. But should the need arise, you can allow individual users from one site to access specific AD resources at another site, limit that access to a specific time period and limit what they do.
secure network
2 domain network.
net1.local 172.17.1.X
255.255.0.0
net2.local 172.17.2.X
255.255.0.0
net1.local users need access to the net2.local domain resources (i.e. printers, Exchange server, etc.).
No net2.local users can have access to net1.local resources.
How can I break the transitive trust and set up a one-way trust?
How will the AD replicate?
Where will I find the net1.local AD users in the net2.local AD?
Will the Exchange 03 server have any problem with this setup?
Will the IP scheme be sufficient?
Quick and dirty gis server
June 26, 2007
Here is a quick and dirty ArcGIS geoprocessing server. I mostly did this so that I can call ArcGIS routines from scripts on my linux box across a cluster of ArcGIS servers.
It is not without caveats, mostly security caveats. I work in a firewalled environment and would not recommend exposing the entire arcgisscripting geoprocessing object in a non-firewalled environment, without adding some security. With the client code I am able to write scripts that work on either Win32 or Linux with the same code.
Additionally, you will need to refer to files and directories in your scripts as they would appear to the server instance (with its windows permissions). As ArcGIS scripting routines do not directly allow the passing of GIS data to them, you will have to point to them on the server’s filesystem, geodatabase or remote ArcSDE server as you would locally on the server. Server:
from SimpleXMLRPCServer import SimpleXMLRPCServer
import arcgisscripting as agis

geop = agis.create()
server = SimpleXMLRPCServer(("10.0.0.128", 8000))
server.register_instance(geop)
print "Starting server..."
server.serve_forever()
Client:
#!/usr/bin/python
REMOTE_SERVER = ""
import sys

print "We're running on: ", sys.platform
if sys.platform == "win32":
    # We're on win32, call instance of gp object as we normally would.
    print "Importing the geoprocessing object"
    import arcgisscripting as agis
    gp = agis.create()
else:
    # Run our calls over XMLRPC to REMOTE_SERVER
    from xmlrpclib import ServerProxy
    print "Using XMLRPC to connect to:", REMOTE_SERVER
    gp = ServerProxy(REMOTE_SERVER)

# Just check that we can call the geoprocessing object.
print gp.ProductInfo()  # Show which product we're running just to make sure it is working.
Use: On the server run:
python server.py
Starting server...
On the client, call the client code at the top of your geoprocessing script, then refer to the gp object as you would normally.
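For readers without ArcGIS, the same server/client pattern can be sketched with a plain Python object standing in for the geoprocessing object (Python 3 module names; the class, port, and function names below are my own stand-ins, not part of the original setup):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer  # Python 3 name for SimpleXMLRPCServer
from xmlrpc.client import ServerProxy         # Python 3 name for xmlrpclib.ServerProxy

class FakeGeoprocessor:
    """Stand-in for the arcgisscripting object; any instance works."""
    def ProductInfo(self):
        return "demo product"

def start_server(host="127.0.0.1", port=8765):
    server = SimpleXMLRPCServer((host, port), logRequests=False)
    # register_instance exposes every public method of the object over XMLRPC
    server.register_instance(FakeGeoprocessor())
    # serve_forever blocks, so run it in a background thread
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def query(url="http://127.0.0.1:8765"):
    gp = ServerProxy(url)  # the proxy looks just like the local gp object
    return gp.ProductInfo()
```

With the server started, query() returns whatever ProductInfo() returns on the server side, just as the ArcGIS client script calls gp.ProductInfo().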
lio_listio - initiate a list of I/O requests
Synopsis
Description
Return Value
Errors
Versions
Notes
Colophon
#include <aio.h>
int lio_listio(int mode, struct aiocb *const aiocb_list[], int nitems, struct sigevent *sevp);
Link with -lrt.
The lio_listio() function initiates the list of I/O operations described by the array aiocb_list.
The mode argument has one of the following values:

LIO_WAIT
The call blocks until all operations are complete. The sevp argument is ignored.

LIO_NOWAIT
The I/O operations are queued for processing and the call returns immediately. When all of the I/O operations complete, asynchronous notification occurs, as specified by the sevp argument; if sevp is NULL, no asynchronous notification occurs.

If mode is LIO_WAIT, lio_listio() returns 0 when all of the I/O operations have completed successfully. If mode is LIO_NOWAIT, it returns 0 if the operations were successfully queued. Otherwise, -1 is returned, and errno is set to indicate the error.
The lio_listio() function may fail for the following reasons:

EAGAIN
Out of resources, or the number of I/O operations specified by nitems would cause the limit AIO_MAX to be exceeded.

EINTR
mode was LIO_WAIT and a signal was caught before all I/O operations completed.

EINVAL
mode is invalid, or nitems exceeds the limit AIO_LISTIO_MAX.

EIO
One or more of the operations specified by aiocb_list failed.
The lio_listio() function is available since glibc 2.1.
POSIX.1-2001, POSIX.1-2008..
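As an illustration (my own sketch, not part of the original manual page), a function that queues two writes with a single lio_listio() call and blocks until both complete might look like this; link with -lrt on older glibc:

```c
#include <aio.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Queue two writes in one lio_listio() call and block until both finish
 * (LIO_WAIT).  Returns the total number of bytes written, or -1 on error. */
long demo_writes(const char *path) {
    int fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0600);
    if (fd == -1)
        return -1;

    static const char part1[] = "hello ";
    static const char part2[] = "world\n";

    struct aiocb cb1, cb2;
    memset(&cb1, 0, sizeof(cb1));
    memset(&cb2, 0, sizeof(cb2));

    cb1.aio_fildes = fd;
    cb1.aio_buf = (volatile void *) part1;
    cb1.aio_nbytes = sizeof(part1) - 1;
    cb1.aio_offset = 0;
    cb1.aio_lio_opcode = LIO_WRITE;

    cb2.aio_fildes = fd;
    cb2.aio_buf = (volatile void *) part2;
    cb2.aio_nbytes = sizeof(part2) - 1;
    cb2.aio_offset = (off_t)(sizeof(part1) - 1); /* write after part1 */
    cb2.aio_lio_opcode = LIO_WRITE;

    struct aiocb *list[2] = { &cb1, &cb2 };

    /* LIO_WAIT: do not return until both operations have completed */
    if (lio_listio(LIO_WAIT, list, 2, NULL) == -1) {
        close(fd);
        return -1;
    }

    long total = (long) aio_return(&cb1) + (long) aio_return(&cb2);
    close(fd);
    return total;
}
```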
aio_cancel(3), aio_error(3), aio_fsync(3), aio_return(3), aio_suspend(3), aio_write(3), aio(7)
This page is part of release 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://manpages.sgvulcan.com/lio_listio.3.php | CC-MAIN-2017-09 | refinedweb | 125 | 59.4 |
How do I close the entire program once the user hits OK in the message box?
Here is the portion of the script that contains the button and command:
import Tkinter as tk      # Python 2 names, matching the tkMessageBox usage below
import tkMessageBox

class Application(tk.Tk):

    def create_options(self):
        tk.Button(self,
                  text="Begin search", command=self.pop_up1
                  ).place(x=465, y=285)

    def pop_up1(self):
        """shows submitted box"""
        tkMessageBox.showinfo(title="Finished", message="Query Submitted")
It depends what you mean by exiting the entire program. You can close the window by calling destroy() on the root window; since Application subclasses tk.Tk here, that is self.destroy(). Depending on your implementation, closing the window may not end your program, but generally this is how you close a tkinter application. Just add it to the end of your button event.
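Applied to the question's class, a sketch might look like this (using Python 3's tkinter/messagebox names instead of the question's Python 2 Tkinter/tkMessageBox; the __init__ wiring is my own assumption):

```python
import tkinter as tk
from tkinter import messagebox

class Application(tk.Tk):

    def __init__(self):
        super().__init__()
        self.create_options()

    def create_options(self):
        tk.Button(self,
                  text="Begin search", command=self.pop_up1
                  ).place(x=465, y=285)

    def pop_up1(self):
        """Shows the submitted box, then closes the whole application."""
        messagebox.showinfo(title="Finished", message="Query Submitted")
        self.destroy()  # runs only after the user dismisses the message box
```

Running Application().mainloop() shows the window; clicking the button pops the dialog, and dismissing it tears down the root window, which ends the main loop.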
You often need to create objects when working with .NET classes. An object is an instance of a particular class. Methods are functions that operate exclusively on objects of a class. Data types package together objects and methods so that the methods operate on objects of their own type. For more information about objects, see MATLAB Objects.
You construct .NET objects in the MATLAB workspace by calling the class constructor, which has the same name as the class. The syntax to create a .NET object classObj is:
classObj = namespace.ClassName(varargin)
where varargin is the list of constructor arguments to create an instance of the class specified by ClassName in the given namespace. For an example, see Create .NET Object From Constructor.
You can use C# code examples in MATLAB, such as the NetDocCell assembly provided in Converting .NET Arrays to Cell Arrays. The basic steps are to build the assembly using a C# development tool, like Microsoft Visual Studio, load it into MATLAB using the NET.addAssembly function, and then call its classes from MATLAB.
The product documentation for your assembly contains information about its classes. However, you can use the NET.addAssembly command to read basic information about an assembly. For example, to view the class names for the private assembly netdoc.NetSample, type:
dllPath = fullfile('c:','work','NetSample.dll');
sampleInfo = NET.addAssembly(dllPath);
sampleInfo.Classes
ans = 'netdoc.SampleMethodSignature' 'netdoc.SampleMethods'
If your assembly has hundreds of entries, you can consult the product documentation, or open a window to an online document, such as the System namespace reference page on the Microsoft Developer Network. For information about using this documentation, see To Learn More About the .NET Framework. For example, to find the number of classes nclasses in mscorlib, type:
asm = NET.addAssembly('mscorlib'); [nclasses,x] = size(asm.Classes);
Objects created from .NET classes appear in MATLAB as reference types, or handle objects. Calling the delete function on a .NET handle releases all references to that .NET object from MATLAB, but does not invoke any .NET finalizers. The .NET Framework manages garbage collection.
For more information about managing handle objects, see Destroying Objects.
Introduction: My Arduino Ping Display Robot
Goals
Hello all.
I hope to please share a little robot that I have just finished building.
There are many Ping Boat, perhaps with a tutorials and display less so without pretension, will illustrate what I could do.
I gave myself the goal of realizing a robot that avoids obstacles, hence the use of Ping, who had more autonomy (and less hassle to reload always stylus) and displayed on a LCD display the distance measured obstacle.
The tutorial is directed for beginners but with knowledge of electronics and Arduino programming.
Step 1: List of Materials
1 - Arduino UNO or compatible
2 - Module OCTAGON GioBlu Robotics or other frame
1 - Ping))) Parallax mounting bracket kit
2 - Servo motors with 360 rotation , in my case HSR-1425SCR
2 - 6 cm wheels with rubber no-slip
1 - std servo for the Ping (in my case is not required but can be implemented)
2 - servo brackets Gioblu Robotics
4 - PCB spacers from 4 cm
2 - protoschield for servos, battery and display and added(or other sites)
1 - 16x2 HD44780 compatible LCD Display
1- potentiometer
1 - 9v battery holder
1 - 2s Lipo 1200mAh 7.4 v
1 - UBEC 5.4/6 V (2-6S LiPo)
1 - pivot wheel
1 - piece of matrix board for the display shield
- some strip for shield
- some screws, patience and passion...
Step 2: SERVO SHIELD CONSTRUCTION
First we begin with the shield. Why have I chosen this path? When you need to connect several servo motors, a sensor, batteries, etc., there are of course several signal, power supply and ground pins to manage.

Normally in these little robots everything is joined through a breadboard, which lets you manage it all at the expense of a lot of wires lying around that you may have to disconnect every time, and that always give a feeling of chaos.

A shield attached to the Arduino simplifies everything and makes the connections more convenient. As you know, for servos or motors there are already several functional ones on the market, e.g. from Adafruit or SparkFun; in any case, building one at home is very simple and inexpensive.

As you can see from the first two photos, several rows of three pins are soldered (I made 6, but you can add several more); each row corresponds to the three poles of a servo or the sensor (signal, 5 V, GND - white/red/black or yellow/red/black). Then solder in parallel the 5 V (center pin) and GND (the blue wire pictured, third photo) and connect the latter to the power connector, so everything you connect to the rows of pins will be powered together.

For the signal pin, you can decide: if you know from the start (as in my case) which pin each signal corresponds to, solder a wire directly to the corresponding pin; or, if you want to manage it from time to time, just add (second photo) a row of female strip headers to connect with a wire to the desired pin. On top of that I added a reset button and an LED connected directly to pin 13 of the Arduino board.
Step 3: LCD SHIELD CONSTRUCTION
I am skipping, of course, how to connect a display to the Arduino, since you will find dozens of tutorials on that. But I repeat: if you want to take the opportunity to get into the project, it is worth reducing the number of wires around, which in this case are several, so we can again make a shield that can be useful for other projects with the Arduino.

Of the 16 pins of a display, only 12 are needed for our purpose.

If you look at the first and third photos, you should cut a piece of matrix board that will serve us well as a support for the display and as a "bridge" connecting to the Arduino pins.

So what we should do is solder a 16-pin female strip to the matrix board (I soldered it slightly inclined, as shown in picture 2, in order to read the display better), run a wire from each pin that is used (see the tables below) directly to the corresponding pin of the Protoshield, then stack it on the Arduino board, and above this shield (last photo) the one made for the servos and sensor, with zero visible wires.
The last photo is the display in operation.
Step 4: TABLE FOR THE CONNECTIONS
Take the display's power from under the servo shield, so that it draws on the same battery's capacity.
Step 5: ASSEMBLING THE ROBOT- FIRST FRAME
On the first frame I mounted the two 360°-rotation servos and a support for the 9 V battery, and I attached (I know... it looks like shit) a LiPo battery.

I chose this system because, as anyone who uses LiPo batteries in modeling knows, I like to keep the residual voltage monitored to prevent damage to the battery, and with tape I can remove it easily.

Why two power supplies? Opinions differ here; in any case I think it is better to keep the logic (Arduino) separate to avoid interference from the motors.

Where you see the bolt I put a pivoting wheel, plus two weights because the front was slightly unbalanced.
The third photo is the bottom.
Step 6: ASSEMBLING THE ROBOT- SECOND FRAME
Now prepare the upper frame.
I made a space for the Ping servo and four holes to secure the Arduino, and I secured with a strap the UBEC that you see on the left and right of the two photos.

For those not familiar: in practice it is used to regulate the output voltage of the battery, because the 2S LiPo has a voltage of 7.4 V, while the servos accept a supply voltage range from 4.8 V to 6 V.

Diodes or other regulators can also be used.

Insert the spacers and stack the first frame onto the second.
Step 7: Done!- CODE AND THEORY
Small introduction.
You'll find dozens of variations on how to run the PING sensor, in essence the practical operation of the sensor takes place in the cycle loop () that implements the protocol for the operation of the component PING.
Based on its specific you can control the operation of the sensor through a series of pulses. In summary we can say that you are using a digital pin to measure an analog pin.
Improvement: If you want to make cool, you can also insert a temperature sensor type TMP36 to improve reading, this is because as you know, sound travels at a speed of 343 m / s, this value ment is not absolute because it depends on the air temperature, the bibliography shows this relationship between the speed of sound (C) and the air temperature (t) C = 331.5 + (0.6 * t), knowing the temperature (and here it helps the sensor) we eventually put a few lines of code to implement the reading.
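A sketch of what those few lines could compute (my own illustration; the function names are made up, and reading the TMP36 itself is omitted):

```cpp
// Speed of sound in m/s as a function of air temperature in degrees Celsius,
// using the relationship C = 331.5 + 0.6 * t from the text.
double speedOfSound(double tempC) {
    return 331.5 + 0.6 * tempC;
}

// Convert a PING))) echo time (microseconds, out-and-back) to centimeters.
double echoToCm(unsigned long echoMicros, double tempC) {
    // m/s -> cm/microsecond: multiply by 100 cm/m, divide by 1e6 us/s
    double cmPerMicrosecond = speedOfSound(tempC) * 100.0 / 1.0e6;
    return echoMicros * cmPerMicrosecond / 2.0;  // halve: sound travels there and back
}
```

At 20 °C the speed comes out to 343.5 m/s, so a 1000 µs echo corresponds to roughly 17.2 cm.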
#include <Servo.h>
#include <LiquidCrystal.h>
LiquidCrystal lcd(3, 4, 5, 6, 11, 12);
Servo PingServo;
Servo LeftServo;
Servo RightServo;
int ultraSoundSignal = 7; // Ultrasound signal pin
int val = 0;
int val2 = 0;
int ultrasoundValue = 0;
int timecount = 0; // Echo counter
int ledPin = 13; // LED connected to digital pin 13
void setup() {
lcd.begin(16, 2);
Serial.begin (9600);
pinMode(8,OUTPUT);
pinMode(9,OUTPUT);
LeftServo.attach(8);
RightServo.attach(9);
PingServo.attach(10);
pinMode(ledPin, OUTPUT);
}
void moveServoLeftTo(int angle, int duration){
// controls the servo to move for a given angle and for a given number of milliseconds
LeftServo.write(angle);
for( ; duration > 0; duration -= 20){
// cycle for the specified number of milliseconds by subtracting 20ms each iteration
delay(20);
}
}
void moveServoRightTo(int angle, int duration){
RightServo.write(angle);
for( ; duration > 0; duration -= 20){
delay(20);
}
}
void loop() {
// Ping))) functions
timecount = 0;
val = 0;
pinMode(ultraSoundSignal, OUTPUT);   // switch the signal pin to output
digitalWrite(ultraSoundSignal, LOW); // send a short low-high-low trigger pulse
delayMicroseconds(2);
digitalWrite(ultraSoundSignal, HIGH);
delayMicroseconds(5);
digitalWrite(ultraSoundSignal, LOW);
pinMode(ultraSoundSignal, INPUT);    // switch back to input to listen for the echo
while(val == LOW) {                  // wait for the echo pulse to start
val = digitalRead(ultraSoundSignal);
}
while(val == HIGH) {                 // count loop iterations while the echo lasts
val = digitalRead(ultraSoundSignal);
timecount++;
}
ultrasoundValue = timecount; // Append echo pulse time to ultrasoundValue
lcd.clear();
lcd.setCursor(0,0);
lcd.print("OBSTACLE cm ");
lcd.print(timecount/10);
delay(200);
if(timecount > 0){
digitalWrite(ledPin, HIGH);}
// Servo functions:
if(ultrasoundValue > 500){
moveServoLeftTo(45,50); // moves the left servo to 45 degrees for 50 milliseconds
moveServoRightTo(180,50);
}
if(ultrasoundValue < 500){
moveServoLeftTo(180,40);
moveServoRightTo(180,40);
}
if(ultrasoundValue < 100){
moveServoLeftTo(180,100);
moveServoRightTo(45,100);
}
}
Step 8: Video and If You Like It Please Vote!
Participated in the
Arduino Challenge
16 Comments
9 years ago on Introduction
hallo Cello62thank you my friend for the information you give us
Sorry for the english not so well
9 years ago on Introduction
Hallo very nice im have the 4 wd robot with 4 motors and i will the distance on the display like your robot. please help me with the code. Rinus The code for the 4wd robot : thank you
Reply 9 years ago on Introduction
You need to study on any Arduino tutorial how to see a variable in the display. Once connected the display, is a very simple code, look that I've already done in the tutorials:
lcd.clear();
lcd.setCursor(0,0);
lcd.print("OBSTACLE cm ");
lcd.print(timecount/10);
You need to associate the name of the variable to the command by placing the string depending on the pixels you have available on the display
Reply 9 years ago on Introduction
Hello i have you sketch from jou site applied, when checking i have an error message, i put at the end of an extra } applied, At the start-up is running the servo a moment, and the two engines one on the left and the other to the right that continue to rotate. The ultrasonically pings 1 times and then do nothing. The display shows (obstacel 0 cm) what can be wrong? Have the ultrasonically tested works well.
Greeting
Pe2HLC
Reply 9 years ago on Introduction
Dunno. Very difficult to answer. My sketch is one found in several tutorials on web. Check all connections well first.
Reply 9 years ago on Introduction
Hello Cello 62
Were dropt im the code's from the lcd in the code from akeric? I'm try many times but it dont work .
When the distance is very well, the motor don't work,please help.
Reply 9 years ago on Introduction
Sry, i don't understand your english, "were dropt im the code's from the code from akeric?". Who's akeric? I've posted the code above, try to have a look at it it very simple, or i suggest if you wanna have another code try to post it in a Arduino forum.
Regards
Reply 9 years ago on Introduction
Sorry for my english here you'll find the code of Eric
Where do I place the code of the lcd distance in the code of Eric thanks.
Rinus
Reply 9 years ago on Introduction
I suppose you have to use g_cm or gcollidedistance variables.
Try to put the code after the variable are on, (// Do stuff, based on the current mode step), to see which value the ping read. Use a tutorial of lcd display to initialize a variable. This is all i can do . Regards
Reply 9 years ago on Introduction
kloosterma@zeelandnet.nl
Regards
10 years ago on Introduction
Very cool. I really want to make a robot like this but ping sensors are somewhat expensive. I'm thinking about getting an infrared rangefinder.
Reply 10 years ago on Introduction
Ty for your comment. I'm agree about is more expansive, however now does it cost 29$ and belive me you got a sensor more accurate than infrared.
Remember to vote me please!
Reply 10 years ago on Introduction
you could actually use the cheaper $9.95 HC-SR04 ultrasonic rangefinder, which works like the ping))) but is cheaper and has a much easier to use library. i chose this sensor over the ping))) and have been happy with it so far.
10 years ago on Introduction
I saw the video.
Is desired that robot tends to curve to the right?
however, very interesting project.
Good Job!
Reply 10 years ago on Introduction
It is true slightly, depending on the speed of servo rotation, but is not important.
Thanks for your comment
Reply 10 years ago on Introduction
^_^ | https://www.instructables.com/My-Arduino-Ping-Display-Robot/ | CC-MAIN-2022-27 | refinedweb | 2,031 | 58.42 |
Wow.. I guess I did miss it. For some reason Google wasn't my friend. I
could only find people asking for it, and no record of it being given. :)
Thanks,
Dan
On Wed, Dec 3, 2008 at 8:57 PM, Jeff Butler <jeffgbutler@gmail.com> wrote:
> See the release notes for version 2.2.0 - Clinton added support for
> private properties in August 2006! :)
>
> Jeff Butler
>
> On Wed, Dec 3, 2008 at 7:25 PM, Dan Turkenkopf <dturkenk@gmail.com> wrote:
> > Maybe I'm just missing something, but I thought that iBATIS required the
> > objects used in a result map to have declared setters.
> >
> > I was in the process if I could figure out a way to use a builder pattern
> to
> > get immutable objects through iBATOR and was working my way through a
> sample
> > application.
> >
> > To my surprise, when I removed the setters from my domain object,
> everything
> > still seemed to work.
> >
> > Did I miss this functionality being introduced?
> >
> > Thanks,
> > Dan Turkenkopf
> >
> > Here's my domain object:
> >
> > public class Master {
> >
> > private String id;
> > private String firstName;
> > private String lastName;
> >
> > /**
> > * @return the firstName
> > */
> > public String getFirstName() {
> > return firstName;
> > }
> > /**
> > * @return the id
> > */
> > public String getId() {
> > return id;
> > }
> > /**
> > * @return the lastName
> > */
> > public String getLastName() {
> > return lastName;
> > }
> >
> > And here's my SQL Map:
> >
> > <sqlMap namespace="Master">
> >
> > <!-- Result maps describe the mapping between the columns returned
> > from a query, and the class properties. A result map isn't
> > necessary if the columns (or aliases) match to the properties
> > exactly. -->
> > <resultMap id="MasterResult"
> >
> > <result property="id" column="lahmanid"/>
> > <result property="firstName" column="nameFirst"/>
> > <result property="lastName" column="nameLast"/>
> > </resultMap>
> >
> > <!-- Select with no parameters using the result map for Master class.
> -->
> > <select id="selectAllMasters" resultMap="MasterResult">
> > select * from MASTER
> >
> > </sqlMap>
> >
> >
> >
> | http://mail-archives.apache.org/mod_mbox/ibatis-user-java/200812.mbox/%3C35dbee350812031806p2a58a79bj51fb6407844510e@mail.gmail.com%3E | CC-MAIN-2018-30 | refinedweb | 285 | 55.84 |
There must be a better way to delete the old NetCons in the list.
Code: Select all
def create_netcons(n, mylist=None): """" create a list with <n> number of NetCon objects """" if mylist is not None: for nc in mylist: nc.weight[0] = 0.0 # set NetCon to zero? use del? else: mysyn = mylist for _ in range(n): myNetCon = h.NetCon(target, source, sec) # pseudo-code myNetCon.weight[0] = 1e-4 mysyn.append( myNetCon ) return( mysyn ) # Simulation mylist = create_netcons(3) # create 3 NetCons mylist = create_netcons(5, mylist) # create 5 new NetCons, set the old ones to zero
I would appreciate your help
Thanks!!! | https://www.neuron.yale.edu/phpBB/viewtopic.php?f=2&t=3576&p=15183&sid=7fca2888d89c15463cdc381b42e31a06 | CC-MAIN-2019-04 | refinedweb | 104 | 67.04 |
Summary editNEM 2004-05-26: Here's a little experiment I've been doing with unifying Tcl's traditional first-class transparent values (like string, lists and dicts), with the handle-based OO types. I call this package TOOT.
See Also edit
- TOOT with {*}
- WHD: A slightly different take
- TOOT revisited
- NEM: And another one
Description editUsing class-is-command-name prefix, and auto-expansion of leading word, it is possible to achieve quite an amazing degree of unification between the two ideas. This allows for use of two different syntaxes:
- Normal Tcl syntax, using ensembles, e.g., [List length $item]
- OO syntax with polymorphic method dispatch, e.g., [$item length]
set l [::List create a b c d] set d [::Dict | $l]Type-checking is done in a lazy manner at present - the cast will always succeed, but subsequent operations may fail if the string rep doesn't match what that type requires. Note that in most uses, you shouldn't have to do explicit type conversions. Also note, that most operations have no side-effects: values are immutable, making this an object-functional extension by default. You can create mutable values by creating a handle type (just make the string representation a name, by overriding the "create" method).This has some interesting connections to Feather's interfaces, which I haven't fully explored yet, but will do, to see what the implications are. I think they could work quite well together.Anyway, here's some simple first-stab at the code. It's pretty poor as a framework currently, and inefficient (most uses involve multiple command lookups, and relies on an ::unknown handler for auto-expansion of leading word). I intend to remedy these defects, and build this up into a powerful OO system over time. This code is just here as a proof-of-concept to show the basic idea.This idea came to me at about 4 a. m. last night - I went to bed thinking about Tcl's value semantics, and the prefix/auto-expand approach taken by TIP 194 [1] (anonymous procs as apply command prefix) and just suddenly woke up in the middle of the night with a "Eureka!" moment. I'd be very interested in feedback, as I think this could be quite a powerful new approach to building code in Tcl.TODO:
- Rewrite to make handling packing-unpacking of values simpler.
- Define everything a bit better, so most functionality is in an object class at the base of the class hierachy.
- Implement inheritance.
- Steal as many interesting features from XOTcl as possible :-)
- Try and make it rely on less tricks, to improve performance (may need Tcl 9 with new features).
# toot.tcl -- # # TOOT - Transparent OO for Tcl. This package implements a simple # object oriented programming environment, where all values are fully # transparent. This is an experiment in how well the worlds of Tcl's # transparent values, and OO design, traditionally implemented using handle # types, can be combined. The design presented here is a functional view of # objects, in that procedures/methods which update a value return the new # value. All values are represented as a tagged list of length three: # # {$type | $value} # # The "|" is some sugar, which also helps to disambiguate method calls from # construction cases. I was going to use ":", but that interacts badly with # Tcl's namespace separator. # # Classes in the system are namespaces which form ensembles. You can use # either an OO syntax or a functional syntax when accessing values. When # using the OO syntax [$value method args], type is preserved. When using # the functional syntax [type method $value $args], the value will # automatically be converted to the correct type, in the usual Tcl way (the # original value will not change type however). The type command also serves # as a type-conversion copy-constructor, which means that you can explicitly # create a copy of a value and change it's type. E.g. to get a list object # from a dict object, you could do: # # set l [List | $dict] # # Most of the time, this will be unnecessary. This also has the added # benefit of transparent serialisation - values can be written to a file and # read in at a later date, and will just work (assuming the correct type is # still available). Requires auto-expansion of leading word to work # correctly, and currently achieves this through a global unknown handler. # If TIP 181 passes, then this could be done on a per-namespace basis. # Auto-expansion as a core feature for Tcl-9 would rock! # # Copyright (c) 2004, Neil Madden. # License: Tcl/BSD style. 
package provide toot 0.1 namespace eval toot { namespace export class extract method } # Helper for extracting real value proc toot::extract {value} { return [lindex $value 2] } # Helper for creating procs which do the right thing proc toot::method {name arglist body} { set arglist [linsert $arglist 0 self] uplevel 1 [list proc $name $arglist $body] } # Class is itself a class. namespace eval toot::class { } proc toot::class {cmd args} { uplevel 1 [linsert $args 0 ::toot::class::$cmd] } proc toot::class::| {self args} { # Copy constructor/instance command if {[llength $args] == 0} { # Copy constructor return [list ::toot::class | [::toot::extract $self]] } else { # Redispatch - assume uplevel 1 is within correct namespace uplevel 1 [linsert $args 1 $self] } } # Real constructor proc toot::class::create {name body} { # Fully qualify the name if {![string match ::* $name]} { set ns [uplevel 1 namespace current] if {$ns eq "::"} { set ns "" } set name ${ns}::$name } # Create the namespace uplevel 1 [list namespace eval $name $body] # Create dispatch method set body [string map [list %NAME% $name] { if {[llength $args] == 0} { return [list %NAME% | [::toot::extract $self]] } else { uplevel 1 [linsert $args 1 [list %NAME% | $self]] } }] uplevel 1 [list proc ${name}::| {self args} $body] # Default constructor uplevel 1 [list proc ${name}::create {args} [string map [list %N $name] { return [list %N | $args] }]] # Create the type-method uplevel 1 [list proc $name {args} [string map [list %NAME% $name] { uplevel 1 [list namespace eval %NAME% $args] }]] return [list ::toot::class | [list $name]] } # Install unknown handler for auto-expand if {[llength [info commands ::_toot_unknown]] == 0} { rename ::unknown ::_toot_unknown proc unknown {cmd args} { if {[llength $cmd] > 1} { uplevel 1 $cmd $args } else { uplevel 1 [linsert $args 0 ::_toot_unknown $cmd] } } }Some test code:
# Test stuff proc test {} { namespace import toot::* class create List { method index {args} { return [eval [linsert $args 0 lindex \ [extract $self] $args]] } method length {} { return [llength [extract $self]] } method append {args} { set s [extract $self] return [list ::List | [eval lappend s $args]] } method loop {varname body} { uplevel 1 [list ::foreach $varname [extract $self] $body] } } class create Dict { method get {key} { return [dict get [extract $self] $key] } method set {args} { return [eval ::Dict create [eval [linsert $args 0 \ dict replace [extract $self]]]] } } # Now some tests set l [List create a b c d e f] puts "l = $l" # Loop $l loop item { puts "Item = $item" } # Use dict method puts "a = [Dict get $l a]" # Convert to dict set d [Dict | $l] # Set a key set d [$d set a "Hello!"] # Loop as list List loop $d item { puts "item = $item" } puts "d = $d" puts "l = $l" }
Discussion editjcw: 2004-05-26: Whoa, Neil, this is fascinating. I'll try to wrap my mind around it, especially in the context of a generic relational algebra system I've been working on, early stage is at [2]. (eurein, Gr - to find!)AM 2004-05-27: Just a few remarks:
- How would I define and set "fields" for an object?
- It seems to me that programming a specific method is a bit involved
# a reference handle type class create Reference { variable ref method <- {val} { variable ref set ref([extract $self]) $val } method -> {} { variable ref return $ref([extract $self]) } } set a [Reference create foo] $a <- [List create a b c d e] $a <- [[$a ->] append f g h] puts [$a ->]This example for instance, uses a per-class array and the data part of each object is used as an index into that array. You could easily make each instance a new namespace, as well. If you want fields in a fully transparent object, then you have to start programming in a more functional way:
class create Person { method setName {name} { set d [dict replace [extract $self] name $name] return [list ::Person | $d] } method getName {} { return [dict get [extract $self] name] } method setAge {age} { set d [dict replace [extract $self] age $age] return [list ::Person | $d] } method getAge {} { return [dict get [extract $self] age] } } set neil [Person create name "Neil Madden" age 23] set neil [$neil setAge 82] puts "[$neil getName] is [$neil getAge]!"As you can see, in order for this style to work, any setter methods have to return a new instance - side-effect free objects.As for the other point, yes programming methods is a little bit involved now, due to the need to pack and unpack the "actual" value from the wrapper. I'm trying to come up with a simpler way of doing this, where things happen automatically.
RS: This is very amazing. I put my tiny reinvention of this wheel at Toot as toot can :)jcw 2004-05-28: Not to muddle the waters, but if I may summarize your (NEM) and RS's approaches, and re-phrase a bit:
- every object is by value and carries its class as well as its state: {class | {state...}}
- a call relies on expansion, so "$o meth args..." will be evaluated as "class | {state...} meth args..."
- a reshuffle is needed, to put the method call in the right spot: "class meth {state...} args..."
- ensembles as namespaces would now be useful, turning it into "::class::meth {state...} args ..."
- to make the story complete, methods are defined as: "proc ::class::meth {self args} {...}"
- agree with NEM that a handle is the way to go, name of an array for example
- it would be great to see refcounts work in this context, i.e., array be unset when last handle ref to it goes away
- perhaps it can be done: a special list variant, which knows the 2nd item is a handle
- copies of the list bump the refcount
- stringify when dual rep is lost causes a copy of the handle contents to be inserted, i.e. the name to be dropped
- re-use later creates a new handle, puts contents in it, and proceeds with a handle again
- this has the effect that when dual rep is lost, state is copied, mutability is lost
- but as long as it isn't it's a real handle, with shared mutable content across all copies of the same Tcl_Obj
- not sure this makes sense or works, just wanted to dump it here...
::List length $obj ;# -> ::List length {::List | {state}} $obj length ;# -> ::List | {state} length -> ::List length {::List | {state}} (i.e. same as above)With, the "|" command, this would become:
::List length $obj ;# -> ::List length {| ::List {state}} $obj length ;# -> | ::List {state} length -> ::List length {| ::List {state}}so, it could work, with "|" defined as:
proc ::| {type state method args} { uplevel 1 [linsert $args 0 ${type}::$method [list ::| $type $state]] }Yes, definitely interesting. The reason for repacking the object is so that method always receive "self" in the same format, and also I was thinking of allowing passing messages to self, which this allows:
method foo {a b} { $self bar $a $b }Needs more thought though, and some clever wrapping could be done to do conversions transparently, I think. One thing which is a pain with the current method, is that each nested dispatch causes an increment on the reference count of the state, so it's pretty much guaranteed to be shared when it reaches the method, so copy-on-write is inevitable. I can get round some of this using K, but it will obfuscate the code somewhat, and can't be completely hidden. Regarding what you wrote about handles and mutable state... I need to think about it a lot more before forming an opinion. Seems possibly fragile in the face of [evil]... I mean [eval]! ;)jcw: Uh, oh - more to think about! Wild idea - would a new "interp alias" be useful which can inject/reshuffle/drop args?CMcC: The infix operator | worries me too, it helps conceptually to reinforce the idea that {class | content} is a special form, but in the end it's got to be inefficient to carry around '|' as essentially dead weight with no real tcl-interpretation.Next Q: is '|', properly speaking, in the re-interpreted form jcw suggests, not itself some kind of class? If it can be usefully construed as such, it would compel me to think that you're onto something unarguably valid (because the beautiful answer is usually the correct answer qv [aesthetic fallacy] :)RS: I understand the "|" as a guard that expresses "this is a toot value". You're right that it need not be in infix position, although this is kind of beautiful in my eyes. One could alternatively use
{| class {the values}}which would simplify things:
$x get membergets substituted as:
{| class {the values}} get membergets auto-expanded first word as:
| class {the values} get memberAt this point, some reordering is needed to dispatch to
toot::class::get {the values} memberSo the function of prefix "|" could also be called ::toot::dispatch ...Just that "|" looks nicer. On the other hand, it consumes a possible command name others might want to use, e.g., as prefix for bitwise OR. By having it infix, only the class name itself (under control of the user) is used as command name, and "|" is a subcommand of that (as of any toot class):
{class | {the values}} get memberis auto-expanded to:
class | {the values} get memberwhere class is an alias resolving as
::toot::dispatch class | {the values} get memberwhich ends up calling
::toot::class::get {the values} memberDoes that make sense?CMcC: Yes, it makes more sense, thank you. | isn't merely syntactic sugar, but is a method on everything that is a TOOT class, and this dispatch is managed by means of a clever use of alias.NEM: Indeed, the | method (it could be anything, and I'm open to better suggestions) is absolutely essential to the technique, to ensure that both means of calling a method can be used (::List length $obj vs $obj length). You can rearrange things to make that a prefix command, instead of a message - in which case you can make things a bit more efficient, but then you can't overload it on a per-class basis (not sure if you'd ever want to, but still).
NEM Some more thoughts on TOOT's view of values and types at Interpreting TOOT.
CMcC Another naive/lazy question ... what if TOOT's 'state' component was always interpreted as a namespace name? Would that not provide for mutable state? This consideration arose from DKF's comment on the chat that 'everything is a string, or has a name (which is a string)'NEM Yes, that would work as well. Indeed, you could have the state be an XOTcl object if you wanted. I'm not sure whether you gain anything in this case, though. At present, I'm playing around with using concepts from monadic programming (see [3] for a "gentle" introduction to monads in Haskell) which (with a bit of syntactic sugar) can make dealing with functional/immutable objects feel like working with stateful objects. It works by threading an extra state argument through function invocations in a way which is transparent to the user of the function, and to the function itself. Not sure how useful this will be in practice, but it's a fun experiment, and monads are a crazy concept to try and get your head round. I'll create a wiki page when I'm done (this week sometime, hopefully). | http://wiki.tcl.tk/11543 | CC-MAIN-2017-04 | refinedweb | 2,630 | 62.61 |
Java Quiz 2: Comparing Class Variables With Instance Variables
Stop by for the answer to the first Java quiz in this series as well as a new challenge—this time involving the differences between class and instance variables.
Join the DZone community and get the full member experience.Join For Free
Before we dive into this week's quiz, here is the answer to Quiz 1: Overriding Methods.
By creating the object of MySub, the constructor of the superclass MySuper is called.
The constructor of MySuper invokes the method myMethod.
The method myMethod is overridden in the subclass MySub.
The method myMethod writes the value of str2 to the standard output.
The constructor of the superclass tries to access the variable str2 inside its subclass before it is initialized.
Therefore the program writes null instead of "y" to the standard output.
If you have any other explanation or opinion, share it please in the comments!
Here is today's quiz!
What is written to the standard output as the result of executing the following code?
public class MyClass { static int x; String s = ""; static String s2 = ""; public MyClass(int i) { x += i; s += x; s2 += x; } public static void main(String[] args) { new MyClass(2); MyClass mc = new MyClass(1); new MyClass(4); System.out.print(mc.s + mc.s2); } }
Select the correct answer.
A. This code writes "1137" to the standard output.
B. This code writes "3237" to the standard output.
C. This code writes "35" to the standard output.
D. This code writes "312" to the standard output.
E. This code writes "323" to the standard output.
The correct answer and its explanation will be included in the next quiz in two weeks! For more Java quizzes, puzzles, and assignments, take a look at my site!
Opinions expressed by DZone contributors are their own. | https://dzone.com/articles/java-quiz-2-comparing-class-variables-with-instance | CC-MAIN-2022-21 | refinedweb | 306 | 66.44 |
A daemon process is a process which runs in background and has no controlling terminal.
Since a daemon process usually has no controlling terminal so almost no user interaction is required. Daemon processes are used to provide services that can well be done in background without any user interaction.
For example a process that runs in background and observes network activity and logs any suspicious communication can be developed as a daemon process.
Daemon Process Design
A daemon process can be developed just like any other process but there is one thing that differentiates it with any other normal process ie having no controlling terminal. This is a major design aspect in creating a daemon process. This can be achieved by :
- Create a normal process (Parent process)
- Create a child process from within the above parent process
- The process hierarchy at this stage looks like : TERMINAL -> PARENT PROCESS -> CHILD PROCESS
- Terminate the the parent process.
- The child process now becomes orphan and is taken over by the init process.
- Call setsid() function to run the process in new session and have a new group.
- After the above step we can say that now this process becomes a daemon process without having a controlling terminal.
- Change the working directory of the daemon process to root and close stdin, stdout and stderr file descriptors.
- Let the main logic of daemon process run.
So we see that above steps mark basic design steps for creating a daemon.
C fork() Function
Before creating an actual running daemon following the above stated design steps, lets first learn a bit about the fork() system call.
fork() system creates a child process that is exact replica of the parent process. This new process is referred as ‘child’ process.
This system call gets called once (in parent process) but returns twice (once in parent and second time in child). Note that after the fork() system call, whether the parent will run first or the child is non-deterministic. It purely depends on the context switch mechanism. This call returns zero in child while returns PID of child process in the parent process.
Following are some important aspects of this call :
- The child has its own unique process ID, and this PID does not match the ID of any existing process group.
- The child’s parent process ID is the same as the parent’s process ID.
- The child does not inherit its parent’s memory locks.
- Process resource utilization and CPU time counters are reset to zero in the child.
- The child’s set of pending signals is initially empty.
- The child does not inherit semaphore adjustments from its parent.
- The child does not inherit record locks from its parent.
- The child does not inherit timers from its parent.
- The child does not inherit outstanding asynchronous I/O operations from its parent, nor does it inherit any asynchronous I/O contexts from its parent.
For more insight information, please read the man page of this system call.
The Implementation
Based on the design as mentioned in the first section. Here is the complete implementation :
#include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <sys/types.h> #include <sys/stat.h> #include <string.h> int main(int argc, char* argv[]) { FILE *fp= NULL; pid_t process_id = 0; pid_t sid = 0; // Create child process process_id = fork(); // Indication of fork() failure if (process_id < 0) { printf("fork failed!\n"); // Return failure in exit status exit(1); } // PARENT PROCESS. Need to kill it. if (process_id > 0) { printf("process_id of child process %d \n", process_id); // return success in exit status exit(0); } //unmask the file mode umask(0); //set new session sid = setsid(); if(sid < 0) { // Return failure exit(1); } // Change the current working directory to root. chdir("/"); // Close stdin. stdout and stderr close(STDIN_FILENO); close(STDOUT_FILENO); close(STDERR_FILENO); // Open a log file in write mode. fp = fopen ("Log.txt", "w+"); while (1) { //Dont block context switches, let the process sleep for some time sleep(1); fprintf(fp, "Logging info...\n"); fflush(fp); // Implement and call some function that does core work for this daemon. } fclose(fp); return (0); }
Following is the way through which the code was compiled and executed:
$ gcc -Wall deamon.c -o deamon $ sudo ./deamon process_id of child process 2936
Just observe that the control immediately came back to the terminal ie the daemon is now not associated to any terminal.
When you check the log.txt file located in the root directory, you could see that this daemon process is running.
$ $ tail -f /Log.txt Logging info... Logging info... Logging info... Logging info... Logging info... Logging info... Logging info... Logging info... Logging info... Logging info...
{ 25 comments… add one }
Thank you.
You have a knack of explaining things in the simplest way by taking out all the fluff.
Very few engineers have this ability to explain things this straighforwardly.
Very well written.
Really learned something today without having to google a million search items and get more confused in the process.
Excellent article !!!
Thank you very much.
Nice article. I wish you were my professor when I was in college.
The only change I would make to this is that I would not run it in the root directory. e.g. change –
chdir(“/”);
to –
chdir(“/tmp”);
But that’s just me.
Thank you all for your appreciation.
Do I need to use the ‘kill’ command to end the child process? Not seeing anything in the while{} loop to automatically end the example program.
@Nathan
Since this is an example of a daemon process and daemon processes are meant to run like background services for indefinite time so they are usually not ended from within the process code. That is the reason I have used the kill command to terminate the process.
Note that most unixy systems, including even those weird ones with Glibc [;-)], have a daemon(3) function which is a bit more portable and safe to use in real-world programs.
As with popen(3) or system(3) it’s not going to teach people as much about the lower-level calls, of course, but I think it’s remiss to not mention it.
Thanks a lot.
Very clear and full explanation.
You solved all my doubts about it.
Couldn’t find an easier one than this. Gateway to entirely new turf. Thanks a ton.
Thank you for your excellent articles really useful….
-Srinivas
it is nice tutorial.
but where should i put my program if i want to run it at system startup and with root privileges.
This is a great article. My coworker is trying to find a way to create a timing system that runs perfectly at once a second. His explanation is absolutely crazy. This is simple and straight to the point.
Hi, I’m sorry but this is not the correct execution, 0,1,2 file descriptors must be closed and signal captures where is?
Thank you for that great tutorial. I was able to make my daemon for deleting files form directory with that skeleton. Also changing /tmp is good idea too.
Great!, been scanning around for such info. Apparently am in love ith linux, opensource, and everything that ships with that.
I wonder that the function “exit” (and “printf”) is shown after the fork() call. How do you
think about to use the function “_exit” (and “write”) instead?
How much does async-signal-safety (from POSIX view) matter here?
I run the code but I get the errors: undefined reference to fork, setsid and sleep. I have
included unistd.h, and I am using windows os.Does that have anything to do with these errors?
Nice explanation.This article is very useful for me.Thank you very much.
how can i execute the each step in daemon process while running the daemon in some file ..using c program .
Reallly good explanation
Thanks Friend
Thank you! Very nice tutorial. Easy to follow. Worked perfectly for me!
How can we interact with mysql database in daemon ? can we use web services as well ?
Thank you for such simple and clear explanation!!! | http://www.thegeekstuff.com/2012/02/c-daemon-process/ | CC-MAIN-2017-13 | refinedweb | 1,348 | 76.01 |
How to add private uri to QML Plugin
Hi everyone!
I have a plugin for QML on C++. In this plugin I have a few classes which I want to use as a private in QML but don't want to share it to public users. I don't know any ways to create private QML types so I decide to create a special url. For example my plugins has a uri like:
com.company.sdk
and I've use it in QML like:
import com.company.sdk 1.0
But I want to create another uri for private classes something like this:
com.company.sdk.private
And than use it like:
import com.company.sdk.private 1.0
I try do this:
QString privateUri = QString("%1.private").arg(uri); qmlRegisterSingletonType<MySingelton>(privateUri.toStdString().c_str(), 1, 0, "MySingelton", cache_manager_singletontype_provider);
But when I try to load my test application I received error:
QQmlApplicationEngine failed to load component qrc:/main.qml:5 static plugin for module "com.company.sdk" with name "SDKPlugin" cannot be loaded: Cannot install singleton type 'MySingelton' into unregistered namespace 'com.company.sdk.private'
What I do wrong? How I can register a new namespace for my plugin? Is this possible?
- workaround A: register private types in initializeEngine() instead
- workaround B: remove the protected module identifier line from the qmldir file | https://forum.qt.io/topic/69605/how-to-add-private-uri-to-qml-plugin/1 | CC-MAIN-2019-51 | refinedweb | 222 | 50.94 |
Why doesn't this very simple code work when it seems to me that it should?
All I expect it to do is to display the contents of a text file in the console of Eclipse...All I expect it to do is to display the contents of a text file in the console of Eclipse...Code Java:
import java.io.*; import javax.swing.*; public class FileInput { public static void main(String[] args) { String fileName = JOptionPane.showInputDialog("Open file: "); PrintWriter inputStream = null; try { inputStream = new PrintWriter(new File(fileName)); } catch(FileNotFoundException e) { System.out.println("Error opening the file " + fileName); System.exit(0); } inputStream.close(); } }
Thanks | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/6340-saving-file-using-dialog-box-printingthethread.html | CC-MAIN-2015-27 | refinedweb | 106 | 58.58 |
- Example Servlet
- Cactus Test
- Ant Integration
- Conclusion
In its basic form, Cactus is a layer on top of JUnit that allows for test code to be run in both a client space and a server space. Thus, a test can initiate a call from the client, test the server-side code, and review the results. One of the major benefits of Cactus is that it handles all the session creation, request creation, and response creation for meso I can write my tests quickly and consistently.
Example Servlet
The best way to explain how to use Cactus is to set up a simple bean and write tests for it. In this example, I use the following tools:
The first step is to set up the servlet I want to test:
package com.dzrealms.example.servlet; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import javax.servlet.ServletException; import java.io.IOException; /** * A very simple Servlet example */ public class ExampleServlet extends HttpServlet { /** * This method is the entry point for a post from a client. I gather the * parameters being passed in then hand them off to some internal methods. * @param req HttpServletRequest * @param resp HttpServletResponse */ protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException, ServletException { String variable1 = req.getParameter("variable1"); String variable2 = req.getParameter("variable2"); boolean valid = validate(variable1, variable2); if (!valid) { //forward the user to a page to handle the error req.getRequestDispatcher("someErrorURL").forward(req, resp); return; } //Since it is valid the user should be forward to another jsp. req.getRequestDispatcher("someURL").forward(req, resp); } /** * This method is supposed to validate the data that was received. In this * example it just returns true. * @param var1 First variable to validate * @param var2 Second variable to validate * @return true if the data is valid */ protected boolean validate(String var1, String var2) { //Do something interesting with this method return true; } }
Although this very simple servlet does not do much of anything, it has a method I want to test. Therefore, I want to set up a class that will test the validate method for me. | http://www.informit.com/articles/article.aspx?p=358759 | CC-MAIN-2017-04 | refinedweb | 349 | 56.15 |
User Manual for the LMD Martian Atmospheric General Circulation Model

Ehouarn MILLOUR, François FORGET
Contributors to earlier versions: Karine Dassas, Christophe HOURDIN, Frédéric HOURDIN, and Yann WANHERDRICK
Initial version translated by Gwen Davis (LMD)

January 26, 2012

Contents

1 Introduction
2 Main features of the model
  2.1 Basic principles
  2.2 Dynamical-Physical separation
  2.3 Grid boxes
    2.3.1 Horizontal grids
    2.3.2 Vertical grids
  2.4 Variables used in the model
    2.4.1 Dynamical variables
    2.4.2 Physical variables
    2.4.3 Tracers
3 The physical parameterizations of the Martian model: some references
  3.1 General
  3.2 Radiative transfer
    3.2.1 CO2 gas absorption/emission
    3.2.2 Absorption/emission and diffusion by dust
  3.3 Subgrid atmospheric dynamical processes
    3.3.1 Turbulent diffusion in the upper layer
    3.3.2 Convection
    3.3.3 Effects of subgrid orography and gravity waves
  3.4 Surface thermal conduction
  3.5 CO2 condensation
  3.6 Tracer transport and sources
  3.7 Thermosphere
4 Running the model: a practice simulation
  4.1 Obtaining the model
  4.2 Installing the model
  4.3 Compiling the model
  4.4 Input files (initial states and def files)
  4.5 Running the model
  4.6 Visualizing the output files
    4.6.1 Using GrAds to visualize outputs
  4.7 Resuming a simulation
  4.8 Chain simulations
  4.9 Creating and modifying initial states
    4.9.1 Using program "newstart"
    4.9.2 Creating the initial start_archive.nc file
    4.9.3 Changing the horizontal or vertical grid resolution
5 Program organization and compilation script
  5.1 Organization of the model source files
  5.2 Programming
  5.3 Model organization
  5.4 Compiling the model
6 Input/Output
  6.1 NetCDF format
    6.1.1 NetCDF file editor: ncdump
    6.1.2 Graphic visualization of the NetCDF files using GrAds
  6.2 Input and parameter files
    6.2.1 run.def
    6.2.2 callphys.def
    6.2.3 traceur.def
    6.2.4 z2sig.def
    6.2.5 Initialization files: start and startfi
  6.3 Output files
    6.3.1 NetCDF restart files - restart.nc and restartfi.nc
    6.3.2 NetCDF file - diagfi.nc
    6.3.3 Stats files
7 Zoomed simulations
  7.1 To define the zoomed area
  7.2 Making a zoomed initial state
  7.3 Running a zoomed simulation and stability issue
8 Water Cycle Simulation
9 Photochemical Module
10 1D version of the Mars model
  10.1 Compilation
  10.2 1-D runs and input files
  10.3 Output data
A GCM Martian Calendar
B Utilities
  B.1 concatnc
  B.2 lslin
  B.3 localtime
  B.4 zrecast
  B.5 lslin
  B.6 hrecast
  B.7 expandstartfi
  B.8 extract

Chapter 1

Introduction

This document is a user manual for the General Circulation Model of the Martian atmosphere developed by the Laboratoire de Météorologie Dynamique of the CNRS in Paris, in collaboration with the Atmospheric and Oceanic Planetary Physics sub-department in Oxford. It corresponds to the version of the model available since November 2002, which includes the new dynamical code lmdz3.3 and input and output data in NetCDF format. The physical part has been available since June 2001, and includes the NLTE radiative transfer code valid up to 120 km, tracer transport, the water cycle with water vapour and ice, the "double mode" dust transport model, and optional photochemistry and an extension into the thermosphere up to 250 km.

A more general, scientific description of the model without tracers can be found in Forget et al. [1999].

Chapter 2 of this document, to be read before any of the others, describes the main features of the model. The model is divided into two relatively independent parts: (1) the hydrodynamic code, which is shared by all atmospheres (Earth, Mars, etc.) and integrates the fluid mechanics equations in time and on the globe, and (2) a set of Martian physical parameterizations, including, for example, the radiative transfer calculation in the atmosphere and the turbulent mixing in the upper layer. It is followed by a list of references for anyone requiring a detailed description of the physics and of the numerical formulation of the parameterizations of the Martian physical part (Chapter 3).
The document then describes the program organization of the model, including a user manual for compiling and running it (Chapter 5). Chapter 6 describes the input/output data of the model. The input files are the files needed to initialize the model (the state of the atmosphere at the initial time t0, as well as a dataset of boundary conditions), and the output files are "time series", i.e. records of the atmospheric flow evolution as simulated by the model: the "diagfi" files, the "stats" files, the daily averages, etc. Some means to edit or visualize these files (the editor "ncdump" and the graphics software "GrAds") are also described. Chapter 8 explains how to run a simulation including the water cycle. Chapter 9 illustrates how to run the model with the photochemical module. Finally, Chapter 10 will help you to use a 1-dimensional version of the model, which may be a simpler tool for some analysis work.

Chapter 2

Main features of the model

2.1 Basic principles

The General Circulation Model (GCM) calculates the temporal evolution of the different variables (listed below) that control or describe the Martian meteorology and climate at the points of a 3D "grid" (see below) that covers the entire atmosphere. From an initial state, the model calculates the evolution of these variables, timestep by timestep:

• At instant t, we know variable X_t (temperature, for example) at one point in the atmosphere.
• We calculate the evolution (the tendencies) (∂X/∂t)_1, (∂X/∂t)_2, etc. arising from each physical phenomenon, each tendency being given by a parameterization of the corresponding phenomenon (for example, heating due to absorption of solar radiation).
• At the next time step t + δt, we can calculate X_{t+δt} from X_t and the tendencies (∂X/∂t). This is the "integration" of the variables in time. (For example, X_{t+δt} = X_t + δt (∂X/∂t)_1 + δt (∂X/∂t)_2 + ...)

The main task of the model is to calculate these tendencies (∂X/∂t) arising from the different parameterized phenomena.

2.2 Dynamical-Physical separation

In practice, the 3D model operates in two parts:
- a dynamical part containing the numerical solution of the general equations for atmospheric circulation. This part (including the programming) is common to the Earth and Martian models, and in general to all atmospheres of the terrestrial type.
- a physical part that is specific to the planet in question, and which calculates the forcing of the circulation and the climate details at each point.

The calculations for the dynamical part are made on a 3D grid with horizontal exchanges between the grid boxes, whereas the physical part can be seen as a juxtaposition of atmospheric "columns" that do not interact with each other (see Figure 2.1).

The dynamical and physical parts deal with variables of different natures, and operate on grids that are differently constructed. The temporal integration of the variables is based on different numerical schemes (simple, such as the one above, for the physical part, and more elaborate, the "Matsuno-Leapfrog" scheme, for the dynamical part). The timesteps are also different: the physical timestep is iphysiq times longer than the dynamical timestep, as the solution of the dynamical equations requires a shorter timestep than the forced calculation of the physical part.

[Figure 2.1: Physical/dynamical interface — the dynamics handles 3-D fields (T(x,y,z), q1(x,y,z), ...) and passes individual columns (T(z), q1(z), ...) to the physics, which returns the tendencies due to radiative transfer, condensation, subgrid dynamics, etc.]

In practice, the main program that handles the whole model (gcm.F) is located in the dynamical part. When the temporal evolution is being calculated, at each timestep the main program calls the following:

1. A call to the subroutine that handles the total tendency calculation (∂X/∂t) arising from the dynamical part (caldyn.F).
2. Integration of these dynamical tendencies to calculate the evolution of the variables at the following timesteps (subroutine integrd.F).
3. Every iphysiq dynamical timestep, a call to the interface subroutine (calfis.F) with the physical model (physiq.F), which calculates the evolution of some of the purely physical variables (e.g. the surface temperature tsurf) and returns the tendencies (∂X/∂t) arising from the physical part.
4. Integration of the physical tendencies (subroutine addfi.F).
5. Similarly, calculation and integration of the tendencies due to the horizontal dissipation and the "sponge layer" is done every idissip dynamical timestep.

Remark: the physical part can be run separately, for a 1-D calculation on a single column, using program testphys1d.F.

2.3 Grid boxes

Examples of typical grid resolutions are 64x48x25, 64x48x32 or 32x24x25 in longitude x latitude x altitude. For Mars (radius ~3400 km), a 64x48 grid corresponds to grid boxes of the order of 330x220 kilometers near the equator.

2.3.1 Horizontal grids

Dynamics and physics use different grids. Figure 2.2 shows the correspondence and indexing of the physical and dynamical grids, as well as the locations of the different variables on these grids. To identify the coordinates of a variable (at one grid point up, down, right or left) we use the coordinates rlonu, rlatu, rlonv, rlatv (longitudes and latitudes, in radians).

On the dynamical grid, values at i=1 are the same as at i=IM+1, as the latter node is a redundant point (due to the periodicity in longitude, these two nodes are actually located at the same place). Similarly, the extreme j=1 and j=JM+1 nodes on the dynamical grid (respectively corresponding to the North and South poles) are duplicated IM+1 times. In contrast, the physical grid does not contain redundant points (only one value for each pole and no extra point along longitudes), as shown in Figure 2.2. In practice, computations relative to the physics are made for a series of ngrid atmospheric columns, where NGRID=IMx(JM-1)+2.

[Figure 2.2: Dynamical and physical grids for a 6 x 7 horizontal resolution. In the dynamics (but not in the physics) winds u and v are on specific staggered grids. Other dynamical variables are on the dynamical "scalar" grid. The physics uses the same "scalar" grid for all the variables, except that nodes are indexed in a single vector containing NGRID=2+(JM-1)xIM points when counting from the north pole. N.B.: In the Fortran program, the following variables are used: iim=IM, iip1=IM+1, jjm=JM, jjp1=JM+1.]

2.3.2 Vertical grids

The GCM was initially programmed using sigma coordinates σ = p/ps (the ratio of atmospheric pressure to surface pressure), which have the advantage of a constant domain (σ = 1 at the surface and σ = 0 at the top of the atmosphere) whatever the underlying topography. However, it is obvious that these coordinates significantly disturb the representation of the stratospheric dynamics, as the topography is propagated to the top of the model by the coordinate system. This problem can elegantly be solved by using a hybrid sigma-pressure (σ-P) coordinate, which is equivalent to using σ coordinates near the surface and gradually shifting to purely pressure p coordinates with increasing altitude. Figure 2.3 illustrates the importance of using these hybrid coordinates compared to simple σ coordinates.

[Figure 2.3: Sketch illustrating the difference between hybrid and non-hybrid coordinates (pseudo-altitudes in km).]

The distribution of the vertical layers is irregular, to enable greater precision at ground level. In general we use 25 levels to describe the atmosphere up to a height of 80 km, 32 levels for simulations up to 120 km, or 50 levels to rise up to the thermosphere. The first layer describes the first few meters above the ground, whereas the upper layers span several kilometers. Figure 2.4 describes the vertical grid representation and the associated variables:

DYNAMICS                                              PHYSICS
--------                                              -------
[coordinates ap(), bp()]                              [pressures]

ap(llm+1)=0, bp(llm+1)=0  **************************  plev(nlayer+1)=0
aps(llm), bps(llm)        .. llm .......... nlayer .  play(nlayer)
ap(llm), bp(llm)          **************************  plev(nlayer)
aps(llm-1), bps(llm-1)    .. llm-1 ...... nlayer-1 .  play(nlayer-1)
ap(llm-1), bp(llm-1)      **************************  plev(nlayer-1)
                          :            :           :
aps(2), bps(2)            ... 2 ............ 2 ....   play(2)
ap(2), bp(2)              **************************  plev(2)
aps(1), bps(1)            ... 1 ............ 1 ....   play(1)
ap(1)=0, bp(1)=1          ********* surface ********  plev(1)=Ps

Figure 2.4: Vertical grid description of the llm (or nlayer) atmospheric layers in the programming code (llm is the variable used in the dynamical part, and nlayer is the one used in the physical part). Variables ap, bp and aps, bps give the hybrid levels at the interlayer levels and at the middle of the layers, respectively. Pressure at an interlayer is Plev(l) = ap(l) + bp(l) x Ps, and pressure in the middle of a layer is defined by Play(l) = aps(l) + bps(l) x Ps (where Ps is the surface pressure). Sigma coordinates are merely a specific case of hybrid coordinates, such that aps = 0 and bps = P/Ps. Note that for the hybrid coordinates, bps = 0 above ~50 km, leading to purely pressure levels. The user can choose whether or not to run the model using hybrid coordinates by setting the variable hybrid in run.def to True or False.

2.4 Variables used in the model

2.4.1 Dynamical variables

The dynamical state variables are the atmospheric temperature, the surface pressure, the winds and the tracer concentrations. In practice, the formulation chosen to solve the equations in the dynamics is optimised using the following, less "natural", variables:

- the potential temperature θ (teta in the code), linked to the temperature T by θ = T (P/Pref)^(-κ), with κ = R/Cp (note that κ is called kappa in the dynamical code, and rcp in the physical code). We take Pref = 610 Pa on Mars.
- the surface pressure (ps in the code).
- the mass of the atmosphere in each grid box (masse in the code).
- the covariant meridional and zonal winds ucov and vcov. These variables are linked to the "natural" winds by ucov = cu * u and vcov = cv * v, where cu and cv are constants that only depend on the latitude.
- the mixing ratios of tracers in the atmosphere, typically expressed in kg/kg (array q in the code).

ucov and vcov, the "vectorial" variables, are stored on the staggered grids u and v respectively, in the dynamics (see Section 2.2). teta, q, ps and masse, the "scalar" variables, are stored on the "scalar" grid of the dynamics.

2.4.2 Physical variables

In the physics, the state variables of the dynamics are transmitted via an interface that interpolates the winds on the scalar grid (which corresponds to the physical grid) and transforms the dynamical variables into more "natural" variables. Thus we have the winds u and v (m.s-1), the temperature T (K), the pressure at the middle of the layers play (Pa) and at the interlayers plev (Pa), the tracers q, etc. (kg/kg), all on the same grid. Furthermore, the physics also handles the evolution of the purely physical state variables:

- co2ice: CO2 ice on the surface (kg.m-2)
- tsurf: surface temperature (K)
- tsoil: temperature at different layers under the surface (K)
- emis: surface emissivity
- q2: wind variance, or more precisely the square root of the turbulent kinetic energy
- qsurf: tracer amounts at the surface (kg.m-2)

2.4.3 Tracers

The model may include different types of tracers:
- dust particles, which may have several modes
- chemical species which depict the chemical composition of the atmosphere
- water vapor and water ice particles
- ...
Loading specific tracers requires that the approriate tracer names are set in the traceur.def file (see Section 6.2.3), and specific computations for given tracers (e.g.: computing the water cycle, chemistry in the upper atmosphere, ...) is controled by setting coresponding options in the callphys.def file (see Section 6.2.2). 11 Chapter 3 The physical parameterizations of the Martian model: some references 3.1 General The Martian General Circulation Model uses a large number of physical parameterizations based on various scientific theories and some generated using specific numerical methods. A list of these parameterizations is given below, along with the most appropriate references for each one. Most of these documents can be consulted at:. General references: A document attempts to give a complete scientific description of the current version of the GCM (a version without tracers): • Forget et al. [1999] (article published in the JGR) 3.2 Radiative transfer The radiative transfer parameterizations are used to calculate the heating and cooling ratios in the atmosphere and the radiative flux at the surface. 3.2.1 CO2 gas absorption/emission: Thermal IR radiation ( lwmain) • New numerical method, solution for the radiative transfer equation: Dufresne et al. [2005]. • Model validation and inclusion of the “Doppler” effect (but using an old numerical formulation): Hourdin [1992] (article). • At high altitudes, parameterization of the thermal radiative transfer (nltecool) when the local thermodynamic balance is no longer valid (e.g. within 0.1 Pa) : Lopez-Valverde et al. [2001] : Report for the ESA available on the web as: “CO2 non-LTE cooling rate at 15-um and its parameterization for the Mars atmosphere”. 12 Absorption of near-infrared radiation ( nirco2abs) • Forget et al. [1999] 3.2.2 Absorption/emission and diffusion by dust: Dust spatial distribution ( aeropacity) • The method for semi-interactive dust vertical distribution is detailed in Madeleine et al. 
[2011] • Vertical distribution and description of “MGS” and “Viking” scenarios in the ESA report Mars Climate Database V3.0 Detailed Design Document by Lewis et al. (2001), available on the web. • For the “MY24”-“MY26” scenarios, the dust distributions were derived from observations made by TES data is used. See technical note WP12.2.1 of ESA contract Ref ESA 11369/95/NL/JG(SC) ”New dust scenarios for the Mars Climate Model : Martian Years 24-29”, available online at Thermal IR radiation ( lwmain) • Numerical method: Toon et al. [1989] • Optical properties of dust: Madeleine et al. [2011] Solar radiation ( swmain) • Numerical method: Toon et al. [1989] • Optical properties of dust: see the discussion in Madeleine et al. [2011], which quotes properties from Wolff et al. [2009]. 3.3 Subgrid atmospheric dynamical processes 3.3.1 Turbulent diffusion in the upper layer ( vdifc) • Implicit numerical scheme in the vertical: see the thesis of Laurent Li (LMD, Université Paris 7, 1990), Appendix C2. • Calculation of the turbulent diffusion coefficients: Forget et al. [1999]. • fluxes in the near-surface layer: 13 3.3.2 Convection ( convadj) • For some details on the convective adjustement, see Hourdin et al. [1993] • The thermals’ mass flux scheme is described in 3.3.3 Effects of subgrid orography and gravity waves ( calldrag_noro , drag_noro ) See Forget et al. [1999] and Lott and Miller [1997] 3.4 Surface thermal conduction (soil) The numerical scheme is described in section 2 of technical note WP11.1 of ESA contract Ref ESA 11369/95/NL/JG(SC) ”Improvement of the high latitude processes in the Mars Global Climate Model”, available online at 3.5 CO2 Condensation • In Forget et al. [1998] (article published in Icarus): – Numerical method for calculating the condensation and sublimation levels at the surface and in the atmosphere ( newcondens) explained in the appendix. 
– Description of the numerical scheme for calculating the evolution of CO2 snow emissivity (co2snow) explained in section 4.1 • Noncondensable gaz treatment: see Forget et al. [2008], available online at • Inclusion of sub-surface water ice table thermal effect, varying albedo of polar caps and tuning of the CO2 cycle are descibed in technical note WP13.1.3e of ESA contract Ref ESA 11369/95/NL/JG(SC) ”New Mars Global Climate Model: e) Improved CO2 cycle and seasonal pressure variations”, available online at 3.6 Tracer transport and sources • “Van-Leer” transport scheme used in the dynamical part ( tracvl and vlsplt in the dynamical part): Hourdin and Armengaud [1999] • Transport by turbulent diffusion (in vdifc), convection (in convadj), sedimentation ( sedim), dust lifting by winds ( dustlift) : see note “Preliminary design of dust lifting and transport in the Model” (ESA contract, Work Package 4, 1998, available on the web). • Dust transport by the “Mass mixing ratio / Number mixing ratio” method for grain size evolution: see article by Madeleine et al. [2011] 14 • Watercycle, see Montmessin et al. [2004] and technical note WP13.1.3c of ESA contract Ref ESA 11369/95/NL/JG(SC) ”New Mars Climate Model: c) Inclusion of cloud microphysics, dust scavenging and improvement of the water cycle”, available online at • Radiative effect of clouds: see technical note WP13.1.3b of ESA contract Ref ESA 11369/95/NL/JG(SC) ”New Mars Climate Model: b) Radiative effects of water ice clouds and impact on temperatures”, available online at • Chemistry, see Lefèvre et al. [2004] and Lefèvre et al. [2008] 3.7 Thermosphere • A general description of the model is given in González-Galindo et al. [2009] • Details on photochemistry and EUV radiative transfer can be found in Angelats i Coll et al. [2005] and González-Galindo et al. [2005] 15 Chapter 4 Running the model: a practice simulation This chapter is meant for first time users of the LMD model. 
As the best introduction to the model is surely to run a simulation, here we explain how to go about it. All you will need are files and scripts necessary to build the GCM (all are in the LMDZ.MARS directory which you will download as explained in the next sections) as well as some initial states to initiate simulations and, if not working on the LMD machines, some extra datafiles for external forcings (topography, dust scenarios, radiative properties of dust and water ice, etc.). Once you have followed the example given below, you can then go on to change the control parameters and the initial states as you wish. A more detailed description of the model’s organization as well as associated inputs and outputs are given in sections 5 and 6. 4.1 Obtaining the model The LMD model project is developped using subversion (svn), the free software versioning and a revision control system. To obtain (download) the latest version of the model, simply go to the directory where you want to install the model and use the relevant svn command: svn checkout which will output a LMDZ.MARS directory (the contents of this directory are described in chapter 5). If you are not on the LMD machines, you will also need to download the set of files available online at:˜forget/datagcm/datafile (preserve the file names and subdirectory structure). This directory contains input files (topography, dust scenarios, radiative properties of scatteres, etc.) which the GCM needs to run. Where you put your local datafile directory (or whatever name you choose for this directory) is not critical, as that location can be specified at runtime (see sections 4.5 and 6.2.2). To run the model, you will also need some initial condition files that can be downloaded from:˜forget/datagcm/Starts (see section 4.4). 
4.2 Installing the model - Set some environment variables needed for the compilation of the model (it is also pos- 16 sible to set the environment variables in the makegcm script, as explained below): LMDGCM : Path to the directory where you have put the model (full path). If using Csh: setenv LMDGCM /where/you/put/the/model/LMDZ.MARS If using Bash: export LMDGCM=/where/you/put/the/model/LMDZ.MARS LIBOGCM : Path to the directory (libo for example) where intermediate objects will be stored during the compilation of the model with the makegcm script (if that directory does not exist then makegcm will create it). If using Csh: setenv LIBOGCM /where/you/want/objects/to/go/libo If using Bash: export LIBOGCM=/where/you/want/objects/to/go/libo - Install NetCDF and set environment variables NCDFINC and NCDFLIB: The latest version of the NetCDF package is available on the web at the following address: along with instructions for building (or downloading precompiled binaries of) the library. Once the NetCDF library has been compiled (or downloaded), you should have access to the library libnetcdf.a itself, the various files (netcdf.inc, netcdf.mod, ...) to include in programs, and basic NetCDF software (ncdump and ncgen). To ensure that during compilation, the model can find the NetCDF library and include files, you must declare environment variables NCDFLIB and NCDFINC (again, it is also possible to set these environment variables in the makegcm script, as explained below). NCDFLIB must contain the path to the directory containing the object library libnetcdf.a and NCDFINC must contain the path to the directory containing the include files (netcdf.inc,...) 
If using Csh:
setenv NCDFINC /wherever/is/netcdf/include
setenv NCDFLIB /wherever/is/netcdf/lib
If using Bash:
export NCDFINC=/wherever/is/netcdf/include
export NCDFLIB=/wherever/is/netcdf/lib

- Install software which can load and display NetCDF files, such as GrAdS or Ferret. For people working at LMD, thanks to the excellent Laurent Fairhead, GrAdS and Ferret are installed and ready to go.

- Go to your LMDZ.MARS directory and adapt the makegcm script to fit your needs:

• Examples of makegcm scripts, adapted for different compilers (pgf90, g95, gfortran and ifort), are provided (files makegcm, makegcm_g95, makegcm_gfortran, makegcm_ifort); copy or rename the relevant one as makegcm in the same directory.

• As mentioned above, you may edit the script to hard-code the values of LMDGCM, LIBOGCM, NCDFINC and NCDFLIB instead of relying on the use of environment variables (see the commented-out examples in the scripts at lines 20-30). Note that since makegcm is a Csh script, Csh syntax must be used there.

- Finally, make sure that you have access to all the executables needed for building and using the model, and remember to set environment variables to the corresponding paths (note that if you do not want to have to redefine these every session, you should put the definitions in the corresponding .cshrc or .bashrc files):
- the UNIX make utility
- a Fortran compiler
- ncdump
- grads (or ferret)

4.3 Compiling the model

- Example 1: To compile the Martian model at grid resolution 64x48x25, for example, type (in compliance with the manual for the makegcm script given in section 5.4):

makegcm -d 64x48x25 -p mars gcm

You can find the executable gcm.e (the compiled model) in the directory where you ran the makegcm command.

- Example 2: Compiling the Martian model with 2 tracers (e.g. water vapour and water ice, to simulate the water cycle):

makegcm -d 64x48x25 -t 2 -p mars gcm

- Example 3: Compiling the Martian model with your choice of compiler options, e.g.
to check for array overflow (useful for debugging; warning: the model is then much slower!):

makegcm -d 64x48x25 -p mars -O "-C" gcm

Note that the makegcm script also has a "debug" option which includes a collection of adequate debugging options. To use it, simply add the -debug option:

makegcm -d 64x48x25 -p mars -debug gcm

4.4 Input files (initial states and def files)

- In directory LMDZ.MARS/deftank you will find some examples of the run parameter files (.def files) which the model needs at runtime. The four files the model requires (they must be in the same directory as the executable gcm.e) are: run.def (described in section 6.2.1), callphys.def (see section 6.2.2), traceur.def (see section 6.2.3) and z2sig.def (see section 6.2.4). The example .def files given in the deftank directory are for various configurations (e.g. model resolution); copy them (and if necessary rename them to match the generic names) to the directory where you will run the model.

- Copy initial condition files start.nc and startfi.nc (described in section 6.2) to the same directory. You can extract such files from start_archive "banks of initial states" (i.e. files which contain collections of initial states from standard scenarios and which can thus be used to check if the model is installed correctly) stored on the LMD website at ~forget/datagcm/Starts. See section 4.9 for a description of how to proceed to extract start files from start archives.

4.5 Running the model

Once you have the program gcm.e, input files start.nc and startfi.nc, and parameter files run.def, callphys.def, traceur.def and z2sig.def in the same directory, simply execute the program to run a simulation:

gcm.e

You might also want to keep all messages and diagnostics written to standard output (i.e. the screen). You should then redirect the standard output (and error) to some file, e.g. gcm.out:

If using Csh: gcm.e >!
gcm.out
If using Bash: gcm.e > gcm.out 2>&1

Note: if you are not running on the LMD machines, you will have to modify or add, in file callphys.def, the line:
datadir = /path/to/datafile
where /path/to/datafile is the full path to the directory which contains the set of files downloaded from: ~forget/datagcm/datafile

4.6 Visualizing the output files

As the model runs it generates output files diagfi.nc and stats.nc. The former contains instantaneous values of various fields and the latter statistics (over the whole run) of some variables.

[Figure 4.1: Input/output data -- diagram showing how Newstart builds start.nc and startfi.nc from start_archive.nc and surface.nc, how the GCM reads these together with run.def, callphys.def and z2sig.def, and how the restart.nc and restartfi.nc it writes (along with diagfi.nc and stats.nc) feed the next simulation.]

4.6.1 Using GrAds to visualize outputs

If you have never used the graphic software GrAdS, we strongly recommend spending half an hour to familiarize yourself with it by following the demonstration provided for that purpose. The demo is fast and easy to follow and you will learn the basic commands. To do this, read file /distrib/local/grads/sample

For example, to visualize files diagfi.nc and stats.nc: NetCDF files diagfi.nc and stats.nc can be accessed directly using GrAdS thanks to utility program gradsnc (the user does not need to intervene). To visualize the temperature in the 5th layer using file diagfi.nc, for example:

- GrAdS session:
grads (press return twice: opens a landscape window)
ga-> sdfopen diagfi.nc
ga-> query file (displays info about the open file, including the names of the stored variables. Shortcut: q file)
ga-> set z 5 (fixes the altitude to the 5th layer)
ga-> set t 1 (fixes the time to the first stored value)
ga-> query dims (indicates the fixed values for the 4 dimensions. Shortcut: q dims)
ga-> display temp (displays the temperature map for the 5th layer and for the first time value stored.
Shortcut: d T)
ga-> clear (clears the display. Shortcut: c)
ga-> set gxout shaded (not a contour plot, but a shaded one)
ga-> display temp
ga-> set gxout contour (returns to contour mode to display the levels)
ga-> display temp (superimposes the contours if the clear command is not used)

4.7 Resuming a simulation

At the end of a simulation, the model generates restart files (files restart.nc and restartfi.nc) which contain the final state of the model. As shown in figure 4.1, these files (which are of the same format as the start files) can later be used as initial states for a new simulation. The restart files just need to be renamed:
mv restart.nc start.nc
mv restartfi.nc startfi.nc
and running a simulation with these will in fact resume the simulation from where the previous run ended.

4.8 Chain simulations

In practice, we recommend running a chain of simulations lasting several days or longer (or hundreds of days at low resolution). To do this, a script named run0 is available in LMDZ.MARS/deftank, which should be used as follows:
• Set the length of each simulation in run.def (i.e. set the value of nday)
• Set the maximum number of simulations at the beginning of the run0 script (i.e. set the value of nummax)
• Copy start files start.nc and startfi.nc over and rename them start0.nc and startfi0.nc
• Run script run0

run0 runs a series of simulations that generate the indexed output files (e.g. start1, startfi1, diagfi1, etc.), including files lrun1, lrun2, etc. containing the redirected screen output and information about the run.

NOTE: to restart a series of simulations after a first series (for example, starting from start5 and startfi5), just write the index of the initial files (e.g. 5) in the file named num_run. If num_run exists, the model will start from the index written in num_run. If not, it will start from start0 and startfi0.
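The num_run mechanism can be sketched in a few lines of shell. This only mimics the indexing logic described above (the real run0 script in deftank does more: it runs gcm.e and renames all the output files):

```shell
# Sketch of run0's restart logic: read the chain index from num_run if the
# file exists, otherwise start the chain at 0.
rm -f num_run                 # start clean for this illustration
echo 5 > num_run              # pretend simulations 1..5 were already done

if [ -f num_run ]; then
  i=$(cat num_run)
else
  i=0
fi
echo "next simulation starts from start${i}.nc and startfi${i}.nc"
# prints: next simulation starts from start5.nc and startfi5.nc
```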
NOTE: a script is available for performing annual runs made of 12 seasons of 30° solar longitude each, as in the database (script run_mcd, also found in directory deftank). This script works together with script run0: just set the number of simulations to 1 in run0, then copy run.def into run.def.ref and set nday to 9999 in this file. To start from startN.nc, edit the file run_mcd, comment out (with a #) the N months already created and write N in num_run. Then run run_mcd.

4.9 Creating and modifying initial states

4.9.1 Using program "newstart"

Several model parameters (for example, the dust optical depth) are stored in the initial states (NetCDF files start.nc and startfi.nc). To change these parameters, or more generally to change the model resolution, use program newstart. This program is also used to create an initial state. In practice, we usually reuse an old initial state and modify it using newstart.

Like the GCM, program newstart must be compiled (using the makegcm script) at the required grid resolution. For example:
makegcm -d 64x48x25 -p mars newstart
Then run:
newstart.e

The program then gives you two options:

A partir de quoi souhaitez vous creer vos etats initiaux ?
0 - d un fichier start_archive
1 - d un fichier start

• Option "1" allows you to read and modify the information needed to create a new initial state from files start.nc and startfi.nc.
• Option "0" allows you to read and modify the information needed to create a new initial state from file start_archive.nc (whatever the start_archive.nc grid resolution is).

If you use tracers, make sure that they are taken into account in your start files (either start or start_archive). Then answer the various questions in the scroll menu. These questions allow you to modify the initial state for the following parameters.
First set of questions (modifications of variables in tab_cntrl):

day_ini : Jour initial (=0 a Ls=0)
z0 : surface roughness (m)
emin_turb : energie minimale
lmixmin : longueur de melange
emissiv : Emissivite du sol martien
emisice : Emissivite des calottes
albedice : Albedo des calottes
iceradius : mean scat radius of CO2 snow
dtemisice : time scale for snow metamorphism
tauvis : profondeur optique visible moyenne
obliquit : planet obliquity (deg)
peri_day : perihelion date (sol since Ls=0)
periheli : min. sun-mars dist (Mkm)
aphelie : max. sun-mars dist (Mkm)

Second set of questions:

flat : no topography ("aquaplanet")
bilball : albedo, inertie thermique uniforme
coldspole : sous-sol froid et haut albedo au pole sud
q=0 : traceurs a zero
ini_q : traceurs initialises pour la chimie
ini_q-H2O : idem, sauf le dernier traceur (H2O)
ini_q-iceH2O : idem, sauf ice et H2O
watercapn : H2O ice sur la calotte permanente nord
watercaps : H2O ice sur la calotte permanente sud
wetstart : start with a wet atmosphere
iqset : give a specific value to tracer iq
isotherm : temperatures isothermes et vents nuls
co2ice=0 : elimination des calottes polaires de CO2
ptot : pression totale

Program newstart.e creates files restart.nc and restartfi.nc that you generally need to rename (for instance, rename them start0.nc and startfi0.nc if you want to use run0 or run_mcd starting with season 0; rename them start.nc and startfi.nc if you just want to perform a single run with gcm.e).

4.9.2 Creating the initial start_archive.nc file

Archive file start_archive.nc is created from files start.nc and startfi.nc by program start2archive. Program start2archive must be compiled at the same grid resolution as the start.nc and startfi.nc files. For example:
makegcm -d 64x48x25 -p mars start2archive
Then run:
start2archive.e
You now have a start_archive.nc file for one season that you can use with newstart.
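The renaming step after newstart.e can be illustrated as follows; the touch commands create empty stand-ins for the real NetCDF files that newstart.e would write, so only the renaming logic is demonstrated:

```shell
# Illustration of renaming newstart.e output for use with run0/run_mcd.
# Empty dummy files stand in for the real restart.nc / restartfi.nc here.
touch restart.nc restartfi.nc

# For a chain of simulations starting at season 0:
mv restart.nc start0.nc
mv restartfi.nc startfi0.nc
ls start0.nc startfi0.nc
```

For a single gcm.e run, rename them start.nc and startfi.nc instead.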
If you want to gather other states obtained at other times of year, rerun start2archive.e with the start.nc and startfi.nc corresponding to these times. These additional initial states will automatically be added to the start_archive.nc file present in the directory.

4.9.3 Changing the horizontal or vertical grid resolution

To run at a different grid resolution than the available initial condition files, one needs to use the tools newstart and start2archive. For example, to create initial states at grid resolution 32×24×25 from NetCDF files start and startfi at grid resolution 64×48×32:

• Create file start_archive.nc with start2archive.e compiled at grid resolution 64×48×32, using the old file z2sig.def used previously.
• Create files newstart.nc and newstartfi.nc with newstart.e compiled at grid resolution 32×24×25, using the new file z2sig.def.

If you want to create start files with tracers for 50 layers using a start_archive.nc obtained for 32 layers, do not forget to use the ini_q option in newstart in order to correctly initialize tracer values for layers 33 to 50. You just have to answer yes to the question on thermosphere initialization if you want to initialize the thermosphere part only (l=33 to l=50), and no if you want to initialize tracers for all layers (l=0 to l=50).

Chapter 5 Program organization and compilation script

All the elements of the LMD model are in the LMDZ.MARS directory (and subdirectories). As explained in section 4, this directory may be associated with environment variable LMDGCM:
If using Csh: setenv LMDGCM /where/you/put/the/model/LMDZ.MARS
If using Bash: export LMDGCM=/where/you/put/the/model/LMDZ.MARS
An alternative to using environment variables is to set the LMDGCM variable in the makegcm script.

Here is a brief description of the LMDZ.MARS directory contents:

libf/ All the model FORTRAN sources (.F or .F90) and include files (.h), organised in sub-directories (physics (phymars), dynamics (dyn3d), filters (filtrez), ...)
deftank/ A collection of examples of parameter files required to run the GCM (run.def, callphys.def, ...)
util/ A set of programs useful for post-processing GCM outputs.
makegcm Script that should be used to compile the GCM as well as related utilities (newstart, start2archive, testphys1d).
create_make_gcm Executable used to create the makefile. This command is run automatically by makegcm (see below).

5.1 Organization of the model source files

The model source files are stored in various sub-directories of directory libf. These subdirectories correspond to the different parts of the model:

grid: mainly made up of the "dimensions.h" file, which contains the parameters that define the model grid, i.e. the number of points in longitude (IIM), latitude (JJM) and altitude (LLM), as well as the number of tracers (NQMX).
dyn3d: contains the dynamical subroutines.
bibio: contains some generic subroutines not specifically related to physics or dynamics but used by either or both.
phymars: contains the Martian physics routines.
filtrez: contains the longitudinal filter sources applied in the upper latitudes, where the Courant-Friedrichs-Lewy stability criterion is violated.
aeronomars: contains the Martian chemistry and thermosphere routines.

5.2 Programming

The model is written in Fortran 77 and Fortran 90.

• The program sources are written in "file.F" or "file.F90" files. The extension .F is the standard extension for fixed-form Fortran and the extension .F90 is for free-form Fortran. These files must be preprocessed (by a C preprocessor such as cpp) before compilation (this behaviour is, for most compilers, implicitly obtained by using a capital F in the extension of the file names).
• Constants are placed in COMMON declarations, located in the common "include" files ("file.h").
• In general, variables are passed from subroutine to subroutine as arguments (and never as COMMON blocks).
• In some parts of the code, for "historical" reasons, the following rule is sometimes used: in a subroutine, the variables (e.g. name) passed as arguments by the calling program are given the prefix p (e.g. pname), while the local variables are given the prefix z (e.g. zname). As a result, several variables change their prefix (and thus their name) when passing from a calling subroutine to a called subroutine.

5.3 Model organization

Figure 5.1 describes the main subroutines called by physiq.F.

5.4 Compiling the model

Technically, the model is compiled using the Unix utility make. The file makefile, which describes the code dependencies and requirements, is created automatically by the script create_make_gcm. This utility script recreates the makefile when necessary, for example when a source file has been added or removed since the last compilation. None of this is visible to the user. To compile the model, just run the command

makegcm

[Figure 5.1: Organization chart of subroutine physiq.F]

with adequate options (e.g. makegcm -d 64x48x32 -p mars gcm), as discussed below and described in section 4.3. The makegcm command compiles the model (gcm) and related utilities (newstart, start2archive, testphys1d). A detailed description of how to use it and of the various parameters that can be supplied is given in the help manual below (which will also be given by the makegcm -h command).

Note that before compiling the GCM with makegcm you should have set the environment variable LIBOGCM to a path where intermediate objects and libraries will be generated.
If using Csh: setenv LIBOGCM /where/you/want/objects/to/go/libo
If using Bash: export LIBOGCM=/where/you/want/objects/to/go/libo

Help manual for the makegcm script

makegcm [Options] prog

The makegcm script:
-------------------
1. compiles a series of subroutines located in the $LMDGCM/libf sub-directories. The objects are then stored in the libraries in $LIBOGCM.
2.
then, makegcm compiles program prog.f located by default in $LMDGCM/libf/dyn3d and makes the link with the libraries.

Variables $LMDGCM and $LIBOGCM must be set as environment variables or directly in the makegcm file.

The makegcm command is used to control different versions of the model in parallel, compiled using various compilation options and dimensions, without having to recompile the whole model. The FORTRAN libraries are stored in directory $LIBOGCM.

OPTIONS:
--------
The following options can either be defined by default by editing the makegcm script, or in interactive mode:

-d imxjmxlm  where im, jm and lm are the number of longitudes, latitudes and vertical layers respectively.

-t ntrac     selects the number of tracers present in the model.

Options -d and -t overwrite file
$LMDGCM/libf/grid/dimensions.h
which contains the 3 dimensions of the horizontal grid (im, jm, lm) plus the number of tracers passively advected by the dynamics (ntrac), in 4 FORTRAN PARAMETER statements, with a new file:
$LMDGCM/libf/grid/dimension/dimensions.im.jm.lm.tntrac
If the file does not already exist, it is created by the script
$LMDGCM/libf/grid/dimension/makdim

-s nscat     number of radiatively active scatterers.

-p PHYS      selects the set of physical parameterizations you want to compile the model with. The model is then compiled using the physical parameterization sources in directory $LMDGCM/libf/phyPHYS.

-g grille    selects the grid type. This option overwrites file $LMDGCM/libf/grid/fxyprim.h with file $LMDGCM/libf/grid/fxy_grille.h. The grid can take the following values:
   1. reg - the regular grid
   2. sin - to obtain equidistant points in terms of sin(latitude)
   3. new - to zoom into a part of the globe

-O "compilation options"  set of Fortran compilation options to use.

-include path  used if the subroutines contain #include files (cpp) that are located in directories that are not referenced by default.

-adjnt       compiles the adjoint model of the dynamical code.
-olddyn      to compile the GCM with the "old dynamics".

-filtre filter  to select the longitudinal filter in the polar regions. "filter" corresponds to the name of a directory located in $LMDGCM/libf. The standard filter for the model is "filtrez", which can be used for a regular grid and for a grid with longitudinal zoom.

-link "-Ldir1 -lfile1 -Ldir2 -lfile2 ..."  adds a link to FORTRAN libraries libfile1.a, libfile2.a, ... located in directories dir1, dir2, ... respectively. If dirn is a directory with an automatic path (/usr/lib, for example) there is no need to specify -Ldirn.

Chapter 6 Input/Output

6.1 NetCDF format

GCM input/output data are written in NetCDF format (Network Common Data Form). NetCDF is an interface used to store and access geophysical data, and a library that provides an implementation of this interface. The NetCDF library also defines a machine-independent format for representing scientific data. Together, the interface, library and format support the creation, access and sharing of scientific data. NetCDF was developed at the Unidata Program Center in Boulder, Colorado. The freely available source can be obtained from the Unidata website. A data set in NetCDF format is a single file, as it is self-descriptive.

6.1.1 NetCDF file editor: ncdump

The ncdump editor is included in the NetCDF library. By default it generates an ASCII representation, on standard output, of the NetCDF file specified as input.

Main commands for ncdump:

ncdump diagfi.nc
Dumps the contents of NetCDF file diagfi.nc to standard output (i.e. the screen).

ncdump -c diagfi.nc
Displays the coordinate variable values (variables which are also dimensions), as well as the declarations, variables and attribute values. The values of the non-coordinate variables are not displayed at the output.

ncdump -h diagfi.nc
Shows only the informative header of the file, which is the declaration of the dimensions, variables and attributes, but not the values of these variables.
The output is identical to that of option -c except that the coordinate variable values are not included.

ncdump -v var1,...,varn diagfi.nc
The output includes the values of the specified variables, as well as all the dimensions, variables and attributes. More than one variable can be specified in the list following this option. The list must be a single argument for the command, and must not contain any spaces. If no variable is specified, the command displays all the values of the variables in the file by default.

[Figure 6.1: Example of temperature data at a given time using GrADS visualization]

6.1.2 Graphic visualization of the NetCDF files using GrAds

GrAdS (the Grid Analysis and Display System) is a graphic software package developed by Brian Doty at the "Center for Ocean-Land-Atmosphere (COLA)". One of its functions is to enable data stored in NetCDF format to be visualized directly. In figure 6.1, for example, we can see the GrADS visualization of temperature data at a given moment. However, unlike NetCDF, GrADS only recognizes files where all the variables are stored on the same horizontal grid. These variables can be in 1, 2, 3 or 4 dimensions (X, Y, Z and t). GrADS can also be obtained on the WWW.

6.2 Input and parameter files

The (3D version of the) GCM requires the input of two initialization files (in NetCDF format):
- start.nc contains the initial states of the dynamical variables.
- startfi.nc contains the initial states of the physical variables.
Note that collections of initial states can be retrieved at: ~forget/datagcm/Starts
Extracting start.nc and startfi.nc from these archives requires using program newstart, as described in section 4.9.

To run, the GCM also requires the four following parameter files (ASCII text files):
- run.def contains the parameters of the dynamical part of the program, and of the temporal integration of the model.
- callphys.def contains the parameters for calling the physical part.
- traceur.def contains the names of the tracers to use.
- z2sig.def contains the vertical distribution of the atmospheric layers.
Examples of these parameter files can be found in the LMDZ.MARS/deftank directory.

6.2.1 run.def

A typical run.def file is given as an example below. Few variables need to be set by the user (e.g. nday, the number of modeled days to run); the others do not need to be changed for normal use. The format of the run.def file is quite straightforward (and flexible): values are given to parameters as:
parameter = value
Any blank line or line beginning with symbol # is a comment, and instruction lines may be written in any order. Moreover, not specifying a parameter/value set (e.g. deleting it or commenting it out) means you want the GCM to use a default built-in value. Additionally, one may use the specific keyword INCLUDEDEF to specify another (text) file in which to also read values of parameters, e.g.:
INCLUDEDEF=callphys.def

Here are some details about some of the parameters which may be set in run.def:

• day_step, the number of dynamical steps per day to use for the time integration. This needs to be large enough for the model to remain stable (this is related to the CFL stability criterion, which essentially depends on the horizontal resolution of the model). On Mars, in theory, the GCM can run with day_step=480 using the 64×48 grid, but model stability improves when this number is higher: day_step=960 is recommended when using the 64×48 grid. According to the CFL criterion, day_step should vary in proportion to the resolution: for example day_step=480 using the 32×24 horizontal resolution. Note that day_step must also be divisible by iperiod.

• tetagdiv, tetagrot, tetatemp control the dissipation intensity. It is better to limit the dissipation intensity (tetagdiv, tetagrot, tetatemp should not be too low). However, the model diverges if tetagdiv, tetagrot, tetatemp are too high, especially if there is a lot of dust in the atmosphere.
Example values used with nitergdiv=1 and nitergrot=niterh=2:
- using the 32×24 grid: tetagdiv=6000 s; tetagrot=tetatemp=30000 s
- using the 64×48 grid: tetagdiv=3000 s; tetagrot=tetatemp=9000 s

• idissip is the time step used for the dissipation: dissipation is computed and added every idissip dynamical time steps. If idissip is too short, the model wastes time in these calculations. But if idissip is too long, the dissipation will not be parametrized correctly and the model will be more likely to diverge. A check must be made so that: idissip < tetagdiv×day_step/88775 (same rule for tetagrot and tetatemp). This is tested automatically during the run.

• iphysiq is the time step used for the physics: physical tendencies are computed every iphysiq dynamical time steps. In practice, we usually set the physical time step to be of the order of half an hour. We thus generally set iphysiq = day_step/48.

Example of run.def file:

#
#-----------------------------------------------------------------------
# Parametres de controle du run:
#------------------------------
# Nombre de jours d'integration
nday=9999
# nombre de pas par jour (multiple de iperiod) (ici pour 32 dt = 1 min)
day_step = 480
# periode pour le pas Matsuno (en pas)
iperiod=5
# periode de sortie des variables de controle (en pas)
iconser=120
# periode d'ecriture du fichier histoire (en jour)
iecri=100
# periode de stockage fichier histmoy (en jour)
periodav=60.
# periode de la dissipation (en pas)
idissip=1
# choix de l'operateur de dissipation (star ou non star)
lstardis=.true.
# avec ou sans coordonnee hybrides
hybrid=.true.
# nombre d'iterations de l'operateur de dissipation gradiv
nitergdiv=1
# nombre d'iterations de l'operateur de dissipation nxgradrot
nitergrot=2
# nombre d'iterations de l'operateur de dissipation divgrad
niterh=2
# temps de dissipation des plus petites long.d ondes pour u,v (gradiv)
tetagdiv= 3000.
# temps de dissipation des plus petites long.d ondes pour u,v (nxgradrot)
tetagrot=9000.
# temps de dissipation des plus petites long.d ondes pour h (divgrad)
tetatemp=9000.
# coefficient pour gamdissip
coefdis=0.
# choix du shema d'integration temporelle (Matsuno ou Matsuno-leapfrog)
purmats=.false.
# avec ou sans physique
physic=.true.
# periode de la physique (en pas)
iphysiq=10
# choix d'une grille reguliere
grireg=.true.
# frequence (en pas) de l'ecriture du fichier diagfi
ecritphy=120
# longitude en degres du centre du zoom
clon=63.
# latitude en degres du centre du zoom
clat=0.
# facteur de grossissement du zoom, selon longitude
grossismx=1.
# facteur de grossissement du zoom, selon latitude
grossismy=1.
# Fonction f(y) hyperbolique si = .true., sinon sinusoidale
fxyhypb=.false.
# extension en longitude de la zone du zoom (fraction de la zone totale)
dzoomx= 0.
# extension en latitude de la zone du zoom (fraction de la zone totale)
dzoomy=0.
# raideur du zoom en X
taux=2.
# raideur du zoom en Y
tauy=2.
# Fonction f(y) avec y = Sin(latit.) si = .TRUE., sinon y = latit.
ysinus= .false.
# Avec sponge layer
callsponge = .true.
# Sponge: mode0(u=v=0), mode1(u=umoy,v=0), mode2(u=umoy,v=vmoy)
mode_sponge= 2
# Sponge: hauteur de sponge (km)
hsponge= 90
# Sponge: tetasponge (secondes)
tetasponge = 50000
# some definitions for the physics, in file 'callphys.def'
INCLUDEDEF=callphys.def

6.2.2 callphys.def

The callphys.def file (in the same format as the run.def file) contains parameter/value sets for the physics.

Example of callphys.def file:

## General options
## ~~~~~~~~~~~~~~~
# Run with or without tracer transport ?
tracer=.false.
# Diurnal cycle ? if diurnal=False, diurnal averaged solar heating
diurnal=.true.
# Seasonal cycle ? if season=False, Ls stays constant, to value set in "start"
season = .true.
# write some more output on the screen ?
lwrite = .false.
# Save statistics in file "stats.nc" ?
stats =.true.
# Save EOF profiles in file "profiles" for Climate Database ?
calleofdump = .false.

## Dust scenario.
## Used if the dust is prescribed (i.e. if tracer=F or active=F)
## ~~~~~~~~~~~~~
# =1 Dust opt.deph read in startfi; =2 Viking scenario; =3 MGS scenario,
# =4 Mars Year 24 from TES assimilation (same as =24 for now)
# =24 Mars Year 24 from TES assimilation (ie: MCD reference case)
# =25 Mars Year 25 from TES assimilation (ie: a year with a global dust storm)
# =26 Mars Year 26 from TES assimilation
iaervar = 24
# Dust vertical distribution:
# (=0: old distrib. (Pollack90), =1: top set by "topdustref",
#  =2: Viking scenario; =3 MGS scenario)
iddist = 3
# Dust top altitude (km). (Matters only if iddist=1)
topdustref = 55.

## Physical Parameterizations:
## ~~~~~~~~~~~~~~~~~~~~~~~~~~
# call radiative transfer ?
callrad = .true.
# call NLTE radiative schemes ? matters only if callrad=T
callnlte = .true.
# call CO2 NIR absorption ? matters only if callrad=T
callnirco2 = .true.
# call turbulent vertical diffusion ?
calldifv = .true.
# call convective adjustment ?
calladj = .true.
# call CO2 condensation ?
callcond =.true.
# call thermal conduction in the soil ?
callsoil = .true.
# call Lott's gravity wave/subgrid topography scheme ?
calllott = .true.

## Radiative transfer options:
## ~~~~~~~~~~~~~~~~~~~~~~~~~~
# the rad.transfer is computed every "iradia" physical timestep
iradia = 1
# Output of the exchange coefficient matrix ? for diagnostic only
callg2d = .false.
# Rayleigh scattering: (should be .false. for now)
rayleigh = .false.
# Radiatively active dust ? (matters only if tracer=T)
active = .false.
# WATERICE: Radiatively active transported atmospheric water ice ?
activice = .false.
# WATER: Compute water cycle
water = .false.
# WATER: current permanent caps at both poles. True IS RECOMMENDED
# (with .true., North cap is a source of water and South pole
#  is a cold trap)
caps = .true.
# PHOTOCHEMISTRY: include chemical species
photochem = .false.

## Thermospheric options (relevant if tracer=T):
## ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# call thermosphere ?
callthermos = .false.
# WATER: included without cycle (only if water=.false.)
thermoswater = .false.
# call thermal conduction ? (only if callthermos=.true.)
callconduct = .false.
# call EUV heating ? (only if callthermos=.true.)
calleuv=.false.
# call molecular viscosity ? (only if callthermos=.true.)
callmolvis = .false.
# call molecular diffusion ? (only if callthermos=.true.)
callmoldiff = .false.
# call thermospheric photochemistry ? (only if callthermos=.true.)
thermochem = .false.
# date for solar flux calculation: (1985 < date < 2002)
# (Solar min=1996.4 ave=1993.4 max=1990.6)
solarcondate = 1993.4

6.2.3 traceur.def

Tracers in input (start.nc and startfi.nc) and output files (restart.nc and restartfi.nc) are stored using individual tracer names (e.g. co2 for CO2 gas, h2o_vap for water vapour, h2o_ice for water ice, ...). The first line of the traceur.def file (an ASCII file) must contain the number of tracers to load and use (this number should be the same as given to the -t option of the makegcm script when the GCM was compiled), followed by the tracer names (one per line). Note that if a corresponding tracer is not found in input files start.nc and startfi.nc, then that tracer is initialized to zero.

Example of a traceur.def file (with water vapour and ice tracers):

2
h2o_ice
h2o_vap

6.2.4 z2sig.def

The z2sig.def file contains the pseudo-altitudes (in km) at which the user wants to set the vertical levels. Note that levels should be unevenly spread, with a higher resolution near the surface in order to capture the rapid variations of variables there. It is recommended to use the altitude levels as set in the z2sig.def file provided in the deftank directory.

Example of z2sig.def file (this version for 50 layers between 0 and 400 km):

10.00000     H: atmospheric scale height (km) (used as a reference only)
0.0040       typical pseudo-altitude (km) for the 1st layer (z=H*log(sigma))
0.018        ,, ,, 2nd layer, etc...
0.0400
0.1000
0.228200
0.460400
0.907000
1.73630
3.19040
5.54010
8.97780
13.5138
18.9666
25.0626
31.5527
38.4369
45.4369
52.4369
59.4369
66.4369
73.4369
80.4369
87.4369
94.4369
101.4369
108.437
115.437
122.437
129.437
136.437
143.437
150.437
157.437
164.437
171.437
178.437
185.437
192.437
199.437
206.437
213.437
220.437
227.437
234.437
241.437
248.437
255.437
262.437
269.437
276.437
283.437
290.437
297.437
304.437
311.437
318.437
325.437
332.437
339.437
346.437
353.437
360.437
367.437
374.437
381.437
388.437
395.437

6.2.5 Initialization files: start and startfi

[Figure 6.2: Organization of NetCDF files — side-by-side diagram of the dynamical file (e.g. start) and the physics file (e.g. startfi). Each holds: a header (a "controle" array of control parameters followed by grid information), surface conditions (phisinit for the dynamics; phisfi, albedodat, zmea, ... for the physics), a "time" variable giving the instants at which the temporal variables are stored, and the time-dependent state variables themselves (ucov, vcov, h, ... for the dynamics; co2ice, tsurf, tsoil, ... for the physics, at t=1, 2, 3, ...).]

Files start.nc and startfi.nc, like all the NetCDF files of the GCM, follow the same layout (see NetCDF file composition, figure 6.2). They contain:
- a header with a "controle" variable followed by a series of variables defining the (physical and dynamical) grids
- a series of non-temporal variables that give information about surface conditions on the planet
- a "time" variable giving the values of the different instants at which the temporal variables are stored (a single time value (t=0) for start, as it describes the dynamical initial state, and no time values for startfi, as it describes only a physical state).
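The two vertical coordinates involved in these files are easy to relate numerically. The sketch below (plain Python, not GCM source; function names are illustrative) uses the z2sig.def relation z = -H*log(sigma) and the standard LMDZ hybrid convention in which the ap/bp coefficients stored in start.nc give interlayer pressures as p(k) = ap(k) + bp(k)*ps:

```python
import math

H = 10.0  # atmospheric scale height (km), first line of z2sig.def


def sigma_from_pseudoalt(z_km):
    """Sigma level corresponding to a z2sig.def pseudo-altitude: z = -H*log(sigma)."""
    return math.exp(-z_km / H)


def interlayer_pressure(ap, bp, ps):
    """Hybrid-coordinate pressure at interlayers: p(k) = ap(k) + bp(k)*ps."""
    return [a + b * ps for a, b in zip(ap, bp)]


# First pseudo-altitudes of the example z2sig.def file (km)
sigma = [sigma_from_pseudoalt(z) for z in (0.0040, 0.018, 0.0400)]

# A purely sigma-like hybrid set (ap = 0 everywhere) reduces to p = sigma*ps
p = interlayer_pressure([0.0, 0.0, 0.0], sigma, 610.0)
```

In a genuine hybrid set, ap dominates near the model top, so the upper levels become pure pressure levels while the lowest levels follow the terrain through ps.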
To visualize the contents of a start.nc file, use the ncdump command:

ncdump -h start.nc
netcdf start {
dimensions:
        index = 100 ;
        rlonu = 33 ;
        latitude = 25 ;
        longitude = 33 ;
        rlatv = 24 ;
        altitude = 18 ;
        interlayer = 19 ;
        Time = UNLIMITED ; // (1 currently)
variables:
        float controle(index) ;
                controle:title = "Parametres de controle" ;
        float rlonu(rlonu) ;
                rlonu:title = "Longitudes des points U" ;
        float rlatu(latitude) ;
                rlatu:title = "Latitudes des points U" ;
        float rlonv(longitude) ;
                rlonv:title = "Longitudes des points V" ;
        float rlatv(rlatv) ;
                rlatv:title = "Latitudes des points V" ;
        float ap(interlayer) ;
                ap:title = "Coef A: hybrid pressure levels" ;
        float bp(interlayer) ;
                bp:title = "Coef B: hybrid sigma levels" ;
        float aps(altitude) ;
                aps:title = "Coef AS: hybrid pressure at midlayers" ;
        float bps(altitude) ;
                bps:title = "Coef BS: hybrid sigma at midlayers" ;
        float presnivs(altitude) ;
        float latitude(latitude) ;
                latitude:units = "degrees_north" ;
                latitude:long_name = "North latitude" ;
        float longitude(longitude) ;
                longitude:long_name = "East longitude" ;
                longitude:units = "degrees_east" ;
        float altitude(altitude) ;
                altitude:long_name = "pseudo-alt" ;
                altitude:units = "km" ;
                altitude:positive = "up" ;
        float cu(latitude, rlonu) ;
                cu:title = "Coefficient de passage pour U" ;
        float cv(rlatv, longitude) ;
                cv:title = "Coefficient de passage pour V" ;
        float aire(latitude, longitude) ;
                aire:title = "Aires de chaque maille" ;
        float phisinit(latitude, longitude) ;
                phisinit:title = "Geopotentiel au sol" ;
        float Time(Time) ;
                Time:title = "Temps de simulation" ;
                Time:units = "days since 1-01-01 00:00:00" ;
        float ucov(Time, altitude, latitude, rlonu) ;
                ucov:title = "Vitesse U" ;
        float vcov(Time, altitude, rlatv, longitude) ;
                vcov:title = "Vitesse V" ;
        float teta(Time, altitude, latitude, longitude) ;
                teta:title = "Temperature" ;
        float h2o_ice(Time, altitude, latitude, longitude) ;
                h2o_ice:title = "Traceur h2o_ice" ;
        float h2o_vap(Time, altitude, latitude, longitude) ;
                h2o_vap:title = "Traceur h2o_vap" ;
        float masse(Time, altitude, latitude, longitude) ;
                masse:title = "C est quoi ?" ;
        float ps(Time, latitude, longitude) ;
                ps:title = "Pression au sol" ;

// global attributes:
                :title = "Dynamic start file" ;
}

List of contents of a startfi.nc file:

ncdump -h startfi.nc
netcdf startfi {
dimensions:
        index = 100 ;
        physical_points = 738 ;
        subsurface_layers = 18 ;
        nlayer_plus_1 = 19 ;
        number_of_advected_fields = 3 ;
variables:
        float controle(index) ;
                controle:title = "Control parameters" ;
        float soildepth(subsurface_layers) ;
                soildepth:title = "Soil mid-layer depth" ;
        float longitude(physical_points) ;
                longitude:title = "Longitudes of physics grid" ;
        float latitude(physical_points) ;
                latitude:title = "Latitudes of physics grid" ;
        float area(physical_points) ;
                area:title = "Mesh area" ;
        float phisfi(physical_points) ;
                phisfi:title = "Geopotential at the surface" ;
        float albedodat(physical_points) ;
                albedodat:title = "Albedo of bare ground" ;
        float ZMEA(physical_points) ;
                ZMEA:title = "Relief: mean relief" ;
        float ZSTD(physical_points) ;
                ZSTD:title = "Relief: standard deviation" ;
        float ZSIG(physical_points) ;
                ZSIG:title = "Relief: sigma parameter" ;
        float ZGAM(physical_points) ;
                ZGAM:title = "Relief: gamma parameter" ;
        float ZTHE(physical_points) ;
                ZTHE:title = "Relief: theta parameter" ;
        float co2ice(physical_points) ;
                co2ice:title = "CO2 ice cover" ;
        float inertiedat(subsurface_layers, physical_points) ;
                inertiedat:title = "Soil thermal inertia" ;
        float tsurf(physical_points) ;
                tsurf:title = "Surface temperature" ;
        float tsoil(subsurface_layers, physical_points) ;
                tsoil:title = "Soil temperature" ;
        float emis(physical_points) ;
                emis:title = "Surface emissivity" ;
        float q2(nlayer_plus_1, physical_points) ;
                q2:title = "pbl wind variance" ;
        float h2o_ice(physical_points) ;
                h2o_ice:title = "tracer on surface" ;

// global attributes:
                :title = "Physics start file" ;
}

Physical and dynamical headers

There are two types of headers: one for
the physical headers, and one for the dynamical headers. The headers always begin with a "controle" variable (described below), which is allocated differently in the physical and dynamical parts. The other variables in the header describe the (physical and dynamical) grids. They are the following:
the horizontal coordinates
- rlonu, rlatu, rlonv, rlatv for the dynamical part,
- lati, long for the physical part,
the coefficients for passing from the physical grid to the dynamical grid
- cu, cv, only in the dynamical header
and finally, the grid box areas
- aire for the dynamical part,
- area for the physical part.

Surface conditions

The surface conditions are mostly given in the physical NetCDF files by variables:
- phisfi for the initial state of surface geopotential,
- albedodat for the bare ground albedo,
- inertiedat for the surface thermal inertia,
- zmea, zstd, zsig, zgam and zthe for the subgrid-scale topography.
For the dynamics:
- phisinit for the initial state of surface geopotential
Remark: variables phisfi and phisinit contain the same information (surface geopotential), but phisfi gives the geopotential values on the physical grid, while phisinit gives the values on the dynamical grid.

Physical and dynamical state variables

To save disk space, the initialization files store the variables used by the model, rather than the "natural" variables.
For the dynamics:
- ucov and vcov, the covariant winds. These variables are linked to the "natural" winds by ucov = cu * u and vcov = cv * v
- teta, the potential temperature (or, more precisely, the potential enthalpy), linked to temperature T by θ = T (P/Pref)^(-κ)
- the tracers,
- ps, the surface pressure,
- masse, the atmospheric mass in each grid box.
"Vectorial" variables ucov and vcov are stored on the "staggered" grids u and v respectively (in the dynamics) (see section 2.2). Scalar variables (teta, the tracers q, ps, masse) are stored on the "scalar" grid of the dynamical part.
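When post-processing a start file, these stored variables are easy to convert back to "natural" quantities. A minimal sketch (plain Python, illustrative names; κ is taken from the rcp value quoted in the controle array below, ~0.2568 for Mars):

```python
def natural_winds(ucov, vcov, cu, cv):
    """Recover natural winds from covariant winds: u = ucov/cu, v = vcov/cv."""
    return ucov / cu, vcov / cv


def temperature_from_teta(teta, p, pref=610.0, kappa=0.2568):
    """Invert theta = T*(p/pref)**(-kappa) to get the temperature T."""
    return teta * (p / pref) ** kappa


# At p = pref the potential temperature equals the temperature
t0 = temperature_from_teta(200.0, 610.0)
# Higher up (lower p), T is lower than theta
t1 = temperature_from_teta(200.0, 61.0)
```

The same inversion, applied level by level with the hybrid pressures, is what a typical diagnostic script does before plotting temperature fields from start.nc.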
For the physics:
- co2ice, the surface dry ice,
- tsurf, the surface temperature,
- tsoil, the temperatures at different layers under the surface,
- emis, the surface emissivity,
- q2, the wind variance, or more precisely, the square root of the turbulent kinetic energy,
- the surface "tracer" budget (kg.m-2).
All these variables are stored on the "physical" grid (see section 2.2).

The "controle" array

Both physical and dynamical headers of the GCM NetCDF files start with a controle variable. This variable is an array of 100 reals (the vector called tab_cntrl in the program), which contains the program control parameters. The parameters differ between the physical and dynamical sections, and examples of both are listed below. The contents of table tab_cntrl can also be checked with the command ncdump -ff -v controle.

The "controle" array in the header of a dynamical NetCDF file (start.nc):

tab_cntrl(1) = FLOAT(iim)     ! number of nodes along longitude
tab_cntrl(2) = FLOAT(jjm)     ! number of nodes along latitude
tab_cntrl(3) = FLOAT(llm)     ! number of atmospheric layers
tab_cntrl(4) = FLOAT(idayref) ! initial day
tab_cntrl(5) = rad            ! radius of the planet
tab_cntrl(6) = omeg           ! rotation of the planet (rad/s)
tab_cntrl(7) = g              ! gravity (m/s2) ˜3.72 for Mars
tab_cntrl(8) = cpp
tab_cntrl(9) = kappa          ! = r/cp
tab_cntrl(10) = daysec        ! length of a sol (s) ˜88775
tab_cntrl(11) = dtvr          ! dynamical time step (s)
tab_cntrl(12) = etot0         ! total energy
tab_cntrl(13) = ptot0         ! total pressure
tab_cntrl(14) = ztot0         ! total enstrophy
tab_cntrl(15) = stot0         ! total enthalpy
tab_cntrl(16) = ang0          ! total angular momentum
tab_cntrl(17) = pa
tab_cntrl(18) = preff         ! reference pressure (Pa)
tab_cntrl(19) = clon          ! longitude of center of zoom
tab_cntrl(20) = clat          ! latitude of center of zoom
tab_cntrl(21) = grossismx     ! zooming factor, along longitude
tab_cntrl(22) = grossismy     ! zooming factor, along latitude
tab_cntrl(24) = dzoomx        ! extension (in longitude) of zoom
tab_cntrl(25) = dzoomy        !
extension (in latitude) of zoom
tab_cntrl(27) = taux          ! stiffness factor of zoom in longitude
tab_cntrl(28) = tauy          ! stiffness factor of zoom in latitude

The "controle" array in the header of a physical NetCDF file (startfi.nc):

c Informations on the physics grid
tab_cntrl(1) = float(ngridmx)      ! number of nodes on physics grid
tab_cntrl(2) = float(nlayermx)     ! number of atmospheric layers
tab_cntrl(3) = day_ini + int(time) ! initial day
tab_cntrl(4) = time - int(time)    ! initial time of day
c Informations about Mars, used by dynamics and physics
tab_cntrl(5) = rad            ! radius of Mars (m) ˜3397200
tab_cntrl(6) = omeg           ! rotation rate (rad.s-1)
tab_cntrl(7) = g              ! gravity (m.s-2) ˜3.72
tab_cntrl(8) = mugaz          ! molar mass of the atmosphere (g.mol-1) ˜43.49
tab_cntrl(9) = rcp            ! = r/cp ˜0.256793 (=kappa in the dynamics)
tab_cntrl(10) = daysec        ! length of a sol (s) ˜88775
tab_cntrl(11) = phystep       ! time step in the physics
tab_cntrl(12) = 0.
tab_cntrl(13) = 0.
c Informations about Mars, only for physics
tab_cntrl(14) = year_day      ! length of year (sols) ˜668.6
tab_cntrl(15) = periheli      ! min. Sun-Mars distance (Mkm) ˜206.66
tab_cntrl(16) = aphelie       ! max. Sun-Mars distance (Mkm) ˜249.22
tab_cntrl(17) = peri_day      ! date of perihelion (sols since N. spring)
tab_cntrl(18) = obliquit      ! obliquity of the planet (deg) ˜23.98
c Boundary layer and turbulence
tab_cntrl(19) = z0            ! surface roughness (m) ˜0.01
tab_cntrl(20) = lmixmin       ! mixing length ˜100
tab_cntrl(21) = emin_turb     ! minimal energy ˜1.e-8
c Optical properties of polar caps and ground emissivity
tab_cntrl(22) = albedice(1)   ! albedo of northern cap ˜0.5
tab_cntrl(23) = albedice(2)   ! albedo of southern cap ˜0.5
tab_cntrl(24) = emisice(1)    ! emissivity of northern cap ˜0.95
tab_cntrl(25) = emisice(2)    ! emissivity of southern cap ˜0.95
tab_cntrl(26) = emissiv       ! emissivity of martian soil ˜.95
tab_cntrl(31) = iceradius(1)  ! mean scat radius of CO2 snow (north)
tab_cntrl(32) = iceradius(2)  ! mean scat radius of CO2 snow (south)
tab_cntrl(33) = dtemisice(1)  ! time scale for snow metamorphism (north)
tab_cntrl(34) = dtemisice(2)  ! time scale for snow metamorphism (south)
c dust aerosol properties
tab_cntrl(27) = tauvis        ! mean visible optical depth
tab_cntrl(28) = 0.
tab_cntrl(29) = 0.
tab_cntrl(30) = 0.
c Soil properties:
tab_cntrl(35) = volcapa       ! soil volumetric heat capacity

6.3 Output files

6.3.1 NetCDF restart files - restart.nc and restartfi.nc

These files have exactly the same format as start.nc and startfi.nc.

6.3.2 NetCDF file - diagfi.nc

NetCDF file diagfi.nc stores the instantaneous physical variables throughout the simulation at regular intervals (set by the value of parameter ecritphy in parameter file run.def; note that ecritphy should be a multiple of iphysiq as well as a divisor of day_step). Any variable from any subroutine of the physics can be stored by calling subroutine writediagfi.
Illustrative example of the contents of a diagfi.nc file (using ncdump):

ncdump -h diagfi.nc
netcdf diagfi {
dimensions:
        Time = UNLIMITED ; // (12 currently)
        index = 100 ;
        rlonu = 65 ;
        latitude = 49 ;
        longitude = 65 ;
        rlatv = 48 ;
        interlayer = 26 ;
        altitude = 25 ;
        subsurface_layers = 18 ;
variables:
        float Time(Time) ;
                Time:long_name = "Time" ;
                Time:units = "days since 0000-00-0 00:00:00" ;
        float controle(index) ;
                controle:title = "Control parameters" ;
        float rlonu(rlonu) ;
                rlonu:title = "Longitudes at u nodes" ;
        float latitude(latitude) ;
                latitude:units = "degrees_north" ;
                latitude:long_name = "North latitude" ;
        float longitude(longitude) ;
                longitude:long_name = "East longitude" ;
                longitude:units = "degrees_east" ;
        float altitude(altitude) ;
                altitude:long_name = "pseudo-alt" ;
                altitude:units = "km" ;
                altitude:positive = "up" ;
        float rlatv(rlatv) ;
                rlatv:title = "Latitudes at v nodes" ;
        float aps(altitude) ;
                aps:title = "hybrid pressure at midlayers" ;
                aps:units = "Pa" ;
        float bps(altitude) ;
                bps:title = "hybrid sigma at midlayers" ;
                bps:units = "" ;
        float ap(interlayer) ;
                ap:title =
"hybrid pressure at interlayers" ;
                ap:units = "Pa" ;
        float bp(interlayer) ;
                bp:title = "hybrid sigma at interlayers" ;
                bp:units = "" ;
        float soildepth(subsurface_layers) ;
                soildepth:long_name = "Soil mid-layer depth" ;
                soildepth:units = "m" ;
                soildepth:positive = "down" ;
        float cu(latitude, rlonu) ;
                cu:title = "Conversion coefficients cov <--> natural" ;
        float cv(rlatv, longitude) ;
                cv:title = "Conversion coefficients cov <--> natural" ;
        float aire(latitude, longitude) ;
                aire:title = "Mesh area" ;
        float phisinit(latitude, longitude) ;
                phisinit:title = "Geopotential at the surface" ;
        float emis(Time, latitude, longitude) ;
                emis:title = "Surface emissivity" ;
                emis:units = "w.m-1" ;
        float tsurf(Time, latitude, longitude) ;
                tsurf:title = "Surface temperature" ;
                tsurf:units = "K" ;
        float ps(Time, latitude, longitude) ;
                ps:title = "surface pressure" ;
                ps:units = "Pa" ;
        float co2ice(Time, latitude, longitude) ;
                co2ice:title = "co2 ice thickness" ;
                co2ice:units = "kg.m-2" ;
        float mtot(Time, latitude, longitude) ;
                mtot:title = "total mass of water vapor" ;
                mtot:units = "kg/m2" ;
        float icetot(Time, latitude, longitude) ;
                icetot:title = "total mass of water ice" ;
                icetot:units = "kg/m2" ;
        float tauTES(Time, latitude, longitude) ;
                tauTES:title = "tau abs 825 cm-1" ;
                tauTES:units = "" ;
        float h2o_ice_s(Time, latitude, longitude) ;
                h2o_ice_s:title = "surface h2o_ice" ;
                h2o_ice_s:units = "kg.m-2" ;
}

The structure of the file is thus as follows:
- the dimensions
- variable "Time", containing the times of the timesteps stored in the file (in Martian days since the beginning of the run)
- variable "controle", containing many parameters, as described above
- from "rlonu" to "phisinit": a list of data describing the geometrical coordinates of the data file, plus the surface topography
- finally, all the 2D or 3D data stored in the run.

6.3.3 Stats files

As an option (stats must be set to .true.
in callphys.def), the model can accumulate any variable from any subroutine of the physics by calling subroutine wstat. This accumulation is performed at regular intervals, 12 times a day. An average of the daily evolution over the whole run is calculated (for example, for a 10-day run, the averages of the variable values at 0hTU, 2hTU, 4hTU, ..., 24hTU are calculated), along with the RMS standard deviations of the variables. This output is written to file stats.nc.
Illustrative example of the contents of a stats.nc file (using ncdump):

ncdump -h stats.nc
netcdf stats {
dimensions:
        latitude = 49 ;
        longitude = 65 ;
        altitude = 25 ;
        llmp1 = 26 ;
        Time = UNLIMITED ; // (12 currently)
variables:
        float Time(Time) ;
                Time:title = "Time" ;
                Time:units = "days since 0000-00-0 00:00:00" ;
        float latitude(latitude) ;
                latitude:title = "latitude" ;
                latitude:units = "degrees_north" ;
        float longitude(longitude) ;
                longitude:title = "East longitude" ;
                longitude:units = "degrees_east" ;
        float altitude(altitude) ;
                altitude:long_name = "altitude" ;
                altitude:units = "km" ;
                altitude:positive = "up" ;
        float aps(altitude) ;
                aps:title = "hybrid pressure at midlayers" ;
                aps:units = "" ;
        float bps(altitude) ;
                bps:title = "hybrid sigma at midlayers" ;
                bps:units = "" ;
        float ps(Time, latitude, longitude) ;
                ps:title = "Surface pressure" ;
                ps:units = "Pa" ;
        float ps_sd(Time, latitude, longitude) ;
                ps_sd:title = "Surface pressure total standard deviation over the season" ;
                ps_sd:units = "Pa" ;
        float tsurf(Time, latitude, longitude) ;
                tsurf:title = "Surface temperature" ;
                tsurf:units = "K" ;
        float tsurf_sd(Time, latitude, longitude) ;
                tsurf_sd:title = "Surface temperature total standard deviation over the season" ;
                tsurf_sd:units = "K" ;
        float co2ice(Time, latitude, longitude) ;
                co2ice:title = "CO2 ice cover" ;
                co2ice:units = "kg.m-2" ;
        float co2ice_sd(Time, latitude, longitude) ;
                co2ice_sd:title = "CO2 ice cover total standard deviation over the season" ;
                co2ice_sd:units = "kg.m-2" ;
        float fluxsurf_lw(Time,
latitude, longitude) ;
                fluxsurf_lw:title = "Thermal IR radiative flux to surface" ;
                fluxsurf_lw:units = "W.m-2" ;
        float fluxsurf_lw_sd(Time, latitude, longitude) ;
                fluxsurf_lw_sd:title = "Thermal IR radiative flux to surface total standard deviation over the season" ;
                fluxsurf_lw_sd:units = "W.m-2" ;
        float fluxsurf_sw(Time, latitude, longitude) ;
                fluxsurf_sw:title = "Solar radiative flux to surface" ;
                fluxsurf_sw:units = "W.m-2" ;
        float fluxsurf_sw_sd(Time, latitude, longitude) ;
                fluxsurf_sw_sd:title = "Solar radiative flux to surface total standard deviation over the season" ;
                fluxsurf_sw_sd:units = "W.m-2" ;
        float fluxtop_lw(Time, latitude, longitude) ;
                fluxtop_lw:title = "Thermal IR radiative flux to space" ;
                fluxtop_lw:units = "W.m-2" ;
        float fluxtop_lw_sd(Time, latitude, longitude) ;
                fluxtop_lw_sd:title = "Thermal IR radiative flux to space total standard deviation over the season" ;
                fluxtop_lw_sd:units = "W.m-2" ;
        float fluxtop_sw(Time, latitude, longitude) ;
                fluxtop_sw:title = "Solar radiative flux to space" ;
                fluxtop_sw:units = "W.m-2" ;
        float fluxtop_sw_sd(Time, latitude, longitude) ;
                fluxtop_sw_sd:title = "Solar radiative flux to space total standard deviation over the season" ;
                fluxtop_sw_sd:units = "W.m-2" ;
        float dod(Time, latitude, longitude) ;
                dod:title = "Dust optical depth" ;
                dod:units = "" ;
        float dod_sd(Time, latitude, longitude) ;
                dod_sd:title = "Dust optical depth total standard deviation over the season" ;
                dod_sd:units = "" ;
        float temp(Time, altitude, latitude, longitude) ;
                temp:title = "Atmospheric temperature" ;
                temp:units = "K" ;
        float temp_sd(Time, altitude, latitude, longitude) ;
                temp_sd:title = "Atmospheric temperature total standard deviation over the season" ;
                temp_sd:units = "K" ;
        float u(Time, altitude, latitude, longitude) ;
                u:title = "Zonal (East-West) wind" ;
                u:units = "m.s-1" ;
        float u_sd(Time, altitude, latitude, longitude) ;
                u_sd:title = "Zonal (East-West) wind total standard deviation over the season" ;
                u_sd:units =
"m.s-1" ;
        float v(Time, altitude, latitude, longitude) ;
                v:title = "Meridional (North-South) wind" ;
                v:units = "m.s-1" ;
        float v_sd(Time, altitude, latitude, longitude) ;
                v_sd:title = "Meridional (North-South) wind total standard deviation over the season" ;
                v_sd:units = "m.s-1" ;
        float w(Time, altitude, latitude, longitude) ;
                w:title = "Vertical (down-up) wind" ;
                w:units = "m.s-1" ;
        float w_sd(Time, altitude, latitude, longitude) ;
                w_sd:title = "Vertical (down-up) wind total standard deviation over the season" ;
                w_sd:units = "m.s-1" ;
        float rho(Time, altitude, latitude, longitude) ;
                rho:title = "Atmospheric density" ;
                rho:units = "none" ;
        float rho_sd(Time, altitude, latitude, longitude) ;
                rho_sd:title = "Atmospheric density total standard deviation over the season" ;
                rho_sd:units = "none" ;
        float q2(Time, altitude, latitude, longitude) ;
                q2:title = "Boundary layer eddy kinetic energy" ;
                q2:units = "m2.s-2" ;
        float q2_sd(Time, altitude, latitude, longitude) ;
                q2_sd:title = "Boundary layer eddy kinetic energy total standard deviation over the season" ;
                q2_sd:units = "m2.s-2" ;
        float vmr_h2ovapor(Time, altitude, latitude, longitude) ;
                vmr_h2ovapor:title = "H2O vapor volume mixing ratio" ;
                vmr_h2ovapor:units = "mol/mol" ;
        float vmr_h2ovapor_sd(Time, altitude, latitude, longitude) ;
                vmr_h2ovapor_sd:title = "H2O vapor volume mixing ratio total standard deviation over the season" ;
                vmr_h2ovapor_sd:units = "mol/mol" ;
        float vmr_h2oice(Time, altitude, latitude, longitude) ;
                vmr_h2oice:title = "H2O ice volume mixing ratio" ;
                vmr_h2oice:units = "mol/mol" ;
        float vmr_h2oice_sd(Time, altitude, latitude, longitude) ;
                vmr_h2oice_sd:title = "H2O ice volume mixing ratio total standard deviation over the season" ;
                vmr_h2oice_sd:units = "mol/mol" ;
        float mtot(Time, latitude, longitude) ;
                mtot:title = "total mass of water vapor" ;
                mtot:units = "kg/m2" ;
        float mtot_sd(Time, latitude, longitude) ;
                mtot_sd:title = "total mass of water vapor total standard deviation over the
season" ;
                mtot_sd:units = "kg/m2" ;
        float icetot(Time, latitude, longitude) ;
                icetot:title = "total mass of water ice" ;
                icetot:units = "kg/m2" ;
        float icetot_sd(Time, latitude, longitude) ;
                icetot_sd:title = "total mass of water ice total standard deviation over the season" ;
                icetot_sd:units = "kg/m2" ;
}

The structure of the file is similar to that of the diagfi.nc file, except that, as stated before, the averages of the variables are given for 12 times of day, and the RMS standard deviations are also provided.

Chapter 7

Zoomed simulations

The LMD GCM can use a zoom to enhance the resolution locally. In practice, one can increase the latitudinal resolution on the one hand, and the longitudinal resolution on the other hand.

7.1 To define the zoomed area

The zoom is defined in run.def. Here are the variables that you need to set:
• East longitude (in degrees) of zoom center clon
• latitude (in degrees) of zoom center clat
• zooming factor along longitude grossismx. Typically 1.5, 2 or even 3 (see below)
• zooming factor along latitude grossismy. Typically 1.5, 2 or even 3 (see below)
• fxyhypb: must be set to "T" for a zoom, whereas it must be "F" otherwise
• extension in longitude of the zoomed area dzoomx. This is the total longitudinal extension of the zoomed region (degrees). It is recommended that grossismx × dzoomx < 200°
• extension in latitude of the zoomed region dzoomy. This is the total latitudinal extension of the zoomed region (degrees). It is recommended that grossismy × dzoomy < 100°
• stiffness of the zoom along longitudes taux. 2 gives a smooth transition in longitude; more means a sharper transition.
• stiffness of the zoom along latitudes tauy. 2 gives a smooth transition in latitude; more means a sharper transition.

7.2 Making a zoomed initial state

One must start from an initial state archive start_archive.nc obtained from a previous simulation (see section 4.9). Then compile and run newstart.e using the run.def file designed for the zoom.
After running newstart.e, the zoomed grid may be visualized using GrADS, for instance. Here is a GrADS script that can be used to map the grid above a topography map:

set mpdraw off
set grid off
sdfopen restart.nc
set gxout grid
set digsiz 0
set lon -180 180
d ps
close 1
*** replace the path to surface.nc in the following line:
sdfopen /u/forget/WWW/datagcm/datafile/surface.nc
set lon -180 180
set gxout contour
set clab off
set cint 3
d zMOL

7.3 Running a zoomed simulation and stability issues

• dynamical timestep: because of their higher resolution, zoomed simulations require a shorter dynamical timestep. Therefore in run.def, the number of dynamical timesteps per day, day_step, must be increased by more than grossismx or grossismy (twice that if necessary). However, you can keep the same physical timestep (48/sol) and thus increase iphysiq accordingly (iphysiq = day_step/48).
• It has been found that when zooming in longitude, one must set ngroup=1 in dyn3d/groupeun.F. Otherwise the run is less stable.
• The very first initial state made with newstart.e can be noisy and dynamically unstable. It may be necessary to strongly increase the intensity of the dissipation and to increase day_step in run.def for 1 to 3 sols, and then revert to less strict values.
• If the run remains very unstable and requires too much dissipation or too small a timestep, a good way to help stabilize the model is to decrease the vertical extension of the run and the number of layers (one generally zooms to study near-surface processes, so 20 to 22 layers and a vertical extension up to 60 or 80 km are usually enough).

Chapter 8

Water Cycle Simulation

In order to simulate the water cycle with the LMD GCM:
• In callphys.def, set tracer to true: tracer=.true.. Use the same options as below for the tracer part; the rest does not change compared to the basic callphys.def. The important parameters are water=.true., to use water vapor and ice tracers, and sedimentation=.true., to allow sedimentation of water ice clouds.
##.
• Compilation: you need to compile with at least 2 tracers. If you don't have dust (dustbin=0) or other chemical species (photochem=F), compilation is done with the command lines:
makegcm -d 64x48x25 -t 2 -p mars newstart
makegcm -d 64x48x25 -t 2 -p mars gcm
Of course, you will also need an appropriate traceur.def file indicating that you will use tracers h2o_vap and h2o_ice; if you only run with 2 tracers, then the contents of the traceur.def file should be:
2
h2o_ice
h2o_vap
Note that the order in which tracers are set in the traceur.def file is not important.
• Run: same as usual. Just make sure that your start files contain the initial states for water, with an initial state for water vapor and water ice particles.

Chapter 9

Photochemical Module

The LMD GCM now includes a photochemical module, which allows the atmospheric composition to be computed.
• 14 chemical species are included: CO2 (background gas), CO, O, O(1D), O2, O3, H, H2, OH, HO2, H2O2, N2, Ar (inert) and H2O.
• In callphys.def, set tracer to true: tracer=.true.. Use the same options as shown below for the tracer part of callphys.def. You need to set photochem=.true., and to include the water cycle (water=.true., sedimentation=.true.; see Chapter 8), because the composition is extremely dependent on the water vapor abundance.
##.
# PHOTOCHEMISTRY: include chemical species
photochem = .true.
• You will need the up-to-date file jmars.yyyymmdd (e.g. jmars.20030707), which contains the photodissociation rates. It should be in the datafile directory in which the datafiles used by the GCM are stored (the path to these files is set in file datafile.h, in the phymars directory).
• Compilation: you need to compile with 15 tracers (if you don't have dust, dustbin=0): 13 chemical species (co2, co, o, o(1d), o2, o3, h, h2, oh, ho2, h2o2, n2, ar) along with water ice (h2o_ice) and water vapor (h2o_vap).
Compilation is done with the command lines:
makegcm -d 64x48x25 -t 15 -p mars newstart
makegcm -d 64x48x25 -t 15 -p mars gcm
Of course, the traceur.def file should contain the number and names of all the tracers, e.g.:
15
co2
co
o
o1d
o2
o3
h
h2
oh
ho2
h2o2
n2
ar
h2o_ice
h2o_vap
• Run: same as usual. Just make sure that your start files contain the correct number of tracers. If you need to initialize the composition, you can run newstart and use the options:
- ini_q: the 15 tracers are initialized, including water ice and vapor.
- ini_q-h2o: the 13 chemical species are initialized, water ice is set to zero, and water vapor is kept untouched.
- ini_q-iceh2o: the 13 chemical species are initialized, water ice and vapor are kept untouched.
The initialization is done with the files atmosfera_LMD_may.dat and atmosfera_LMD_min.dat, which should also be found in the datafile directory.
• Outputs: the outputs can be done from the aeronomars/calchim.F routine for the 14 chemical species. The variables put in the diagfi.nc and stats.nc files are labeled as follows (where name is the name of the chemical species, e.g. co2):
- n_name: local density (in molecule cm-3, 3-dimensional field)
- c_name: integrated column density (in molecule cm-2, 2-dimensional field)

Chapter 10

1D version of the Mars model

The physical part of the model can be used to generate realistic 1-D simulations (one atmosphere column). In practice, the simulation is controlled from a main program called testphys1d.F which, after initialization, calls the master subroutine of the physics, physiq.F, described in the preceding chapters.

10.1 Compilation

For example, to compile the Martian model in 1-D with 25 layers, type (in compliance with the makegcm function manual described in section 5.4):
makegcm -d 25 -p mars testphys1d
You can find executable testphys1d.e (the compiled model) in the directory from which you ran the makegcm command.
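Since the tracer count is specified in two places — the -t option of makegcm and the first line of traceur.def — a tiny sanity check helps avoid a classic mismatch. This is an illustrative helper, not part of the GCM:

```python
def check_traceur_def(text, t_option):
    """Verify a traceur.def: first line gives the tracer count, one name per
    line afterwards, and the count matches the -t value passed to makegcm."""
    lines = [l.strip() for l in text.strip().splitlines() if l.strip()]
    n = int(lines[0])
    names = lines[1:]
    if len(names) != n:
        raise ValueError(f"traceur.def declares {n} tracers but lists {len(names)}")
    if n != t_option:
        raise ValueError(f"traceur.def has {n} tracers but makegcm was given -t {t_option}")
    return names

# the 2-tracer water-cycle file from Chapter 8
names = check_traceur_def("2\nh2o_ice\nh2o_vap\n", 2)
```

Running it on the 15-tracer photochemistry file with the wrong -t value raises immediately, which is far easier to diagnose than a crash at GCM start-up.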
10.2 1-D runs and input files

The 1-D model does not use an initial state file (the simulation must be long enough to obtain a balanced state). Thus, to generate a simulation, simply type:
> testphys1d.e
The following example files are available in the deftank directory (copy them into your working directory first):
- callphys.def: controls the options in the physics, just like for the 3D GCM.
- z2sig.def: controls the vertical discretization (no change needed, in general); functions as with the 3D GCM.
- traceur.def: controls the tracer names (this file may not be present, as long as you run without tracers, i.e. with option tracer=.false. in callphys.def).
- run.def: controls the 1-D run parameters and initializations (this is actually file run.def.1d in the deftank directory, which must be renamed run.def to be read by the program).
The last file is different from the 3D GCM's run.def input file, as it contains options specific to the 1-D model, as shown in the example below:

#
#-----------------------------------------------------------------------
# Run parameters for the 1D 'testphys1d.e' model
#-----------------------------------------------------------------------
#
#### Time integration parameters
#
# Initial date (in martian sols ; =0 at Ls=0)
day0=0
# Initial local time (in hours, between 0 and 24)
time=0
# Number of time steps per sol
day_step=48
# Number of sols to run
ndt = 100
#### Physical parameters
#
# Surface pressure (Pa)
psurf= 610
# Reference dust opacity at 700 Pa, in the visible (true tau˜tauref*psurf/700)
tauvis=0.2
# latitude (in degrees)
latitude= 0.
# Albedo of bare ground
albedo=0.2
# Soil thermal inertia (SI)
inertia=400
# zonal eastward component of the geostrophic wind (m/s)
u=10.
# meridional northward component of the geostrophic wind (m/s)
v=0.
# Initial CO2 ice on the surface (kg.m-2)
co2ice=0
# hybrid vertical coordinate ? (.true. for hybrid and .false. for sigma levels)
hybrid=.true.
###### Initial atmospheric temperature profile
#
# Type of initial temperature profile
#    ichoice=1   Constant Temperature: T=tref
#    ichoice=2   Savidjari profile (as Seiff but with dT/dz=cte)
#    ichoice=3   Lindner (polar profile)
#    ichoice=4   inversion
#    ichoice=5   Seiff (standard profile, based on Viking entry)
#    ichoice=6   constant T + gaussian perturbation (levels)
#    ichoice=7   constant T + gaussian perturbation (km)
#    ichoice=8   Read in an ascii file "profile"
ichoice=5
# Reference temperature tref (K)
tref=200
# Add a perturbation to profile if isin=1
isin=0
# peak of gaussian perturbation (for ichoice=6 or 7)
pic=26.522
# width of the gaussian perturbation (for ichoice=6 or 7)
largeur=10
# height of the gaussian perturbation (for ichoice=6 or 7)
hauteur=30.
# some definitions for the physics, in file 'callphys.def'
INCLUDEDEF=callphys.def

Note that, just as for the 3-D GCM run.def file, input parameters may be given in any order, or even not given at all (in which case default values are used by the program).

10.3 Output data

During the entire 1D simulation, you can obtain output data for any variable from any physical subroutine by using subroutine writeg1d. This subroutine creates file g1d.nc, which can be read by GrADS. This subroutine is typically called at the end of subroutine physiq.
Example of a call to subroutine writeg1d requesting temperature output (ngrid horizontal points, nlayer layers, variable pt called "T", in K units):
CALL writeg1d(ngrid,nlayer,pt,'T','K')

Appendix A

GCM Martian Calendar

For Mars, dates and seasons are expressed in Solar Longitude (Ls, in degrees or in radians), counting from the northern hemisphere spring equinox. In the GCM, time is counted in Martian solar days, or "sols" (1 sol = 88775 s), from the northern spring equinox. The following table gives the correspondence between sols and Ls, calculated for the GCM using one Martian year = 669 sols exactly.

sol       Ls
  0.     360.000    Spring equinox N
  5.       2.550
 10.       5.080
 15.       7.590
 20.      10.081
 25.      12.554
 30.      15.009
 35.      17.447
 40.      19.869
 45.      22.275
 50.      24.666
 55.      27.043
 60.      29.407
 65.      31.758
 70.      34.096
 75.      36.423
 80.      38.739
 85.      41.046
 90.      43.343
 95.      45.631
100.      47.912
105.      50.186
110.      52.453
115.      54.714
120.      56.970
125.      59.222
130.      61.471
135.      63.716
140.      65.959
145.      68.201
150.      70.442
155.      72.683
160.      74.925
165.      77.168
170.      79.413
175.      81.661
180.      83.912
185.      86.167
190.      88.427
193.47    90.        Summer solstice N
195.      90.693
200.      92.965
205.      95.245
210.      97.532
215.      99.827
220.     102.131
225.     104.446
230.     106.770
235.     109.107
240.     111.455

sol       Ls
240.     111.455
245.     113.816
250.     116.190
255.     118.578
260.     120.981
265.     123.400
270.     125.835
275.     128.287
280.     130.756
285.     133.243
290.     135.750
295.     138.275
300.     140.821
305.     143.388
310.     145.975
315.     148.585
320.     151.217
325.     153.872
330.     156.550
335.     159.251
340.     161.977
345.     164.727
350.     167.502
355.     170.301
360.     173.126
365.     175.975
370.     178.850
371.99   180.        Autumn equinox N
375.     181.750
380.     184.675
385.     187.624
390.     190.598
395.     193.596
400.     196.618
405.     199.662
410.     202.729
415.     205.818
420.     208.927
425.     212.056
430.     215.203
435.     218.368
440.     221.549
445.     224.746
450.     227.955
455.     231.177
460.     234.409
465.     237.650
470.     240.898
475.     244.151
480.     247.408

sol       Ls
480.     247.408
485.     250.666
490.     253.925
495.     257.182
500.     260.435
505.     263.683
510.     266.924
514.76   270.        Winter solstice N
515.     270.156
520.     273.377
525.     276.587
530.     279.783
535.     282.965
540.     286.130
545.     289.277
550.     292.406
555.     295.515
560.     298.604
565.     301.671
570.     304.715
575.     307.737
580.     310.735
585.     313.709
590.     316.658
595.     319.583
600.     322.483
605.     325.358
610.     328.207
615.     331.032
620.     333.831
625.     336.606
630.     339.356
635.     342.082
640.     344.783
645.     347.461
650.     350.116
655.     352.748
660.     355.357
665.     357.945
669.       0.        Spring equinox N
670.       0.512
675.       3.057
680.       5.583
685.       8.089
690.      10.577
695.      13.046
700.      15.498
705.      17.933
710.      20.351
715.      22.755
720.      25.143

Appendix B

Utilities

A few post-processing tools, which handle GCM outputs (files diagfi.nc and stats.nc), are available in the LMDZ.MARS/util subdirectory. This directory contains the source codes along with a README.exec file which explains what the various programs are for and how to compile them.

B.1 concatnc

This program concatenates consecutive output files (diagfi.nc or even stats.nc files) for a selection of variables, in order to obtain one single big file. The time dimension of the output can be "sols" or "Ls" (note that in the latter case, Ls values won't be evenly distributed, and software like GrADS may not be able to use and plot the data). To obtain an evenly sampled "Ls" timescale, you can use the lslin.e program (described below). The output file created by concatnc.e is concat.nc.

B.2 lslin

This program is designed to interpolate data given in irregular solar longitude (Ls) onto an evenly sampled linear time coordinate (usable with GrADS). Input NetCDF files may be diagfi.nc or concat.nc files, and the resulting output file is lslin.nc. lslin also creates an lslin.ctl file that can be read directly by GrADS (>xdfopen lslin.ctl) to plot in Ls coordinates; this avoids a problem that arises when GrADS considers that "the time interval is too small".

B.3 localtime

The localtime.e program is designed to re-interpolate data in order to yield values at the same given local time (useful to mimic satellite observations, or to analyse day-to-day variations at a given local time). Input files may be of diagfi.nc, stats.nc or concat.nc type, and the output file name is built from the input one, to which _LT.nc is appended (e.g. if the input file is myfile.nc, then the output file will be myfile_LT.nc).

B.4 zrecast

With this program you can recast atmospheric (i.e. 4-dimensional longitude-latitude-altitude-time) data from GCM outputs (e.g.
as given in diagfi.nc, concat.nc and stats.nc files) onto either pressure or altitude above areoid vertical coordinates. Since integrating the hydrostatic equation is required to recast the data, the input file must contain surface pressure and atmospheric temperature, as well as the ground geopotential. If recasting data onto pressure coordinates, then the output file name is given by the input file name to which P.nc will be appened. If recasting data onto altitude above areoid coordinates, then a A.nc will be appened. B.5 lslin This program is designed to interpolate data in Solar Longitude (Ls) linear time coordinate (usable with grads) diagfi.nc or concat.nc files. 60 lslin also create a lslin.ctl file that can be read directly by grads (>xdfopen lsllin.ctl) to plot in Ls coordinate to avoid some problem with grads when grads think that ”the time interval is too small”... B.6 hrecast This program can interpolate GCM output on any horizontal grid (regular lat - lon) as long as it cover all the planet. Useful to compare runs obtained at different horizontal resolutions. B.7 expandstartfi This program takes a physics start file (startfi.nc) and recasts it on the corresponding lonxlat grid (so its contents may easily be displayed using Grads, Ferret, etc.) B.8 extract This program extracts (ie: interpolates) pointwise values of an atmospheric variable from a ’zrecast’ed diagfi file (works if altitude is geometrical height or a pressure vertical coordinates). 61 Bibliography [1] M. Angelats i Coll, F. Forget, M. A. López-Valverde, and F. González-Galindo. The first Mars thermospheric general circulation model: The Martian atmosphere from the ground to 240 km. Geophys. Res. Lett., 32:4201, 2005. [2] J.-L. Dufresne, R. Fournier, C. Hourdin, and F. Hourdin. Net Exchange Reformulation of Radiative Transfer in the CO2 15-µm Band on Mars. Journal of Atmospheric Sciences, 62:3303–3319, 2005. [3] F. Forget, F. Hourdin, R. Fournier, C. Hourdin, O. Talagrand, M. Collins, S. 
R. Lewis, P. L. Read, and J.-P. Huot. Improved general circulation models of the Martian atmosphere from the surface to above 80 km. J. Geophys. Res., 104:24,155–24,176, 1999. [4] F. Forget, F. Hourdin, and O. Talagrand. CO2 snow fall on Mars: Simulation with a general circulation model. Icarus, 131:302–316, 1998. [5] F. González-Galindo, F. Forget, M. A. López-Valverde, and M. Angelats i Coll. A Ground-to-Exosphere Martian General Circulation Model. 2. The Atmosphere During Perihelion Conditions: Thermospheric Polar Warming . Journal of Geophysical Research (Planets), 114(E13):E08004, 2009. [6] F. González-Galindo, F. Forget, M. A. López-Valverde, M. Angelats i Coll, and E. Millour. A Ground-toExosphere Martian General Circulation Model. 1. Seasonal, Diurnal and Solar Cycle Variation of Thermospheric Temperatures. Journal of Geophysical Research (Planets), 114(E13):4001, 2009. [7]9):9008, 2005. [8] C. Hourdin, J.-L. Dufresnes, R. Fournier, and F. Hourdin. Net exchange reformulation of radiative transfer in the CO2 15µm band on mars. Article in preparation, 2000. [9] F. Hourdin. A new representation of the CO2 15 µm band for a Martian general circulation model. J. Geophys. Res., 97(E11):18,319–18,335, 1992. [10] F. Hourdin and A. Armengaud. Test of a hierarchy of finite-volume schemes for transport of trace species in an atmospheric general circulation model. Mon. Wea. Rev., 127:822–837, 1999. [11] F. Hourdin, P. Le Van, F. Forget, and O. Talagrand. Meteorological variability and the annual surface pressure cycle on Mars. J. Atmos. Sci., 50:3625–3640, 1993. [12]. [13] F. Lefèvre, S. Lebonnois, F. Montmessin, and F. Forget. Three-dimensional modeling of ozone on Mars . Journal of Geophysical Research (Planets), 109:E07004, 2004. [14] S. R. Lewis, M. Collins, P. L. Read, F. Forget, F. Hourdin, R. Fournier, C. Hourdin, O. Talagrand, and J.-P. Huot. A climate database for Mars. J. Geophys. Res., 104:24,177–24,194, 1999. [15] F. Lott and M. Miller. 
A new sub-grid scale orographic drag parametrization: its formulation and testing. Q. J. R. Meteorol. Soc., 123:101–128, 1997. [16] J.-B. Madeleine, F. Forget, E. Millour, L. Montabone, and M. J. Wolff. Revisiting the radiative impact of dust on Mars using the LMD Global Climate Model. Journal of Geophysical Research (Planets), 116:11010, Nov. 2011. [17] F. Montmessin, F. Forget, P. Rannou, M. Cabane, and R. M. Haberle. Origin and role of water ice clouds in the Martian water cycle as inferred from a general circulation model. Journal of Geophysical Research (Planets), 109(E18):10004, 2004. 62 [18] O. B. Toon, C. P. McKay, T. P. Ackerman, and K. Santhanam. Rapid calculation of radiative heating rates and photodissociation rates in inhomogeneous multiple scattering atmospheres. J. Geophys. Res., 94:16,287– 16,301, 1989. [19]), 114(E13):0–+, 2009. 63
* Your assessment is very important for improving the work of artificial intelligence, which forms the content of this project | https://manualzz.com/doc/7077460/user-manual-for-the-lmd-martian-atmospheric-general-circu. | CC-MAIN-2020-40 | refinedweb | 16,920 | 56.45 |
C# Sharp Exercises: Declares a struct with a property, a method, and a private field
C# Sharp STRUCTURE: Exercise-6 with Solution
Write a program in C# Sharp to declares a struct with a property, a method, and a private field.
Sample Solution:-
C# Sharp Code:
using System; struct newStruct { private int num; public int n { get { return num; } set { if (value < 50) num = value; } } public void clsMethod() { Console.WriteLine("\nThe stored value is: {0}\n", num); } } class strucExer6 { public static void Main() { Console.Write("\n\nDeclares a struct with a property, a method, and a private field :\n"); Console.Write("----------------------------------------------------------------------\n"); newStruct myInstance = new newStruct(); myInstance.n = 15; myInstance.clsMethod(); } }
Sample Output:
Declares a struct with a property, a method, and a private field : ---------------------------------------------------------------------- The stored value is: 15
Flowchart:
C# Sharp Code Editor:
Contribute your code and comments through Disqus.
Previous: Write a program in C# Sharp to show what happen when a struct and a class instance is passed to a method.
Next: Write a program in C# Sharp to demonstrates struct initialization using both default and parameterized constructors.
What is the difficulty level of this exercise?
New Content: Composer: Dependency manager for PHP, R Programming | https://www.w3resource.com/csharp-exercises/structure/csharp-structure-exercise-6.php | CC-MAIN-2019-18 | refinedweb | 199 | 62.58 |
This is last of the series of posts I have made on loading external swf movies into Main Flash movie and controlling its timeline. In this post, I’ll add the last missing feature in the player which was seek bar. In the previous Post, I added Play, Pause, Forward, Rewind Buttons to control and manage the timeline of external loaded swf file and this addition will complete the player.
If you want to see the explanation for the whole code, you can view it here.
In this post, I have used Flash Slider component to make a seek bar. This seek bar shows the total number of frames of loaded swf file and then plays through it and update the slider position with the current frame position. It’s done on EnterFrame event so this slider location is checked on every frame and updated until the lastFrame of the swf file is reached.
Here’s the complete post as AS3 code. Make sure you drag the Slider component onto the stage from the Components.
function launchSWF(vBox, vFile):void{ //vBox.addChild(swfLoader);); var player:swfPlayerBt = new swfPlayerBt(); function loadProdComplete(e:Event):void { trace("swf file loaded"); vBox.removeChild(preLoader); vBox.addChild(swfLoader); currentSWF = MovieClip(swfLoader.content); currentSWF.gotoAndPlay(1); player.x =200; player.y =350; //attach the swfPlayer buttons vBox.addChild(player); /); currentSWF.addEventListener(Event.ENTER_FRAME , checkLastFrame); import fl.controls.Slider; //add slider component var aSlider:Slider = new Slider(); //Define the width and location on stage aSlider.width = 200; aSlider.move(160, 330); //add it to the container clip vBox.addChild(aSlider); trace(currentSWF.totalFrames); aSlider.maximum = currentSWF.totalFrames; aSlider.liveDragging=true; aSlider.addEventListener(Event.CHANGE,swfHandler); function swfHandler(e:Event){ trace("aSlider.value: "+aSlider.value); currentSWF.gotoAndStop(aSlider.value); } function checkLastFrame(e:Event):void { if (currentSWF.currentFrame == currentSWF.totalFrames) { currentSWF.stop(); // trace("DONE"); }else aSlider.value =currentSWF.currentFrame; } function button_forward(e:Event):void{ currentSWF.nextFrame(); } function button_rewind(e:Event):void{ currentSWF.prevFrame(); } function button_pause(e:Event):void{ currentSWF.stop(); } function button_play(e:Event):void{ currentSWF.play(); } } var preLoader:loader = new loader(); preLoader.x = 155; preLoader.y = 185; loadButton:btLoad = new btLoad(); //place load button in the cetner of stage loadButton.x = (stage.stageWidth / 2) - (loadButton.width / 2); loadButton.y = (stage.stageHeight / 2) - (loadButton.height / 2); var unloadButton:btUnload = new btUnload(); //place unload button in the cetner of stage unloadButton.x = (stage.stageWidth / 2) - (unloadButton.width / 2); unloadButton.y = 15; loadButton.addEventListener(MouseEvent.CLICK, button_load); function button_load(e:Event):void{ launchSWF(container, swfFile); //put it on stage addChild(container); 
//remove the load button, it won't work and needed now removeChild(loadButton); //add the unload button now addChild(unloadButton); } unloadButton.addEventListener(MouseEvent.CLICK, button_unload); function button_unload(e:Event):void{ swfLoader.unloadAndStop(); removeChild(container);//remove loaded swf container //gotoAndStop(10);// send to main timeline frame 10 //remove the unload button, it won't work and needed now removeChild(unloadButton); //add the uload button now addChild(loadButton); } var container:MovieClip = new MovieClip(); var swfFile:String = 'external-file.swf'; var currentSWF:MovieClip = new MovieClip(); var swfLoader:Loader = new Loader(); //place the load button on stage addChild(loadButton);
I have put together all this into this file. You can Download CS4 fla source file for loading-external-swf-with-preloader-play-stop-seekbar here. Just provide the swf file url to load and you are done!
Hope that helps.
Cheers!
27 Comments
my seekbar working on flv video player but not working in my flash non video animation play pause stop revine all buttons are working but seekbar only problem in as3 if anyone know the info pls tel me
seekbar will work on non video animation …? it’s posible….?
Where do you edit the url? And is it possible to make it auto load the external swf automactly? I downloaded a flv movie and converted it into .swf but i wanted to find a way to put the play/stop buttons and maybe a slider if possible using flash. This tool can accomplish that right? Thanks
You can change the url of the swf in the actions panel.
Yes, it does add the stop play button inside Flash.
After i while i released i had to change the so it would load.
var swfFile:String = ‘external-file.swf’;
It loads perfectly when i click to load and unload. But the buttons don’t seem to work at all :S Was it because the .swf was converted from .flv ?
It may well be the reason. Check and try that your external file content is spread on root timeline and not in a movie clip.
I think that’s the problem because i can’t get it to convert back to .fla so this won’t work at all right? Is there any way to make it like your example? I need to add some loader (with play button,stop etc) that controls the 2nd .swf
You don’t really need to convert flv back to swf. You can easily control flv timeline with buttons using video component. I developed this swf for those files which were swf and there was no way to manage external swf timeline. You can do that with externally loaded flv files.
Check this post where I have shown how you can manage that.
forget to mention *
The index.html at location
() is having a object tag pointing to swf file under the uploads directory.
This index.html file is not able to load the external swf file correctly.
Request to have a look, and provide any suggestions.
Thanks in advance.
It’s basically resources path issue. When you are loading any item, it must be referenced from the url where it’s embedded i.e. all links to other swfs in the embedded swf must include ‘uploads/’.
Another foolproof way is to use absoulute path starting with http://
Thanks a lot for your help Ali
God Bless you
🙂
Thanks for the wonderful work Ali,
But I am getting following error,
TypeError: Error #1034: Type Coercion failed: cannot convert flash.display::AVM1Movie@443ff31 to flash.display.MovieClip.
at Function/external_fla:MainTimeline/launchSWF/external_fla:loadProdComplete()[external_fla.MainTimeline::frame1:21]
Could please help me in what I am doing wrong here ?
Thanks.
Please make sure you are loading AS3 movie. Type Coercion errors appears usually when the AS2 movie is loaded into AS3.
Thanks a lot.
yeah, I was trying to load AS2 clip. I converted to AS3 and now it works fine.
I have one more problem
I have loaded all files (external swf file, main control swf file, and html file) in same directory “”
and everything is working fine.
But when I upload the html file having the object tag with src pointing to the main swf file ()
Then the swf file is not played at all.
Please suggest any idea to rectify this.
Brilliant, thanks for this, just what i was after.
I did notice that if you load and unload, the previous point on the slider bar is still there.
I moved:
import fl.controls.Slider;
var aSlider:Slider = new Slider();
out of the loadProdComplete function and to the top of the page, which seems to have done the trick.
I’m guessing it was adding a new slider each time a swf was loaded without removing the old one.
(i’m fairly new to AS3 so this may be incorrect!)
Please…
didn’t work on Flash CS5
seeker, play, and pause buttons didn’t work when clicked
Output:
swf file loaded
TypeError: Error #1034: Type Coercion failed: cannot convert flash.display::AVM1Movie@25938161 to flash.display.MovieClip.
at Function/()
Create a new AS3 flash movie and load its swf as test.
This is crap. It doesn’t work properly. It plays multiple instances of my streaming sound and shows none of the visuals. Crappity crap crap
The slideshow only shows and attempts to load and play the external swf files. Your external swfs must be coded correctly with the visuals and audio streamed on the root timeline. If you have placed everything in the nested clips and your swf consists of just 1 frame then this is not for you. Go ahead an make one for yourself. 🙂
hi ali yeah i tried this,
its working perfect on local pc but when i upload same problem.
—
i changed it.
loaded swf file
stop() 1st frame
and in ur code. i updated 1 -> 2 frame, its working perfect now.
currentSWF = MovieClip(swfLoader.content);
currentSWF.gotoAndPlay(2);
thank you so much for ur help ali
stay blessed.
I’m glad it worked.
thanks ali, i will try this
hello ali
great work ali, am using this facing bit problem
external loaded swf start playing its sound , it should load 100% first, then it should start playing sound.
sound is synchronized in external loaded file, so its should play when its fully load 100%.
please help.. thanks
If your sound is streaming and placed on separate layer, then start it from 2nd frame. And if you are using AS3 code to attach sound dynamically from library, then take that AS3 code inside where swf has already loaded. It all depends upon the inclusion method of sound in your swf.
yes sound is streaming and i start it from 2nd frame , but still its playing before loading whole swf file [100%].
—
i also tried to load from mp3 file from library by using AS3, that is working fine but i need streaming sound is there any way to deal that loaded sound to play like a stream sound on timeline?
Try this, stop all sound son frame 1 using following code, and once the swf starts playing, it will play the streaming sound when reaches to frame 2 and onward.
Thank you 🙂 | https://www.parorrey.com/blog/flash-development/external-swf-player-with-play-pause-forward-rewind-buttons-and-seek-bar/ | CC-MAIN-2020-16 | refinedweb | 1,621 | 67.25 |
"Embrace SQL keeps your SQL queries in SQL files. An anti-ORM inspired by HugSQL and PugSQL"
Project description
Does writing complex queries in an ORM feel like driving with the handbrake on? Embrace SQL! Put your SQL queries in regular .sql files, and embrace will load them.
Usage:
import embrace # Connect to your database, using any db-api connector. # If python supports it, so does embrace. conn = psycopg2.connect("postgresql:///mydb") # Create a module populated with queries from a collection of *.sql files: queries = embrace.module("resources/sql", conn) # Run a query users = queries.list_users(order_by='created_at')
Your query would be specified like this:
-- :name list_users :many select * from users where active = :active order by :identifier:order_by
By embrace returns rows using the underlying db-api cursor. Most db-api libraries have cursor types that return dicts or namedtuples. For example in Postgresql you could do this:
conn = psycopg2.connect( "postgresql:///mydb", cursor_factory=psycopg2.extras.NamedTupleCursor) )
What types can queries return?
Many rows:
-- :name list_users :many select * from users
A single row:
-- :name get_user_by_id :one select * from users where id=:id
A single value:
-- :name get_user_count :scalar select count(1) from users
An iterable over a single column:
-- :name get_ips :column select distinct ip_addr from connections
A single file may contain multiple queries, separated by a structured SQL comment. For example to create two query objects accessible as queries.list_users() and queries.get_user_by_id():
-- :name list_users :many select * from users -- :name get_user_by_id :one select * from users where id=:id
But if you don’t have the separating comment, embrace-sql can run multiple statements in a single query call, returning the result from just the last one.
Why? Because it makes this possible in MySQL:
-- :name create_user :column insert into users (name, email) values (:name, :email); select last_insert_id();
How do parameters work?
Placeholders inserted using the :name syntax are escaped by the db-api driver:
-- Outputs `select * from user where name = 'o''brien'`; select * from users where name = :name
You can interpolate lists and tuples too:
:tuple: creates a placeholder like this (?, ?, ?)
:value*: creates a placeholder like this ?, ?, ?
:tuple* creates a placeholder like this (?, ?, ?), (?, ?, ?), … (useful for multiple insert queries)
-- Call this with `queries.insert_foo(data=(1, 2, 3))` INSERT INTO foo (a, b, c) VALUES :tuple:data -- Call this with `queries.get_matching_users(names=("carolyn", "douglas"))` SELECT * from users WHERE name in (:values*:names)
You can escape identifiers with :identifier:, like this:
-- Outputs `select * from "some random table"` select * from :identifier:table_name
You can pass through raw sql too. This leaves you open to SQL injection attacks if you allow user input into such parameters:
-- Outputs `select * from users order by name desc` select * from users order by :raw:order_clause
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/embrace/ | CC-MAIN-2020-10 | refinedweb | 476 | 55.03 |
missing import when .buildout/default.cfg puts eggs outside ~/.buildout
Bug Description
ZODB 3.8.1 tag
Python 2.4.4
OSX
Problem occurs when I use non-standard location for my downloads and eggs...
$ cat ~/.buildout/
[buildout]
download-directory = /opt/zope/
eggs-directory = /opt/zope/
This seems to conflict with the install process that seems to expect eggs here:
/Users/
Reproduce:
$ svn co svn://svn.
$ cd ZODB
$ /opt/zope/
$ bin/buildout
$ bin/test -v
Traceback (most recent call last):
File "/opt/zope/
import zope.testing.
File "/Users/
File "/Users/
ImportError: No module named interface
Simple Workaround:
Make the default.cfg approximate what the buildout expects:
$ cat ~/.buildout/
[buildout]
download-directory = /Users/
eggs-directory = /Users/
$ bin/test -v
Running tests at level 1
Running zope.testing.
Set up zope.testing.
Running:
...
Ran 3023 tests with 0 failures and 0 errors in 8 minutes 2.798 seconds.
Tearing down left over layers:
Tear down zope.testing.
This sounds like it might be a zc.buildout bug, although buildout has no "download-
Just to clarify...
At the time of the failed test, the directory
/Users/
russ/.buildout/ eggs
does not exist -- the eggs are in /opt/zope/
stashes/ eggs
--r | https://bugs.launchpad.net/zc.buildout/+bug/299400 | CC-MAIN-2017-39 | refinedweb | 198 | 61.33 |
Recently i started playing with aurelia-framework and so far so good but when i edited config.js to add some of my files that are not installed via jspm things worked fine i was importing my scripts no errors but when i cloned to another machine and run jspm install it fails cause it does't like that i have other paths other than npm and github in my config.js
Configjs
paths: {
"*": "dist/*",
"github:*": "jspm_packages/github/*",
"npm:*": "jspm_packages/npm/*",
"lib:*": "lib/*",
"styles:*": "styles/*"
},
map: {
"app-styles": "styles:app-styles",
"uisearch": "lib:uisearch/uisearch@1.0.0",
"component": "lib:component/component",
"classie": "lib:classie/classie@2.0.0",
"material": "lib:material/material",
"ripples": "lib:ripples/ripples",
"bootstrap-select": "lib:bootstrap-select/bootstrap-select@1.7.2"
other deps...
}
err Registry lib not found.
err Unable to load registry lib
warn Installation changes not saved.
Avoid making changes to the map section of your config.js by hand. Instead use the jspm command line interface to add packages. The jspm CLI will maintain your config.js for you. For example, to add
classie to your project you would execute the following:
jspm install npm:desandro-classie
More information at jspm.io.
Note: you don't need to edit the config.js to enable importing javascript/css that is part of your project.
If I'm interpreting your original post correctly you have a lib folder containing a ripples subfolder which has a ripples.js file inside of it. You could access this "ripples" module like this:
import ripples from 'lib/ripples/ripples'; ripples.foo(); ... | https://codedump.io/share/pSsIwuo97Cee/1/jspm---jspm-install-gives-error-quotregistry-not-foundquot | CC-MAIN-2018-09 | refinedweb | 264 | 59.8 |
I'm a new Python programmer who is making the leap from 2.6.4 to 3.1.1. Everything has gone fine until I tried to use the 'else if' statement. The interpreter gives me a syntax error after the 'if' in 'else if' for a reason I can't seem to figure out.
def function(a): if a == '1': print ('1a') else if a == '2' print ('2a') else print ('3a') function(input('input:'))
I'm probably missing something very simple; however, I haven't been able to find the answer on my own.
In python "else if" is spelled "elif".
Also, you need a colon after the
elif and the
else.
Simple answer to a simple question. I had the same problem, when I first started (in the last couple of weeks).
So your code should read:
def function(a): if a == '1': print('1a') elif a == '2': print('2a') else: print('3a') function(input('input:')) | https://pythonpedia.com/en/knowledge-base/2395160/what-is-the-correct-syntax-for--else-if-- | CC-MAIN-2020-16 | refinedweb | 158 | 82.54 |
A text table item that reads text from string lists. More...
#include <qgscomposertexttable.h>
A text table item that reads text from string lists.
Definition at line 77 of file qgscomposertexttable.h.
Definition at line 90 of file qgscomposertexttable.cpp.
Definition at line 96 of file qgscomposertexttable.cpp.
Adds a frame to the multiframe.
Implements QgsComposerMultiFrame.
Definition at line 140 of file qgscomposertexttable.cpp.
Adds a row to the table.
Definition at line 101 of file qgscomposertexttable.cpp.
Fetches the contents used for the cells in the table.
Implements QgsComposerTableV2.
Definition at line 113 of file qgscomposertexttable.cpp.
Sets the contents of the text table.
Definition at line 107 of file qgscomposertexttable.cpp. | https://api.qgis.org/2.10/classQgsComposerTextTableV2.html | CC-MAIN-2020-34 | refinedweb | 113 | 63.25 |
im relatively new to C++, but have a general idea of the basics.
Anyway, what ive been trying to do is make a program that can do a basic form of conversation, by searching for certain keywords in input from the user, however, i am having trouble getting the hang of searching within input for certain keywords.
i have only just started by getting it to say hello, should a user enter a string containing the word hi.
The problem that occurs is that the first time, if you enter hi, it would reply, but from then on, if you say hi, it doesnt reply. However, it would reply if you have at least two characters before the word hi. does anyone know why this happens? and if so, how i can solve the problem?
Here is my program so far:
Code:#include <iostream> #include <fstream> #include <string> using namespace std; int main() { string talk; int search; while(1==1) { getline(cin, talk, '\n'); search = talk.find("hi",0); if (search >= 0) { cout<<"hello"; } cin.get(); cin.get(); } } | https://cboard.cprogramming.com/cplusplus-programming/67715-question-about-strings.html | CC-MAIN-2018-05 | refinedweb | 178 | 69.41 |
TreePanel scroll bar issues with long item texts
TreePanel scroll bar issues with long item texts
Hi!
I hope you guys can help me to figure out why this code is not working properly.
I am trying to render a TreePanel with some items which have long texts on it, so I would expect some scrollbars to appear if necessary, but they don't.
I am using FF 3.5.30729, GXT 2.1.1 and GWT 2.0.2.
Code:
public class MainEntryPoint implements EntryPoint { public void onModuleLoad() { ContentPanel panel = new ContentPanel(); panel.setHeading("Content Panel"); panel.setSize(200, 200); panel.setScrollMode(Scroll.AUTO); TreePanel tree = new TreePanel(new TreeStore()); tree.setDisplayProperty("text"); tree.setAutoHeight(true); tree.setAutoWidth(true); BaseTreeModel mLong = new BaseTreeModel(); mLong.set("text", "This is a very long item text which causes some issues with the scrollbars on ContentPanel/ TreePanel"); tree.getStore().add(mLong, false); for (int i = 0; i < 30; i++) { BaseTreeModel m = new BaseTreeModel(); m.set("text", "SomeText"); tree.getStore().add(m, false); } panel.add(tree); RootPanel.get().add(panel); } }
If I then use the vertical scrollbar to scroll down to the end of the tree, I can find the scrollbar, but if I use it another one appears, resulting in a weird behaviour.
Does anyone have the same trouble?
I can not find any scrollbar-methods on TreePanel to change that behaviour, too...
Also see these screenshots:! | http://www.sencha.com/forum/showthread.php?94299-TreePanel-scroll-bar-issues-with-long-item-texts | CC-MAIN-2014-52 | refinedweb | 234 | 60.92 |
Hey guys, I want to sum every k consecutive channels of a variable together. Assume that the input before summing is with the shape like NxCxWxH, and the output after the summation should be Nx(C/k)xWxH. I implemented the method using the code below. the function sum_up is called in the network’s forward function, but it seems to be extremely inefficient. Is there any better way to implement this in pytorch?
def sum_up(input, bottle_neck=2): output= [] for ch in xrange(0, input.data.shape[1], bottle_neck): output.append(torch.sum(input[:, ch:ch+bottle_neck, :, :], 1)) return torch.stack(output).permute(1, 0, 2, 3).cuda() | https://discuss.pytorch.org/t/how-to-sum-every-k-channels-for-a-cnn-feature-map/15795 | CC-MAIN-2019-30 | refinedweb | 109 | 53.07 |
In my first three articles on CodeProject, I explained the fundamentals of Windows Communication Foundation (WCF), including:
Last month, I started to write a few articles to explain LINQ, LINQ to SQL, Entity Framework, and LINQ to Entities. Following are the articles I wrote or plan to write for LINQ, LINQ to SQL, and LINQ to Entities:
After finishing these five articles, I will come back to write some more articles on WCF from my real work experience, which will be definitely helpful to your real world work if you are using WCF right now.
In the previous article, we learned a few new features of C# 3.0 for LINQ. In this article and the next, we will see how to use LINQ to interact with a SQL Server database, or in other words, how to use LINQ to SQL in C#.
In this article, we will cover the basic concepts and features of LINQ to SQL, which include:
In the next article, we will cover the advanced concepts and features of LINQ to SQL, such as Stored Procedure support, inheritance, simultaneous updating, and transaction processing.
LINQ to SQL is considered to be one of Microsoft's new ORM products. So before we start explaining LINQ to SQL, let us first understand what ORM is.
ORM stands for Object-Relational Mapping. Sometimes it is called O/RM, or O/R mapping. It is a programming technique that contains a set of classes that map relational database entities to objects in a specific programming language.
Initially, applications could call specified native database APIs to communicate with a database. For example, Oracle Pro*C is a set of APIs supplied by Oracle to query, insert, update, or delete records in an Oracle database from C applications. The Pro*C pre-compiler translates embedded SQL into calls to the Oracle runtime library (SQLLIB).
Then, ODBC (Open Database Connectivity) was developed to provide a vendor-neutral API, so that an application could access different databases through a single set of interfaces instead of being tied to one vendor's native library.
No matter which method is used to connect to a database, the data returned from a database has to be presented in some format in the application. For example, if an Order record is returned from the database, there has to be a variable to hold the Order number, and a set of variables to hold the Order details. Alternatively, the application may create a class for Orders and another class for Order details. When another application is developed, the same set of classes may have to be created again, or if it is designed well, they can be put into a library and re-used by various applications.
This is exactly where ORM fits in. With ORM, each database is represented by an ORM context object in the specific programming language, and database entities such as tables are represented by classes, with relationships between these classes. For example, the ORM may create an Order class to represent the Order table, and an OrderDetail class to represent the Order Details table. The Order class will contain a collection member to hold all of its details. The ORM is responsible for the mappings and the connections between these classes and the database. So, to the application, the database is now fully-represented by these classes. The application only needs to deal with these classes, instead of with the physical database. The application does not need to worry about how to connect to the database, how to construct the SQL statements, how to use the proper locking mechanism to ensure concurrency, or how to handle distributed transactions. These databases-related activities are handled by the ORM.
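As a sketch only — the class and property names below are illustrative, not generated by any particular ORM tool — the mapped entities might look like this in C#:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical ORM-mapped entity classes; names are illustrative only.
public class Order
{
    public Order() { Details = new List<OrderDetail>(); }

    public int OrderID { get; set; }
    public DateTime OrderDate { get; set; }

    // One order owns a collection of detail rows, mirroring the
    // Orders -> Order Details foreign key relationship.
    public List<OrderDetail> Details { get; set; }
}

public class OrderDetail
{
    public int OrderID { get; set; }
    public int ProductID { get; set; }
    public short Quantity { get; set; }
    public decimal UnitPrice { get; set; }
}

class OrmDemo
{
    static void Main()
    {
        var order = new Order { OrderID = 1, OrderDate = DateTime.Now };
        order.Details.Add(new OrderDetail { OrderID = 1, ProductID = 7, Quantity = 2, UnitPrice = 9.99m });
        Console.WriteLine(order.Details.Count); // 1
    }
}
```

The ORM populates such objects from the database and writes changes back, so the application code never has to touch SQL directly.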
The following diagram shows the three different ways of accessing a database from an application. There are some other mechanisms to access a database from an application, such as JDBC and ADO.NET. However, to keep the diagram simple, they have not been shown here.
LINQ to SQL is a component of the .NET Framework that provides a run-time infrastructure for managing relational data as objects. With LINQ to SQL, the data model of a relational database is mapped to an object model expressed in the programming language of the developer. When the application runs, LINQ to SQL translates the language-integrated queries in the object model into SQL and sends them to the database for execution. When the database returns the results, LINQ to SQL translates them back to objects that you can work with in your own programming language.
LINQ to SQL fully supports transactions, views, Stored Procedures, and user-defined functions. It also provides an easy way to integrate data validation and business logic rules into your data model, and supports single table inheritance in the object model.
LINQ to SQL is one of Microsoft's new ORM products to compete with many existing ORM products for the .NET platform on the market, such as the open source products NHibernate and NPersist, and the commercial products LLBLGen and WilsonORMapper. LINQ to SQL has many overlaps with other ORM products, but because it is designed and built specifically for .NET and SQL Server, it has many advantages over other ORM products. For example, it takes advantage of all the LINQ features and it fully supports SQL Server Stored Procedures. You get all the relationships (foreign keys) for all tables, and the fields of each table just become properties of its corresponding object. You even get the IntelliSense popup when you type in an entity (table) name, which lists all of its fields in the database. Also, all of the fields and the query results are strongly typed, which means you will get a compile-time error instead of a runtime error if you misspell something in the query statement or cast the query result to a wrong type. In addition, because it is part of the .NET Framework, you don't need to install and maintain any third party ORM product in your production and development environments.
Under the hood of LINQ to SQL, ADO.NET SqlClient adapters are used to communicate with real SQL Server databases. We will see how to capture the generated SQL statements at runtime later in this article.
Below is a diagram showing the usage of LINQ to SQL in a .NET application:
We will explore LINQ to SQL features in detail in this article and the following article.
The following table compares LINQ to SQL and LINQ to Entities feature by feature (LINQ to Entities accesses the database through a new data access provider, EntityClient):

Features                             LINQ to SQL    LINQ to Entities
Conceptual Data Model                No             Yes
Storage Schema                       No             Yes
Mapping Schema                       No             Yes
New Data Access Provider             No             Yes
Non-SQL Server Database Support      No             Yes
Direct Database Connection           Yes            No
Language Extensions Support          Yes            Yes
Stored Procedures                    Yes            Yes
Single-table Inheritance             Yes            Yes
Multiple-table Inheritance           No             Yes
Single Entity from Multiple Tables   No             Yes
Lazy Loading Support                 Yes            Yes
Now that we have learned some basic concepts of LINQ to SQL, next let’s start exploring LINQ to SQL with real examples.
First, we need to create a new project to test LINQ to SQL. We will reuse the solution we have created in the previous article (Introducing LINQ—Language Integrated Query). If you haven't read that article, you can just download the source file from that article, or create a new solution TestLINQ.
You will also need to have a SQL Server database with the sample Northwind database installed. You can just search for "Northwind sample database download", then download and install the sample database. If you need detailed instructions on how to download and install the sample database, you can refer to the section "Preparing the Database" in one of my previous articles, "Implementing a WCF Service with Entity Framework".
Now follow these steps to add a new application to the solution:
The next thing to do is to model the Northwind database. We will now drag and drop two tables and one view from the Northwind database to our project, so later on we can use them to demonstrate LINQ to SQL.
To start with, let’s add a new item to our project TestLINQToSQLApp. The new item added should be of type LINQ to SQL Classes, and named Northwind, like in the Add New Item dialog window shown below.
After you click the button Add, the following three files will be added to the project: Northwind.dbml, Northwind.dbml.layout, and Northwind.designer.cs. The first file holds the design interface for the database model, while the second one is the XML format of the model. Only one of them can remain open inside the Visual Studio IDE. The third one is the code-behind for the model which defines the DataContext of the model.
At this point, the Visual Studio LINQ to SQL designer should be open and empty, like the following diagram:
Now we need to connect to our Northwind sample database in order to drag and drop objects from the database.
The new connection Northwind.dbo should appear in the Server Explorer now. Next, we will drag and drop two tables and one view to the LINQ to SQL design surface.
The Northwind.dbml design surface on your screen should look like this:
If you open the file Northwind.Designer.cs, you will find the following classes generated for the project:
public partial class NorthwindDataContext : System.Data.Linq.DataContext
public partial class Product : INotifyPropertyChanging, INotifyPropertyChanged
public partial class Category : INotifyPropertyChanging, INotifyPropertyChanged
public partial class Current_Product_List
Among the above four classes, the DataContext class is the main conduit by which we'll query entities from the database as well as apply changes back to it. It contains various flavors of types and constructors, partial validation methods, and property members for all the included tables. It inherits from the System.Data.Linq.DataContext class which represents the main entry point for the LINQ to SQL framework.
The next two classes are for those two tables we are interested in. They all implement the INotifyPropertyChanging and INotifyPropertyChanged interfaces. These two interfaces define all the related property changing and property changed event methods, which we can extend to validate properties before and after the change.
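For example, because the generated classes are partial classes, we can add validation in our own partial class file. The sketch below mimics the pattern in miniature so that it stands alone; in a real LINQ to SQL project, the designer-generated Product class already declares the On<Property>Changing partial methods, and we would only write the second part:

```csharp
using System;

// Designer-generated half (shown here in miniature so the sketch compiles
// on its own): declares the partial method and calls it before a change.
public partial class Product
{
    private decimal? _unitPrice;
    public decimal? UnitPrice
    {
        get { return _unitPrice; }
        set
        {
            OnUnitPriceChanging(value); // validation hook runs first
            _unitPrice = value;
        }
    }
    partial void OnUnitPriceChanging(decimal? value);
}

// Our half: a business-rule validation that runs before every change.
public partial class Product
{
    partial void OnUnitPriceChanging(decimal? value)
    {
        if (value.HasValue && value.Value < 0)
            throw new ArgumentOutOfRangeException("value",
                "Unit price cannot be negative.");
    }
}

class ProductDemo
{
    static void Main()
    {
        var p = new Product();
        p.UnitPrice = 20.0m;            // passes validation
        Console.WriteLine(p.UnitPrice); // 20.0
    }
}
```

If the validation throws, the property assignment is abandoned, so invalid values never reach the entity and therefore never reach the database.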
The last class is for the view. It is a simple class with only two property members. Since we are not going to update the database through this view, it doesn’t define any property changing or changed event method.
Now that we have the entity classes created, we will use them to interact with the database. We will first work with the Products table to query and update records, as well as to insert and delete records.
First, we will query the database to get some products.
To query a database using LINQ to SQL, we first need to construct a DataContext object, like this:
NorthwindDataContext db = new NorthwindDataContext();
Then we can use this LINQ query syntax to retrieve records from the database:
IEnumerable<Product> beverages = from p in db.Products
where p.Category.CategoryName == "Beverages"
orderby p.ProductName
select p;
The preceding code will retrieve all products in the Beverages category sorted by product name.
We can update any of the products that we have just retrieved from the database, like this:
// update one product
Product bev1 = beverages.ElementAtOrDefault(10);
if (bev1 != null)
{
Console.WriteLine("The price of {0} is {1}. Update to 20.0",
bev1.ProductName, bev1.UnitPrice);
bev1.UnitPrice = (decimal)20.00;
}
// submit the change to database
db.SubmitChanges();
We used ElementAtOrDefault, not the ElementAt method, just in case there is no product at element 10. In the sample database, there are 12 beverage products, and the 11th (element 10, counting from index 0) is Steeleye Stout, whose unit price is 18.00. We change its price to 20.00, and call db.SubmitChanges() to update the record in the database. After you run the program, if you query the product with ProductID 35, you will find its price is now 20.00.
We can also create a new product, then insert this new product into the database, like in the following code:
Product newProduct = new Product {ProductName="new test product" };
db.Products.InsertOnSubmit(newProduct);
db.SubmitChanges();
To delete a product, we first need to retrieve it from the database, then just call the DeleteOnSubmit method, like in the following code:
// delete a product
Product delProduct = (from p in db.Products
where p.ProductName == "new test product"
select p).FirstOrDefault();
if(delProduct != null)
db.Products.DeleteOnSubmit(delProduct);
db.SubmitChanges();
The file Program.cs so far is as follows. Note that we declared db as a class member, and added a method to contain all the test cases for the table operations. We will add more methods to test other LINQ to SQL functionalities.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data.Linq;
namespace TestLINQToSQLApp
{
class Program
{
// create data context
static NorthwindDataContext db = new NorthwindDataContext();
static void Main(string[] args)
{
// CRUD operations on tables
TestTables();
Console.ReadLine();
}
static void TestTables()
{
// retrieve all Beverages
IEnumerable<Product> beverages = from p in db.Products
where p.Category.CategoryName == "Beverages"
orderby p.ProductName
select p;
Console.WriteLine("There are {0} Beverages", beverages.Count());
// update one product
Product bev1 = beverages.ElementAtOrDefault(10);
if (bev1 != null)
{
Console.WriteLine("The price of {0} is {1}. Update to 20.0",
bev1.ProductName, bev1.UnitPrice);
bev1.UnitPrice = (decimal)20.0;
}
// submit the change to database
db.SubmitChanges();
// insert a product
Product newProduct = new Product { ProductName = "new test product" };
db.Products.InsertOnSubmit(newProduct);
db.SubmitChanges();
Product newProduct2 = (from p in db.Products
where p.ProductName == "new test product"
select p).SingleOrDefault();
if (newProduct2 != null)
{
Console.WriteLine("new product inserted with product ID {0}",
newProduct2.ProductID);
}
// delete a product
Product delProduct = (from p in db.Products
where p.ProductName == "new test product"
select p).FirstOrDefault();
if (delProduct != null)
{
db.Products.DeleteOnSubmit(delProduct);
}
db.SubmitChanges();
}
}
}
If you run the program, the output will be:
One important thing to remember when working with LINQ to SQL is the deferred execution of LINQ. Query operators that return a singleton value, such as Average, Sum, and Count, execute immediately. Query operators that return a sequence, however, do not consume the target data until the query object is enumerated. This is known as deferred execution.

In the case of methods that extend IEnumerable<T>, the query is executed locally, in memory, when the sequence is enumerated.

In contrast, methods that extend IQueryable<T> do not implement any querying behavior, but build an expression tree that represents the query to be performed. The query processing is handled by the source IQueryable<T> object.
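The distinction can be sketched with an in-memory list, using AsQueryable to stand in for a real provider (against db.Products, the IQueryable version is what gets translated to SQL):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class DeferredDemo
{
    static void Main()
    {
        var prices = new List<decimal> { 5m, 12m, 30m };

        // Extends IQueryable<T>: builds an expression tree; the provider
        // decides how to execute it (LINQ to SQL would translate it to SQL).
        IQueryable<decimal> viaQueryable = prices.AsQueryable().Where(p => p > 10);
        Console.WriteLine(viaQueryable.Expression); // the query as an expression tree

        // Extends IEnumerable<T>: the filter runs in memory when enumerated.
        IEnumerable<decimal> viaEnumerable = prices.Where(p => p > 10);

        Console.WriteLine(viaQueryable.Count());  // 2
        Console.WriteLine(viaEnumerable.Count()); // 2
    }
}
```

Both paths produce the same results here; the difference is *where* the work happens, which matters once the source is a database rather than a list.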
There are two ways to see when the query is executed. The first is to open SQL Server Profiler (All Programs\Microsoft SQL Server 2005 (or 2008)\Performance Tools\SQL Server Profiler), start a new trace against the Northwind database engine, and then debug the program. For example, when the following statement is executed, there is nothing in the profiler:
IEnumerable<Product> beverages = from p in db.Products
where p.Category.CategoryName == "Beverages"
orderby p.ProductName
select p;
However, when the following statement is being executed, from the profiler, you will see a query is executed in the database:
Console.WriteLine("There are {0} Beverages", beverages.Count());
The query executed in the database is like this:

exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
FROM [dbo].[Products] AS [t0]
LEFT OUTER JOIN [dbo].[Categories] AS [t1] ON [t1].[CategoryID] = [t0].[CategoryID]
WHERE [t1].[CategoryName] = @p0
ORDER BY [t0].[ProductName]',N'@p0 nvarchar(9)',@p0=N'Beverages'
The profiler window should be like this diagram:
From the profiler, we know that under the hood, LINQ actually calls sp_executesql, and that it uses a left outer join to get the categories of the products.
Another way to trace the execution time of a LINQ statement is using logs. The DataContext class provides a method to log every SQL statement it executes. To see the logs, we can first add this statement to the program in the beginning, right after Main:
db.Log = Console.Out;
Then we can add this statement right after the variable beverages is defined, but before its Count is referenced:
Console.WriteLine("After query syntax is defined, before it is referenced.");
So the first few lines of statements are now like this:
static void Main(string[] args)
{
// log database query statements to standard output
db.Log = Console.Out;
// CRUD operations on tables
TestTables();
Console.ReadLine();
}
static void TestTables()
{
// retrieve all Beverages
IEnumerable<Product> beverages = from p in db.Products
where p.Category.CategoryName == "Beverages"
orderby p.ProductName
select p;
Console.WriteLine("After query syntax beverages is defined, " +
"before it is referenced.");
Console.WriteLine("There are {0} Beverages", beverages.Count());
// rest of the file
Now if you run the program, the output will be like this:
From the logs, we see the query is not executed when the query syntax is defined. Instead, it is executed when beverages.Count() is being called.
But if the query expression will return a singleton value, the query will be executed immediately while it is defined. For example, we can add this statement to get the average price of all products:
decimal? averagePrice = (from p in db.Products
select p.UnitPrice).Average();
Console.WriteLine("After query syntax averagePrice is defined, before it is referenced.");
Console.WriteLine("The average price is {0}", averagePrice);
The output is like this:
From this output, we know the query is executed at the same time the query syntax is defined.
However, just because a query is using one of those singleton methods like Sum, Average, or Count, it doesn’t mean the query will be executed when it is defined. If the query result is a sequence, the execution will still be deferred. Following is an example of this kind of query:
// deferred execution2
var cheapestProductsByCategory =
from p in db.Products
group p by p.CategoryID into g
select new
{
CategoryID = g.Key,
CheapestProduct =
(from p2 in g
where p2.UnitPrice == g.Min(p3 => p3.UnitPrice)
select p2).FirstOrDefault()
};
Console.WriteLine("Cheapest products by category:");
foreach (var p in cheapestProductsByCategory)
{
Console.WriteLine("categery {0}: product name: {1} price: {2}",
p.CategoryID, p.CheapestProduct.ProductName, p.CheapestProduct.UnitPrice);
}
If you run the above query, you will see it is executed when the result is being printed, not when the query is being defined. Part of the result is like this:
From this output, you can see that when the result is being printed, it first goes to the database to get the minimum price for each category; then, for each category, it goes to the database again to get the first product with that price. In a real project, you probably don't want to write such a complex query in your application code, but would put it in a Stored Procedure instead.
In one of the above examples, we retrieved the category name of a product by this expression:
p.Category.CategoryName == "Beverages"
Even though there is no such field called category name in the Products table, we can still get the category name of a product because there is an association between the Products and Category table. On the Northwind.dbml design surface, click on the line between the Products and Categories tables and you will see all the properties of the association. Note, its participating properties are Category.CategoryID -> Product.CategoryID, meaning category ID is the key field to link these two tables.
Because of this association, we can retrieve the category for each product, and on the other hand, we can also retrieve the products for each category.
However, even with the association, the associated data is not loaded when the query is executed. For example, if we retrieve all categories like this:
var categories = from c in db.Categories select c;
And later on we need to get the products for each category, the database has to be queried again. This diagram shows the execution result of the query:
From this diagram, we know that LINQ first goes to the database to query all categories, then for each category, when we need to get the total count of products, it goes to the database again to query all the products for that category.
This is because by default, lazy loading is set to true, meaning all associated data (children) are deferred loaded until needed.
To change this behavior, we can use the LoadWith method to tell the DataContext to automatically load the specified children in the initial query, like this:
// eager loading products of categories
DataLoadOptions dlo2 = new DataLoadOptions();
dlo2.LoadWith<Category>(c => c.Products);
// create another data context, because we can't change LoadOptions of db
// once a query has been executed against it
NorthwindDataContext db2 = new NorthwindDataContext();
db2.Log = Console.Out;
db2.LoadOptions = dlo2;
var categories2 = from c in db2.Categories select c;
foreach (var category2 in categories2)
{
Console.WriteLine("There are {0} products in category {1}",
category2.Products.Count(), category2.CategoryName);
}
db2.Dispose();
Note: DataLoadOptions is in the namespace System.Data.Linq, so you have to add a using statement to the program:
using System.Data.Linq;
Also, we have to create a new DataContext instance for this test, because we have already run some queries against the original db DataContext, and it is no longer possible to change its LoadOptions.
Now after the category is loaded, all its children (products) will be loaded too. This can be proved from this diagram:
As you can see from this diagram, all products for all categories are loaded in the first query.
While LoadWith is used to eager load all children, AssociateWith can be used to filter which children to load with. For example, if we only want to load products for categories 1 and 2, we can write this query:
// eager loading only certain children
DataLoadOptions dlo3 = new DataLoadOptions();
dlo3.AssociateWith<Category>(
c => c.Products.Where(p => p.CategoryID == 1 || p.CategoryID == 2));
// create another data context, because we can't change LoadOptions of db
// once query has been executed against it
NorthwindDataContext db3 = new NorthwindDataContext();
db3.LoadOptions = dlo3;
db3.Log = Console.Out;
var categories3 = from c in db3.Categories select c;
foreach (var category3 in categories3)
{
Console.WriteLine("There are {0} products in category {1}",
category3.Products.Count(), category3.CategoryName);
}
db3.Dispose();
Now if we query all categories and print out the products count for each category, we will find that only the first two categories contain products, all other categories have no product at all, like in this diagram:
However, from the output above, you can see it is lazy loading. If you want eager loading products with some filters, you can combine LoadWith and AssociateWith, like in the following code:
DataLoadOptions dlo4 = new DataLoadOptions();
dlo4.LoadWith<Category>(c => c.Products);
dlo4.AssociateWith<Category>(c => c.Products.Where(
p => p.CategoryID == 1 || p.CategoryID == 2));
// create another data context, because we can't change LoadOptions of db
// once a query has been executed
NorthwindDataContext db4 = new NorthwindDataContext();
db4.Log = Console.Out;
db4.LoadOptions = dlo4;
var categories4 = from c in db4.Categories select c;
foreach (var category4 in categories4)
{
Console.WriteLine("There are {0} products in category {1}",
category4.Products.Count(), category4.CategoryName);
}
db4.Dispose();
The output is like this diagram:
Note that for each field of an entity, you can also set its Delay Loaded property to change its loading behavior. This is different from the children lazy/eager loading, as it only affects one property of that particular entity.
While associations are kinds of joins, in LINQ, we can also explicitly join two tables using the keyword Join, like in the following code:
var categoryProducts =
from c in db.Categories
join p in db.Products on c.CategoryID equals p.CategoryID into products
select new {c.CategoryName, productCount = products.Count()};
foreach (var cp in categoryProducts)
{
Console.WriteLine("There are {0} products in category {1}",
cp.CategoryName, cp.productCount);
}
It is not so useful in the above example because the tables Products and Categories are associated with a foreign key relationship. When there is no foreign key association between two tables, this will be particularly useful.
From the output, we can see only one query is executed to get the results:
Besides joining two tables, you can also join three or more tables, join self, create left /right outer join, or join using composite keys.
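For instance, a left outer join can be written with join ... into plus DefaultIfEmpty. The sketch below runs against in-memory lists so that it is self-contained; the same query shape works against the DataContext tables (categories with no products then appear paired with a null product):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class LeftJoinDemo
{
    class Category { public int CategoryID; public string CategoryName; }
    class Product { public int CategoryID; public string ProductName; }

    static void Main()
    {
        var categories = new List<Category>
        {
            new Category { CategoryID = 1, CategoryName = "Beverages" },
            new Category { CategoryID = 2, CategoryName = "Empty Category" },
        };
        var products = new List<Product>
        {
            new Product { CategoryID = 1, ProductName = "Chai" },
        };

        // Left outer join: every category appears, even with no products.
        var pairs =
            from c in categories
            join p in products on c.CategoryID equals p.CategoryID into g
            from p in g.DefaultIfEmpty()
            select new
            {
                c.CategoryName,
                ProductName = p == null ? "(none)" : p.ProductName
            };

        foreach (var pair in pairs)
            Console.WriteLine("{0}: {1}", pair.CategoryName, pair.ProductName);
        // Beverages: Chai
        // Empty Category: (none)
    }
}
```

With an inner join, "Empty Category" would disappear from the results; DefaultIfEmpty is what keeps the left side complete.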
Querying with a view is the same as with a table. For example, you can call the view “current product lists” like this:
var currentProducts = from p in db.Current_Product_Lists
select p;
foreach (var p in currentProducts)
{
Console.WriteLine("Product ID: {0} Product Name: {1}",
p.ProductID, p.ProductName);
}
This will get all the current products using the view.
In this article, we have learned what an ORM is, why we need an ORM, and what LINQ to SQL is. We also compared LINQ to SQL with LINQ to Entities and explored some basic features of LINQ to SQL.
The key points in this article include:
Note: this article is based on chapter 10 of my old book "WCF Multi-tier Services Development with LINQ" (ISBN 1847196624). Since LINQ to SQL is now not preferred by Microsoft, this book has been upgraded to using LINQ to Entities in my new book "WCF 4.0 Multi-tier Services Development with LINQ to Entities" (ISBN 1849681147). Both books are hands-on guides to learn how to build SOA applications on the Microsoft platform, with the old one using WCF and LINQ to SQL in Visual Studio 2008 and the new one using WCF and LINQ to Entities in Visual Studio 2010.
With either book, you can learn how to master WCF and LINQ to SQL/LINQ to Entities concepts by completing practical examples and applying them to your real-world assignments. They are among the first few books to combine WCF and LINQ to SQL/LINQ to Entities in a multi-tier real-world WCF Service. They are ideal for beginners who want to learn how to build scalable, powerful, easy-to-maintain WCF Services. Both books are rich with example code, clear explanations, interesting examples, and practical advice. They are truly hands-on books for C++ and C# developers.
You don't need to have any experience in WCF or LINQ to SQL/LINQ to Entities to read either book. Detailed instructions and precise screenshots will guide you through the whole process of exploring the new worlds of WCF and LINQ to SQL/LINQ to Entities. These two books are distinguished from other WCF and LINQ to SQL/LINQ to Entities books in that they focus on how to do it, not why to do it in a particular way, so you won't be overwhelmed by tons of information about WCF and LINQ to SQL/LINQ to Entities. Once you have finished one of the books, you will be proud that you have been working with WCF and LINQ to SQL/LINQ to Entities in the most straightforward way.
You can buy either book from Amazon (search for "WCF and LINQ"), or from the publisher's website.
This set of Python Multiple Choice Questions & Answers (MCQs) focuses on “Core Data Types”.
1. Which of these in not a core datatype?
a) Lists
b) Dictionary
c) Tuples
d) Class
View Answer
Explanation: Class is a user defined datatype.
2. Given a function that does not return any value, What value is thrown by default when executed in shell.
a) int
b) bool
c) void
d) None
View Answer
Explanation: Python shell throws a NoneType object back.
3. Following set of commands are executed in shell, what will be the output?
>>>>>str[:2]
>>>
a) he
b) lo
c) olleh
d) hello
View Answer
Explanation: We are printing only the 1st two bytes of string and hence the answer is “he”.
4. Which of the following will run without errors ?
a) round(45.8)
b) round(6352.898,2,5)
c) round()
d) round(7463.123,2,1)
View Answer
Explanation: Execute help(round) in the shell to get details of the parameters that are passed into the round function.
5. What is the return type of function id ?
a) int
b) float
c) bool
d) dict
View Answer
Explanation: Execute help(id) to find out details in python shell.id returns a integer value that is unique.
6.
View Answer
Explanation: // is integer operation in python 3.0 and int(..) is a type cast operator.
7. What error occurs when you execute?
apple = mango
a) SyntaxError
b) NameError
c) ValueError
d) TypeError
View Answer
Explanation: Mango is not defined hence name error.
8. Carefully observe the code and give the answer.
def example(a):
a = a + '2'
a = a*2
return a
>>>example("hello")
a) indentation Error
b) cannot perform mathematical operation on strings
c) hello2
d) hello2hello2
View Answer
Explanation: Python codes have to be indented properly.
9. What dataype is the object below ?
L = [1, 23, ‘hello’, 1].
a) list
b) dictionary
c) array
d) tuple
View Answer
Explanation: List datatype can store any values within it.
10. In order to store values in terms of key and value we use what core datatype.
a) list
b) tuple
c) class
d) dictionary
View Answer
Explanation: Dictionary stores values in terms of keys and values.
11. Which of the following results in a SyntaxError ?
a) ‘”Once upon a time…”, she said.’
b) “He said, ‘Yes!'”
c) ‘3\’
d) ”’That’s okay”’
View Answer
Explanation: Carefully look at the colons.
12. The following is displayed by a print function call:
tom
dick
harry
Select all of the function calls that result in this output
a) print(”’tom
\ndick
\nharry”’)
b) print(”’tomdickharry”’)
c) print(‘tom\ndick\nharry’)
d) print(‘tom
dick
harry’)
View Answer
Explanation: The \n adds a new line.
13. What is the average value of the code that is executed below ?
>>>grade1 = 80
>>>grade2 = 90
>>>average = (grade1 + grade2) / 2
a) 85
b) 85.1
c) 95
d) 95.1
View Answer
Explanation: Cause a decimal value to appear as output.
14. Select all options that print
hello-how-are-you
a) print(‘hello’, ‘how’, ‘are’, ‘you’)
b) print(‘hello’, ‘how’, ‘are’, ‘you’ + ‘-‘ * 4)
c) print(‘hello-‘ + ‘how-are-you’)
d) print(‘hello’ + ‘-‘ + ‘how’ + ‘-‘ + ‘are’ + ‘you’)
View Answer
Explanation: Execute in the shell.
15. What is the return value of trunc() ?
a) int
b) bool
c) float
d) None
View Answer
Explanation: Executle help(math.trunc) to get details.
Sanfoundry Global Education & Learning Series – Python.
To practice all areas of Python, here is complete set of 1000+ Multiple Choice Questions and Answers. | http://www.sanfoundry.com/python-mcqs-core-datatypes/ | CC-MAIN-2017-43 | refinedweb | 585 | 66.44 |
Here’s the stdlib implementation of zip:
public func zip<Sequence1 : Sequence, Sequence2 : Sequence>( _ sequence1: Sequence1, _ sequence2: Sequence2 ) -> Zip2Sequence<Sequence1, Sequence2> { return Zip2Sequence(_sequence1: sequence1, _sequence2: sequence2) }
So once again, I’m trying to figure out how I would fold this rationally. I’m not a huge fan of the parameter list on a single line like this, especially when there are both external (_) and internal labels involved, generic parameter types, and a complex return type.
I was thinking of something more like this:
public func zip<Sequence1, Sequence2> ( _ sequence1: Sequence1, _ sequence2: Sequence2 ) -> Zip2Sequence<Sequence1, Sequence2> where Sequence1: Sequence, Sequence2: Sequence { return Zip2Sequence(_sequence1: sequence1, _sequence2: sequence2) }
This update moves each parameter to its own line and splits off both the return type and the constraint clause. What do you think? How would you personally fold it? Let me know and thanks.
6 Comments
Probably like that but the opening curly on the where line.
I’d put the closing parentheses on the end of the previous line, but not the curly bracket. I would never want to hide the boundary between a function’s signature and the start of its statements.
Not sure if you were hoping for a response in comment form 🙂
A one-line function containing 18 instances of the word “Sequence”, I’m wondering if that says something bad about Swift. Wasn’t one of the goals of the language and its style conventions to somewhat minimize repetitions of type names? I guess the generic types could be named to something like “S1” & “S2”, parameters similarly “s1” & “s2”, reducing 18 to 5, but is the end result an improvement? Not sure I have a point.
I use exactly the same style.
I’m with smallduck on this one. People talk about angle bracket blindness, but I think the leakage of signatures like
_builtin2048piURLThunk: _underscoreInternalBoxWrapperImplinto the standard library’s public API is really ugly.
Those signatures really hit home when Xcode suggests all 20 inherited-protocol implementations of nearly the same function with slightly different semantics and/or naming conventions, and they all include their generically-typed arguments in the autocompletion menu instead of the actual dynamic type (e.g.
func reverse(_arg1: AnySlice)instead of
func reverse(colors: Slice)).
Don’t get me wrong, I’ve really loved learning, working, and playing with Swift (barring my everlasting 3.0 migration problems ????), but there are a few design decisions that I just don’t understand. But then again, my path to Swift was mostly web and server-oriented (HTML ~> CSS ~> JavaScript ~> PHP ~> C# ~> Python ~> Ruby ~> Objective-C ~> Swift), so maybe I just don’t understand the complexities of “real” programming languages (I mean, I’ll admit it: reading a few lines of C++ makes my head swirl).
Also, am I alone in wondering why Swift 3 introduced the global type(of:) function to access the dynamic type of a value? In Objective-C, accessing an object’s dynamic type (or class, I guess) was short and sweet: `self.class`. Boom! You now have access to all your static class methods, and your class names can easily be changed without too much refactoring. In Swift 2.x, it was pretty easy, too: `self.dynamicType`. Boom! You’ve got your static properties, methods, et al. Now, in Swift 3.0, instead of naturally accessing the dynamic type with
.notation, I have to pull out my JavaScript hat, and awkwardly spell out
type((with no help from Xcode’s autocompletion), followed by a pointless
of:argument label, then my variable
self.children, and finally, the closing parenthesis
). If I wrote my own programming language, I’d definitely opt for a built-in keyword at least, instead of polluting the Swift namespace with a free-floating function.
As I said, I don’t know the first thing about building a lexer, parser, or compiler, but I can’t help but think that Apple has a few brilliant people working for them (as well as an awesome open-source community) that could probably fix some of these oddities. Oh, well … | https://ericasadun.com/2016/09/22/style-how-would-you-fold-this/ | CC-MAIN-2020-40 | refinedweb | 683 | 61.06 |
Does anyone think this would be a decent system for a text-based RPG? I'm just starting to learn C++ so any constructive criticism is welcome.
Please note that I am planning to add a variable to hold the player's name, and to randomly generate the player's stats.Please note that I am planning to add a variable to hold the player's name, and to randomly generate the player's stats.Code://Program: Role-Playing Game //Programmer: MartinThatcher //Programming Date: 9/10/09 #include <iostream> #include <string> using namespace std; int main() { int choice = 0, workHours = 0, upgradeChoice = 0, trainingChoice = 0; int attack = 10, defense = 10, gold = 500, pay = 0; int combatExperience = 0, defenseExperience = 0; int swordLevel = 1, combatTraining = 0; int shieldLevel = 1, defenseTraining = 0; int swordCost = 0, shieldCost = 0; int combatCost = 0, defenseCost = 0; int combatPtsNeeded = 0; int defensePtsNeeded = 0; cout << "\t\t\tWelcome to Role-Playing Game!"; while(true) { cout << "\n\nAttack: " << attack << "\nDefense: " << defense << "\nGold: " << gold << "\n\n"; cout << "\n\nWhat would you like to do?\n\n"; cout << "1 - Work\n2 - Shop\n3 - Train\n4 - Quit\n\n"; cin >> choice; switch (choice) { case 1: cout << "\n\nHow long would you like to work? 
(eight-hour maximum) "; cin >> workHours; pay = workHours * 100; if (workHours > 8) { workHours = 8; } if (workHours >= 1) { cout << "\n\nYou worked for " << workHours << " hour(s) and earned " << pay << " gold piece(s)!\n\n"; gold += pay; } else { cout << "You did not work.\n\n"; } break; case 2: swordCost = swordLevel * 100; shieldCost = shieldLevel * 100; cout << "Welcome to the Shop!\n"; cout << "What would you like to upgrade?\n\n1 - Sword\n2 - Shield\n\n"; cin >> upgradeChoice; if (upgradeChoice == 1 && gold >= swordCost) { swordLevel++; cout << "You paid the blacksmith " << swordCost << " gold pieces.\nYour sword has been upgraded to level " << swordLevel << "!\n\n"; gold -= swordCost; } else if (upgradeChoice == 2 && gold >= shieldCost) { shieldLevel++; cout << "You paid the blacksmith " << shieldCost << " gold pieces.\nYour shield has been upgraded to level " << shieldLevel << "!\n\n"; gold -= shieldCost; } else if (upgradeChoice != 1 && upgradeChoice != 2 || gold < swordCost || gold < shieldCost) { cout << "You have entered an illegal value, or you do not have enough money.\n\n"; } break; case 3: combatCost = attack * 10; defenseCost = defense * 10; combatTraining = swordLevel * 5; defenseTraining = shieldLevel * 5; cout << "Welcome to the dojo!\nHow would you like to train?\n\n"; cout << "1 - Combat\n2 - Defense\n\n"; cin >> trainingChoice; if (trainingChoice == 1 && gold >= combatCost) { combatExperience += combatTraining; combatPtsNeeded = attack * 10 - combatExperience; cout << "You pay your trainer " << combatCost << " gold pieces and train with him.\nYou have earned " << combatTraining << " experience points.\n"; cout << "You still need " << combatPtsNeeded << " point(s) in order to level up.\n\n"; gold -= combatCost; if (combatPtsNeeded <= 0) { cout << "Congratulations! 
You have gained an attack level!"; combatExperience = 0; ++attack; } } else if (trainingChoice == 2 && gold >= defenseCost) { defenseExperience += defenseTraining; defensePtsNeeded = defense * 10 - defenseExperience; cout << "You pay your trainer " << defenseCost << " gold pieces and train with him.\nYou have earned " << defenseTraining << " experience points.\n"; cout << "You still need " << defensePtsNeeded << " point(s) in order to level up.\n\n"; gold -= defenseCost; if (defensePtsNeeded <= 0) { cout << "Congratulations! You have gained a defense level!"; defenseExperience = 0; ++defense; } } else if (trainingChoice < 1 || trainingChoice > 2 || gold < combatCost || gold < defenseCost) { cout << "You have entered an illegal value, or you do not have enough money.\n\n"; } break; case 4: cout << "\nGoodbye!\n\n"; system("pause"); return 0; break; default: cout << "You have entered an illegal value.\n\n"; system("pause"); return 0; } } }
I apologize if this isn't in the right section. It seemed like a sensible section to place it in, though. | http://cboard.cprogramming.com/game-programming/119463-text-based-rpg.html | CC-MAIN-2014-23 | refinedweb | 584 | 56.15 |
public class ConcurrentReaderHashMap extends AbstractMap
A hash table. ]
The default initial number of table slots for this table (32). Used when not otherwise specified in constructor.
The default load factor for this table (1.0). Used when not otherwise specified in constructor.
Lock used only for its memory effects.
The total number of mappings in the hash table.
field written to only to guarantee lock ordering.
The load factor for the hash table.
The hash table data.
The table is rehashed when its size exceeds this threshold. (The value of this field is always (int)(capacity * loadFactor).)
Constructs a new, empty map with the specified initial capacity and the specified load factor.
initialCapacity- the initial capacity The actual initial capacity is rounded to the nearest power of two.
loadFactor- the load factor of the ConcurrentReaderHashMap
Constructs a new, empty map with the specified initial capacity and default load factor.
initialCapacity- the initial capacity of the ConcurrentReaderHashMap.
Constructs a new, empty map with a default initial capacity and load factor.
Constructs a new map with the same mappings as the given map. The map is created with a capacity of twice the number of mappings in the given map or 16 (whichever is greater), and a default load factor.
Removes all mappings from this map.
Returns a shallow copy of this ConcurrentReaderHashMap instance: the keys and values themselves are not cloned.
Tests if some key maps into the specified value in this table.
This operation is more expensive than the
containsKey
method.
Note that this method is identical in functionality to containsValue, (which is part of the Map interface in the collections framework).
null.
value- a value to search for.
trueif and only if some key maps to the
valueargument in this table as determined by the equals method;
falseotherwise.
Tests if the specified object is a key in this table.
null.
key- possible key.
trueif and only if the specified object is a key in this table, as determined by the equals method;
falseotherwise.
Returns true if this map maps one or more keys to the specified value. Note: This method requires a full internal traversal of the hash table, and so is much slower than method containsKey.
null.
value- value whose presence in this map is to be tested.
Returns an enumeration of the values in this table. Use the Enumeration methods on the returned object to fetch the elements sequentially.
Returns a collection view of the mappings contained in this map. Each element in the returned collection is a Map.Entry. The collection is backed by the map, so changes to the map are reflected in the collection, and vice-versa. The collection supports element removal, which removes the corresponding mapping from the map, via the Iterator.remove, Collection.remove, removeAll, retainAll, and clear operations. It does not support the add or addAll operations.
Check for equality of non-null references x and y.
Helper method for entrySet.remove
Returns the value to which the specified key is mapped in this table.
null.
key- a key in the table.
nullif the key is not mapped to any value in this table.
Get ref to table; the reference and the cells it accesses will be at least as fresh as from last use of barrierLock
Returns true if this map contains no key-value mappings.
Returns a set view of the keys contained in this map. The set is backed by the map, so changes to the map are reflected in the set, and vice-versa. The set supports element removal, which removes the corresponding mapping from this map, via the Iterator.remove, Set.remove, removeAll, retainAll, and clear operations. It does not support the add or addAll operations.
Returns an enumeration of the keys in this table.
Maps the specified
key to the specified
value in this table. Neither the key nor the
value can be
null.
The value can be retrieved by calling the
get method
with a key that is equal to the original key.
null.
key- the table key.
value- the value.
nullif it did not have one.
Copies all of the mappings from the specified map to this one. These mappings replace any mappings that this map had for any of the keys currently in the specified Map.
t- Mappings to be stored in this map.
Force a memory synchronization that will cause all readers to see table. Call only when already holding main sync lock.
Rehashes the contents of this map into a new table with a larger capacity. This method is called automatically when the number of keys in this map exceeds its capacity and load factor.
Removes the key (and its corresponding value) from this table. This method does nothing if the key is not in the table.
null.
key- the key that needs to be removed.
nullif the key did not have a mapping.
Returns the number of key-value mappings in this map.
Continuation of put(), called only when sync lock is held and interference has been detected.
Continuation of remove(), called only when sync lock is held and interference has been detected.
Returns a collection view of the values contained in this map. The collection is backed by the map, so changes to the map are reflected in the collection, and vice-versa. The collection supports element removal, which removes the corresponding mapping from this map, via the Iterator.remove, Collection.remove, removeAll, retainAll, and clear operations. It does not support the add or addAll operations. | http://docs.groovy-lang.org/latest/html/gapi/org/codehaus/groovy/runtime/metaclass/ConcurrentReaderHashMap.html | CC-MAIN-2015-22 | refinedweb | 918 | 68.06 |
GitHub permissions get complicated. There are nuances to understanding what an owner can do vs what a member can do. So as your organization grows, more restrictions may be placed on what members can do. Here’s a problem we ran into at my day job:
- We have a public GitHub presence at github.com/xyz.
- Members of the
xyzare not allowed to create repos by default.
- New repo requests were emailed to org owners and they are done manually. (Boo.)
This tutorial shows an easy to use way to request new repos be made for a GitHub org. It involves asking members to request new repos by filing issues in a specific repo (the example solution uses a GitHub Enterprise instance, but any GitHub repo would work). This allows for multiple owners to see the same queue, and gives you a record of how many requests have been made. (Plus, I just wanted to mess around with serverless, so this was a good excuse.)
Here’s how it looks at a high level:
- A user files a GitHub issue with details about a repo they want created.
- When an issue is approved, a payload is sent to a serverless action.
- The serverless action calls some Python code.
- Using the Python requests module, you call GitHub APIs.
By completing this tutorial, you will understand how to:
- Set up an action with IBM Cloud Functions.
- Trigger an action from IBM Cloud Functions with a webhook.
- Interact with the GitHub API.
Prerequisites
- A free tier IBM Cloud account.
- A GitHub account.
- A GitHub organization where you are the owner. You can create your own organization if you don’t have access to one.
Estimated time
Walking through this tutorial should take you about 45 minutes.
Steps
This tutorial is split into a few parts:
- Generate a GitHub personal access token.
- Create a serverless trigger and action.
- Set up a GitHub repo for webhooks.
- Run it!
- Take a deeper look at the code.
1. Generate a GitHub personal access token
Generate a personal access token so you can call GitHub APIs programmatically (from your Python code). This is extensively documented on GitHub, but in short, perform the following to generate a personal access token:
- From your GitHub profile settings, click on Developer Settings
- Select Personal access tokens.
- Click Generate new token.
- Ensure the repo option is selected.
- Click Generate token at the bottom.
A random string of characters will be generated. Copy and paste these somewhere safe, as you’ll need them in the next few steps. Repeat this step if you are using a GitHub Enterprise account in your setup.
2. Create a serverless trigger and action
To start creating a trigger or action, you need to log into IBM Cloud and select the Fuctions option from the Navigation Menu, or go to it directly.
Create a custom trigger
The first thing you need to do on IBM Cloud is to create a serverless trigger. By creating a trigger, you will have a URL to provide to your GitHub webhook. The GitHub webhook will trigger whenever an event (such as a new issue or new pull request) happens and send a payload (such as a JSON interpretation of the event) to the URL associated with your serverless trigger.
From the Functions overview page, select Create Trigger.
Choose the Custom Trigger option.
Give the new trigger a name and enter a description.
Once created, you need to look at the trigger endpoint. Click the eyeball icon to uncover the full URL with the API key and secret.
Again, save this URL somewhere since you will need it soon!
Create an action
Now, you will create some Python code (an Action) that will be executed when your trigger is, well, triggered!
From the Functions overview page, choose Create Action.
Give the new action a name, select the default package, and choose Python 3 as the runtime.
Once the editor pops up, copy and paste the code below into the online editor. I’ll go through the code after the tutorial steps.
After the action has been created, you need to associate it with the trigger that you created in the previous step. Go to the menu on the left and select the Connected Triggers option.
Click Add Trigger, select Custom Trigger, and find the existing one that you just created.
- Go back to the action that was created and find the Parameters menu. You need to add two environment variables,
GHE_TOKENand
GH_PUB_TOKEN, as parameters. Use the values that were generated from Step 1.
Click Add.
You’re almost done!
3. Set up a GitHub repo for webhooks
- Create a GitHub repo where users will file issues to request new repos.
Give users an issue template to follow. The one I used is below. It requests a repo name, description, license, and usernames to add as administrators.
## Essentials * name: repo_name * users: user_1, user_2 * description: this is for fun, ain't it grand! * license: apache-2.0
In the repo settings, go to the Hooks menu to set up a webhook that will call your serverless trigger.
Create a new webhook. Set the Payload URL to be the trigger endpoint from Step 2.
- Set the Content type to be
application/json.
Set the SSL verification to be enabled. The payload URL looks like the following:
For the Which events would you like to trigger this webhook? option, choose Let me select individual events.
When the list appears, choose Issue comments.
Now that your repo is set up to talk to your action, your action is associated with your trigger. Your trigger also has GitHub tokens to use, so you can finally test it all out!
4. Run it
Start by having someone file an issue. Below is an example. You can see the repo name and description, usernames to add, and license to use. It’s all there.
Leaving any comment would call your trigger, but in your serverless action, you specifically check for an
/approvecomment. So leave that message in the issue and look at the payload that is sent from GitHub to your trigger.
Here’s what the payload looks like. It was greatly trimmed for readability, but you can see the issue body, the issue comment, the person who made the comment, and what the issue number is. This entire blob is made available in the
paramsvariable of your serverless action.
{ "action": "created", "issue": { "number": 18, "title": "Repo for Studio Learning Path assets", "user": { "login": "Rich-Hagarty", }, "body": "## Essentials\r\n\r\n* name: watson-studio-learning-path-assets\r\n* users: rhagarty\r\n* description: repo to store all assets (such as notebooks, data, etc) for Watson Studio Learning Path tutorials\r\n\r\n## Tips\r\n\r\n* Repo names **CANNOT** have spaces.\r\n* User IDs must be from **public github**, if you're not sure, go to and login.\r\n*" }, "comment": { "body": "/approve" }, "sender": { "login": "stevemar" } }
Finally, part of the script in the serverless action posts a follow up comment (indicating that the repo was created) and posts a URL.
5. Take a deeper look at the code
The entire source code used is available on Gist. Let’s take a look at a few code snippets.
Checking the comment sender
Here’s a simple guard, hardcoded to two approvers, to ensure when a
/approve comment is left, it’s actually from someone who is trusted. (Not the prettiest, but it works.)
def main(params): if params['comment']['body'] == "/approve": sender = params['sender']['login'] if sender == 'stevemar' or sender == 'chrisfer': print("proceeding with repo create") else: return { 'message': 'approve comment made by unauthorized user' }
Parsing markdown
The issue body came in the JSON payload, but it was in raw markdown. I had to get a little clever here to extract the text for all the corner cases and ended up creating several unit tests to ensure edge cases were caught.
def _get_info_from_body(body): m = re.search(r'\* name:(.*)(\r\n|$)', body) repo_name = m.group(1).strip() if m else None repo_name = repo_name.strip() if repo_name else None m = re.search(r'\* users:(.*)(\r\n|$)', body) users = m.group(1).strip() if m else [] users = [x.strip() for x in users.split(',')] if users else [] m = re.search(r'\* description:(.*)(\r\n|$)', body) description = m.group(1).strip() if m else '' m = re.search(r'\* license:(.*)(\r\n|$)', body) license = m.group(1).strip() if m else 'apache-2.0' license = license.strip() if license else 'apache-2.0' return {'repo_name': repo_name, 'users': users, 'description': description, 'license': license}
Calling GitHub APIs
The GitHub APIs are very well documented. To call these, I used the python-requests library.
gh_token = params['GH_PUB_TOKEN'] # set up auth headers to call github APIs headers = {'Authorization': 'token %s' % gh_token} # create the repo -- url = '' + PUBLIC_ORG + '/repos' payload = { 'name': info['repo_name'], 'description': info['description'], 'license_template': info['license'], 'auto_init': 'true' } r = requests.post(url, headers=headers, data=json.dumps(payload))
Summary
I hope you enjoyed reading this tutorial as much as I enjoyed writing it. Now you know how to set up an action with IBM Cloud Functions, trigger that action with a webhook, and invoke GitHub APIs using Python code. I hope you can use this to benefit your own organizational workflow. | https://developer.ibm.com/tutorials/github-task-automation-with-serverless-actions/ | CC-MAIN-2021-25 | refinedweb | 1,546 | 66.54 |
Theming Ext JS
Contents
Theming defines the visual motif of your application. A theme is a set of visual aspects that can be easily switched without affecting the basic functionality of your application such as base color, font family, borders, backgrounds, and other CSS properties. Theming is differentiated from "styling" by the abilty to flip a switch and change the theme.
Ext JS 4.2 includes an overhaul of the theming system that makes it much easier to customize the look and feel of an application. Ext JS themes leverage SASS and Compass which enable the use of variables and mixins in your stylesheets. Almost all of the styles for Ext JS Components can now be customized, including colors, fonts, borders, and backgrounds, by simply changing SASS variables.
Ext JS includes a default themes that you can use as a base to create your own Custom Theme package that can then be used in an Ext JS application. This tutorial shows you how to create a custom theme that is sharable between applications, how to use that theme in an application, and how to create application-specific styling that is not shared.
Requirements
Sencha Cmd 3.1
Sencha Cmd is a command-line tool used to package and deploy Ext JS and Sencha Touch applications. To build a theme in Ext JS 4.2, you must have Sencha Cmd 3.1 or higher installed on your computer. Sencha Cmd 3.1 removes the need to have SASS and Compass installed on your computer since it uses its own bundled version of SASS and Compass.
For more information about installing and getting started with Sencha Cmd see Introduction to Sencha Cmd.
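Once installed, a quick way to confirm that Sencha Cmd is on your PATH is its "which" subcommand, which reports the installed version and location:

```sh
# Report the Sencha Cmd version and install path.
sencha which
```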
Ruby
Ruby is an open source programming language that is required to run Sencha Cmd. Refer to the Introduction to Sencha Cmd guide for instructions on installing Ruby.
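If you are unsure whether Ruby is already installed, you can check from the command line (the supported version range is listed in the Sencha Cmd guide referenced above):

```sh
# Print the installed Ruby version, if any.
ruby -v
```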
Ext JS
Custom themes are based on default themes that ship with the Ext JS SDK, so you will need to download Ext JS 4.2 or later. Unzip the Ext JS development kit (SDK) to a location of your choosing. For this tutorial, we assume that you unzipped the SDK to "~/ext-4.2.0/" in your home directory.
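For example, assuming the downloaded archive is named "ext-4.2.0-gpl.zip" (a hypothetical name; yours may differ):

```sh
# Unzip the SDK into your home directory; rename the extracted
# directory if needed so it matches the path used in this tutorial.
unzip ext-4.2.0-gpl.zip -d ~/
```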
Building a Custom Theme
Now that we have installed the requirements for theme building, let's get started creating a fully custom theme.
Set up the Workspace
The first step in building a custom theme is to set up your workspace using Sencha Cmd. Run the following from the command line, replacing "~/ext-4.2.0" with the path where you unzipped the Ext JS SDK.
sencha -sdk ~/ext-4.2.0 generate workspace my-workspace
This creates a directory named "my-workspace" that contains your custom theme package; you will also create an application here that uses the new custom theme. This command copies the Ext JS SDK and packages into your workspace so that the theme and application can find their required dependencies.
The commands for generating the theme and application must be executed from inside the workspace directory, so change your working directory to the new "my-workspace" directory now:
cd my-workspace
You should now see two directories inside your workspace
"ext"-- contains the Ext JS SDK
"packages"-- contains the Ext JS locale and theme packages
Generating an Application for Testing the Theme
Before creating a custom theme we need to set up a way to test the theme.
The best way to test a theme is to use it in an application.
Run the following command from the "my-workspace" directory:
sencha -sdk ext generate app ThemeDemoApp theme-demo-app
This tells Sencha Cmd to generate an application named ThemeDemoApp in a new sub-directory named "theme-demo-app", and to find the Ext JS SDK in the "ext" directory that was copied into "my-workspace" when the workspace was generated.
Now let's build the app:
cd theme-demo-app
sencha app build
There are two ways to run your app:
- Development mode: simply open "theme-demo-app/index.html" in a browser.
- Production mode: open "build/ThemeDemoApp/production/index.html" in a browser.
We will use development mode for this tutorial since it uses unminified source files for easy debugging.
Production mode uses minified source files for a smaller footprint and better performance for the application.
Generating the Theme Package and File Structure
Sencha Cmd allows you to automatically generate the theme package and file structure. Run the following command from the theme-demo-app directory:
sencha generate theme my-custom-theme
This tells Sencha Cmd to generate a theme package named "my-custom-theme" in the current workspace. You should see a newly created directory named "my-custom-theme" in the "packages" directory of your workspace. This is the theme package. Let's take a look at its contents:
"package.json"- This is the package properties file. It tells Sencha Cmd certain things about the package like its name, version, and dependencies (other packages that it requires).
"sass/"- This directory contains all of your theme's SASS files. The sass files are divided into 3 main sections:
"sass/var/"- contains SASS variables
"sass/src/"- contains SASS rules and UI mixin calls that can use the variables defined in "sass/var/"
"sass/etc/"- contains additional utility functions or mixins The files in
"sass/var/"and
"sass/src"should be structured to match the class path of the Component you are styling. For example, variables that change the appearance of Ext.panel.Panel should be placed in a file named
"sass/var/panel/Panel.scss".
"resources/" - contains images and other static resources that your theme requires.
"overrides/"- contains any JavaScript overrides to Ext JS Component classes that may be required for theming those Components.
Configuring Theme Inheritance

All Sencha theme packages are part of a larger hierarchy of themes, and each theme package must extend a base theme. The next step in creating your custom theme is to figure out which theme to extend. In the "packages" directory of your workspace you will see the following theme packages:

- "ext-theme-base" - This package is the base theme for all other themes, and the only theme package that does not have a parent theme. It contains the bare minimum set of CSS rules that are absolutely required for Ext JS Components and Layouts to work correctly. The style rules in "ext-theme-base" are not configurable in a derived theme, so this theme should not typically be extended directly.
- "ext-theme-neutral" - Extends "ext-theme-base".
- "ext-theme-classic" - The classic blue Ext JS theme. Extends "ext-theme-neutral".
- "ext-theme-gray" - Gray theme. Extends "ext-theme-classic".
- "ext-theme-access" - Accessibility theme. Extends "ext-theme-classic".
- "ext-theme-neptune" - Modern borderless theme. Extends "ext-theme-neutral".

So which theme should your custom theme extend? We recommend using either "ext-theme-neptune" or "ext-theme-classic" as the starting point for custom themes, because a derivation of the neptune or classic theme can be up and running in minutes by simply changing a couple of variables. Alternatively, you can extend "ext-theme-gray" or "ext-theme-access" if they provide a more desirable starting point for your custom theme.
For this tutorial we will create a custom theme that extends the Neptune theme. To do this, replace the following line in "packages/my-custom-theme/package.json":

"extend": "ext-theme-classic"

with this:

"extend": "ext-theme-neptune"

You now need to refresh your application. This ensures that the correct theme JavaScript files are included in the application's "bootstrap.js" file so that the application can be run in development mode. Run the following command from the "theme-demo-app" directory:

sencha app refresh
Configuring Global Theme Variables
Now that you have set up your theme package, it's time to begin modifying the visual appearance of the theme. Let's start by modifying the base color from which many Ext JS Components' colors are derived. Create a file called "Component.scss" in "my-custom-theme/sass/var/". Add the following code to the Component.scss file:

$base-color: #317040 !default;

Be sure to include !default at the end of all variable assignments if you want your custom theme to be extensible. Without !default you will not be able to override the variable in a derived theme, because an assignment without !default overwrites any value that has already been assigned. A derived theme's variables are evaluated before the variables of the themes it extends, so without !default your theme's assignment would silently replace the derived theme's value; !default only assigns a variable that does not yet have a value.
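The effect of !default can be seen in a small sketch (the color values here are arbitrary):

```scss
// Included first: a theme derived from yours sets its own base color.
$base-color: #555 !default;    // $base-color is unset, so this assigns #555

// Included afterwards: your theme's sass/var/Component.scss.
$base-color: #317040 !default; // no effect: $base-color already has a value

// Had your theme omitted !default on the line above, it would have
// unconditionally overwritten the derived theme's #555.
```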
Building the Package
To generate a CSS file containing all the style rules for your theme, run the following command from the "packages/my-custom-theme/" directory:

sencha package build

Building the package generates a CSS file containing the style rules for every Ext JS Component in the theme's "build" directory. You could include this "all" file directly in your applications, but this is not recommended because the "all" file contains all styles for every Ext JS Component, and most apps only use a subset of Ext JS Components. Sencha Cmd has the ability to filter out unused CSS style rules when you build an app, but first we need to configure the test app to use the custom theme.
Using a Theme in an Application
To configure your test application to use the custom theme that you just created, find the following line in "theme-demo-app/.sencha/app/sencha.cfg":

app.theme=ext-theme-classic

and replace it with:

app.theme=my-custom-theme

If you have already run a build of the app using the classic theme, you should clean the build directory. From the "theme-demo-app" directory run:

sencha ant clean

Now, from the "theme-demo-app" directory, run the following command to build the app:

sencha app build

Open "theme-demo-app/index.html" in a browser. You should see the green color we specified earlier as $base-color applied to the components on the screen.
Configuring Component Variables
Each Ext JS Component has a list of global variables that can be used to configure its appearance. Let's change the font-family of Panel Headers in "my-custom-theme". Create a file named "packages/my-custom-theme/sass/var/panel/Panel.scss" and add the following code:

$panel-header-font-family: Times New Roman !default;

Now build your app by running the following command from the "theme-demo-app" directory:

sencha app build

Open "theme-demo-app/index.html" in a web browser and you should see that the panel header uses the "Times New Roman" font.

You can find the complete list of SASS variables for each Component in the "CSS Variables" section of each page in the API docs. For example, see Ext.panel.Panel and scroll down to the section titled "CSS Variables".
Creating Custom Component UIs
Every Component in the Ext JS framework has a ui configuration (which defaults to "default"). This property can be configured on individual Component instances to give them a different appearance from other Component instances of the same type. This is used in the Neptune theme to create different types of Panels and Buttons. For example, panels with the 'default' UI have dark blue headers and panels with the 'light' UI have light blue headers. Buttons use UIs to give toolbar buttons a different appearance from regular buttons.

The ext-theme-neutral theme includes SASS mixins that can be used to generate new UIs for many Ext JS Components. To create a custom Panel UI, create a file named "packages/my-custom-theme/sass/src/panel/Panel.scss" and add the following code to it:

@include extjs-panel-ui(
    $ui-label: 'highlight-framed',
    $ui-header-background-color: red,
    $ui-border-color: red,
    $ui-header-border-color: red,
    $ui-body-border-color: red,
    $ui-border-width: 5px,
    $ui-border-radius: 5px
);

This mixin call produces a new Panel UI named "highlight-framed", which has a red header background, red borders, a 5px border-width, and a 5px border-radius. To use this UI, just configure a Panel with "highlight" as its ui property (the "-framed" suffix is added to the UI of a panel when you set its frame config to true). Open "theme-demo-app/app/view/Viewport.js" and replace the items array with the following:

items: [{
    // default UI
    region: 'west',
    xtype: 'panel',
    title: 'West',
    split: true,
    width: 150
}, {
    // custom "highlight" UI
    region: 'center',
    xtype: 'panel',
    layout: 'fit',
    bodyPadding: 20,
    items: [{
        xtype: 'panel',
        ui: 'highlight',
        frame: true,
        bodyPadding: 10,
        title: 'Highlight Panel'
    }]
}, {
    // neptune "light" UI
    region: 'east',
    xtype: 'panel',
    ui: 'light',
    title: 'East',
    split: true,
    width: 150
}]
Now build your app by running the following command from the "theme-demo-app" directory:

sencha app build

Open "theme-demo-app/index.html" in a web browser and you should see the red "highlight" panel in the center region.

While UI mixins are a handy way to configure multiple appearances for a Component, they should not be overused: each call to a UI mixin generates additional CSS rules, so invoking many UI mixins can produce an unnecessarily large CSS file. Also, UI mixins should always be invoked using named parameters rather than positional ones. The reason for this is that, because of the complexity and number of mixin parameters, we cannot guarantee that the order will stay the same if new parameters are added, or if a deprecated parameter is removed. It is therefore safest to always specify the parameters by name, and not by ordinal position, when calling UI mixins.
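As a sketch, using the parameter names from the extjs-panel-ui example above, the difference looks like this:

```scss
// Fragile: positional arguments depend on the mixin's parameter order,
// which may change between framework releases.
@include extjs-panel-ui('highlight-framed', red);

// Robust: named parameters remain correct even if parameters are
// added, removed, or reordered in a later release.
@include extjs-panel-ui(
    $ui-label: 'highlight-framed',
    $ui-header-background-color: red
);
```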
Modifying Image Assets

Your custom theme inherits the image assets of the themes it extends. To change an image, place a replacement image with the same file name at the corresponding path under your theme's "resources/images/" directory. For example, to replace the icon that Ext.window.MessageBox displays for informational messages, save your custom icon as:

"packages/my-custom-theme/resources/images/shared/icon-info.png"

Now modify your test application to show a MessageBox that uses the custom icon. Add the following items array to the highlight panel in your application's Viewport ("theme-demo-app/app/view/Viewport.js"):

...
title: 'Highlight Panel',
items: [{
    xtype: 'button',
    text: 'Show Message',
    handler: function() {
        Ext.Msg.show({
            title: 'Info',
            msg: 'Message Box with custom icon',
            buttons: Ext.MessageBox.OK,
            icon: Ext.MessageBox.INFO
        });
    }
}]
...

And add Ext.window.MessageBox to the requires array of the Viewport:

requires: [
    ...
    'Ext.window.MessageBox',
    ...
],

Now build the app and view it in the browser; when you click the button you should see a MessageBox containing the custom icon.
Slicing Images for CSS3 effects in IE
In many cases when creating new UIs, you will want to include background gradients or rounded corners. Unfortunately, legacy browsers do not support the CSS3 properties for these effects, so we must use images instead. Sencha Cmd includes the ability to automatically slice these images for you. To do this, we need to tell Sencha Cmd which Components need slicing. The files that contain the slicing configuration live in the "sass/example/" directory of a theme. To get an idea of what these files look like, let's look at the "packages/ext-theme-base/sass/example/" directory in your workspace:

- "shortcuts.js" - This file contains the base configurations for the types of components that can be sliced. Most custom themes do not need to contain a "shortcuts.js" file; it is necessary only if your theme includes styling for custom Components. Your theme inherits all of the shortcuts defined in its base themes, and you can add additional shortcuts if needed by calling Ext.theme.addShortcuts() in the "shortcuts.js" file in your theme.
- "manifest.js" - This file contains the list of Component UIs for which sliced images will be generated when you build your theme. A theme inherits all manifest entries from its parent themes, and can add its own manifest entries by calling the Ext.theme.addManifest() function in its own "manifest.js" file.
- "theme.html" - This is the file that renders the Components defined in the "manifest.js" file. Sencha Cmd renders "theme.html" in a headless WebKit browser and takes a screenshot of the page. It then uses this screenshot to slice the required images for displaying rounded corners and gradients in IE.
To create slices for the rounded corners of the "highlight" panel UI that you created earlier in this tutorial, create a file named "packages/my-custom-theme/sass/example/manifest.js" and add the following code to it:

Ext.theme.addManifest(
    {
        xtype: 'panel',
        ui: 'highlight'
    }
);

Now edit "packages/my-custom-theme/sass/example/theme.html" and add the following script tags:

<!-- Required because Sencha Cmd doesn't currently add manifest.js from parent themes -->
<script src="../../../ext-theme-neptune/sass/example/manifest.js"></script>
<!-- Your theme's manifest.js file -->
<script src="manifest.js"></script>

This ensures that the UIs defined in ext-theme-neptune and my-custom-theme get sliced correctly when you build the my-custom-theme package using sencha package build. You'll also need to add these two script tags to "theme-demo-app/sass/example/theme.html", so that the UIs will get sliced when building the app using sencha app build:

<script type="text/javascript" src="../../../packages/ext-theme-neptune/sass/example/manifest.js"></script>
<script type="text/javascript" src="../../../packages/my-custom-theme/sass/example/manifest.js"></script>

In the future it will seldom be necessary to modify "theme.html" manually, but in Sencha Cmd 3.1.0 the script tags for "shortcuts.js" and "manifest.js" are not added automatically.

That's all there is to it. Now just build the demo app again, and run it in IE8 or below. You should see rounded corners on the "highlight" panel that look just like the ones created using CSS3 when you run the app in a modern browser.
Theme JS Overrides

Sometimes a theme needs to change Component config defaults in addition to CSS styling. A theme can include JavaScript overrides in its "overrides/" directory, and these overrides are included along with the theme in any application that uses it. For example, to make all Panel headers in the theme display centered titles, create a file named "packages/my-custom-theme/overrides/panel/Panel.js" and add the following code:

Ext.define('MyCustomTheme.panel.Panel', {
    override: 'Ext.panel.Panel',
    titleAlign: 'center'
});
Now let's build the theme package so that "packages/my-custom-theme/build/my-custom-theme.js" will include the new override. From the "packages/my-custom-theme/" directory run:

sencha package build

You should now refresh the application so that the theme's JS overrides will get included when running the application in development mode. Run the following command from the "theme-demo-app" directory:

sencha app refresh

Now build the app from the "theme-demo-app" directory:

sencha app build

Then open "theme-demo-app/index.html" in the browser. You should notice that all Panel headers have centered titles.
Although any Ext JS Component config can be overridden in this manner, best practice is to only use theme overrides to change those configs that directly affect the visual appearance of a Component.
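As an additional sketch (a hypothetical override, following the same "overrides/" convention as the Panel example above):

```javascript
// Hypothetical file: "packages/my-custom-theme/overrides/button/Button.js"
// A visual-only default: place button icons above the text.
Ext.define('MyCustomTheme.button.Button', {
    override: 'Ext.button.Button',
    iconAlign: 'top'
});

// Avoid overriding behavioral configs (handlers, store options, etc.)
// in a theme; those belong in the application itself.
```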
The SASS Namespace
As described above, Sencha Cmd looks for files in "sass/var" and "sass/src" that match up with JavaScript classes. By default, for themes, the Ext namespace is assumed to be the top-level namespace, and so your theme would have a "sass/src/panel/Panel.scss" file corresponding to Ext.panel.Panel.

For a theme to apply outside the Ext namespace, you must change a config property called package.sass.namespace in ".sencha/package/sencha.cfg". To be able to style all components in your theme, you will need to set this to blank:

package.sass.namespace=

With this set, the file you need to create to correspond with Ext.panel.Panel is "sass/src/Ext/panel/Panel.scss".
Adding Custom Utility SASS
If your theme requires SASS functions or mixins that are not related to Component styling, e.g. utilities, these should be placed in the theme's "sass/etc" directory. You can organize files in this directory however you like, but the only file that Sencha Cmd includes in the build is "sass/etc/all.scss". Any other files must be imported by the "all.scss" file. For an example that follows this pattern, see "packages/ext-theme-base/sass/etc/".
Migrating a Theme from Ext JS 4.1 or Earlier

In Ext JS 4.1, theming was done quite differently. To migrate a legacy theme, a simple starting point is to place all of the legacy theme's variable assignments in the new theme package's "sass/etc/all.scss" file. Any SASS rules that the legacy theme had should be placed in "sass/src/Component.scss". Then try to build the theme, or an app that uses the theme, as described above. Eventually you may want to move the variables and rules into the files that correspond to the Components being styled.
Styling Your Application

Applications have the same SASS file structure as themes ("sass/var" and "sass/src"). In addition to using a theme for Component styling, applications can add their own custom variables and rules for styling the application's views.
Changing Theme Variables in Your Application
Let's continue using the "theme-demo-app" application created above, and override the theme's $base-color in the application. Create a file named "theme-demo-app/sass/var/view/Viewport.scss" and add the following code:

$base-color: #333;

Then build the app by running the following command from the "theme-demo-app" directory:

sencha app build

Open the application's "index.html" page in a browser and you will see that the color has changed to gray.
Notice how we did not use !default when setting the $base-color variable. !default is used when setting variables in themes, because those theme variables might need to be overridden in a derived theme or in an application. !default is not needed here because the application is the end of the line in the theme inheritance tree.

You may also be wondering why we set $base-color in "Viewport.scss" instead of "Component.scss", like we did when changing the $base-color for a theme. The reason is that the namespace Sencha Cmd uses for resolving SCSS file names is the namespace of the application. For each class in your application, Sencha Cmd checks for a corresponding SCSS file in "sass/var/" for variables and in "sass/src/" for rules. Since the application has a class named ThemeDemoApp.view.Viewport, the "sass/var/view/Viewport.scss" file gets included in the build. A "sass/var/Component.scss" file would not be included unless the application had a class named "ThemeDemoApp.Component".
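For this application, the resolution works out as the following sketch shows:

```text
JavaScript class                SCSS files picked up by the build
------------------------------  -----------------------------------------
ThemeDemoApp.view.Viewport  ->  sass/var/view/Viewport.scss  (variables)
                                sass/src/view/Viewport.scss  (rules)
ThemeDemoApp.Component      ->  sass/var/Component.scss      (only if such
                                a class existed in the application)
```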
Styling Your Application's Views
CSS style rules for your application's views should go in the app's "sass/src/" directory, in an SCSS file that has the same path and name as the view it is styling. Let's style the center panel in the ThemeDemoApp application. Since that panel is defined in ThemeDemoApp.view.Viewport, the CSS rule that styles it goes in "sass/src/view/Viewport.scss":

.content-panel-body {
    background-color: #ccc;
}

Add the "content-panel-body" CSS class to the body of the center panel in your application's viewport:

...
xtype: 'panel',
ui: 'highlight',
bodyCls: 'content-panel-body',
frame: true,
...

Now build and run the app. You should see a gray background-color on the body of the center panel.
The SASS Namespace
Similar to themes, applications also have "sass/var" and "sass/src" folders, and these also correspond to the JavaScript class hierarchy. For an application, the top-level namespace is specified by app.sass.namespace in ".sencha/app/sencha.cfg". By default this value is the application's namespace.

This default is most convenient for styling just your application's views, since these are most likely in your application's namespace. To style components outside your application's namespace, you can change app.sass.namespace, but it may be a better idea to create a theme instead.
Organization of Generated SASS
When using themes as described above, the SASS from your theme and from your application, as well as from required packages (see Sencha Cmd Packages), is combined in an "app-all.scss" file that is then compiled by Compass. Only the styling for classes actually included in the build (for example, Ext.window.Window) ends up packaged. The pieces are assembled in a specific order:

- In the "sass/var" space, the concerns are variable control and derived calculation.
- Applications must be able to control all variables, so their vars come first, followed by the vars of the theme (from the most derived theme down to the base theme) and then those of required packages, in dependency order.
- In the "sass/src" space the order is reversed: rules from the base theme and packages come first, so that rules generated by derived themes and by the application take precedence.
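Conceptually, the generated file can be pictured as a sequence of imports (a hypothetical sketch; the real "app-all.scss" is generated by Sencha Cmd and also contains the inclusion flags described below):

```scss
// 1. Variables: application first, then themes (derived -> base), then packages.
@import 'theme-demo-app/sass/var/view/Viewport';  // application vars
@import 'my-custom-theme/sass/var/panel/Panel';   // theme vars (!default)
// ...base theme and package vars...

// 2. Rules: base first, so more derived rules win in the cascade.
// ...base theme and package rules...
@import 'my-custom-theme/sass/src/panel/Panel';   // theme rules
@import 'theme-demo-app/sass/src/view/Viewport';  // application rules
```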
Inclusion Flags
The "inclusion flags" section is a set of variables defined to be
true or
false for
each JavaScript class that could be included. The value of this variable is
true if
that class is being included. For example, if the build uses
Ext.grid.Panel, this line
is present in this section:
$include-ext-grid-panel: true;
If this build does not include
Ext.menu.ColorPicker then this line is present:
$include-ext-menu-colorpicker: false;
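A custom theme or application can take advantage of these flags to keep its own rules out of builds that do not use the corresponding class. A sketch, assuming the grid flag shown above:

```scss
// Only emit grid-related custom rules when Ext.grid.Panel is in the build.
@if $include-ext-grid-panel {
    .my-app-grid {
        border: 1px solid #999;
    }
}
```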
Sharing a Theme Between Applications
It's easy to share the theme you've just built with a second application. Simply navigate to the "my-workspace" directory and run the following command:
sencha -sdk ext generate app AnotherApp another-app
This tells Sencha Cmd to generate an app in the "another-app" directory named "AnotherApp" and to use the same Ext JS SDK as the first app you created.
The next step is to tell the app to use the custom theme: Edit "another-app/.sencha/app/sencha.cfg" and replace the following line:
app.theme=ext-theme-classic
with:
app.theme=my-custom-theme
Now build the app. From the "another-app" directory run:
sencha app build
Then open "another-app/index.html" page in your browser. You should see a starter app that uses the same custom theme as ThemeDemoApp. | http://docs.sencha.com/extjs/4.2.0/?_escaped_fragment_=/guide/theming | CC-MAIN-2016-50 | refinedweb | 4,030 | 56.45 |
Set the value of the specified device property of type void*.
#include <screen/screen.h>
int screen_set_device_property_pv(screen_device_t dev, int pname, void **param)
dev - The handle of the device whose property is to be set.

pname - The name of the property whose value is being set; it must be one of the Screen property types.

param - A pointer to a buffer containing the new value(s). This buffer must be of type void*.
Function Type: Delayed Execution
This function sets the value of a device property from a user-provided buffer.
0 if the command to set the new property value(s) was queued, or -1 if an error occurred (errno is set; refer to /usr/include/errno.h for more details). | http://www.qnx.com/developers/docs/6.6.0_anm11_wf10/com.qnx.doc.screen/topic/screen_set_device_property_pv.html | CC-MAIN-2018-22 | refinedweb | 122 | 67.15 |
Eclipse Community Forums - Persistence

Mitesh Pandey wrote (2012-03-29):

Hi All,

I am stuck with the persistence of an entity. I have entities like the ones below:

Customer
--------
PK customer_id
customer_name

CreditCard
----------
PK customer_id
number

The CreditCard primary key is the customer id from the Customer table. The code goes as follows:

@Entity
public class Customer {
    @Id @GeneratedValue
    private int customer_id;

    @Column(name = "customer_name")
    private String customerName;

    @OneToOne(mappedBy = "customer", cascade = CascadeType.PERSIST)
    private CreditCard creditCard;

    // getters and setters omitted
}

@Entity
public class CreditCard {
    @Id @GeneratedValue
    private int customer_id;

    @Column(name = "number")
    private int number;

    @OneToOne
    @JoinColumn(name = "customer_id")
    private Customer customer;

    // getters and setters omitted
}

Test class:

Customer cust = new Customer();
cust.setCustomerName("XYZ");
CreditCard credit = new CreditCard();
credit.setNumber(123);
credit.setCustomer(cust);
cust.setCreditCard(credit);
entityManager.persist(cust);

As per my understanding of JPA, while persisting the Customer entity it will generate a primary key for the customer table (customer_id) and then persist it. My requirement is that the same customer_id generated for the customer table should also be set in the credit card table when JPA persists the CreditCard entity as part of CascadeType.PERSIST. How can this be achieved? Currently it persists all the CreditCard entity attributes, except that the customer id in the credit card table gets the default value of 0. I can't change the database design or the method signature of persistence; I have to persist the Customer entity and not the CreditCard. Also, I want to achieve this in one database trip.

Mitesh Pandey 2012-03-29T05:56:46-00:00

Chris Delahunt replied (2012-03-29):

Hello,

If you wish to have CreditCard's Id set through the mapping, I am not sure why you marked the customer_id attribute with @GeneratedValue; that should be removed.

It also seems odd that EclipseLink would allow this mapping as you have it set up. EclipseLink should throw an exception because you have both the customer_id and customer attributes able to change the "customer_id" field. The only thing I can think of is that EclipseLink is unable to tell they are the same field, since customer_id by default will use a "CUSTOMER_ID" field and EclipseLink is case sensitive by default, so this is not the same "customer_id" field used in the customer relationship. You should make the fields match the case that is used in your database, or set "eclipselink.jpa.uppercase-column-names", which will have EclipseLink behave in a case-insensitive manner.

If you are using JPA 2.0, you can use the @MapsId annotation instead of the @JoinColumn on CreditCard's customer relationship. Or you could do without the customer_id attribute in CreditCard outright and just mark the customer relationship with @Id; EntityManager's find would use Customer's customer_id as the id.

If you are using JPA 1.0, you need to make the customer_id attribute read only by setting the updatable=false, insertable=false tags so that the field is only set through the relationship.

Best Regards,
Chris

Chris Delahunt 2012-03-29T15:14:25-00:00
> Very few, but the spec says
> method ::= "xml" | "html" | "text" | qname-but-not-ncname
>
> I currently allow "fop", but that's non-compliant, so I'm changing it to
> have a namespace prefix.
Can I still just use "fop" and not write in a URI?
> > > OutputProperties: on the methods for handling the list of
> > CDATA elements, I
> > > can't see how namespace prefixes are supposed to be
> > handled. Again we could
> > > say that the names are expressed as uri^local-name.
> >
> > How would you specify that in xsl:output?
>
> cdata-section-elements is defined as a whitespace-separated list of qnames.
> The prefix in the qname must be interpreted in the context of the specific
> xsl:output statement, therefore a prefix must be resolved into a URI before
> the lists from different xsl:output statements are merged.
Two options:
1. Pass a list of uri^local-name
2. Pass the uri as one argument, the list of local-name as another
Which one works best?
arkin
>
> Mike Kay | http://mail-archives.apache.org/mod_mbox/cocoon-dev/200002.mbox/%3C38A2C362.785DBEE8@exoffice.com%3E | CC-MAIN-2017-09 | refinedweb | 168 | 66.13 |
Colin Ngam wrote:

Gentlemen,

I need you to say yes or no on this issue so that Tony can proceed with his decision on the next step towards this patch. Tony believes that this is still an outstanding issue.

Basically, it does not matter which way we go. In the current patch, because pci_root_ops is static, we define sn_pci_root_ops. sn_pci_root_ops ends up calling raw_pci_ops, which is exactly what we need. Now, if we can remove the static from pci_root_ops, I can use it in io_init.c. That would be cleanest, and that was what we started with.

This is what the patch would look like on top of the 002_add* patch:

# This is a BitKeeper generated diff -Nru style patch.
#
# ChangeSet
#   2004/10/07 11:53:49-05:00 cngam@attica.americas.sgi.com
#   pci.h:
#     Add prototype for pci_root_ops.
#   io_init.c:
#     Use pci_root_ops.
#   pci.c:
#     Remove static so that pci_root_ops can be externed.
#
# include/asm-ia64/pci.h
#   2004/10/07 11:53:02-05:00 cngam@attica.americas.sgi.com +2 -0
#   Add prototype for pci_root_ops.
#
# arch/ia64/sn/kernel/io_init.c
#   2004/10/07 11:52:49-05:00 cngam@attica.americas.sgi.com +4 -27
#   Use pci_root_ops.
#
# arch/ia64/pci/pci.c
#   2004/10/07 11:52:30-05:00 cngam@attica.americas.sgi.com +1 -1
#   Remove static so that pci_root_ops can be externed.
#
diff -Nru a/arch/ia64/pci/pci.c b/arch/ia64/pci/pci.c
--- a/arch/ia64/pci/pci.c       2004-10-07 11:54:17 -05:00
+++ b/arch/ia64/pci/pci.c       2004-10-07 11:54:17 -05:00
@@ -124,7 +124,7 @@
                devfn, where, size, value);
 }

-static struct pci_ops pci_root_ops = {
+struct pci_ops pci_root_ops = {
        .read = pci_read,
        .write = pci_write,
 };
diff -Nru a/arch/ia64/sn/kernel/io_init.c b/arch/ia64/sn/kernel/io_init.c
--- a/arch/ia64/sn/kernel/io_init.c     2004-10-07 11:54:17 -05:00
+++ b/arch/ia64/sn/kernel/io_init.c     2004-10-07 11:54:17 -05:00
@@ -33,29 +33,6 @@
 int sn_ioif_inited = 0;        /* SN I/O infrastructure initialized? */

-static int
-sn_pci_read(struct pci_bus *bus, unsigned int devfn, int where, int size,
-           u32 * value)
-{
-       return raw_pci_ops->read(pci_domain_nr(bus), bus->number,
-                                devfn, where, size, value);
-}
-
-static int
-sn_pci_write(struct pci_bus *bus, unsigned int devfn, int where, int size,
-            u32 value)
-{
-
-       return raw_pci_ops->write(pci_domain_nr(bus), bus->number,
-                                 devfn, where, size, value);
-}
-
-struct pci_ops sn_pci_root_ops = {
-       .read = sn_pci_read,
-       .write = sn_pci_write,
-};
-
-
 /*
  * Retrieve the DMA Flush List given nasid. This list is needed
  * to implement the WAR - Flush DMA data on PIO Reads.
@@ -281,10 +258,10 @@
 }

 /*
- * sn_pci_fixup_bus() - This routine sets up a bus's resources
+ * sn_pci_controller_fixup() - This routine sets up a bus's resources
  * consistent with the Linux PCI abstraction layer.
  */
-static void sn_pci_fixup_bus(int segment, int busnum)
+static void sn_pci_controller_fixup(int segment, int busnum)
 {
        int status = 0;
        int nasid, cnode;
@@ -307,7 +284,7 @@
                BUG();
        }

-       bus = pci_scan_bus(busnum, &sn_pci_root_ops, controller);
+       bus = pci_scan_bus(busnum, &pci_root_ops, controller);
        if (bus == NULL) {
                return; /* error, or bus already scanned */
        }
@@ -379,7 +356,7 @@
 #endif

        for (i = 0; i < PCI_BUSES_TO_SCAN; i++) {
-               sn_pci_fixup_bus(0, i);
+               sn_pci_controller_fixup(0, i);
        }

        /*
diff -Nru a/include/asm-ia64/pci.h b/include/asm-ia64/pci.h
--- a/include/asm-ia64/pci.h    2004-10-07 11:54:17 -05:00
+++ b/include/asm-ia64/pci.h    2004-10-07 11:54:17 -05:00
@@ -105,6 +105,8 @@
 #define PCI_CONTROLLER(busdev) ((struct pci_controller *) busdev->sysdata)
 #define pci_domain_nr(busdev)    (PCI_CONTROLLER(busdev)->segment)

+extern struct pci_ops pci_root_ops;
+
 static inline int pci_name_bus(char *name, struct pci_bus *bus)
 {
        if (pci_domain_nr(bus) == 0) {

Basically, we add an extern for pci_root_ops in asm-ia64/pci.h and remove the static for pci_root_ops in ia64/pci/pci.c. We need a resolution so that Tony can proceed. Silence is not going to help.

Thank you so much for your help.

colin

> Matthew Wilcox wrote:
>
>> On Wed, Oct 06, 2004 at 01:48:32PM -0700, Grant Grundler wrote:
>>
>>> Agreed. I'm not real clear on why drivers/acpi didn't do that.
>>> But apparently ACPI supports many methods to PCI or PCI-Like (can you
>>> guess I'm not clear on this?) config space. raw_pci_ops supports
>>> multiple methods in i386. ia64 only happens to use one so far.
>>> It seems right for SN2 to use this mechanism if it needs a different
>>> method.
>>>
>>> Willy tried to explain this to me yesterday and I thought I understood
>>> most of it... apparently that was a transient moment of clarity. :^/
>>
>> Let's try it again, by email this time.
>>
>> Fundamentally, there is a huge impedance mismatch between how the ACPI
>> interpreter wants to access PCI configuration space, and how Linux wants
>> to access PCI configuration space. Linux always has at least a pci_bus
>> around, if not a pci_dev. So we can use dev->bus->ops to abstract the
>> architecture-specific implementation of "how do I get to configuration
>> space for this bus?"
>>
>> ACPI doesn't have a pci_bus. It just passes around a struct of { domain,
>> bus, dev, function } and expects the OS-specific code to determine what
>> to do with it. The original hacky code constructed a fake pci_dev on the
>> stack and called the regular methods. This broke ia64 because we needed
>> something else to be valid (I forget what), so as part of the grand "get
>> ia64 fully merged upstream" effort, I redesigned the OS-specific code.
>>
>> Fortunately, neither i386 nor ia64 actually need the feature Linux has
>> to have a per-bus pci_ops -- it's always the same. I think powerpc is
>> the only architecture that needs it. So I introduced a pci_raw_ops that
>> both ACPI and a generic pci_root_ops could call.
>>
>> The part I didn't seem to be able to get across to you yesterday was
>> that pci_root_ops is not just used for the PCI root bridge, it's used
>> for accessing every PCI device underneath that root bridge.
>
> Hi Guys,
>
> Therefore, would it be perfectly fine if we remove the static from pci_root_ops
> so that we can use it outside of pci/pci.c? I can include this in a follow-on
> patch.
>
> Thanks.
>
> colin

Received on Fri Oct 8 18:45:08 2004
This archive was generated by hypermail 2.1.8 : 2005-08-02 09:20:31 EST | http://www.gelato.unsw.edu.au/archives/linux-ia64/0410/11391.html | CC-MAIN-2020-24 | refinedweb | 1,056 | 57.47 |
Getting database script from DbContext (Code First)
I was speaking at Gopas TechEd a few days ago, and there was a good question from the audience about how to get the SQL script that DbContext uses to create the database.

I had never thought about it, as I always create the database in an ER tool because it provides more features (like triggers, stored procedures, etc.). But I remembered that I had implemented this method in the .NET provider for Firebird, so it had to be somewhere.

The method is called CreateDatabaseScript and it lives on ObjectContext. So it was easy to expose it directly from DbContext, because DbContext has an ObjectContext under the hood (you can access it via IObjectContextAdapter).

public static string CreateDatabaseScript(this DbContext context)
{
    return ((IObjectContextAdapter)context).ObjectContext.CreateDatabaseScript();
}
Hope the questioner will find this blog post. | https://www.tabsoverspaces.com/232358-getting-database-script-from-dbcontext-code-first | CC-MAIN-2021-31 | refinedweb | 132 | 57.16 |
This page is a snapshot from the LWG issues list, see the Library Active Issues List for more information and the meaning of Resolved status.
Section: 20.2.2 [utility.swap] Status: Resolved Submitter: Orson Peters Opened: 2015-11-01 Last modified: 2016-03-09
Priority: 2
View all other issues in [utility.swap].
View all issues with Resolved status.
Discussion:
The noexcept specification for the std::swap overload for arrays has the effect that swapping any multidimensional array (even one of built-in types) is considered non-noexcept, as described in the following Stackoverflow article. Consider the following example code:
#include <utility>
#include <iostream>

int main() {
    int x[2][3];
    int y[2][3];
    using std::swap;
    std::cout << noexcept(swap(x, y)) << "\n";
}
Both clang 3.8.0 and gcc 5.2.0 return 0. The reason for this unexpected result seems to be a consequence of both core wording rules (6.4.2 [basic.scope.pdecl] says that "The point of declaration for a name is immediately after its complete declarator (Clause 8) and before its initializer (if any)", and the exception specification is part of the declarator) and the fact that the exception-specification of the std::swap overload for arrays uses an expression and not a type trait. At the point where the expression is evaluated, only the non-array std::swap overload is in scope, whose noexcept specification evaluates to false, since arrays are neither move-constructible nor move-assignable. Daniel: The problem described here is another example of the currently broken swap exception specifications in the Standard Library, as pointed out by LWG 2456. The paper N4511 describes a resolution that would address this problem. If the array swap overload were instead declared as follows,
template <class T, size_t N>
void swap(T (&a)[N], T (&b)[N]) noexcept(is_nothrow_swappable<T>::value);
the expected outcome is obtained. Revision 2 (P0185R0) of the above-mentioned paper will be available in the mid-February 2016 mailing.
[2016-03-06, Daniel comments]
With the acceptance of revision 3 P0185R1 during the Jacksonville meeting, this issue should be closed as "resolved": The expected program output is now 1. The current gcc 6.0 trunk has already implemented the relevant parts of P0185R1.
Proposed resolution: | https://cplusplus.github.io/LWG/issue2554 | CC-MAIN-2019-51 | refinedweb | 378 | 51.58 |
xmla 0.4
Access olap data sources through xmla
olap.xmla
This package is meant for accessing xmla datasources - see
BUILD
In this directory, run:
python setup.py build
TESTING
Tests are done against the Mondrian, SSAS, and icCube XMLA providers. The testsDiscover module tests behavior with different XMLA providers using the Discover command, while testsExecute does the same with the Execute command. Note that you will likely need to modify the sources if you want to test yourself, since they contain specifics (namely the location of the services and the names of the data sources).
SAMPLE
Here is an example how to use it:
import olap.xmla.xmla as xmla p = xmla.XMLAProvider() # mondrian c = p.connect(location="") # to analysis services (if iis proxies requests at /olap/msmdpump.dll) # you will need a valid kerberos ticket of course # c = p.connect(location="", sslverify="/path/to/my/as-servers-ca-cert.pem") # or sslverify=False :) # to icCube # c = p.connect(location="", username="demo", password="demo") # getting info about provided data print c.getDatasources() print c.getMDSchemaCubes() # for ssas a catalog is needed, so the call would be like # get a catalogname from a call to c.getDBSchemaCatalogs() # c.getMDSchemaCubes(properties={"Catalog":"a catalogname"}) # execute a MDX (working against the foodmart sample catalog of mondrian) cmd= """select {[Measures].ALLMEMBERS} * {[Time].[1997].[Q2].children} on columns, [Gender].[Gender].ALLMEMBERS on rows from [Sales] """ res = c.Execute(cmd, Catalog="FoodMart") res.getSlice(property="Value") #return only the Value property from the cells # to return some subcube from the result you can res.getSlice() # return all res.getSlice(Axis0=3) # carve out the 4th column res.getSlice(Axis0=3, SlicerAxis=0) # same as above, SlicerAxis is ignored res.getSlice(Axis1=[1,2]) # return the data sliced at the 2nd and 3rd row res.getSlice(Axis0=3, Axis1=[1,2]) # return the data sliced at the 2nd and 3rd row in addition to the 4th column
Note
The contained vs.wsdl originates from the following package: and was subsequently modified (which parameters go in the soap header) to work with the suds package.
CHANGES
0.4
- keyword kerberos is gone. kerberos auth need is detected automatically
- BeginSession and EndSession provide XMLA Sessionsupport
- changes to work with icCube XMLA provider
0.3
- changed keyword doKerberos in XMLProvider.connect to kerberos
- added sslverify keyword to XMLProvider.connect defaulting to True. This will be handed to requests get method, so you can point it to your certificate bundle file.
0.2
- removed dependencies on specific versions in setup.py
- Author: Norman Krämer
- License: Apache Software License 2.0
- Categories
- Package Index Owner: mayday
- DOAP record: xmla-0.4.xml | https://pypi.python.org/pypi/xmla/0.4 | CC-MAIN-2015-40 | refinedweb | 461 | 51.75 |
Coordination Data Structures: New Classes for .NET Multithreading
The June drop of Parallel Extensions for .NET added a set of classes to make sharing data in a multithreaded application easier. With ten new classes, including new synchronization primitives, futures, and new collection classes, there is only time to touch on each of them briefly.
This first batch of classes is in the System.Threading namespace.
CountdownEvent allows for coordination between an unlimited number of threads. The counter is either preset or incremented as each thread is started. As the threads complete their task, they decrement the counter. A call to CountdownEvent.Wait blocks the main thread until the counter reaches zero. In this sense the CountdownEvent is similar to AutoResetEvent and ManualResetEvent.
LazyInit is essentially what many people refer to as a Future. A LazyInit object takes a class or delegate. If it gets a class it will create a new instance with the default constructor when the Value property is called. If given a delegate the result of the delegate is stored and returned.
LazyInit has three modes for value creation. AllowMultipleExecution allows each thread to attempt to be the first to initialize the value, but ensures only one object is ever returned. This could result in multiple objects being created and discarded. EnsureSingleExecution guarantees that only one instance is ever created. Finally, ThreadLocal gives a different instance to each thread.
WriteOnce can be seen as an alternative to LazyInit. Like LazyInit, it's value can only be set once and becomes immutable thereafter. However, the value for WriteOnce is assigned externally to the class. Once set, it can never change.
WriteOnce is especially useful when you want read only semantics but cannot use the readonly modifier because you do not want to assign the value in the constructor. If you try to assign a value to a WriteOnce object more than once, it will be marked as "corrupted" and can no longer be read. Using TrySetValue instead of the Value property prevents this corruption from occurring.
ManualResetEventSlim is a lightweight version of ManualResetEvent. Unlike the older version, it does not rely on kernel objects and is not finalizable. This should result in better performance, especially when they need to be created frequently.
SemaphoreSlim, like ManualResetEventSlim, replaces the thin wrapper around the kernel with a lightweight alternative.
Two more lightweight objects, SpinLock and SpinWait, are really only suitable for multi-core and multi-processor machines. Both leave the blocked thread active, essentially wasting CPU cycles. They are useful when the expected wait time is very short and context switches have become a bottleneck.
The second group of classes is in the System.Threading.Collections namespace.
ConcurrentQueue is a queue structure designed with multithreading in mind. Older queues, even when "thread-safe" required locks so that you can check the Count property and call Dequeue as an atomic action. ConcurrentQueue avoids that pitfall by only offering a TryDequeue method. Since it is safe to call without checking the count, no explicit locks are needed.
ConcurrentStack works the same way, though obviously with stack semantics.
BlockingCollection, designed for multiple readers and writers, is a rather complex thing with many features one normally implements separately. First of all, BlockingCollection can be used on its own as a collection or the semantics can change by having it wrap an IConcurrentCollection object such as ConcurrentStack and ConcurrentQueue.
Unlike most collections, BlockingCollection supports something a method called GetConsumingEnumerable. This allows one to use a for-each loop or LINQ query against the collection in a thread safe manner. Normally destructive operations like consuming items from a queue would trigger an exception when using either construct.
In order to throttle your writers, BlockingCollections can be given an upper size limit. When this limit is exceeded, calls that add to the collection are blocked.
BlockingCollections also have the concept of being "complete". When you call CompleteAdding, consumers are notified that no new items will be added to the collection and that they can stop processing after the current batch.
Finally, BlockingCollections can also be used in groups. If you pass an object to AddAny along with an array of BlockingCollections, the object will be added to one of them. The documentation is still sparse, but presumably the collection selected is the smallest. This call blocks if all the collections are full.
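A typical producer/consumer sketch with BlockingCollection, written against the API names as they later shipped in .NET 4 (System.Collections.Concurrent; the CTP namespace described in this article differed, and Task.Run shown here is .NET 4.5):

```csharp
var queue = new BlockingCollection<int>(boundedCapacity: 100);

// Consumer: GetConsumingEnumerable blocks until items arrive
// and ends cleanly once CompleteAdding has been called.
var consumer = Task.Run(() =>
{
    foreach (var item in queue.GetConsumingEnumerable())
        Console.WriteLine(item);
});

// Producer: Add would block if the bound (100) were exceeded.
for (int i = 0; i < 10; i++)
    queue.Add(i);

queue.CompleteAdding(); // signal consumers that no more items will come
consumer.Wait();
```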
Implementing JWT based authentication in Golang 🔐
Authentication allows your application to know that the person who sending a request to your application is actually who they say they are. The JSON web token (JWT) is one method for allowing authentication, without actually storing any information about the user on the system itself (as opposed to session based authentication).
In this post, we will demonstrate how JWT based authentication works, and how to build a sample application in Go to implement it.
If you already know how JWT works, and just want to see the implementation, you can skip ahead, or see the source code on Github
The JWT format
Let's say we have a user called user1, and they try to log into an application or website. Once successful, they would receive a token that looks like this:
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6InVzZXIxIiwiZXhwIjoxNTQ3OTc0MDgyfQ.2Ye5_w1z3zpD4dSGdRp3s98ZipCNQqmsHRB9vioOx54
This is a JWT, and consists of three parts (separated by .):

- The first part is the header (eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9). The header specifies information like the algorithm used to generate the signature (the third part). This part is pretty standard and is the same for any JWT using the same algorithm.
- The second part is the payload (eyJ1c2VybmFtZSI6InVzZXIxIiwiZXhwIjoxNTQ3OTc0MDgyfQ), which contains application specific information (in our case, this is the username), along with information about the expiry and validity of the token.
- The third part is the signature (2Ye5_w1z3zpD4dSGdRp3s98ZipCNQqmsHRB9vioOx54). It is generated by combining and hashing the first two parts along with a secret key.
Now the interesting thing is that the header and payload are not encrypted. They are just base64 encoded. This means that anyone can view their contents by decoding them.
For example, we can use this online tool to decode the header or payload, which will show its contents as:
{ "alg": "HS256", "typ": "JWT" }
If you are using Linux or macOS, you can also execute the following statement in the terminal (the flag is -D on macOS; use -d on Linux):

echo eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9 | base64 -D
Similarly, the contents of the payload are:
{ "username": "user1", "exp": 1547974082 }
How the JWT signature works
So if the header and signature of a JWT can be read and written to by anyone, what actually makes a JWT secure? The answer lies in how the last part (the signature) is generated.
Let’s pretend you’re and application that wants to issue a JWT to a user (for example,
user1) that has successfully signed in.
Making the header and payload is pretty straightforward: the header is more or less fixed, and the payload JSON object is formed by setting the user ID and the expiry time as a unix timestamp (seconds since the epoch).
The application issuing the token will also have a key, which is a secret value known only to the application itself. The base64 representations of the header and payload are then combined with the secret key and passed through a hashing algorithm (in this case it's HS256, as mentioned in the header).
The details of how the algorithm is implemented are out of scope for this post, but the important thing to note is that it is one way, which means that we cannot reverse the algorithm and obtain the components that went into making the signature... so our secret key remains secret.
Verifying a JWT
In order to verify an incoming JWT, a signature is once again generated using the header and payload from the incoming JWT, and the secret key. If the signature matches the one on the JWT, then the JWT is considered valid.
Now let’s pretend that you’re a hacker trying to issue a fake token. You can easily generate the header and payload, but without knowing the key, there is no way to generate a valid signature. If you try to tamper with the existing payload of a valid JWT, the signatures will no longer match.
In this way, the JWT acts as a way to authorize users in a secure manner, without actually storing any information (besides the key) on the application server.
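The signing step can be sketched with Go's standard library: HMAC-SHA256 over "header.payload" with the secret key, then base64url-encode the result. This mirrors what HS256 libraries do internally; it is a sketch, not a replacement for a vetted JWT library:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// sign computes the HS256 signature over "header.payload" with the given key.
func sign(headerAndPayload string, key []byte) string {
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(headerAndPayload))
	return base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
}

// verify recomputes the signature and compares it in constant time.
func verify(headerAndPayload, signature string, key []byte) bool {
	return hmac.Equal([]byte(sign(headerAndPayload, key)), []byte(signature))
}

func main() {
	key := []byte("my_secret_key")
	msg := "header.payload" // stand-in for the two base64url-encoded parts
	sig := sign(msg, key)
	fmt.Println(verify(msg, sig, key))                 // true: signature matches
	fmt.Println(verify(msg, sig, []byte("wrong_key"))) // false: wrong key
}
```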
Implementation in Go
Now that we’ve seen how JWT based authentication works, let’s implement it using Go.
Creating the HTTP server
Let’s start by initializing the HTTP server with the required routes:
package main

import (
	"log"
	"net/http"
)

func main() {
	// "Signin", "Welcome" and "Refresh" are the handlers that we will implement
	http.HandleFunc("/signin", Signin)
	http.HandleFunc("/welcome", Welcome)
	http.HandleFunc("/refresh", Refresh)

	// start the server on port 8000
	log.Fatal(http.ListenAndServe(":8000", nil))
}
We can now define the Signin, Welcome, and Refresh handlers.
Handling user sign in
The /signin route will take the user's credentials and log them in. For simplification, we're storing the user information as an in-memory map in our code:
var users = map[string]string{
	"user1": "password1",
	"user2": "password2",
}
So for now, there are only two valid users in our application: user1 and user2. Next, we can write the Signin HTTP handler. For this example, we are using the dgrijalva/jwt-go library to help us create and verify JWT tokens.
import (
	//...
	// import the jwt-go library
	"github.com/dgrijalva/jwt-go"
	//...
)

// Create the JWT key used to create the signature
var jwtKey = []byte("my_secret_key")

var users = map[string]string{
	"user1": "password1",
	"user2": "password2",
}

// Create a struct to read the username and password from the request body
type Credentials struct {
	Password string `json:"password"`
	Username string `json:"username"`
}

// Create a struct that will be encoded to a JWT.
// We add jwt.StandardClaims as an embedded type, to provide fields like expiry time
type Claims struct {
	Username string `json:"username"`
	jwt.StandardClaims
}

// Create the Signin handler
func Signin(w http.ResponseWriter, r *http.Request) {
	var creds Credentials
	// Get the JSON body and decode into credentials
	err := json.NewDecoder(r.Body).Decode(&creds)
	if err != nil {
		// If the structure of the body is wrong, return an HTTP error
		w.WriteHeader(http.StatusBadRequest)
		return
	}

	// Get the expected password from our in memory map
	expectedPassword, ok := users[creds.Username]

	// If a password exists for the given user
	// AND, if it is the same as the password we received, then we can move ahead
	// if NOT, then we return an "Unauthorized" status
	if !ok || expectedPassword != creds.Password {
		w.WriteHeader(http.StatusUnauthorized)
		return
	}

	// Declare the expiration time of the token
	// here, we have kept it as 5 minutes
	expirationTime := time.Now().Add(5 * time.Minute)

	// Create the JWT claims, which includes the username and expiry time
	claims := &Claims{
		Username: creds.Username,
		StandardClaims: jwt.StandardClaims{
			// In JWT, the expiry time is expressed as a unix timestamp (seconds)
			ExpiresAt: expirationTime.Unix(),
		},
	}

	// Declare the token with the algorithm used for signing, and the claims
	token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
	// Create the JWT string
	tokenString, err := token.SignedString(jwtKey)
	if err != nil {
		// If there is an error in creating the JWT return an internal server error
		w.WriteHeader(http.StatusInternalServerError)
		return
	}

	// Finally, we set the client cookie for "token" as the JWT we just generated
	// we also set an expiry time which is the same as the token itself
	http.SetCookie(w, &http.Cookie{
		Name:    "token",
		Value:   tokenString,
		Expires: expirationTime,
	})
}
If a user logs in with the correct credentials, this handler will then set a cookie on the client side with the JWT value. Once a cookie is set on a client, it is sent along with every request henceforth. Now we can write our welcome handler to handle user specific information.
Handling post authentication routes
Now that all logged in clients have session information stored on their end as cookies, we can use it to:
- Authenticate subsequent user requests
- Get information about the user making the request
Let's write our Welcome handler to do just that:
func Welcome(w http.ResponseWriter, r *http.Request) {
	// We can obtain the session token from the requests cookies, which come with every request
	c, err := r.Cookie("token")
	if err != nil {
		if err == http.ErrNoCookie {
			// If the cookie is not set, return an unauthorized status
			w.WriteHeader(http.StatusUnauthorized)
			return
		}
		// For any other type of error, return a bad request status
		w.WriteHeader(http.StatusBadRequest)
		return
	}

	// Get the JWT string from the cookie
	tknStr := c.Value

	// Initialize a new instance of `Claims`
	claims := &Claims{}

	// Parse the JWT string and store the result in `claims`.
	// Note that we are passing the key in this method as well. This method will return an error
	// if the token is invalid (if it has expired according to the expiry time we set on sign in),
	// or if the signature does not match
	tkn, err := jwt.ParseWithClaims(tknStr, claims, func(token *jwt.Token) (interface{}, error) {
		return jwtKey, nil
	})
	if err != nil {
		if err == jwt.ErrSignatureInvalid {
			w.WriteHeader(http.StatusUnauthorized)
			return
		}
		w.WriteHeader(http.StatusBadRequest)
		return
	}
	if !tkn.Valid {
		w.WriteHeader(http.StatusUnauthorized)
		return
	}

	// Finally, return the welcome message to the user, along with their
	// username given in the token
	w.Write([]byte(fmt.Sprintf("Welcome %s!", claims.Username)))
}
Renewing your token
In this example, we have set a short expiry time of five minutes. We should not expect the user to log in every five minutes if their token expires. To solve this, we will create another /refresh route that takes the previous token (which is still valid), and returns a new token with a renewed expiry time.
To minimize misuse of a JWT, the expiry time is usually kept in the order of a few minutes. Typically the client application would refresh the token in the background.
func Refresh(w http.ResponseWriter, r *http.Request) {
	// (BEGIN) The code up to this point is the same as the first part of the `Welcome` route
	c, err := r.Cookie("token")
	if err != nil {
		if err == http.ErrNoCookie {
			w.WriteHeader(http.StatusUnauthorized)
			return
		}
		w.WriteHeader(http.StatusBadRequest)
		return
	}
	tknStr := c.Value
	claims := &Claims{}
	tkn, err := jwt.ParseWithClaims(tknStr, claims, func(token *jwt.Token) (interface{}, error) {
		return jwtKey, nil
	})
	if err != nil {
		if err == jwt.ErrSignatureInvalid {
			w.WriteHeader(http.StatusUnauthorized)
			return
		}
		w.WriteHeader(http.StatusBadRequest)
		return
	}
	if !tkn.Valid {
		w.WriteHeader(http.StatusUnauthorized)
		return
	}
	// (END) The code up to this point is the same as the first part of the `Welcome` route

	// We ensure that a new token is not issued until enough time has elapsed
	// In this case, a new token will only be issued if the old token is within
	// 30 seconds of expiry. Otherwise, return a bad request status
	if time.Unix(claims.ExpiresAt, 0).Sub(time.Now()) > 30*time.Second {
		w.WriteHeader(http.StatusBadRequest)
		return
	}

	// Now, create a new token for the current user, with a renewed expiration time
	expirationTime := time.Now().Add(5 * time.Minute)
	claims.ExpiresAt = expirationTime.Unix()
	token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
	tokenString, err := token.SignedString(jwtKey)
	if err != nil {
		w.WriteHeader(http.StatusInternalServerError)
		return
	}

	// Set the new token as the users `token` cookie
	http.SetCookie(w, &http.Cookie{
		Name:    "token",
		Value:   tokenString,
		Expires: expirationTime,
	})
}
Running our application
To run this application, build and run the Go binary:
go build
./jwt-go-example
Now, using any HTTP client with support for cookies (like Postman, or your web browser) make a sign-in request with the appropriate credentials:
POST {"username":"user1","password":"password1"}
You can now try hitting the welcome route from the same client to get the welcome message:
GET http://localhost:8000/welcome
Hit the refresh route, and then inspect the client's cookies to see the new value of the token cookie.
You can find the working source code for this example here.
If you want to learn more about cryptography in Go, I’ve written another post on implementing RSA encryption in Go | https://www.sohamkamani.com/golang/2019-01-01-jwt-authentication/ | CC-MAIN-2020-50 | refinedweb | 1,823 | 53.31 |
Haar Transform Demo Applets
The two applets on this page were written for Math 371, Spring 2006. Clicking on either of the applet buttons will launch a program in a separate window. To use these programs, your screen should be at least 1024 pixels wide (and even at 1024 pixels, the windows will not quite fit across the screen). The applets require Java 1.4 or higher. If you do not have Java installed properly on your computer, you might not even see any buttons on this page.
Haar Wavelet Demo
To launch a demo of the Haar Wavelet Transform for finite, discrete signals, click the following button:
Instructions: The button will open a window that has two large white areas representing a portion of the xy-plane. In the left area, an input signal is graphed. The input signal is just a finite list of numbers (s1,s2,...sN), where N is a power of two. The signal values are graphed as points at x-values 1, 2, 3, ..., N. When the window first appears, N is 128, but you can change the number of points using a pop-up menu at the bottom of the screen.
The signal values are the y-values of the points. Initially, all the signal values are zero. You can change the signal values by dragging the points of the input signal up and down. If you drag with the left mouse button, nearby points will be pulled along by the point that you drag. If you drag a point with the right mouse button, only the one point that you drag will move. You can also set the y-values from a function of the variable n, where n takes on the values 1, 2, ..., N for the different points of the signal. Enter the function in the box at the bottom of the window and press return (or click the "Set") button. The definition of the function can use the operators +, -, *, /, and ^ (the latter representing exponentiation). It can include trigonometric, exponential, and logarithmic functions as well as abs (for absolute value) and trunc (for truncating a real number to an integer value). The definition of the function can also use the variable x. For input number n, the value of x is defined to be n/N (where N is the number of points. Thus, the value of x ranges from 1/N for the first signal value to 1 for the last signal value.
The point of the applet is to show the effect of the Haar Transform on the input signal. The applet can show several different outputs for a given signal. You can select the output that you want to see using a pop-up menu at the bottom of the window. If the number of points in the input signal is 2^n, there are actually n different transforms, called the 1-level, 2-level, 3-level, ..., n-level transforms. You'll find all of these in the "Output" pop-up menu. There is also another set of possible outputs in the menu: Level 0 Reconstruction, Level 1 Reconstruction, ..., Level n Reconstruction. These reconstructions start with the n-level Haar Transform. The Level 0 Reconstruction is a constant signal whose value is the same as the average value of the input function; this average is obtained from the final "trend" value in the n-level Haar Transform. The Level 1 Reconstruction adds in information from the n-th order difference in the n-level Haar Transform. The Level 2 Reconstruction adds in information from the two (n-1)-th order differences. And so on. The final Level n Reconstruction has added in all the differences from the n-level Haar Transform, and so is equal to the original signal. (The Level n Reconstruction is what you get when you apply the inverse n-level Haar Transform to the transform of the input signal -- the result is that you get the original input signal back.) Another way of looking at this is to say that as you go from one reconstruction to the next, you are adding in information about the input signal at finer and finer scales.
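As a concrete illustration of the trend/difference decomposition described above, here is one level of the Haar transform in code. This sketch uses the averaging normalization, trend = (a+b)/2 and difference = (a-b)/2, which matches the applet's description of the final trend as the signal average; other texts divide by the square root of 2 instead:

```python
def haar_1level(signal):
    """One level of the Haar transform: pairwise trends and differences."""
    trends = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    diffs = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return trends, diffs

def haar_inverse_1level(trends, diffs):
    """Reconstruct the original signal from trends and differences."""
    signal = []
    for t, d in zip(trends, diffs):
        signal += [t + d, t - d]  # a = t + d, b = t - d
    return signal

s = [4, 2, 6, 6, 5, 3, 2, 0]
trends, diffs = haar_1level(s)
print(trends)  # [3.0, 6.0, 4.0, 1.0]
print(diffs)   # [1.0, 0.0, 1.0, 1.0]
print(haar_inverse_1level(trends, diffs) == [float(v) for v in s])  # True
```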
Note that the display areas are initially set up to show a range of y-values between -5.0 and 5.0. However, this range will automatically be expanded if necessary to make sure that all the signal values are visible.
Homework #2 from Math 371 (hw2.pdf) has some exercises based on this program.
Haar Wavelet Signal Compression Demo
To launch a demo of using the Haar Wavelet Transform for signal compression/decompression, click the following button:
Instructions: The button will open a window that is similar to the Haar Wavelet Demo described above, but in this case, the purpose is to help you understand how the Haar transform can be used to "compress" and decompress a signal. The idea is that some of the numbers in the Haar Transform of a signal are thrown away (that is, set equal to zero). Because of the missing information, when you apply the inverse Haar Transform to the modified signal, you don't get an exact reconstruction of the original input signal; you get an approximation. ("Compression" means that you have less data to store or transmit, because you only have to save the non-zero values. This is "lossy" compression because you have lost some of the information and only get an approximation when you decompress the signal.)
The slider at the bottom-center of the window controls how many elements of the n-level Haar Transform are discarded. Initially, the number is zero, so there is no compression at all. Drag the slider to the right to discard more data from the transform.
In this program, the display area on the right always shows the full n-level Haar Transform of the input signal. The values that have been discarded are shown in pink. As you drag the slider to the right, you'll see that more and more points turn pink. Note that values are discarded in order of increasing absolute value.
The display area on the left actually shows two things (although initially only one of them is visible). In this area, the input signal is first drawn in blue. Then the "decompressed" signal is drawn in red. The decompressed signal is obtained by starting with the n-level Haar Transform of the input signal, "compressing" by setting the specified number of values in the transform to zero, then applying the inverse n-level Haar Transform to the modified transform. The result is an approximation of the input signal. The more points that are discarded, the worse the approximation.
When no points are discarded, the decompressed signal is equal to the original, so all you see are the red points that represent the decompressed signal; the blue points of the input signal are hidden behind them. As you discard more and more points, the decompressed signal generally becomes a poorer approximation of the input, and the blue points of the input signal start to appear as the red points of the decompressed signal move out of alignment. If you discard all the points from the transform, of course, the "decompressed signal" will be equal to zero.
As with the first applet on this page, you can change the input signal by dragging points in the left-hand display area or by entering a function in the function input box. | http://math.hws.edu/eck/math371/applets/Haar.html | crawl-003 | refinedweb | 1,251 | 61.56 |
Matsen (Member)
Content Count: 142
Community Reputation: 132 (Neutral)
Interests: DevOps, Programming
- Quote: The switch statement would still be needed upon construction of the array, but that would be it. I hope that makes more sense. Let me know!
You're on the right track. I think you should separate the construction of the concrete file classes from the class itself. For example, you could create a separate function which creates a concrete file object for you. The rest of your code will only deal with the base class, FileObjBase.
FileObjBase* CreateFileObj(const std::string& fileName)
{
    HANDLE fileHandle;
    // open the file and fetch the type
    switch (theFileType)
    {
    case TYPE_INT:       return new FileObj<int>(fileHandle);
    case TYPE_FLOAT:     return new FileObj<float>(fileHandle);
    case TYPE_MYTYPE:    return new FileObjMyType(fileHandle);
    case TYPE_WEIRDTYPE: return new FileObjWeirdType(fileHandle);
    }
    return 0; // or throw an exception for unknown type
}
I can't see any reason why your solution would not work; basically you're doing the same thing, but with the array class. But I think letting std::vector do the work for you, with a different design, is more appealing than writing your own special array class. Also, as your code shows, it looks like you're stuck with switch statements all over. By separating construction from the class, you will only need one switch statement, and need only to create subclasses for any type you wish to support. Hope it helps
- Why use a base class for the array class? Looking at your previous question, it seems as if a base class for the file class would be better:
class FileObjBase
{
public:
    virtual void foo() = 0;
    virtual void bar() = 0;
};

template <typename T>
class FileObj : public FileObjBase
{
    std::vector<T> array;
public:
    void foo() {}
    void bar() {}
};
Please elaborate a little on your design, if I've misunderstood you =)
hello world
Matsen replied to jagguy's topic in For Beginners's Forum
You're mixing C++ and Managed C++ (.NET). Console::WriteLine is part of the .NET libraries, while cin and cout are part of the standard C++ libraries. As Boder said, cin and cout are in the std namespace: std::cin, std::cout
a design problem
Matsen replied to derek7's topic in General and Gameplay Programming
If it is correct to say that Material has a texture, then the texture should be owned by Material. But having a class where a texture is rarely used makes using the Material class a bit cumbersome, as you have to check if a texture is available. My suggestion would be to create a subclass of Material which always has a texture, as "Material" is a very abstract concept. If this does not fit your design, you could always create some sort of null-texture, so that the rest of the design can assume a texture is always present.
Arrays with new operator
Matsen replied to Rabbi's topic in For Beginners's Forum
std::string* layerOrder = new std::string[n];
...
delete [] layerOrder;
Regards Mats
Edit: I'm too slow
Archiving files c++
Matsen replied to incin's topic in General and Gameplay Programming
If the compression ratio is of no particular concern, the best might be to write your own simple compression routine. It's not very hard; search for Huffman or Adaptive Huffman encoding.
Edit: Or RLE encoding. It depends on the data you wish to compress.
Regards Mats.
C++ help
Matsen replied to namingway's topic in For Beginners's Forum:

Sure there is. It seems as if Game Maker uses strings, but that seems unnecessary. The simplest way would be to use a class, with the different levels separated:

case 1: player.hp += 30; player.mana += 3;
case 2: player.hp += 32; player.mana += 4;
...
etc

If strings is what you want, you should take a look at std::string and std::stringstream. There are other ways, such as using masking for storing the levels in one 32 bit int (search for masking and bitwise operators). Your question is very general, so perhaps you should elaborate so we can give you a more precise answer.

Regards Mats
- Quote: Original post by Winegums: are deleting the struct and deleting the thread two different things? if so, how do i delete each/both of them? i thought changing the struct instances 'running' variable to END would kill the thread, but it seems not.

Yes. You have to delete the struct and let the thread return from doit(). The thread doesn't know about the struct and doesn't care either. It has a life of its own. Don't try to terminate the thread forcibly; that can cause havoc if you try to expand on this framework of yours. As I said in my previous reply, changing the state to END probably does nothing, because the state variable is only read from memory once. Making it volatile might solve the problem, but this is a guess, and even if it works, it's a very bad solution.

Your solution will not allow for a 'theoretically limitless number of threads' because any computer will choke long before that. What you should do is use synchronization. An 'event' would be very useful in your case. Here's a symbolic example (the syntax is not correct); replace the 'paused' state with an event:

while (thread_properties->isAlive())
{
    thread_properties->event->wait(); // if paused, the thread is stopped here
    // do stuff
}

That will allow for as many threads and events as the OS can handle. As I also said, threads are tricky, so my suggestion is that you find some tutorials about threads AND synchronization before you continue.

Edit: If this really is a school assignment or from an employer/co-worker you should run and never look back ;)
- I don't understand what you're trying to do. Perhaps you should take a look at something called a thread pool, if that's what you want to achieve. Anyway:

First, you don't need the _endthread() call. It's called automatically when the thread returns from doit().

Second, don't use loops to prevent unused threads from dying. They will eat resources. Use some form of synchronization.

Third, you're not deleting the thread entry, you simply assign the pointer a null value.

Fourth, this is a guess, but the reason why the thread doesn't die might be because the state variable is not read from memory each lap in the loop. Try making it volatile and see if that helps (volatile int state).

Threads are tricky, so don't be discouraged! Regards Mats
Or Suggestions on good *series* scifi novels!
Matsen replied to Tape_Worm's topic in GDNet Lounge:

I recommend Peter F Hamilton's Night's Dawn Trilogy. Each book is about 1000+ pages, so you'll be busy for a while.
operator overload sort problem
Matsen replied to fireside's topic in For Beginners's Forum:

operator< is not called because it's comparing pointers. Try this:

bool less(const Obj* lhs, const Obj* rhs)
{
    return lhs->getY() < rhs->getY();
}

...

std::sort(drawObjects.begin(), drawObjects.end(), less);

Regards Mats
Access violation in dbgdel.cpp?
Matsen replied to nickwinters's topic in General and Gameplay Programming:

You're probably deleting something twice, or maybe you're trying to delete something that isn't dynamically allocated. Run a debug. Regards Mats
The internet phenomenon (Consuming our lives)
Matsen replied to MindWipe's topic in GDNet Lounge:

Quote: Original post by MindWipe: But it's not really that easy. I think I want to study computer engineering, mostly because I've been coding for 10 years and it's been fun. BUT, I'm not so sure this is what will give me the most out of life. I was thinking becoming a teacher would really be much more rewarding. And maybe go somewhere, do something out of the blue, an impulsive action.

It is easy. You can't foresee the future. You don't know what you want to do in 1, 5 or 10 years from now. So, just do what your heart/gut tells you to do and trust it. Put your head in a refrigerator for a while. Stop thinking and intellectualizing your situation. Worry about the future when the future is now. Until then, the future is an illusion that exists only in your head. You can't do something impulsive by trying to be impulsive. You have to stop thinking.

Quote: People need to learn to be thankful for everything they take for granted.

I agree 100%

Regards Mats
Use class variables between threads
Matsen replied to Halsafar's topic in General and Gameplay Programming:

Quote: ...but is it also safe to use class instances between threads?

It depends on your class, its expected behaviour and design. And it's not inherently safe to share variables between threads either. It all depends :) Look at this:

struct A
{
    int some_variable = 0;

    void f()
    {
        if (some_variable == 0)
        {
            some_variable++;
        }
    }
};

If two threads access a shared instance of A and call f at the same time, some_variable can accidentally be incremented twice. So, in the case above you'll need some form of synchronization. Basically, the only scenario where you don't need synchronization is when there are only read operations. As soon as you start adding some behaviour to the class, as in the case above, you'll probably need to serialize access.

In your example, using a global variable to pause the demo/game seems a bit silly. Why not let all objects that can be paused have pause-methods?

Regards Mats
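A sketch of how the struct above could be made safe, using C++11 synchronization as a stand-in (the original discussion predates std::mutex, so the exact API here is illustrative):

```cpp
#include <mutex>
#include <thread>

// The struct from the reply, with access to some_variable serialized.
// std::lock_guard releases the mutex automatically when f() returns,
// so at most one thread at a time can observe the some_variable == 0 state.
struct A {
    int some_variable = 0;
    std::mutex m;

    void f() {
        std::lock_guard<std::mutex> lock(m);
        if (some_variable == 0) {
            some_variable++;  // check-then-act is now atomic as a unit
        }
    }
};
```

Without the lock, two threads can both pass the == 0 check before either increments; with it, the second caller sees 1 and does nothing.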
cq: Code Query
A tool to extract code snippets using selectors (instead of line numbers)
Install
$ npm install --global @fullstackio/cq
Usage
$ cq <query> <file> # or $ cat file | cq <query>
Examples
Say we have a file
examples/basics.js with the following code:
// examples/basics.js
const bye = function() {
  return 'bye';
}

bye(); // -> 'bye'

let Farm = () => 'cow';

class Barn {
  constructor(height, width) {
    this.height = height;
    this.width = width;
  }

  calcArea() {
    return this.height * this.width;
  }
}
Get the
bye() function:
$ cq '.bye' examples/basics.js
const bye = function() {
  return 'bye';
}
Get the
bye() function plus the invocation line after:
$ cq '.bye:+1' examples/basics.js
const bye = function() {
  return 'bye';
}
bye(); // -> 'bye'
Get the
calcArea function on the
Barn class:
$ cq '.Barn .calcArea' examples/basics.js
calcArea() {
  return this.height * this.width;
}
Get the range of constructor through calcArea, inclusive, of the Barn class:
$ cq '.Barn .constructor-.calcArea' examples/basics.js
constructor(height, width) {
  this.height = height;
  this.width = width;
}

calcArea() {
  return this.height * this.width;
}
Features
- Extract chunks of code from text using robust selectors (vs. brittle line numbers)
- Locate ranges of code using identifiers
- Parses ES6 & JSX (with babylon)
Motivation
When writing blog posts, tutorials, and books about programming there’s a tension between code that gets copied and pasted into the text and runnable code on disk.
If you copy and paste your code into the text, then you're prone to typos and missing steps. When things change, you have to update all of the copypasta and eyeball it to make sure you didn't miss anything. Mistakes are really easy to make because you can't really test code that's in your manuscript without its context.
A better solution is to keep your code (or steps of your code) as runnable examples on disk. You can then load the code into your manuscript with some pre-processing.
The problem with the code-on-disk approach is how to designate the ranges of code you wish to import. Line numbers are the most obvious approach, but if you add or remove a line of code, then you have to adjust all line numbers accordingly.
cq is a tool that lets you specify selectors to extract portions of code. Rather than using brittle line numbers, instead
cq lets you query your code. It uses
babylon to understand the semantics of your code and will extract the appropriate lines.
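To make the contrast with line numbers concrete, here is a deliberately naive, dependency-free sketch of "query the code, don't count lines". It uses string matching and brace counting, which is exactly what cq does not do (cq parses with babylon); every detail below is illustrative, not cq's implementation.

```javascript
// Toy illustration only: find the declaration mentioning `name` and
// return its lines up to the matching closing brace.
function extractByName(source, name) {
  const lines = source.split('\n');
  const start = lines.findIndex(function (l) {
    return l.includes('function ' + name) ||
           l.includes('const ' + name) ||
           l.includes('class ' + name);
  });
  if (start === -1) return null;
  let depth = 0;
  let seenBrace = false;
  for (let i = start; i < lines.length; i++) {
    for (const ch of lines[i]) {
      if (ch === '{') { depth++; seenBrace = true; }
      if (ch === '}') depth--;
    }
    // once the opening brace's match closes, the declaration is complete
    if (seenBrace && depth === 0) return lines.slice(start, i + 1).join('\n');
  }
  return null;
}
```

The point of the comparison: edits above the declaration move it to different line numbers, yet extractByName(src, 'bye') (like cq's .bye query) still finds it, while a hard-coded 30-35 range silently rots.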
Query Grammar
.Identifier
Examples:
.Simple
.render
A dot
. preceding JavaScript identifier characters represents an identifier.
In this code:
const Simple = React.createClass({
  render() {
    return (
      <div>
        {this.renderName()}
      </div>
    )
  }
});
The query
.Simple would find the whole
const Simple = ... variable declaration.
Searches for identifiers traverse the whole tree, relative to the parent, and return the first match. This means that you do not have to start at the root. In this case you could query for
.render and would receive the
render() function. That said, creating more specific queries can help in the case where you want to disambiguate.
[space]
Examples:
.Simple .render
.foo .bar .baz
The space in a query selection expression designates a parent for the next identifier. For instance, the query
.Simple .render will first look for the identifier
Simple and then find the
render function that is a child of
Simple .
The space indicates to search for the next identifier anywhere within the parent. That is, it does not require that the child identifier be a direct child of the parent.
In this way the space is analogous to the space in a CSS selector. E.g. search for any child that matches.
cq does not yet support the
> notation (which would require the identifier to be a direct child), but we may in the future.
Range
Examples:
.constructor-.calcArea
.Barn .constructor-.calcArea
Given:
class Barn {
  constructor(height, width) {
    this.height = height;
    this.width = width;
  }

  calcArea() {
    return this.height * this.width;
  }
}
A pair of selections (e.g. identifiers) joined by a dash (-) forms a range. A range will emit the code from the beginning of the match of the first identifier to the end of the match of the last.
You can use a parent identifier to limit the scope of the search of the range as in the query:
.Barn .constructor-.calcArea
If you’d like to specify a line number, you can use a number (instead of an identifier) in a range. For example the query:
30-35 will give lines 30 through 35, inclusive.
Modifiers
Examples:
.bye:+1
.bye:+1,-1
Given:
// here is the bye function (emitted with -1)
const bye = function() {
  return 'bye';
}
bye(); // -> 'bye' (emitted with +1)
After the selection expression you pass additional query modifiers. Query modifiers follow a colon and are comma separated. The two modifiers currently supported are:
- adding additional lines following an identifier (or range)
- adding additional lines preceding an identifier (or range)
Lines following the identifier are designated by +n, whereas lines preceding are specified by -n, where n is the number of lines desired.
Library Usage
var cq = require('@fullstackio/cq').default;
var results = cq(codeString, query);
console.log(results.code);
Future
- Add queries for header information such as comments, imports, and requires
- Add the ability to extract several sections in a single query
- Add TypeScript support (but keep the same query language)
- Add BOF and EOF tokens for selecting the range from beginning and end of file
Limitations
- It’s possible to specify invalid queries and the error messages are not helpful
- Only one selector is possible per query
- Some sections of code are not directly selectable (because the query language is not yet expressive enough)
- You can only select whole lines (e.g. comments on the same line after an expression are captured) – this is by design, but it should be configurable
Query API Stability
The query API is early and likely to change before version 1.0.0 (see Future). Any breaking API changes (query or otherwise) will result in a major version bump.
Contributing
Please feel free to submit pull requests!
Authors
Originally written by Nate Murray .
Related
- GraspJS – another tool to search JavaScript code based on structure
- Pygments – a handy tool to colorize code snippets on the command line
- ASTExplorer – an online tool to explore the AST of your code
Fullstack React Book
This repo was written and is maintained by the Fullstack React team. If you're looking to learn React, there's no faster way than by spending a few hours with the Fullstack React book.
You may face errors while doing the tutorials that require Relays / Service Bus - where you are supposed to Create LOB Relay and Target for the Operations.
Message: 'Error occurred while trying to bring up the relay service. Error Message: 'The remote name could not be resolved: 'mysbname-sb.accesscontrol.windows.net'
The issue is that any Service Bus namespace created via the portal after August 2014 will not have an accompanying ACS namespace, and MABS relies on ACS for authentication. We are working on SAS support within BizTalk Services. Till then, please use the PowerShell command New-AzureSBNamespace to create the Service Bus namespace, which allows an ACS namespace to be created. For more details, have a look at Service Bus namespace creation on portal no longer has ACS connection string
Trying to use newly created Service Bus Queue on the receive side. While deploying the VS project, getting an error.
Error 1 QueueSource1 deployment failed at ''. Failed to connect to the Service Bus using specified configuration.
This issue is due to the service using an older version of Microsoft.ServiceBus.dll, which does not support partitioned entities. Partitioned entities are supported from version 2.2 onwards of Microsoft.ServiceBus.dll.
Check the blog for further details: Enabling BizTalk Services to work with Service Bus 2.2 SDK
The resolution mentioned is
1. You can use a queue created from Service Bus Explorer. As of writing this blog, queues created this way do not have entity partitioning enabled by default.

2. Use the custom create option while creating queues\topics. On the second page, the user should DISABLE partitioning – UNCHECK THE CHECKBOX.
ajm113 (Members)

Content count: 779
Community Reputation: 355 (Neutral)
About ajm113
- Rank: Advanced Member
ajm113 replied to Plotnus's topic in Engines and Middleware:

Sounds like you already covered every inch of what needs to be known to make a choice. :wink: Not sure why you started this thread, to be honest. Unless I missed the point.
ajm113 replied to ajm113's topic in Games Career Development:

Interesting, can't say I've ever worked at that level; most teams I've worked with were about 10 people. Half of them were UI/UE designers.

To answer your question, I suppose it's simply something I'm most comfortable doing, knowing the low level of things. I.e. I would much rather build the server software that runs the website, dealing with worker threads, logging, and learning the HTTP protocol extensively, than actually putting together the PHP and HTML that shows up.

I'm definitely noticing that where I live, with all the .NET positions cropping up. Perhaps I'm stubborn and should just pick up a few books on .NET and adapt to the current market. Not to start any flame wars, and no offense to any die hard fans, I just found .NET a very limited platform, which is why I don't bother with it, though I do see why companies use it: something breaks, then there is someone else to yell at. ;)

In all honesty, I'm looking for something rewarding in the long run (health insurance, retirement, salary). Just something a little more stable; where I work I get no benefits, just pay by the hour. Got a few people telling me they get 40k a year and work 50-60 odd hours a week in the game dev world; seems a lot of stress for nothing.

Hate to say it, but I'm kinda leaning towards the self start up idea of opening an online marketing business that will evolve into other areas. Definitely will need a bit of time and money before that can get rolling. Of course, one thing at a time. ;) You can't be an "astronaut, doctor, and president at the same time".
ajm113 replied to ajm113's topic in Games Career Development:

1. So you're making a difficult decision between the ideal and the practical, you say. Why do you think you can't have both? Go for it already. It pays well and is reasonably secure.
2. Okay, I admit it - I don't know what the difference is. Care to explain?
3. Why do you think you need a large market? What's wrong with the state where you live? You never mentioned games in your post, so although this is a game forum, I assume you're not looking for a game job.
4. You should put all your work experience on your résumé.
5. You should shoot for a position that you'd be good at and also enjoy.
6. Some people have, but that's irrelevant. How old are you?
7. Why not now?
8. I moved it to the right one.

Good point, why wait, basically. A developer would simply be bug fixing, maintaining code, and integrating new features; as an engineer, you would be using engineering principles for software creation, i.e. a wide picture of a big project. One way I see it, at least.

Where I currently live in Arizona has a very small cpp software market. It's mainly .NET; CPP is very rare, and when one position shows up the requirements are pretty high. I want to do simple software at the moment. I've written small game engines using C++ > Lua, SDL/OpenGL, but nothing more than that. Even smaller projects that simply used C++/C?

True, I would need to spend a bit of time researching the market.

24; a lot of HR will shoot down an app with no degree, at least in my experience. Too many damn bills and things to take care of at this moment. (Fingers crossed to get back in school next year.)

Thanks! :) I could see that, not sure if a place like Portland vs San Diego would have a large difference for cpp programmers. I was planning on starting a business in online marketing and slowly merging into other areas like A/V and advertising when things get rolling, but I'm not 100% sure on that yet.
I can see that, put whatever you have, pretty much. I suppose all that would highly depend on where you live; housing in AZ would be dramatically different than NY or NJ. If I got a sr. position I would definitely want that 100k salary. haha
ajm113 replied to RLS0812's topic in GDNet Lounge:

I'm almost certain those simply cost 10-20 USD in East Asia to produce. haha Oh Amazon comments...
ajm113 posted a topic in Games Career Development:

I'm going to assume I'm answering my own question and I'm doing wishful thinking, but I would like to hear some opinions. I've been in web development for 3-4 years now, working with Linux/Unix and Windows, almost as a full stack developer. I really enjoy my job and find it secure; however, I've always had my heart set on being a C++ developer. (Assuming developer is much more achievable than engineer.)

Can anyone give me some tips or pointers on which US states have the largest cpp market, and perhaps some pointers on what to look for and what to use on the resume? Obviously recent C++ projects, but should I include my web development jobs anyway? Should I shoot for a jr. position that pays decently (60k+) or a sr. role (less working with code and more working with clients/managers)? Also, has anyone had luck landing a cpp developer position with simply programming experience and no college degree? I plan on working to get my CS degree some time in the future.

Sorry if this isn't in the proper area of the forum. Thanks, Ajm
Unity
ajm113 replied to ajm113's topic in General and Gameplay Programming:

I'm sorry frob, it's a nice library, but it's not what I'm looking for exactly. (I should have been more specific.) I'm looking to basically store records like an RDBMS such as MySQL does, where I'll need to constantly be able to find data, and/or write it, in chunks from small to large in a file/vector array.
Unity
ajm113 posted a topic in General and Gameplay Programming:

Hello, I'm on the search for a file mapping library. I know Win32 has file mapping functionality, but this doesn't help me for the Unix/Linux community, and I know I may be able to write one on my own. (Why reinvent the wheel?) But is there an MIT-licensed file mapping library out there that's cross platform?

My project, granted, will be fine with 4GB RAM machines for small scale projects (32bit), maybe 90GB RAM servers (64bit) if we are talking big data. But suppose someone wanted to go the inexpensive route and do clustering with my application: it will need to use the HD (slow) as a means to store data for the program (which may be a lot of reads and writes) if we are talking 5GB to 10GB of data required to be stored.

If someone can give tips or point me in the right direction, that would be amazing, thank you! -Ajm
ajm113 replied to Nicholas Kong's topic in General and Gameplay Programming:

I like to look at it this way... Is time costing someone else money?

Time costing someone money:
1. Plan (find the best method)
2. Implement
3. Test
4. Continue

Time costing no one money:
1. Implement
2. Test
3. Continue
4. Finish
5. Optimize
ajm113 posted a blog entry in ajm113's Journal:

I'm publishing an update for the Awesome INI Loader library. The library now has a new license, WTFPL, so now you may use the library for anything you wish. Why I changed it to this is because I believe code shared with the world should be able to be modified and used by anyone, with or without permission. I know some may differ, but I write code because it's fun, and I want to help the smaller guys out.

And for anyone who isn't familiar with this library: it will load, parse, and error check INI files, while giving the programmer full customization easily, leaving a small memory footprint, being easy to port to any platform, and easy to add to any C++ project. The library has extremely small requirements and will run on most C++ compilers. Great for small and big projects, and it gets weekly or monthly updates.

The library has a few new functions! Telling from the image, error checking and getting the data you want is 2x simpler and faster now that the library supports these new functions:

bool getKeyInt(int &p, const char* group, const char* key);
bool getKeyBool(bool &p, const char* group, const char* key);
bool getKeyString(char* dstBuffer, const char* group, const char* key);
bool getKeyFloat(float &p, const char* group, const char* key);

The code is pretty much self explanatory, but I'll give a small run down... The first argument is the variable or buffer you wish to pass to store the data. Arguments 2 and 3 are the group and key you wish to retrieve. If everything goes well, the functions will return true, and you don't have to check the error function; otherwise, you may need to see what's going on. I've also added a little documentation for the error codes in the source code in "awesome_ini.h".
ajm113 posted a blog entry in ajm113's Journal:

Hello everyone! I would like to introduce my INI library parser. I wrote it from scratch and would like to thank the community for their help writing this library. So what's so good about this library?

- Lightweight, small memory footprint
- Easy to modify
- BSD License
- Pure C++ mixed with C (easy to port)
- Multiplatform
- Easy to use error handling system (one pointer does all)
- Easy to set up with any project
- Easy to customize to your project needs (want to use "#" for comments? Change the separator? Edit one line of code!)

I'm really looking for some testers and input to make this project better for everyone, for any application big and small! Example code:

#include "awesome_ini.h"

int _tmain(int argc, _TCHAR* argv[])
{
    awesome_ini* myIni = new awesome_ini("test.ini");

    AWESOME_ERROR_CODES r = myIni->getError();
    size_t l = myIni->getErrorLine();

    if (r == AWESOME_ERROR_NONE) {
        printf("Loaded ini!\n");
    } else {
        printf("Failed! %i %i\n", r, l);
    }

    char value[AWI_MAX_VALUE_NAME_LENGTH];
    AWESOME_INI_KEY_TYPE t = myIni->getKey(value, "test", "hello");
    printf("Variable Type: %i - Variable Value: %s\n", t, value);

    getchar();
    return 0;
}

If anyone has any questions or would like to help port this library to other languages let me know! Thank you! ajm113
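For readers without the library, the kind of lookup those getKey* functions perform can be sketched standalone. This is not awesome_ini's actual implementation, just the general [group] / key=value scan such a library wraps:

```cpp
#include <map>
#include <sstream>
#include <string>

// Minimal stand-in for an INI lookup: parse "[group]" headers and
// "key=value" pairs into a map keyed by "group/key".
std::map<std::string, std::string> parse_ini(const std::string& text) {
    std::map<std::string, std::string> out;
    std::istringstream in(text);
    std::string line, group;
    while (std::getline(in, line)) {
        if (line.empty() || line[0] == ';')
            continue;                                    // blank line or comment
        if (line[0] == '[') {
            group = line.substr(1, line.find(']') - 1);  // start of a new [group]
        } else {
            std::string::size_type eq = line.find('=');
            if (eq != std::string::npos)
                out[group + "/" + line.substr(0, eq)] = line.substr(eq + 1);
        }
    }
    return out;
}
```

A typed getter like getKeyInt would then be a lookup in this map plus a string-to-int conversion, returning false when the key is missing or malformed.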
ajm113 replied to ajm113's topic in General and Gameplay Programming:

Funny enough you said that, I do actually already do something like that. My process...

1. Get idea.
2. Create console application in MVS.
3. Write base classes that do logic in my "main". (Keeps code clean and easy to look at, plus I don't cause global pollution.)
4. Test in main.

//..Includes, etc...
int main(...)
{
    //For debugging..
    bool printToScreen = true;

    myClass *myPtr = new myClass("Do something cool!", printToScreen);
    myPtr->explode();

    return 0;
}

5. Import library / code to larger project.
6. Drink more coffee.
ajm113 replied to ajm113's topic in General and Gameplay Programming:

Crank up the warning level of your compiler until it complains about that line. Returning a pointer to something in the stack is almost universally an error, and your compiler should be able to tell you about it. If you are going to do without std::string, I suggest you reimplement it, as something like this:

struct String {
    char *data;
    unsigned size;
    unsigned reserved;
};

Then write functions to perform all the operations you need on them.

Ah, that's a very good post.

`doLogic' should also take an argument that is the length of `dstBuffer', so it can make sure not to write past the end of the buffer. Notice how `sprintf_s' already does that.

Yes, that would be better to use as well for my error handling system, so if something goes wrong the user's application doesn't crash because of a small memory error.

char subbuff is stack allocated memory; you NEVER return pointers to stack memory, because then exactly what you are seeing happens. Instead you pass in a buffer that the function should write to (and you should pass in the size of that buffer). The reason why you don't see this with objects is that when you return an object, you are copying the value from the function, so if I do:

struct Foo {
    int x;
};

Foo Func(void) {
    Foo p;
    p.x = 10;
    return p;
}

int main(int argc, char **argv) {
    Foo f = Func();
    printf("%d\n", f.x);
    return 0;
}

what's happening is that when p is returned from Func, the values of p are copied into the values of f, so you're copying the values off the stack into another part of the stack that is still valid in the program context (note: this is a very simplified explanation).

However, if we do:

struct Foo {
    int x;
};

Foo *Func(void) {
    Foo p[2];
    p[0].x = 10;
    p[1].x = 12;
    return p;
}

int main(int argc, char **argv) {
    Foo *f = Func();
    printf("%d\n", f[0].x);
    return 0;
}

this is undefined behavior, and is very likely to crash the program. This is because you are returning a pointer into stack allocated memory that became invalid the moment you returned from Func. In many cases using the data immediately after would technically still work, but it is undefined and very bad to rely on such behavior. One of the correct solutions is to do like so:

struct Foo {
    int x;
};

bool Func(Foo *p, int psize) {
    if (psize < 2)
        return false; // failed
    p[0].x = 10;
    p[1].x = 12;
    return true; // succeeded!
}

int main(int argc, char **argv) {
    Foo f[2];
    // we can pass stack allocated memory into functions,
    // we just should never return such memory!
    Func(f, 2);
    printf("%d\n", f[0].x);
    return 0;
}

Edit: always try to provide the minimal working example to demonstrate your problem; you'll get more concise answers that way, rather than us trying to understand what the hell you're doing.
ajm113 replied to ajm113's topic in General and Gameplay Programming:

Ah, that would make sense. So I need to allocate the memory on the heap then, so I can use the variable globally. I didn't think I would need a working example since I thought putting comments with the output would be enough. I really need to get back to the basics again, but at a lower level.

@Mark: doLogic is anything that returns char*. I.e. an example of a substr function I made:

char* awesome_ini::aw_subStr(const char* line, size_t start, size_t length)
{
    char subbuff[AWI_MAX_KEY_NAME_LENGTH];
    memset(subbuff, 0, AWI_MAX_KEY_NAME_LENGTH);

    size_t p = 0;
    for (size_t i = start; i < length + 1; i++)
    {
        subbuff[p] = line[i];
        p++;
    }

    return (char*)subbuff;
}

That also causes the pointer in the example code to go away after a few lines of code. And doMath is simply a function that does anything really... I.e.:

void doMath(void)
{
    //Do some long calculation
    int x = 5 - 2;
    printf("%i\n", x);
}
- Hello, I'm writing a library using low level C++ methods so my library can easily be built in C. So I'm staying away from the C++ standard templates, which means vectors and strings are a bit limited. (Plus I think it's fun to have a bit of a challenge. ;) ) I've mostly been using strings in C++ for ease of use, and I decided to stick with char pointers, but I appear to run into odd problems, which I can't explain. I'll give an example.

//Issue One
char buffer[MAX_LENGTH];
sprintf_s(buffer, MAX_LENGTH, "Hello World!");

const char* bufferP = doLogic(buffer);
//bufferP = "Goodbye World!" after doLogic

//Some random function in my class that has nothing to do with bufferP at all.
doMoreMath();

//Wut? Now bufferP equals random garbage data?
//bufferP = "/////////tx///"

I know this problem can be fixed doing this:

const char* bufferP = doLogic(buffer);
char buffer_a[MAX_NAME_LENGTH];
memset(buffer_a, 0, MAX_NAME_LENGTH);
strncpy(buffer_a, bufferP, strlen(bufferP));

But why does this happen? This issue never seems to happen with any other pointers I've created, like objects. Is it necessary I do everything in char arrays, so I don't lose data randomly?

Thank you, Andrew.
ajm113 commented on Indium Indeed's article in Visual Arts:

*clears throat* So if we do the math... (Avg programmer worth per hour: 15.00-50.00.) So let's use the min a programmer makes to do the math...

8-15 hrs (2 days of about 8 hours a day) = est. 150-250 dollars just to write something you're going to use no more than a week.

If you ask me, you may as well buy one of those game recorders those Chinese manufacturing companies are barfing out, and use your time to make a good game. Basically, "if you need to break rules to get to your goal, then maybe you're not doing it right."

Can't afford a game recorder? Then maybe you should be in a job that utilizes your skills with higher pay.
Created on 2000-09-15 23:22 by gward, last changed 2000-09-17 12:21 by tim_one.
The 'copy.deepcopy()' function blows up if it stumbles across class, function, or method objects as it recurses into objects. This is (at least partly) because of an inconsistency between the dispatch table used for plain-vanilla 'copy()' and for 'deepcopy()': the 'copy()' dispatch table has an entry for ClassType, but the 'deepcopy()' dispatch table does not.
Also, neither dispatch table has entries for FunctionType or MethodType. This isn't a big deal for 'copy()', since it's unlikely that you'd do 'copy(my_func)', but it's inconsistent since you can do 'copy(my_class)' -- and the class object my_class is *not* copied, it's treated atomically, like strings or ints. For consistency, functions and methods should probably be treated the same as class objects.
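For context, copy.py chooses a copier function out of a per-type dispatch table, and a missing entry is exactly what makes these cases blow up. A stripped-down sketch of that mechanism (names here are illustrative, not the module's real internals):

```python
import types

def _copy_atomic(x):
    # "Copying" an atomic object just returns the very same object.
    return x

# Sketch of the dispatch-table idea: copy() and deepcopy() each look the
# object's type up in a table like this one.
_dispatch = {
    int: _copy_atomic,
    str: _copy_atomic,
    types.FunctionType: _copy_atomic,  # the kind of entry the patch adds
}

def sketch_copy(x):
    copier = _dispatch.get(type(x))
    if copier is None:
        # The failure mode described above: no table entry, no copy.
        raise TypeError("cannot copy object of type %s" % type(x).__name__)
    return copier(x)
```

With the FunctionType entry present, sketch_copy(f) simply returns f; remove the entry and the same call raises, which is the shape of the bug being reported.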
Here's a script that demonstrates the problem:
from copy import copy, deepcopy
def f(): pass
class K: pass
# can copy a list that contains a function, since we don't
# dive into the list
l = [1, 2, f]
try: copy(l)
except: print "error copying list with function"
else: print "ok copying list with function"
# but we can't copy the function
try: copy(f)
except: print "error copying function"
else: print "ok copying function"
# also can't deep copy a list with a function because deepcopy() dives in
try: deepcopy(l)
except: print "error deep-copying list with function"
else: print "ok deep-copying list with function"
# nor can we deepcopy the function alone
try: deepcopy(f)
except: print "error deep-copying function"
else: print "ok deep-copying function"
# can copy list with class object
l = ["foo", K]
try: copy(l)
except: print "error copying list with class object"
else: print "ok copying list with class object"
# but can't deepcopy it
try: deepcopy(l)
except: print "error deep-copying list with class object"
else: print "ok deep-copying list with class object"
(Hmm, I don't see a test script for the 'copy' module -- perhaps this would be a good start.)
This gives the same output for me with Python 1.5.2, 1.6, and 2.0b1 (all on Linux):
ok copying list with function
error copying function
error deep-copying list with function
error deep-copying function
ok copying list with class object
error deep-copying list with class object
And here's a patch that makes everything come out OK:
--- copy.py.orig	Fri Sep 15 19:19:05 2000
+++ copy.py.hacked	Fri Sep 15 19:20:20 2000
@@ -92,6 +92,8 @@
 d[types.TypeType] = _copy_atomic
 d[types.XRangeType] = _copy_atomic
 d[types.ClassType] = _copy_atomic
+d[types.FunctionType] = _copy_atomic
+d[types.MethodType] = _copy_atomic
 
 def _copy_list(x):
     return x[:]
@@ -165,6 +167,9 @@
 d[types.CodeType] = _deepcopy_atomic
 d[types.TypeType] = _deepcopy_atomic
 d[types.XRangeType] = _deepcopy_atomic
+d[types.ClassType] = _deepcopy_atomic
+d[types.FunctionType] = _deepcopy_atomic
+d[types.MethodType] = _deepcopy_atomic
 
 def _deepcopy_list(x, memo):
     y = []
Changed to Feature Request and added to PEP 42.
The docs (both the library ref and the comments at the top of copy.py) say that objects of class, method and function type are not handled by the module, so the docs match the code, and anything beyond that is a new-feature request. Well, almost -- it's a bug that shallow copies *do* "work" for class type now (since the docs say it doesn't), but I'm not going to take that away now.
If/when copy/deepcopy are so extended, more is needed than the suggested patch: after
y = deepcopy(x)
it should be the case that no mutations of x are visible via y, so _deepcopy_atomic is inadequate for anything with a writable attribute. For example, after deepcopy'ing a class object, changing the __bases__ attr of the original should have no effect on the deepcopied clone.
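The point can be demonstrated in a few lines (shown here in modern Python 3, where class objects are in fact deepcopied atomically, so this is exactly the behavior _deepcopy_atomic gives you):

```python
import copy

class Base:
    pass

class Other:
    pass

class K(Base):
    pass

# An atomic "deep copy" of a class returns the very same object...
K2 = copy.deepcopy(K)
assert K2 is K

# ...so mutating the original's writable attributes is fully visible
# through the "copy", violating deepcopy's isolation guarantee.
K.__bases__ = (Other,)
assert K2.__bases__ == (Other,)
```

(This is the plain-class case the report discusses; a class with a custom metaclass would dispatch differently.)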
So correct deepcopy implementations for these things probably need to use the "new" module. I suspect the copy module punted on these types to begin with in part because "new" wasn't built by default in the past. | http://bugs.python.org/issue214553 | crawl-002 | refinedweb | 680 | 59.84 |
The Abdication of the HTML Standard
GMGruman writes "The end of numbering for HTML versions beyond HTML5 hides two painful realities, argues Neil McAllister. One is that the HTML standards process has failed, becoming a seemingly never-ending bureaucratic maze that has encouraged the proliferation of draft implementations. That's not great, but as all the wireless draft standards have shown, it can be managed. But the bigger problem is that HTML has effectively been abandoned to four companies: Apple, Google, Opera, and Mozilla. They are deciding the actual fate of HTML, not a truly independent standards process."
Those Who Ship Win (Score:5, Insightful)
But the bigger problem is that HTML has effectively been abandoned to four companies: Apple, Google, Opera, and Mozilla. They are deciding the actual fate of HTML, not a truly independent standards process.
This reminds me of something that was promoted in a book I reviewed [slashdot.org]:
those who ship win
It's that simple. If this armchair talking head who wrote this article chastising the standards process were to magically code up a browser that better empowered me, a software developer, to deploy code to users that ran to my satisfaction then his standards would be realized first. And I might be tempted to use it and ask my users to use it so we can get good functionality.
Duh.
Back when the standards were still in flux (and still are) I was using Google Chrome to enjoy an Arcade Fire experiment [chromeexperiments.com] that used many HTML5 elements. And guess what? I started using Chrome and the implementation of their perspective of the standards gained just a planck constant more marketshare.
This guy can sit around and complain all he wants but for better or for worse: those who ship win.
Re:Those Who Ship Win (Score:4, Insightful)
Gee, that sounds like - a De Facto standard. Like MS Word .doc format! Guess evil is in the eye of the beholder.
Re: (Score:2, Insightful)
While I hate eldavojohn as much as anyone, I don't see anything in his post that was saying this fact is 'good' or 'evil', just that it's a fact. And honestly, he's right. If you design by independent committee, that committee needs to move at the pace of development or it will be ignored. If software companies are putting out releases faster than the committee is putting out standards, then the committee is worthless. This is ultimately the reasoning behind the move to non numbered releases, as it at l
Re: (Score:2, Insightful)
While I hate eldavojohn as much as anyone
...
Seriously, why do I even bother with this site?
Re:Those Who Ship Win (Score:4, Insightful)
Seriously, why do I even bother with this site?
Because you know that the majority are both more temperate and less vocal than the baseline stupid douche?
Re: (Score:2)
You made a good post. Don't get trolled, brother.
Re: (Score:2)
Yeah, that was a poke in the eye for sure. As the others have commented, you aren't universally hated here. That's the first reference to there being a group of people not liking you.
You're good people, don't let the AC's getcha down.
Re: (Score:2)
>If software companies are putting out releases faster than the committee is putting out standards, then the committee is worthless.
No. This is completely and utterly backwards thinking. The STANDARD is there for a reason, for releases to have a target to hit for making their core technology usable. Further releases should enhance the technology *surrounding* that standard, not try to "improve" upon that standard until they and others can agree on a new version of that standard to go forward.
Having browser makers defining the "standard" is the mess we needed to get out of in the first place. What, do we want to return to browser makers making up things as they go along, like goddamn IE?
How has backwards thinking like this become so prevalent in modern technology? I don't get it.
Wrong. The standard is there to allow competition so that one company doesn't control the technology. HTML has always been made up by the browsers and has always been open so that web developers can use it. There's nothing wrong with that.
I just need to know what to put in my web page header to tell a browser how to parse the output. If I want to use bleeding features supported by a subset of browsers, that's my problem. But if I want to reach the widest possible audience, I need to know what features
Re:Those Who Ship Win (Score:5, Interesting)
Having browser makers defining the "standard" is the mess we needed to get out of in the first place. What, do we want to return to browser makers making up things as they go along, like goddamn IE?
There's a long history to this that explains it fairly well. The concept of a "standard" was invented to handle the problem that vendors have always cheated customers by having their own definitions for units of measurement. Thus, food sellers have always tried to sell a "pound" (or whatever the local weight term was) of produce that was short weight.
Governments finally blocked this by making legal definitions of such units of measurement, and prosecuting vendors who used a unit that was different from the standard. This is, historically, the only thing that has ever worked. Anywhere in the world, if you sell, say, 200 grams of meat or rice, and it only weighs 180 grams, you will be found guilty of consumer fraud. If you try to say that you use your own definition of a gram, that statement would be used in court to show that your fraud was intentional. (There are occasionally parts of the world where such things aren't enforced, and the result is always the same: Vendors start cheating their customers.)
The real problem here is that HTML is not actually a "standard". That is, it has no force of law behind it. Any vendor can violate it at will, call their product "HTML", and have no fear that the government standards body will prosecute them for consumer fraud.
The phrase "de-facto standard" is in fact a statement of knowingly committing consumer fraud. It should lead to prosecution. But it doesn't. That's the real reason that we have such problems. If we had an actual HTML standard, enforceable in court, we wouldn't have such problems. (Actually, we'd still have them, but they'd lead to prosecution.
;-)
How has backwards thinking like this become so prevalent in modern technology? I don't get it.
It's because "When a computer is involved, all precedent is forgotten, and everything has to be relearned." I've forgotten who said that, but it applies here. The computer industry uses the term "standard" for things that aren't standardized at all. Vendors can freely claim their products "standard" when they don't follow any actual standard, without fear of prosecution for fraud. We have "industry standards" bodies like W3C that have no enforcement power, and thus aren't actually defining standards. They are merely defining pseudo-standards, marketing terms that vendors are free to violate at will without fear of any legal consequences.
The entire history of commerce tells us how vendors will behave in such a situation. They'll define their own "standards", using the same terms as the published pseudo-standards such as HTML. They'll do this knowingly, to persuade the public to buy. Without any legal enforcement, they know they can get away with it.
If we want an actual standard for HTML, it has to have legal import. Without this, vendors will continue to behave as they've always behaved. But there is no sign this is being considered, here in the US or in any other country.
So any HTML "standard" is just a pseudo-standard, a marketing term that vendors can violate at will. We can discuss it all we like, but this will have no effect on the vendors. The only way this can change is if government get involved by having their standards bodies declare actual legal standards that are enforced. We have millennia of experience showing that this is the only approach that actually gives us a useful standard. Computers haven't changed this fact; they only led us to ignore it and then complain that vendors are violating a "standard". Of course they are; they're vendors who are trying to sell us something, without any legal constraints on their use of terminology.
Re:Those Who Ship Win (Score:5, Interesting)
Yes, yes it is.
The difference being that the group behind the de facto standard sees high value in being consistent, predictable, and having that pseudo-standard very well documented, because without those facts nobody can create content for them to consume.
With the .doc format, there's high value to Microsoft in obfuscating the "standard" documentation as much as possible, since they both create and consume the documents.
Big difference.
Re:Those Who Ship Win (Score:4, Insightful)
The problem with .doc is not that it's a de facto standard -- all standards that are worth anything must be de facto at least as much as they are de jure -- but that it's a bad one, because it's hard for any program that doesn't share MS Word's internal data structures and algorithms to implement (because a .doc is, to first order, a memory dump of Word's data). HTML doesn't work like that, and it would be hard to make it work like that if you tried.
Re: (Score:3)
I have been on the sending end of that problem. KOffice and OpenOffice generate ODT files that differ wildly in their interpretation of parts of the standard that aren't specified. For spreadsheet files in particular, there are more things that are unspecified than are specified. Trying to collaborate using KOffice and OpenOffice is worse than using different versions of Excel.
Re: (Score:3, Insightful)
Nothing wrong with de-facto standards, if they're fully open.
Look at e.g. the Python programming language. The CPython implementation is the de-facto standard implementation, and the language specs actually refer (or used to) to the implementation saying if in doubt, that implementation wins.
Yet there are other, mostly compatible Python implementations out there, and nothing - not patents, nor secrets - stops you from starting a new one.
Re:Those Who Ship Win (Score:5, Informative)
It's nothing like Word; get a grip.
Apple created canvas, submitted it to the W3C, Mozilla submitted changes, canvas was standardized, then Apple invested significant engineering resources into changing their canvas implementation to match the standard.
If you want an academic standard with no real world use, XHTML 2 is available for your masturbatory needs. The Web needs a practical HTML standard that documents how you DO write HTML, not how you theoretically SHOULD write HTML.
Canvas patent (Score:3)
Apple created canvas, submitted to W3C
And a lot of people complain about Apple's patent on the <canvas> element. Apple isn't required to license this patent until <canvas> becomes part of a W3C Recommendation.
Re: (Score:3)
A standard that says how things are done is not a standard. It is merely documentation.
A standard sets a standard, an expectation of behavior. If you do not meet that expectation, you are not compliant with the standard. Period.
You can not have a standard that contradicts itself because then, nothing could ever actually implement it. This is one problem with the HTML5 standard, it is unimplementable due to self contradiction. This problem is made worse because now even theoretically compliant browsers will
Helping you with definition of evil (Score:2)
Gee, that sounds like - a De Facto standard. Like MS Word .doc format! Guess evil is in the eye of the beholder.
Good is when you help other companies ship a product that supports a generally agreed upon standard - like HTML5 extensions. That way you compete in the market based on quality of product.
Evil is when you ship something you promote as a standard that you will not help anyone else ship a competing product for, like .doc.
Re: (Score:3)
Evil? No. Beauty? Death? Petrification? Anti-magic? Sure. Evil is in the alignment of a Beholder.
Re: (Score:2)
I am curious if the attempt to strive toward HTML# compliance didn't result in more uniformity among them, however. I am trying not to fall into the trap of assuming that everything will become proprietary, but without an independent body saying "here's the standard, and here's where you are", then won't the compatibility problems get worse?
Re: (Score:2)
Exactly. I'm not even sure if I'd agree with the author's goal in an ideal world.
What does he want, some independent body of academics, bureaucrats, public input, commercial bodies... setting up the HTML spec without any idea of how it will be implemented, used...
Oh no... as far as I'm concerned, you want to determine the fate of HTML, you build a browser (or some connected product) and join the committee. Fight it out.
Re:Those Who Ship Win (Score:5, Interesting)
>>>HTML has effectively been abandoned to four companies: Apple, Google, Opera, and Mozilla.
Sounds like a lot of FUD to me. It used to be:
- 1999 and earlier: No HTML standard existed and Mozilla Netscape just willy-nilly added new features (blink tag for example).
- 1999 and later: Ditto Microsoft once their IE became dominant. IE5 and 6 were browsers that complied with nothing, and even today still cause problems for web designers.
Better to have four companies talking to one another and hashing-out HTML5 and HTML6, rather than the old (a) chaos of Netscape producing non-compliant features or (b) Monopoly of MS-IE. We don't want to have another Format war like HD-DVD v. Bluray on the internet. We want consensus first, even if that slows progress a little.
Re:Those Who Ship Win (Score:5, Insightful)
It's not 4 companies, that is BS:
1) in this context, Apple is the WebKit open source project
2) dozens of vendors use WebKit, including Google, and there are many contributors
3) Mozilla is a foundation
4) Microsoft and Adobe are also part of W3C, although they sometimes had to be dragged kicking and screaming, but that just shows that standardization works
Re: (Score:2)
What, in this context, is the difference between a company and a foundation?
Re: (Score:3)
A foundation is not for profit, a company is for profit. As you may have noticed from recent history, if a for-profit company makes a 'standard' it will only be possible to get it implemented either by said company or with the blessings (patent licenses) of said company (DOC, ActiveX, MS Office Open XML, Silverlight, Flash, DECnet, Skype...). When a foundation makes a standard it will be possible to get different implementations without any restrictions (OpenDocument, HTML, WebM/VP8, Ethernet, ...)
Re:Those Who Ship Win (Score:5, Insightful)
The sad thing for web developers is that it doesn't matter whose html standard is techically better or which one better enables development. It's which one/ones are being used by your target audience that matter. Otherwise you are coding a site just for yourself! It really comes down to a browser marketing issue, not an html standards one. Whoever markets their browser better gets to set the standard.
Re: (Score:2)
Standards committees that didn't have to ship anything were responsible for a ton of late-80s to mid-90s disasters, like X.500, X.509 (certificates), and the whole of the ISO networking stack. There are borderline disasters such as SNMP. There are smoking radioactive holes where you don't /ever/ want to go (SOAP is my favorite example here).
The proper path: Write working code, get users and customers, re-design and re-write a few times, THEN you can have a standard.
The Standards Really Never Have Been the Standards (Score:3, Interesting)
The W3C has never really had complete control of HTML. Those who write the browser effectively can extend or cripple HTML features at will. Netscape added many new features [merlins.org] and everyone simply had to live with the results. IE did some nasty things to CSS and we all had to live with that, too.
Re: (Score:3)
It never had complete control, but it did its job. It established a level playing field and and brought parity (more or less) to four different browser engines. Now that there *is* competition, all four vendors are busy as bees trying to add new features and mimic the new features added by the other vendors. So we don't need a standard per se, as long as we have users that have iPhones expecting that a web page will work the same way on their desktops.
So kudos to the W3C for making it viable for other br
Re: (Score:3)
Not really. A "rolling" standard allows the periodic adoption and standardization of a single tag or attribute, which would allow progress on that front while the vendors continue to bicker about the other proposed changes. It's kind of like how the JCP process worked (sans incorporation into a major version release). Different ideas are proposed: some get formalized and adopted and some languish. Nothing wrong with that. The result is that you get a browser or update with support for (for example) WSR-1234 w
HTML *was* simple (Score:5, Insightful)
Re: (Score:2, Insightful)
Any HTML your grandparents wrote 10 years ago still works fine today, so what are you complaining about?
If anything, HTML5 represents a shift back towards the 'vernacular' - for example, the B tag is officially a-ok for bolded text.
Re:HTML *was* simple (Score:5, Informative)
Remember when it was ok to use a "b" tag, and no one scoffed? How about table layouts? It's funny, the new standards aren't always better.
If you still think it's actually not better, sorry, but you should have 10 blind persons hit you with their canes...
Re: (Score:2)
Heh, sorry, it was never okay to use tables for layout.
Re: (Score:2)
I'll reply to you, including your sig.
I haven't yet seen the principle that for a true groundswell of a phenomenon, you need a huge swath of "low end users" Just Doing Stuff. Then the experts could float on top pushing the edge.
I saw layout as a necessary evil to get a structure for information. I'm no Developer, so I only need to be a 1-trick guy if it "the consensus" says it works. Count me in the class of people who want to Just Do Stuff.
Taking the long view, we're just about to see the Decline and Fall
Re: (Score:2)
Remember when it was ok to use a "b" tag, and no one scoffed? How about table layouts? [...] I could teach my grandparents how to edit HTML 10 years ago. Now, not so much
Huh? The "b" tags still works. Here you go: bold. Tables still work too. If you want to use them, use them.
If you want more adoption, focus HTML on what actually is important - layout that's understandable to the masses.
Most people use GUI apps to create web pages. They couldn't care less whether the code produced by their GUI is done according to one standard or another. And suppose they did care. Are you claiming that a wave of popular support would then cause WHATWG to be successful, MS to support web standards, and patent holders to release their codecs under royalty-free terms?
HTML should be more focused on making layouts easier, and faster. It should not be focused on animation.
Well, first off, html 5 isn't j
That's not what the masses want. Or need. (Score:2)
The problem is that we're turning the browser into an application-level container. HTML should be more focused on making layouts easier, and faster
But that's not what "the masses" want, or even need.
The masses like doing things over the web, so any standards that improve the ability to do more things in the browser help people. The demand is obviously there from the growth of Flash.
The "masses" also NEVER wanted to edit HTML. Not directly. Because most people HATE AND FEAR code. You simply cannot make
Re:HTML *was* simple (Score:4, Funny)
Remember when it was OK to use an "i" tag, and it worked on Slashdot?
Re: (Score:2)
Remember when it was ok to use a "b" tag, and no one scoffed? How about table layouts? It's funny, the new standards aren't always better. This is why a format "of the people" isn't going anywhere. I could teach my grandparents how to edit HTML 10 years ago. Now, not so much. Is that better? I'd argue, no.
Yeah, and 25 years ago I could teach my mother how to manually mark blocks and insert formatting codes in text edited on an 8-bit computer. Technology evolves, things get more complicated, then new tools appear to ease the process. So what if you can't create a good-looking site using vi or Notepad anymore? Use a goddamn authoring tool.
HTML should be more focused on making layouts easier, and faster. It should not be focused on animation. This is where MS Word has fallen off a cliff. If you want more adoption, focus HTML on what actually is important - layout that's understandable to the masses.
Oh, and let animation be the focus of Adobe Flash? Video of RealNetworks, Microsoft or Apple? Remember the same 10 years ago: we needed 3 different plugins installed so we
Re: (Score:2)
You still can. Not that much has changed. In fact, HTML+CSS now is easier IMHO than 10 years ago and I've been doing this since '95. The real issue you're getting to is that people want much more out of their websites than they did 10 years ago and that requires more skillsets. A Geocities page just doesn't cut it anymore since now people are more familiar with what's possible and have raised their e
Re: (Score:2)
Sure it was... before those tags were added.
CSS is horrible for table layouts (Score:2)
CSS has no sane table layout syntax. The only way to do tables in CSS is to basically DUPLICATE the HTML table syntax using div tags with table properties on them - tell me how that is better in any way shape or form?
There is a difference between content markup, content layout, and content styling. The problem is people get them all confused and try to shoe-horn improper tools in each.
Re: (Score:2)
floats are not meant for doing layouts.
using floats for even fairly simple layouts almost always results in horrible hacks.
Floats should be used for 2 cases:
- you want to illustrate some text with an image (or a box, block, table...), and you want the text to fill the space on the side of it, then resume to full available width under it.
- you have several boxes (or images or whatever) that you want to place side by side, filling the available horizontal space, then continue on a new line whe
Re: (Score:2)
In the sense that it can be made to work in all the major browsers, yes.
The GP is right - tables are for tables of data, not for laying out content that is not actually tabular. You wouldn't write a document in a spreadsheet just because that would mean not having to worry about tabstops, would you?
Re: (Score:2)
If you want to display data in a table, use table tags. That's fine. That's what they're there for. They're NOT there for general block-level content layout.
Re: (Score:2)
I'm not sure what you mean by "table layouts".
The table tag is a tool, and its job is to display tabular data. Using it for anything else (design and layout of a page) is where the shoe-horning happens.
Re: (Score:2)
What in the world are you smoking? Granny's web site does not need to be ADA compliant, and she's not likely to "refactor" the layout. Tabled layouts are fine for Granny. Hell, some of the "solutions" to common css problems involve once again mixing layout and content. At that point, who cares if Granny encloses something in a DIV or a TABLE?
"not a truly independent standards process" (Score:2)
Re:"not a truly independent standards process" (Score:5, Insightful)
Isn't that par for the course? It seems a lot of standards are driven by a few big players who have a strong interest in it.
True. When I read the summary, I thought that four players seemed better than the early days of the web, when HTML was driven by just the pair of Netscape and Microsoft.
W3C should retire (Score:2)
I can't remember if W3C has ever really successfully moved the HTML language ahead. Much of the early improvements were due to Netscape and Microsoft throwing new features around willy-nilly. A bunch of those features would be chosen to be part of the standard, while the rest (layers, blink, marquee) would fade away into disuse. As soon as the major players focused more on following the standards rather than setting them, then everything seemed to just grind to a halt. It wasn't until browser makers started
Re: (Score:2)
Actually, the W3C was crucial to preventing MSFT from just turning IE into a Frontpage renderer.
It was a pity that it didn't stop Netscape from becoming the pile of steaming crud that it ended up being (prior to Mozilla). What Microsoft did was no different to what Netscape did. If Netscape had won the war, then we would be complaining how much Netscape Navigator and Netscape One were tied together.
All Microsoft needed was a competitor to force it to maintain some sort of compatibilty. With Netscape out of action for so long, that role went to W3C. But it was during this period that development of the
Re: (Score:2)
We lost 10 years of W3C work thanks to WHATWG and their html5 junk
Well, that was the problem. If it hadn't taken 10 years to go from version 1 to version 2 then perhaps another group would not have had time to come in and steal their thunder.
Eh? (Score:5, Informative)
HTML has effectively been abandoned to four companies: Apple, Google, Opera, and Mozilla.
And Microsoft is where?
Their Internet Explorer is used by most Internet users today (hitslink.com).
Re: (Score:2)
Microsoft is, I think, implementing the standard (slowly) rather than defining it. And I'd argue that the same is true of Mozilla and Opera to some extent.
It seems to me that it's Webkit that's pushing the standard, with Apple and Google as the major contributors but with a whole load of other companies along for the ride. The Webkit project is becoming a de-facto standards body with a members list that includes every smartphone platform builder except Microsoft.
Re:Eh? (Score:5, Informative)
They are still there. The article just fails to mention them. Microsoft has contributed a LOT to the HTML5 specification process. A very large number of test cases were submitted by them, and they contribute during the discussions as well. It's just that the author obviously has a very anti-Microsoft bias. And for the purposes of the article, the lack of any one company doesn't really matter; the principle remains the same.
finally (Score:2, Insightful)
Finally, people are starting to realise (and argue) that today's HTML is no more "open" than Flash. It's just a cartel between a few major tech companies to promote particular implementations of particular technologies in their medium term interest. Apple's canvas is the most obvious culprit. Rather than freeing people from Flash, it gives such a seductive but incomplete alternative (to an already subpar platform) that developers are encouraged to write native Cocoa apps. It's msjvm deja vu all over again.
Re:finally (Score:4, Insightful)
Regardless of who is setting the standard, it *is* an open standard, implementable by anyone who reads the spec. Flash is not. Big difference.
Open Screen Project (Score:2)
[HTML] *is* an open standard, implementable by anyone who reads the spec.
Apple holds patents that cover <canvas>. See my other comment [slashdot.org].
Flash is not.
What you said was true until February 2009, when Adobe changed the license terms [wikipedia.org] for the SWF specification.
Bad Thing? (Score:5, Insightful)
Hey! At least a certain monolithic juggernaut ISV that is known for hijacking ALL standards isn't in the top four.
What's with all the hate? (Score:4, Informative)
Last I checked, anyone could submit ideas, corrections, feature requests *RIGHT THERE ON THE HTML5 WORKING DRAFT*. "Feedback Comments" right at the top of [w3.org]
Now, if they ignore your idea, that's almost certainly because it sucks and is badly written. No really, it does suck. Follow the instructions there *carefully*, really think about this feature or tag or whatever you're requesting, and your ideas will get consideration.
Standards are based on who shows up (Score:2)
Standards always tend to be dominated by the people and companies that show up.
Welcome to corporate feudalism (Score:3)
What's happening with this issue is a microcosm of what's happening in the world. Democracy and the rule of law wither, while wealth, in the form of organizations or a few super-rich individuals control outcomes.
patents, MS (Score:5, Insightful)
It seems to me that everybody is moaning and groaning about what a bad job WHATWG is doing, when in fact WHATWG is just doing the best it can in an extremely difficult environment created by patents and Microsoft.
The confusion with respect to audio and video codecs only exists because of patents. A certain patent-encumbered codec shows up that's good enough, so it gets widely adopted, and then it's impossible to displace it because of network effects. This is not WHATWG's fault.
The html 5 feature that I really care about is mathml, and here it's very, very clear that MS is the bad guy and W3C and WHATWG have just been trying, unsuccessfully, to work around MS. Mathml worked fine in xhtml years ago, but MS never bothered to support xhtml in IE, which would have been technically trivial to do. They stated that their policy was to have independent vendors supply support for mathml rendering via plugins, and Design Science did their best to do that, but MS made it impossible for them to do that in a standard way, because the standard depended on xhtml, which IE didn't support. So xhtml died in the crib, and WHATWG decided to pour the svg and mathml namespaces into the flat html 5 namespace. Kind of an ugly solution, but they had no other choice. Now for the first time it is theoretically possible to write a web page coded in a standard way that has mathml in it and that might render properly in some future version of IE. But meanwhile big institutions are still sticking to IE 6 because they need compatibility with all its bugs, and preview versions of IE 9 have broken mathml support. [dessci.com]
The big problem is that commercial entities have interests that oppose the interests of their customers and internet users at large. MS wants users to be locked into their browser through proprietary plugins and bug-compatibility, and they don't stand to profit by supporting features like mathml, which are only used by a relatively small proportion of their users. (Never mind that blind people can access mathml but not bitmapped renderings of equations. Blind people aren't economically important to MS.) Owners of patents on codecs want to harvest licensing fees, and they don't care if that screws everybody else up and makes a mess out of audio and video on the web.
McAllister complains that WHATWG is dominated by a clique consisting of Google, Apple, Mozilla, and Opera. But that clique is basically a list of all the browser vendors, and doesn't that kind of make sense? These are the people who acually need to implement the standard, so of course they should be the ones with the most influence. The only browser vendor missing from the list is MS, which is only interested in subverting standards.
Re: (Score:2)
McAllister complains that WHATWG is dominated by a clique consisting of Google, Apple, Mozilla, and Opera. But that clique is basically a list of all the browser vendors, and doesn't that kind of make sense?
It makes sense to include them in the standardization process. It doesn't make sense to let the standardization* process be dictated by them, with no one else having a say on it, whether they are competing companies (MS included) or, ghasp, any of the thousands of people that make a living developing and maintaining the WWW.
* this isn't a standardization process per se. Once the WHATWG decided to abandon versioning numbers they effectively abandoned any attempt to define a basic set of features which any i
Re: (Score:2)
but MS never bothered to support xhtml in IE
In what sense? The site I'm working on is XHTML 1.0 strict compliant and renders properly in IE 6, 7 and 8. No, we don't use MathML, but to say simply "IE doesn't support XHTML" seems somewhat disingenuous.
Re: (Score:2)
In what sense? The site I'm working on is XHTML 1.0 strict compliant and renders properly in IE 6, 7 and 8. No, we don't use MathML, but to say simply "IE doesn't support XHTML" seems somewhat disingenuous.
You're mistaken. Xhtml only works in IE if you serve it as text/html. If you have xhtml+mathml content, you're supposed to deliver it as application/xhtml+xml, but then IE won't display it. This makes it impossible to make a single, static xhtml web page that uses xhtml features (such as mathml) and renders in both IE and other browsers.
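The single-page workaround for exactly this problem was server-side content negotiation: send application/xhtml+xml only to clients whose Accept header claims to support it, and text/html to everything else (notably IE). A minimal sketch; the helper name is made up here, and real negotiation honors q-values rather than substring-matching:

```c
#include <stdlib.h>
#include <string.h>

/* Pick a Content-Type based on the client's Accept header.
 * Browsers that handle XHTML-as-XML advertise application/xhtml+xml;
 * IE 6/7/8 do not, so they get text/html instead.
 * Naive: a real implementation parses media ranges and q-values. */
const char *pick_content_type(const char *accept)
{
    if (accept && strstr(accept, "application/xhtml+xml"))
        return "application/xhtml+xml";
    return "text/html";
}
```

A CGI script would read the header via getenv("HTTP_ACCEPT") and print the chosen Content-Type line before the body.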
3 more... (Score:2)
That's 3 more than we used to have. And not to put too fine a point on it, but
Re: (Score:2)
I liked where you were going with that thought, a shame you had to leave.
Re: (Score:2)
I kept typing after I hit the preview button. The preview looked fine, but then it posted that typing (not the preview) when I clicked submit.
I feel like there's something in that about Ajax and web standards...
Re: (Score:2)
Huh, when I hit preview there's no means to continue typing...
Re: (Score:2)
Yes. It was in between the time when I hit preview and when my page actually showed the preview div. Due to some bizarre networking setups in my house I have a decent amount of lag.
Reference implementation (Score:2)
Defining "standard" and "open source" (Score:2)
Most standards that actually work have open-source reference implementations.
Then I guess video codecs are exceptions to your "most standards". ISO publishes standards, many of which are standards for mathematical systems. These have a reference implementation in a computer program whose source code is available to the public. However, due to ISO's patent policy, many standards cannot be implemented in open source software as Open Source Initiative defines it [opensource.org]. For example, ISO allows MPEG-LA to attach a uniform royalty to the MPEG-4 standard, including the controversial AVC Advanced
just being realistic (Score:3)
The lack of version numbers is just being realistic. No browser is 100% compliant even with HTML 4.01, which has been around for how long now? And when is HTML 4.02 coming out? Seems to me they've abandoned the versions a long time ago. Everyone just uses HTML 4.01.
They can make a HTML 5.00 standard, and have most of the browsers implement 99% of it and then they release 5.01 and the browser makers will get to work implementing that, but totally abandon implementing that last 1% of the HTML 5.00 spec... because they would be too busy implementing 5.01, 5.02, etc. So a Web developer sets a HTML 5.00 doctype, uses a feature that isn't implemented yet hoping that someday browsers may support it. But there is no guarantee they will. So the web developer will just change the doctype to 5.01, 5.02 (or whatever the latest version of the spec is) every time he makes changes to a web page or CMS.
So they're just being realistic. No matter what standard they come up with, it will never be implemented fully by all browsers. Their standard won't be the law, it will be more of a guideline. Having version numbers is pretty pointless when all browsers aren't going to render a HTML 5.01 document exactly the same. Its easier for the web developer to tell the browser that this is a HTML 5 doc and the browser will use its latest code to render the page.
Re: (Score:3)
ok so you target HTML 5.00 which isn't implemented perfectly. You test it and notice a few problems on a couple of browsers. You do a work around for those browsers.
Then a couple of years from now a developer for one of the glitchy browsers notices a bug in how it renders HTML 5.00. What should he do? if he fixes the bug then your workaround is going make the page be rendered in a way that you didn't intend. A couple of more years later, that browser gets a completely new rewrite of its rendering engine. No
This is the W3C's fault (Score:2)
As a standards making body, the W3C was pretty much doomed as soon as they abandoned things that people actually use and decided to focus on XHTML 2 for so long (which almost nobody was interested in).
The result was WHAT-WG being created (with the major browser players) to do the work that needed to be done: adding features to HTML that people actually care about.
Of course we've got the major vendors making the standard, they're the only ones who have been actually focused on making a standard for years! If
standardizing HTML is flawed in its very concept (Score:2)
Re: (Score:2)
Yeah, no great loss.
I proposed something to the W3C many years ago that would have improved web security (and if implemented would have stopped the myspace and other XSS worms). But the W3C are just interested in more and more "Go" buttons and they didn't even want a single "Stop" button.
Anyway, Mozilla has finally proposed something in concept (more encompassing but also more complex) CSP which might help: [mozilla.org].
Re: (Score:2)
Re: (Score:2)
The proper way is to use a span with class="blink" and do the blinking in javascript.
Re:HTML compliance is for wankers (Score:5, Insightful)
Re: (Score:2)
The major browsers just don't give a crap what you feed them.
The same is not necessarily true of assistive technologies such as screen readers.
Now if all you care about is the maximum return on investment that probably isn't important to you, but in that case I'd be wary of throwing the word wanker around too much...
Re: (Score:3)
That's funny, I've encountered lots of people using Microsoft Word as their HTML editor who say just the same thing. So you're in good company.
Validators make a good sanity check/diagnostic tool when something isn't working correctly in Foobar browser, but they're not a crutch. Once you've got a solid working knowledge of HTML they're not really going to teach you much but might find a few typos.
Once you move beyond HTML and into CSS, valid HTML can certainly make a difference, but if you're sticking with H
Re: (Score:2)
Browsers are more consistent than ever in what they support,
This may be true for the latest version of every major browser (which is different from just "browsers" because there are many actively developed browsers out there based on icky stuff like MSIE6 or Firefox 2), but it does not help the web developer when he needs to carefully add 4-5 different CSS declarations in a particular order just to add a gradient or round corners... The problem just multiplies with the new features being added to JS (Web Workers etc.)... | http://tech.slashdot.org/story/11/01/28/142248/the-abdication-of-the-html-standard | CC-MAIN-2014-35 | refinedweb | 6,999 | 71.44 |
Hi Ceri,
I was trying to use axis2c a few weeks ago. Everytime that I tried to make
something works, I got into a bug or leak.
Personally I think that the project is not reliable for a production
system, and for now Its a waste of time to try to use it :(
Regards
On 26/03/2013 18:56, "Ceri Davies" <Ceri.Davies@oracle.com> wrote:
> Background:
> ==========
> I'm an experienced programmer, but new to axis2c. I initially downloaded
> axis2c 1.7.0 and got that working but it is intermittent, seems to not work
> if there is a gap between synchronous messages.
> I tried an async keepalive every 25 seconds, which seemed to help a bit,
> but not enough. I then looked around and realized that 1.7.0 is
> "unstable", so sought out the stable release 1.6.0 and installed that. I
> now just change a link so can test with either.
>
> Problem:
> =======
> Async messages using axis2_svc_client_send_robust() work. sync messages
> using axis2_svc_client_send_receive() core dump in the axis2c code. This
> is code that works (intermittently) with 1.7.0. I've played with this for
> a while, trying xml2 instead of guththila and that core dumps as well.
> I've looked through the examples that come with axis2c 1.6.0 code, and think
> I'm doing it right?
> Any ideas? I'm willing to send my code if needed. The basic premise for
> my app (which there is no example of), is that I do the init once, for the
> one endpoint, for both sync or async operations, and then wait for async
> and sync requests, and then set up the request for that operation using
> either robust_send or send_receive.
>
>
> Program terminated with signal 11, Segmentation fault.
> #0 0x00000030a9878db0 in strlen () from /lib64/libc.so.6
> (gdb) where
> #0 0x00000030a9878db0 in strlen () from /lib64/libc.so.6
> #1 0x00007f7ed0edf009 in guththila_write_start_element_with_namespace
> (wr=0x16bd430, namespace_uri=0x54415245504f5f5f <Address 0x54415245504f5f5f
> out of bounds>,
> local_name=0x16bbc10 "open", env=0x165e520) at
> guththila_xml_writer.c:1446
> #2 0x00007f7ed1951dcc in
> guththila_xml_writer_wrapper_write_start_element_with_namespace
> (writer=0x16bcf00, env=0x165e520, localname=0x16bbc10 "open",
> namespace_uri=0x54415245504f5f5f <Address 0x54415245504f5f5f out of
> bounds>) at guththila_xml_writer_wrapper.c:371
> #3 0x00007f7ed1df0499 in axiom_output_write (om_output=0x16c1b70,
> env=0x0, type=<value optimized out>, no_of_args=<value optimized out>) at
> om_output.c:465
> #4 0x00007f7ed1deef85 in axiom_element_serialize_start_part
> (om_element=0x16bbb90, env=0x165e520, om_output=0x16c1b70, ele_node=<value
> optimized out>)
> at om_element.c:822
> #5 0x00007f7ed1debaab in axiom_node_serialize (om_node=0x16bbb40,
> env=0x165e520, om_output=0x16c1b70) at om_node.c:511
> #6 0x00007f7ed1738676 in axis2_http_sender_send (sender=0x16bced0,
> env=0x165e520, msg_ctx=0x16bbf20, out=0x16bc2d0,
> str_url=0x165e5b0 "",
> soap_action=0x7f7ed1748e6a "") at http_sender.c:465
> #7 0x00007f7ed173780e in axis2_http_transport_sender_write_message
> (transport_sender=0x1661fe0, env=0x165e520, msg_ctx=0x16bbf20,
> epr=0x165e560, out=0x16bc2d0,
> om_output=0x16c1b70) at http_transport_sender.c:806
> #8 0x00007f7ed1736751 in axis2_http_transport_sender_invoke
> (transport_sender=0x1661fe0, env=0x165e520, msg_ctx=0x16bbf20) at
> http_transport_sender.c:309
> #9 0x00007f7ed1b7e716 in axis2_engine_send (engine=0x16bc150,
> env=0x165e520, msg_ctx=0x16bbf20) at engine.c:176
> #10 0x00007f7ed1bafc18 in axis2_op_client_two_way_send (env=0x165e520,
> msg_ctx=0x16bbf20) at op_client.c:1171
> #11 0x00007f7ed1bae99f in axis2_op_client_execute (op_client=0x16bc6f0,
> env=0x165e520, block=1) at op_client.c:508
> #12 0x00007f7ed1bb1b04 in axis2_svc_client_send_receive_with_op_qname
> (svc_client=0x165e7d0, env=0x165e520, op_qname=0x16bba90,
> payload=0x16bbb40) at svc_client.c:732
> #13 0x00007f7ed1bb1dd1 in axis2_svc_client_send_receive
> (svc_client=0x165e7d0, env=0x165e520, payload=0x16bbb40) at svc_client.c:830
> #14 0x00000000004072e1 in afs_msg_send_receive (msg=0x7f7ed02a5d20
> "/export/Container0/ceri.0", msg_len=512,
> outmsg=0x7f7ed02a5ad0 "ms\205", <incomplete sequence \305>,
> outmsg_len=512, action=0x7f7ed02a5cd0 "open") at afs_msg.c:207
> (note: outmsg is not initialized at the time of the call, this is a char
> string that will contain the reply...)
>
>
>
> Ceri Davies | Principal Software Engineer | Ceri.Davies@Oracle.com
> Work: +1 3032727810 x77810 | Home: +1 3035321116 | Cell: +1 3038706743
> (Note: Home phone forwards to Cell, so try Home# first)
> Oracle Storage | 500 Eldorado Blvd. | Broomfield, CO 80021
> Oracle is committed to developing practices and products that help
> protect the environment
>
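One observation on the trace above, not a fix: the bogus namespace_uri value, 0x54415245504f5f5f, decodes to printable ASCII. When an "address" looks like text, string bytes are usually being read where a pointer should be (a dangling or clobbered pointer), which fits the intermittent failures described. A sketch of the decoding, assuming a little-endian machine like the x86-64 in the trace:

```c
#include <stdint.h>
#include <string.h>

/* Reinterpret an 8-byte "pointer" as the ASCII characters it holds.
 * On a little-endian machine the low byte comes first in memory. */
static void decode_ptr(uint64_t value, char out[9])
{
    memcpy(out, &value, 8);
    out[8] = '\0';
}

/* char buf[9];
 * decode_ptr(0x54415245504f5f5fULL, buf);  -> "__OPERAT" (little-endian) */
```

The bytes spell "__OPERAT", plausibly the start of an internal marker string such as an operation name, though that identification is a guess.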
I don't know how to proceed with this...
I want to change a binary number stored as an int from
1111
to
111
I don't ordinarily answer "gimme teh codez" questions, but it was an interesting problem so I did it for fun. As usual, most of the time went into extraneous stuff like the output code.
If this is homework, do take the time to understand how the code works, or you'll only be cheating yourself.
    #include <stdio.h>
    #include <string.h>

    // Define "number" as an unsigned number of desired size
    typedef unsigned long number;

    number drop_msb(number n);
    char *ntob(char *dest, number n, int min_len);

    int main()
    {
        number i;
        number j;
        char ibuf[65];
        char jbuf[65];

        for (i = 0; i < 512; i++) {
            j = drop_msb(i);
            ntob(ibuf, i, 0);
            ntob(jbuf, j, strlen(ibuf) - 1);
            printf("%s --> %s\n", ibuf, jbuf);
        }
        return 0;
    }

    number drop_msb(number n)
    {
        number bit;
        number a;

        // Handle special case
        if (n == 0)
            return 0;

        // Set highest bit
        bit = ((number) -1 >> 1) ^ (number) -1;

        // Guaranteed to terminate
        while (1) {
            a = n ^ bit;
            if (a < n)
                return a;
            bit >>= 1;
        }
    }

    char *ntob(char *dest, number n, int min_len)
    {
        /* Convert n to shortest binary string, padding with zeroes on left
         * to make it at least min_len characters long. dest should be long
         * enough to hold the maximum number, plus terminating null. */
        char *left;
        char *right;

        /* min_len should be >= 1, to handle n == 0 correctly. Also needs to
         * be non-negative to avoid bad pointer during padding. */
        if (min_len < 1)
            min_len = 1;

        // Build with lsb on left
        for (right = dest; n; right++, n >>= 1)
            *right = '0' | (n & 1);

        // Pad if needed
        while (right < dest + min_len)
            *right++ = '0';
        *right = '\0';

        // Reverse it
        for (left = dest, right--; left < right; left++, right--) {
            *left ^= *right;
            *right ^= *left;
            *left ^= *right;
        }

        return dest;
    }
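For comparison, a loop-free way to drop the most significant bit is the standard bit-smearing idiom: OR the value with progressively larger right-shifts of itself until every bit at or below the MSB is set, then shift the mask down one and AND. This is not from the answer above, just a common alternative:

```c
typedef unsigned long number;

/* Drop the most significant set bit without a search loop. */
number drop_msb_smear(number n)
{
    number m = n;
    m |= m >> 1;            /* smear the top bit downward... */
    m |= m >> 2;
    m |= m >> 4;
    m |= m >> 8;
    m |= m >> 16;
    m |= (m >> 16) >> 16;   /* covers 64-bit longs; two 16-bit shifts
                               avoid undefined behavior on 32-bit longs */
    return n & (m >> 1);    /* keep everything below the MSB */
}
```

No special case is needed for zero, since the mask is zero as well.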
Games::RolePlay::MapGen::MapQueue - An object for storing objects by location, on a map, with visi-calc support
    use Games::RolePlay::MapGen;
    use Games::RolePlay::MapGen::MapQueue;

    my $map = new Games::RolePlay::MapGen;
    $map->generate;

    my $queue = new Games::RolePlay::MapGen::MapQueue( $map );

    $queue->add( $object1 => (1, 2) );
    $queue->add( $object2 => (5, 3) );
    # The objects can be any unique identifier or reference (blessed or unblessed).

    $queue->replace( $object3 => (5, 3) ); # remove first if it's already on the map somewhere else
    $queue->remove( $object3 );            # just remove it

    my $distance = $map->distance( $object1, $object2 );
    # the distance from o1 to o2 or undef if the tile is not visible

    my @all = $queue->all_open_locations;
    my @los = $queue->locations_in_line_of_sight( @dude_position );
These docs refer to something called the lhs and the rhs from time to time. They are short-hand for left-hand-side and right-hand-side. Where there is a source and a destination, or a beginning and an end, the lhs is the start or beginning and the rhs is the end.
This module assumes that all tiles are in the appropriate base units (5ft by 5ft tiles for d20 game systems) and doesn't even pretend to check their size. If your game uses 5ft by 5ft tiles, it's up to you to make sure the tiles in the map passed to this MapQueue module are actually set up correctly.
Additionally, the distances returned by distance()/ldistance() and given to locations_in_range_and_line_of_sight() are in tile units, ignoring the size in feet (or meters or whatever) of the tiles. If you want the distance in feet (or meters), you'll have to multiply and divide it on your own.
new() takes a single argument, which is required: a MapGen map object
Most of the MapQueue functions are cached with Memoize; flush() clears the caches.
Store an object on the map at a specified location. This function will raise an error if the location doesn't make any sense. To make sense, the location must be within the map boundaries and must be an open tile -- either a corridor or a room tile. It will additionally raise an error if the object is already elsewhere on the map.
    my $pistol = bless {}, "Sig P229r";

    $mq->add( $pistol, (2,2) );
    $mq->add( "boring string", (2,3) );
See is_open() below for a function to check whether a location makes sense.
Remove an object on the map. It raises an error if the object isn't on the map or if the location doesn't make sense (see add()).
    $mq->remove( $pistol );
    $mq->remove( "boring string" );
Exactly like add(), except that it removes the pistol iff it's already on the map.
    $mq->replace( $pistol, (5,4) );
    $mq->replace( $pistol, (5,4) );
    $mq->replace( blagh => (2,3) );
Locates an object in the queue, returning the (x,y) coordinate. Raises an error if the object isn't on the map.
    my @loc = eval { $mq->add($pistol) };
    warn "wasn't there" if $@;
In scalar context, distance() returns the distance from one object to another. It raises errors if the lhs or rhs objects aren't found in the queue. If there is no line of sight from the lhs to the rhs, the function returns a scalar undef (even in list context).
my $dist = $mq->distance( blagh => $pistol );
If distance() is given a third argument that evaluates to true, distance will instead return both the distance and a boolean value indicating whether there's a line of sight. (It returns the distance even if there's no line of sight.)
my ($dist, $los) = $mq->distance( blagh => $pistol, 1 );
Works rather like distance() but takes locations as arguments instead of objects. It raises errors if either of the locations don't make sense (see add()). If there's no line of sight, ldistance() returns a scalar undef (even in list context).
my $dist = $mq->ldistance( (1,1), (2,2) );
If given a fifth argument, ldistance() returns $dist and $los like distance().
my ($dist, $los) = $mq->ldistance( (1,1), (2,2), 1 );
Returns a scalar indicating whether there's a line of sight from the lhs object to the rhs object.
$los will be one of LOS_NO or LOS_YES, which are exported into the requiring namespace. They are usable as booleans, so you don't have to use the names, and are single scalars in list context.

The $los values returned from distance() are of the same type as the $los returned here.
    my $los = $mq->line_of_sight( blagh => $pistol );

    if( $los == LOS_NO ) { # if( $los ) is fine also
        # blargh
    }
The function raises errors if the objects aren't found on the map.
Works just like line_of_sight() but takes locations as arguments instead of objects. The function raises an error if the locations don't make any sense.
my $los = $mq->lline_of_sight( (1,1), (2,2) );
Determines if there is a line of sight from an object to a closure (a wall, door, or opening of a tile). Presently there are no functions to return door objects from the map, but you can get them from the map yourself.
    my $door = $map->[ $y ][ $x ]{od}{ w }; # west door of ($x,$y)

    $mq->add( $some_player_object );
    $mq->closure_line_of_sight( $some_player_object, $door );
The function raises errors if the $door isn't a door object (or isn't on the map) or if the lhs object isn't on the map.
The decision about whether there's a line of sight includes the idea that the line of sight doesn't do much good if you can't see most of the door at an angle that allows you to examine it...
That minimum angle is a package global that can be changed but defaults to 9 degrees. The global is in radians, but can be set like this:
$Games::RolePlay::MapGen::MapQueue::CLOS_MIN_ANGLE = deg2rad(9);
(deg2rad() comes from Math::Trig.)
Like closure_line_of_sight(), this tells whether there's a line of sight from a tile location to a closure. It takes five arguments:
$mq->closure_lline_of_sight( ($x,$y), ($x,$y,$d) );
Like the other "l-name" functions, the lhs and rhs are coordinate pairs, but unlike the others this function also takes a direction argument (to name the closure). The fifth argument must be one of 'n', 'e', 's', or 'w'.
Predictably, it raises errors if the locations don't make sense or the direction is incorrect or missing.
Returns all the object so the map as an array. Each element in the array is just the objects as they were placed on the map.
    my @objs = $mq->objs;
    my ($pistol) = grep {m/P229r/} @objs;
Like objs(), but also returns the locations. Each element is an array ref, whose first element is the location (as an arrayref) and whose second element is an arrayref of objects.
    my @objs = $mq->objs_with_locations;

    my ($loc, $a) = $objs[0];
    my ($x,$y)    = @$loc; # the location
    my @objs_a    = @$a;   # the objects at the location
Like objs(), but only returns objects at a specific location ($x,$y).
my @objs = $mq->objs_at_location($x,$y);
The function raises errors if the location doesn't make sense.
Like objs(), but only returns objects with a clear line of sight from ($x,$y).
my @objs = $mq->objs_in_line_of_sight($x,$y);
The function raises errors if the location doesn't make sense.
Returns all the locations on the map.
    my @locs = $mq->all_open_locations;
    my ($x,$y) = @{ $locs[0] };
It will optionally return the array as an arrayref:
    my $locs = $mq->all_open_locations;
    my ($x,$y) = @{ $locs->[0] };
Returns a single random open location from the map.
my ($x,$y) = $mq->random_open_location;
It will optionally return the location as an array:
my @xy = $mq->random_open_location;
Returns all open tile locations to which there is a clear line of sight from the ($x,$y) location argument. It raises an error if the location doesn't make sense.
    my @locations = $mq->locations_in_line_of_sight($x,$y);

    for my $l (@locations) {
        local $" = ",";
        print "I can see (@$l).\n";
    }
Returns all open tile locations to which there is a clear line of sight from the ($x,$y) location argument that are also within a certain range. It raises an error if the location doesn't make sense or if the range isn't greater than 0.
my @locations = $mq->locations_in_range_and_line_of_sight($x,$y);
Takes two locations (four scalars) as arguments. Raises errors if either location doesn't make sense or if there isn't a clear light of sight from the lhs to the rhs.
Returns the locations of tiles a player would step through to get from the lhs to the rhs. The path is not necessarily optimal.
locations_in_path() uses a straight line heuristic with slight corrections for passing through doors and the like. It should be reasonably close to optimal.
The path is meant to be reasonably compatible with d20 game systems, but may differ slightly in certain cases.
    my @locations = $mq->locations_in_path( ($src_x,$src_y), ($dst_x,$dst_y) );

    for my $l (@locations) {
        local $" = ",";
        print "Player through to (@$l) on his way to ($dst_x,$dst_y).\n";
    }
The path is inclusive, meaning, the locations in the path include the starting point and the end point.
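The module only promises a "straight line heuristic with slight corrections," so its exact stepping rule is unspecified; the uncorrected core of such a walk is a plain grid line traversal. A sketch in C (Bresenham with diagonal steps, inclusive of both endpoints; this is not the module's actual algorithm):

```c
#include <stdlib.h>

/* Walk an inclusive straight-line tile path from (x0,y0) to (x1,y1)
 * using Bresenham's line algorithm with diagonal steps allowed.
 * Writes up to max points into out[][2]; returns the total count. */
int line_path(int x0, int y0, int x1, int y1, int out[][2], int max)
{
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy, n = 0;

    for (;;) {
        if (n < max) { out[n][0] = x0; out[n][1] = y0; }
        n++;
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }   /* step in x */
        if (e2 <= dx) { err += dx; y0 += sy; }   /* step in y */
    }
    return n;
}
```

For (0,0) to (3,3) this yields the four diagonal tiles (0,0), (1,1), (2,2), (3,3); a door-avoiding correction pass would then adjust individual steps.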
Takes two locations (four scalars) as arguments and raises errors if either location doesn't make sense. If one were to draw a line from each corner of the lhs tile to each corner of the rhs tile, this function would return true if and only if none of the lines intersects a wall closure and false otherwise.
This is meant to be reasonably compatible with d20 game systems.
It returns either LOS_COVER, when there is cover, or LOS_NO_COVER when there isn't. LOS_COVER evaluates to true and LOS_NO_COVER does not, so you can choose not to use these constants.
    my $melee_cover = $mq->melee_cover( @lhs => @rhs );

    # if( $melee_cover == LOS_COVER ) would also work
    if( $melee_cover ) {
        print "That's a tough shot, really.\n";
    }
Additionally, melee_cover() automatically returns LOS_NO_COVER if the tiles are farther apart than one tile in either direction. That's really more of a reach or ranged attack.
Takes two locations (four scalars) as arguments and raises errors if either location doesn't make sense. If all four corners of the rhs tile are visible from any one corner in the lhs tile, then there is LOS_NO_COVER.
To correct for various weird artifacts because of smallish openings and long distances, when all four corners of the rhs are visible from the lhs, that corner must also be able to see a tighter box toward the middle of the tile. It doesn't do any good to be able to see the corners of the rhs if you can't see the middle!
Lastly, LOS_COVER is upgraded to LOS_DOUBLE_COVER if there is cover (i.e., there is no corner of lhs that can see all four corners of the rhs) and there is also cover when considering a tighter line of sight.
This is meant to be reasonably compatible with d20 game systems, but it differs quite a bit because computers are willing to do more work than humans and the string you're meant to use is only virtual here.
Takes a standard ($x,$y) pair as arguments. Returns a boolean indicating whether a location is an open tile (true) or a wall tile (false). This is actually the function you can use to see if a location "makes sense" as first described in add().
Takes a semi-standard ($x,$y,$d) triplet as arguments (where $d is 'n', 'e', 's', or 'w'). Returns a boolean indicating whether a closure is a door. Raises an error if the location doesn't make sense.
Takes a semi-standard ($x,$y,$d) triplet as arguments (where $d is 'n', 'e', 's', or 'w'). Returns a boolean indicating whether a door is closed. Raises errors if the location doesn't make sense or there isn't a door there.
Takes a ($x,$y,$d) closure triplet as arguments. Raises errors if the location doesn't make sense, there isn't a door, or the door is already open.
Takes a ($x,$y,$d) closure triplet as arguments. Raises errors if the location doesn't make sense, there isn't a door, or the door is already closed.
Returns either the position of the last column of the map (rather like $#array) or an array of all the columns on the map (like 0 .. $#array).
Returns either the position of the last row of the map (rather like $#array) or an array of all the rows on the map (like 0 .. $#array).
    perl -MSoftware::License::LGPL_2_1 \
        -e '$l = Software::License::LGPL_2_1->new({ holder=>"Paul Miller"}); print $l->fulltext' | less
perl(1), Games::RolePlay::MapGen
User:Rcygania2/XML literals
Background
- History of XML literals:
There are lots of things in an XML text that don't affect its meaning (or value), like choice of single vs double quotes, order of attributes, and so on. So, many different XML texts can serialize the same XML value. It's a given that in RDF we want value-based equality at some point (at least in the semantics). So somewhere, someone has to canonicalize the text into a value. This canonicalization can be done in different places:
- by the author of the XML text
- by the RDF parser
- by the RDF toolkit when it does value-based comparison (in the L2V mapping)
- by the application
The status quo is that each serialization format makes a decision between 1) and 2). RDF/XML goes for 2, everyone else for 1.
Option 1 is ok when interchanging between RDF systems (N-Triples?), but really bad for anyone who transforms XML from a non-RDF context to RDF (RDFizers, and of course anyone who authors RDF by hand). In languages like Turtle it makes XML literals unusable.
Option 2 is ok in an XML-based language, because it already has an XML parser, and C14N shouldn't be a big deal. Otherwise it sucks because now a Turtle parser would have to ship an XML parser too.
Option 3 means that SPARQL engines and reasoners have to do it, or hopefully the underlying RDF API can handle it. I think this sucks less than 2), for reasons I have not tried to articulate.
Option 4 is what happens when Option 3 is made optional. It's kind of acceptable. Comparing XML values doesn't seem to be a huge use case.
There are two main reasons for the choice of 1) and 2) in the status quo. First, otherwise the output of an RDF/XML parser has to be allowed to be likely somewhat indeterministic, because they work with the output of an underlying XML parsers and never know what kind of quotes were used. Second, RDF/XML was the only game in town, and as stated above, 2) doesn't suck too much in XML-based syntaxes. Option 3 seemed less attractive because an OWL reasoner doesn't want to ship an RDF/XML parser.
Contrasting with other datatypes
All other datatypes (the XSD types) basically let implementations choose between 3) and 4). Canonicalization only has to happen when value-based comparison is needed. And XSD support is essentially optional.
However, unlike with rdf:XMLLiteral, it's trivial to implement deterministic parsers for all XSD datatypes, in any syntax we know of.
Proposal
- Canonicalization happens in the L2V mapping
- rdf:XMLLiteral support is optional
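To make option 3 concrete: under "canonicalization happens in the L2V mapping," value comparison canonicalizes both lexical forms and compares the results. A deliberately tiny sketch in C; it handles only a single element with simple unescaped attributes (sorting them and normalizing quote style), nothing like real exclusive C14N:

```c
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Toy canonicalizer: parses <name a='1' b="2"/> (or ...></name>),
 * sorts attributes by name, normalizes quotes to double quotes, and
 * emits <name a="1" b="2"></name>. Illustration only; real C14N also
 * handles namespaces, escaping, character references, and content. */

struct attr { char name[32]; char value[32]; };

static int cmp_attr(const void *a, const void *b)
{
    return strcmp(((const struct attr *)a)->name,
                  ((const struct attr *)b)->name);
}

/* Returns 0 on success, -1 on parse failure. */
int toy_canon(const char *in, char *out, size_t outsz)
{
    char tag[32];
    struct attr attrs[8];
    int nattr = 0;
    const char *p = in;
    size_t i = 0;

    if (*p++ != '<') return -1;
    while (isalnum((unsigned char)*p) && i + 1 < sizeof tag) tag[i++] = *p++;
    tag[i] = '\0';

    for (;;) {
        while (*p == ' ') p++;
        if (*p == '/' || *p == '>') break;       /* end of the start tag */
        if (nattr == 8) return -1;
        struct attr *a = &attrs[nattr];
        i = 0;
        while (isalnum((unsigned char)*p) && i + 1 < sizeof a->name)
            a->name[i++] = *p++;
        a->name[i] = '\0';
        if (*p++ != '=') return -1;
        char q = *p++;                           /* accept ' or " */
        if (q != '\'' && q != '"') return -1;
        i = 0;
        while (*p && *p != q && i + 1 < sizeof a->value)
            a->value[i++] = *p++;
        a->value[i] = '\0';
        if (*p++ != q) return -1;
        nattr++;
    }

    qsort(attrs, nattr, sizeof attrs[0], cmp_attr);

    int n = snprintf(out, outsz, "<%s", tag);
    for (int k = 0; k < nattr; k++)
        n += snprintf(out + n, outsz - n, " %s=\"%s\"",
                      attrs[k].name, attrs[k].value);
    snprintf(out + n, outsz - n, "></%s>", tag);
    return 0;
}
```

With this, <a b="1" c='2'/> and <a c='2' b='1'></a> remain distinct lexical forms but compare equal as values, which is the trade the proposal makes: pay at comparison time instead of parse time.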
Discussion
I18n objection?
Support for XML literals was introduced partly to support I18n concerns like bidi, ruby markup, and mixed-language text. Making XML literal support optional would be bad from an i18n point of view. On the other hand, the rather low level of actual use of XML literals for these purposes shows that it's not a big loss. (Data???)
Arbitrarily different lexical forms
Some funny stuff could happen.
- Two RDF/XML parsers would likely obtain two different XML literals from the same RDF/XML file. (They would have the same value, but different lexical form. They are two triples.)
- Serialize a graph containing XML literals as Turtle and as RDF/XML. Load them both. Likely you have two different graphs now.
Cost for SPARQL and reasoners
If they want to support value-based comparisons for rdf:XMLLiteral then they need to ship the canonicalizer. Migration cost.
Other problems with XML literals
There are plenty of other warts. Is it worth trying to fix it?
- Language tags from RDF/XML are not inherited
- To use XHTML in Turtle, the XHTML namespace needs to be declared on every top-level element!? Oh that sucks.
- If they ever catch on, we get to deal with XSS exploits in Tabulator.
- XHTML is obsolete anyway, better focus on HTML5
Use cases for XML literals
- Storing XML content as opaque blobs
- Transmitting content snippets, e.g., RSS
- Rich literals, e.g., a title with some markup like sub, sup, em | http://www.w3.org/2011/rdf-wg/wiki/User:Rcygania2/XML_literals | CC-MAIN-2015-14 | refinedweb | 702 | 64.51 |
Maelstrom: An Overview
- Posted: Nov 20, 2012 at 1:00 PM
Prior to DirectX 11.1, stereoscopic 3D required specific hardware and a custom API written for that hardware. With DX11.1, stereoscopic 3D has been "democratized." Any GPU that supports DirectX 11.1 can be connected to any device which supports stereoscopic 3D, be it a projector, an LCD TV, or anything else. Plug it in and DirectX does the rest.
From the software side of things, any DX11.1 application can determine if the connected display supports stereoscopic 3D, and choose to render itself separately for the player's left and right eye.
To showcase this feature, we decided to build a very simple game that would give the illusion of depth, but be easy to explain and play. What's easier than Pong? So, we built the world's most over-engineered game of 3D Pong named Maelstrom.
Each player setup consists of two applications: the DirectX 11.1 game written in C++, and the touch-screen controller written in C#/XAML. Both are Windows Store applications. Since this is a two-player game, there are two instances of each application running, one per player. All four applications are networked together using StreamSockets from the Windows Runtime. The two controllers and player two's DirectX game connect to player one's DirectX game, which acts as the "master". Controller input is sent here, and, once the ball and paddle positions are calculated, the data is drawn for player one and sent to player two, which draws the world from the other player's perspective.
If you have never worked with DirectX before, it can be a little overwhelming at first. And even if you have worked with it in the past, targeting the new Windows 8 ecosystem with C++ and XAML introduces some changes to how you may have designed your solution previously.
Fortunately, the Windows Dev Center for Windows Store Apps has some great samples to get you started, and we took full advantage of them to get up to speed. For a great, simple example of how to leverage the new stereoscopic feature in Direct3D 11.1, we started with the Direct3D Stereoscopic Sample, which shows the basic adjustments to the render loop for toggling your virtual cameras. However, to see a great example of a simple game structure that also leverages stereoscopic rendering where available, the tutorial found at Walkthrough: a simple Windows Store game with DirectX is invaluable. Further in this article, we will dive deeper into the specifics of stereoscopic rendering in our game.
One thing to note: if you follow the link in the above Walkthrough to the original project, it will take you to a C++-only implementation of the game. Now, of course, all the DirectX game objects such as the paddle, puck and walls are all rendered using D3D. However, for HUD (Heads up Display) elements, this C++-only sample also uses DirectX exclusively. If you are coming from a managed code background, this will definitely seem like unnecessary overhead. That is because this C++-only sample was created after last year's BUILD conference in 2011, when C++ and DirectX still did not play well with XAML.
However, a few months later, the ability to nest DirectX content in a XAML project became available for true hybrid style solutions (see the article DirectX and XAML interop - Windows Store apps using C++ and DirectX for more information). After this feature was added, the simple Shooter Game referenced above had its HUD logic rewritten in XAML and posted up to Dev Center as XAML DirectX 3D shooting game sample, which shows both stereoscopic support, a simple Game Engine structure in C++ and XAML integration. At this point, we had all the starter code we needed to start writing our own game.
We modified the base sample to accommodate our needs. We created specific GameObjects, such as Paddle, Puck, etc. to add the behaviors we needed. We also added an Update and Render method to the base GameObject so that, for every frame, we could do any calculations required, and then draw the object to the screen. This is very similar to how XNA sets up its game engine.
Because we were tweaking a variety of values like colors, sizes, camera locations, etc., we created a GameConstants.h header file which contains nothing but these types of values in a single location. This made it very easy for us to quickly try out various tweaks and see the results on the next run. Using namespaces helped keep the code a bit more manageable here as well. Here’s a quick snippet of that file:
namespace GameConstants
{
    // bounds of the arena
    static const DirectX::XMFLOAT3 MinBound = DirectX::XMFLOAT3( 0.0f,  0.0f,  0.0f);
    static const DirectX::XMFLOAT3 MaxBound = DirectX::XMFLOAT3(19.0f, 10.0f, 90.0f);

    // game camera "look at" points
    static const DirectX::XMFLOAT3 LookAtP1 = DirectX::XMFLOAT3(9.5f, 5.0f, 90.0f);
    static const DirectX::XMFLOAT3 LookAtP2 = DirectX::XMFLOAT3(9.5f, 5.0f,  0.0f);

    // Waiting Room camera positions
    static const DirectX::XMFLOAT3 WaitingEyeP1 = DirectX::XMFLOAT3(GameConstants::MaxBound.x/2, GameConstants::MaxBound.y/2, GameConstants::MaxBound.z - 12.0f);
    static const DirectX::XMFLOAT3 WaitingEyeP2 = DirectX::XMFLOAT3(GameConstants::MaxBound.x/2, GameConstants::MaxBound.y/2, GameConstants::MinBound.z + 12.0f);
    static const DirectX::XMFLOAT3 WaitingEyeMjpegStation = DirectX::XMFLOAT3(GameConstants::MaxBound.x/2, GameConstants::MaxBound.y/2, GameConstants::MinBound.z + 9.6f);

    // game camera eye position
    static const DirectX::XMFLOAT3 EyeP1 = DirectX::XMFLOAT3(GameConstants::MaxBound.x/2, GameConstants::MaxBound.y/2, GameConstants::MinBound.z - 6.0f);
    static const DirectX::XMFLOAT3 EyeP2 = DirectX::XMFLOAT3(GameConstants::MaxBound.x/2, GameConstants::MaxBound.y/2, GameConstants::MaxBound.z + 6.0f);

    static const float Paddle2Position = MaxBound.z - 5.0f;

    namespace PaddlePower
    {
        // power level to light paddle at maximum color
        static const float Max = 9.0f;
        // max paddle power color...each component will be multiplied by power factor
        static const DirectX::XMFLOAT4 Color = DirectX::XMFLOAT4(0.2f, 0.4f, 0.7f, 0.5f);
        // factor to multiply mesh percentage based on power
        static const float MeshPercent = 1.2f;
    };

    // time to cycle powerups
    namespace Powerup
    {
        namespace Split
        {
            static const float Time = 10.0f;
            static const float NumTiles = 4;
            static const DirectX::XMFLOAT4 TileColor = DirectX::XMFLOAT4(0.1f, 0.4f, 1.0f, 1.0f);
            static const float TileFadeUp = 0.20f;
            static const float TileDuration = 2.10f;
            static const float TileFadeDown = 0.20f;
            static const float TileMeshPercent = 2.0f;
            static const float TileDiffusePercent = 2.0f;
        };
    };
}
Direct3D must be initialized properly to support stereoscopic displays. When the swap chains are created, an additional render target is required, such that one render target is for the left eye, and one render target is for the right eye. Direct3D will let you know if a stereoscopic display is available, so you can create the swap chain and render targets appropriately.
With those in place, it’s simply a matter of rendering your scene twice, once per eye…that is, once per render target.
For our game this was very simple. Our in-game camera contains two projection matrices, one representing the view from the left eye, and one from the right eye. These are calculated when the projection parameters are set.
void Camera::SetProjParams(
    _In_ float fieldOfView,
    _In_ float aspectRatio,
    _In_ float nearPlane,
    _In_ float farPlane
    )
{
    // Set attributes for the projection matrix.
    m_fieldOfView = fieldOfView;
    m_aspectRatio = aspectRatio;
    m_nearPlane = nearPlane;
    m_farPlane = farPlane;
    XMStoreFloat4x4(
        &m_projectionMatrix,
        XMMatrixPerspectiveFovLH(m_fieldOfView, m_aspectRatio, m_nearPlane, m_farPlane)
        );

    STEREO_PARAMETERS* stereoParams = nullptr;
    // Update the projection matrix.
    XMStoreFloat4x4(
        &m_projectionMatrixLeft,
        MatrixStereoProjectionFovLH(
            stereoParams, STEREO_CHANNEL::LEFT,
            m_fieldOfView, m_aspectRatio, m_nearPlane, m_farPlane,
            STEREO_MODE::NORMAL
            )
        );
    XMStoreFloat4x4(
        &m_projectionMatrixRight,
        MatrixStereoProjectionFovLH(
            stereoParams, STEREO_CHANNEL::RIGHT,
            m_fieldOfView, m_aspectRatio, m_nearPlane, m_farPlane,
            STEREO_MODE::NORMAL
            )
        );
}
Depending on which eye we are rendering, we grab the appropriate projection matrix and pass it down to the vertex shader, so the final scene is rendered offset for the proper eye.
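Conceptually, a stereo pair is just the same scene rendered from two eye positions separated by the interaxial distance (MatrixStereoProjectionFovLH also skews each eye's frustum, which this sketch ignores). A minimal, illustrative model of the eye offset, not the DirectXMath implementation:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Cross(Vec3 a, Vec3 b)
{
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

static Vec3 Normalize(Vec3 v)
{
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}

// Offset the mono eye position along the camera's right axis by half the
// interaxial distance: negative for the left eye, positive for the right.
Vec3 StereoEyePosition(Vec3 eye, Vec3 forward, Vec3 up, float interaxial, bool leftEye)
{
    Vec3 right = Normalize(Cross(up, forward)); // left-handed basis
    float s = (leftEye ? -0.5f : 0.5f) * interaxial;
    return { eye.x + right.x*s, eye.y + right.y*s, eye.z + right.z*s };
}
```

Each eye's view matrix is then built from its offset position, and the scene is drawn once per eye into its own render target.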
If you are just starting to move into 3D modeling and programming, one of the trickier aspects of your game can be collision detection and response. Maelstrom uses primitives for all of the game elements, so our collision code was able to be a bit more straightforward compared to complex mesh collisions, but understanding a few core math concepts is still critical to grasp what the code is doing.
Fortunately, DirectX provides us with the DirectXMath library, which is able to do the serious heavy lifting, so the main complexity comes from framing the problem and learning how to apply the library.
For example, in our situation we had up to three very fast-moving spheres and needed to check for wall collisions and then handle the appropriate bounce, since some of the walls would also be angled. In a 2D game, a collision detection between a sphere and an axis line is very easy. If the distance between a circle and the line is less than or equal to the radius of the sphere, they are touching. On every frame, you move your circle based on its velocity and do your collision test again. But even here, your solution may not be that easy, for two reasons.
First, what if the line is angled and not lying flat on the X or Y axis? You have to find the point on the line that is closest to the sphere to do your distance calculations. And if you then want the ball to bounce, you have to rotate the circle's velocity by the line's angle, calculate your bounce, and then rotate back. And that's just rotated walls in 2D. When you move up to 3D, you have to take the surface normal (which way the 3D plane is facing) into account in your calculations.
The second complexity that we needed to account for, and which pops up in either 2D or 3D collision detection, is travel between frames. In other words, if your ball is travelling very fast, it may have completely travelled through your collision boundary in between frames, and you wouldn't notice it if you are only doing a distance/overlap check as outlined above. In our case, the pucks had the ability to travel very fast with a speed boost, so we needed a more robust solution. Therefore, instead of implementing a simple sphere-plane intersection test, we needed to create a line of motion that represented where the ball ended up on the previous frame and where it currently is after its new velocity is added to its position. That line then needs to be tested to see if it crosses a WallTile. If it does cross, then we know a collision has occurred between frames. We then solve for the time (t) between frames at which the sphere would have first made contact, to find the exact point of impact and calculate the appropriate "bounce off" direction.
The final code for a puck (or moving sphere) and wallTile collision test looks like this:
bool GameEngine::CheckWallCollision(Puck^ puck)
{
    bool isIntersect = false;
    bool wallCollision = false;

    for(unsigned int i = 0; i < m_environmentCollisionWalls.size(); i++)
    {
        WallTile^ wall = m_environmentCollisionWalls[i];
        float radius = puck->Radius();
        float signedRadius = puck->Radius();

        float contactTime = 0.0f;
        XMVECTOR contactPlanePoint = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f);
        XMVECTOR contactPuckPosition = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f);
        bool intersectsPlane = false;

        // Determine the velocity of this tick by subtracting the previous position from the proposed current position.
        // In the previous update() cycle, puck->Position() = puck->OldPosition() + ( puck->velocity * timerDelta ).
        // Therefore, this calculated velocity for the current frame movement differs from the stored velocity
        // since the stored velocity is independent of each game tick's timerDelta.
        XMVECTOR puckVectorVelocity = puck->VectorPosition() - puck->OldVectorPosition();

        float D = XMVectorGetX( XMVector3Dot( wall->VectorNormal(), wall->VectorPosition() ) );

        // Determine the distance of the puck to the plane of the wall.
        float dist = XMVectorGetX( XMVector3Dot(wall->VectorNormal(), puck->OldVectorPosition() )) - D;
        signedRadius = dist > 0 ? radius : -radius;

        // If the distance of the puck to the plane is already less than the radius, the oldPosition() was intersecting already.
        if ( fabs(dist) < radius )
        {
            // The sphere is touching the plane.
            intersectsPlane = true;
            contactTime = 0.0f;
            contactPuckPosition = puck->OldVectorPosition();
            contactPlanePoint = puck->OldVectorPosition() + wall->VectorNormal()*XMVectorSet(signedRadius,signedRadius,signedRadius,1.0f);
        }
        else
        {
            // See if the time it would take to cross the plane from the oldPosition() with the current velocity falls within this game tick.
            // puckVelocityNormal is the amount of force from the velocity exerted directly toward the plane.
            float puckVelocityNormal = XMVectorGetX(XMVector3Dot(wall->VectorNormal(), puckVectorVelocity ));

            // If puckVelocityNormal times the distance is less than zero, a plane intersection will occur.
            if ( puckVelocityNormal * dist < 0.0f )
            {
                // Determine the contactTime, taking into account that the shell of the sphere ( position() + radius )
                // is what will make contact, not the position alone.
                contactTime = (signedRadius - dist) / puckVelocityNormal;

                // If the contact time is between zero and one, the intersection has occurred between oldPosition() and position().
                if ( contactTime > 0.0f && contactTime < 1.0f )
                {
                    intersectsPlane = true;
                    // This is the position of the puck when its shell makes contact on the plane.
                    contactPuckPosition = puck->OldVectorPosition() + XMVectorScale(puckVectorVelocity, contactTime);
                    // This is the position on the plane where the shell touches.
                    contactPlanePoint = contactPuckPosition - XMVectorScale(wall->VectorNormal(), signedRadius);
                }
            }
        }

        // If the puck has contacted the wall plane, determine if the point of contact falls within the wall boundary for true contact.
        if (intersectsPlane)
        {
            // Kr is the coefficient of restitution. At 1.0, we have a totally elastic bounce with no dampening.
            // At Kr = 0.0, the ball would stop at the wall.
            float Kr = 1.0f;

            // Make sure the puck velocity and wall normal are facing each other.
            float impact = XMVectorGetX ( XMVector3Dot ( wall->VectorNormal(), puck->VectorVelocity()) );
            if (impact < 0.0f)
            {
                wallCollision = true;

                // Bounce the vector off the plane.
                XMVECTOR VectorNormal = XMVector3Dot(wall->VectorNormal(), puck->VectorVelocity())*wall->VectorNormal();
                XMVECTOR VectorTangent = puck->VectorVelocity() - VectorNormal;

                puck->Velocity(VectorTangent - (XMVectorScale(VectorNormal, Kr)));
                puck->Position(contactPuckPosition);

                int segment = (int)(puck->Position().z / GameConstants::WallSegmentDepth);
                segment = max(min(segment, GameConstants::NumWallSegments-1), 0);
                auto tiles = m_wallTiles[segment];
                WallTile^ tile = tiles[i];
                if(tile->GetPowerup() == Powerup::Split)
                    SplitPucks();
                break;
            }
        }
    }
    return wallCollision;
}
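Stripped of the engine types and the bounce response, the swept test above boils down to solving for the contact time along the motion segment. The following standalone distillation is illustrative, not the game's exact code:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Sphere of 'radius' moves from p0 to p1 during one tick; the wall plane is
// Dot(n, x) == d with unit normal n. Returns true on contact and sets t to
// the normalized time of first contact within the tick (0..1).
bool SweptSpherePlane(Vec3 p0, Vec3 p1, float radius, Vec3 n, float d, float& t)
{
    float dist = Dot(n, p0) - d;                // signed distance at tick start
    if (std::fabs(dist) < radius) { t = 0.0f; return true; }  // already touching

    Vec3 v = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
    float vn = Dot(n, v);                       // velocity component toward plane
    if (vn * dist >= 0.0f) return false;        // moving parallel or away

    float signedRadius = dist > 0.0f ? radius : -radius;
    t = (signedRadius - dist) / vn;             // solve dist + vn*t == signedRadius
    return t > 0.0f && t < 1.0f;
}
```

A fast puck that fully crosses the plane between frames still reports the contact, because the test operates on the whole motion segment rather than a single position.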
To draw the game, we wanted to use some advanced techniques. We decided to go with a light pre-pass deferred rendering pipeline with normal mapping. That’s a lot of jargon but it isn’t all that complicated once you know what the jargon means, so let’s break it down.
When you draw something in 3D, there are three things that come together to determine the final color of each pixel on the screen: meshes, materials, and lights. A mesh is a collection of triangles that make up a game object (such as a wall tile in Maelstrom). On its own, a mesh is just a bunch of dots and lines. A material makes a mesh look like something. It could be as simple as a solid color but usually it’s a texture and sometimes it’s more (the wall tiles in Maelstrom use both a texture and a normal map to define their material properties). Lastly, lights transform materials by determining how bright they should appear and what sort of tint, if any, they should have. Without lights you would either have complete darkness or you would have flat lighting (where everything has a uniform brightness and adding a tint color would uniformly tint everything on the screen).
The simplest approach to drawing 3D graphics is something called forward rendering. With forward rendering, drawing consists of rendering the mesh and calculating its material and all the lights that affect the material all at the same time. The more lights you add, the more complicated your shaders become since you have to determine whether each light affects the material and if so how much. (Ok, so there’s also multi-pass forward rendering, but that has its own problems – more passes mean longer render times and thus a lower frame rate – and we wanted to keep the descriptions simple).
In the last 5 years, many games started using a technique called deferred rendering. In classic deferred rendering, there are two rendering passes. The first pass renders the positions, normals, and material values of all the meshes in the scene to something called a G-Buffer (two or more render targets); nothing is actually drawn to the screen in this first pass. The second pass uses the data from the G-Buffer (which tells us everything we need to know about the geometry that appears at each screen pixel) and combines it with the lights to create the final image that you see. By doing this, we decouple geometry and lighting. This makes it possible to add more lights to the scene with a much smaller performance impact than in forward rendering since we don’t need to create a really complex pixel shader to handle all the lights (single-pass forward rendering) or draw the geometry over and over again for each light (multi-pass forward rendering).
There are drawbacks to classic deferred rendering though. Even a minimal G-Buffer takes up quite a bit of memory and the more different types of materials you want to support, the larger the G-Buffer will need to be. Wolfgang Engel, an XNA/DirectX MVP, came up with a variation on deferred rendering which he called Light Pre-Pass Rendering. This is a three pass technique. We once again use a G-Buffer, but in this case it is smaller than the classic deferred rendering G-Buffer and can even be squeezed down to a single render target which makes it viable for graphics hardware which does not support drawing to multiple render targets at the same time.
The G-Buffer is created in the first pass by rendering all the scene geometry. It only needs to store normals and the geometry's world position. For simplicity, we stored the world position of the geometry at each screen position in one render target and its normal at that screen position in a second render target.
The next pass draws the lights to a light accumulation buffer. The buffer starts out entirely dark and each light that is rendered adds brightness (and tint, if any) to the light buffer. These lighting calculations take into account the normal and world position of the geometry that is at each screen position, drawing the values from the G-Buffer, such that each light only affects the pixels it is supposed to have an impact on. In Maelstrom we ended up only using point lights (spheres of light that fade out as you get further from the light’s position), but you can use any kind of light you can imagine (spot lights and directional lights are the two other common light types). Adding more lights has a very low impact on rendering time and this kind of lighting tends to be much easier for the designer to work with since there’s no need for him or her to understand HLSL or even any complicated C++ in order to add, remove, reposition, or otherwise change any lights.
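As an illustration, a single point light's contribution at one G-buffer sample could be computed as below; the linear falloff is an assumption for the sketch, since the article doesn't show the actual lighting shader:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// One point light's contribution at a single G-buffer sample: Lambert N.L
// scaled by a linear falloff out to the light's radius.
float PointLightIntensity(Vec3 surfacePos, Vec3 surfaceNormal,
                          Vec3 lightPos, float lightRadius)
{
    Vec3 toLight = { lightPos.x - surfacePos.x,
                     lightPos.y - surfacePos.y,
                     lightPos.z - surfacePos.z };
    float dist = std::sqrt(Dot(toLight, toLight));
    if (dist >= lightRadius) return 0.0f;            // outside the light's sphere
    Vec3 L = { toLight.x / dist, toLight.y / dist, toLight.z / dist };
    float ndotl = std::max(0.0f, Dot(surfaceNormal, L));
    float atten = 1.0f - dist / lightRadius;         // linear fade to the edge
    return ndotl * atten;
}
```

The light pass just sums such contributions per pixel into the accumulation buffer, which is why adding more lights stays cheap.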
The final pass draws the geometry a second time. This time, though, all the lighting calculations are done so all we do here is just render the meshes with their appropriate materials, adjust the color values and intensities from the material based on the light buffer value, and we’re done. Each rendering style (forward, deferred, and light pre-pass) has its own benefits and drawbacks, but in this case light pre-pass was a good solution and choosing it let us show how a state-of-the-art graphics technique works.
We also incorporated normal mapping. Normal mapping makes use of a special texture (a normal map) in addition to the regular texture that a material has. Normals are values used in lighting calculations to determine how much a particular light should affect a particular pixel. If you wanted to draw a brick wall, you would typically create two triangles that lined up to form a rectangle and you would apply a texture of a brick wall to them as their material. The end result of that doesn't look very convincing, though, since unlike a real brick wall there are no grooves in the mortared areas between the bricks; our brick and mortar is just a flat texture applied to flat triangles. We could fix this by changing from two triangles to a fully modeled mesh with actual grooves, but that would add thousands of extra vertices which would lower the frame rate.
So instead we use a normal map, which fakes it. One of the reasons that the two triangles + a brick wall texture approach doesn’t look right is because the lighting doesn’t behave correctly when compared to a real brick wall (or to a fully modeled mesh of a real brick wall). The normals point straight out perpendicular from the face of the rectangle whereas if we had the fully modeled mesh with actual grooves, the surface normals would only point straight out on the bricks themselves and they would curve along the mortared areas such that the lighting calculations would end up giving us the right levels of light and dark depending on the location and direction of the light. That’s where a normal map comes in. The normal map (which you can generate using a plugin for Adobe Photoshop or GIMP or by modeling a real brick wall in 3DSMax, Maya, or Blender which you then “bake” a normal map from) allows us to get the same lighting effect as we would with a fully modeled mesh while still keeping the simple two triangle + a brick wall texture approach that gives us really good performance for our game. There are limits to the effectiveness of normal mapping (you can’t use it to fake anything too deep and it doesn’t hold up as well if the camera can get really close to the object) but in Maelstrom it allowed us to keep the walls as simple triangles (like the two triangles + a brick wall texture example above) while making it seem like there were geometric grooves in the wall. Here’s a before and after screenshot using normal mapping:
We also used several post-processing effects. The first was the bloom effect. Bloom is an effect that analyzes a rendered image, identifies parts that are above a certain brightness threshold, and makes those areas brighter and adds a peripheral glow to them as well, giving it a look and feel that is similar to a neon sign or to the light cycles in the movie Tron. Here’s the same shot as above with the addition of bloom:
We also made use of two different damage effects. Whenever the player took damage, we had a reddish tinge around the edge of the screen. This was simply a full screen overlay texture that is actually white but is tinted red by the shader. It is alpha-blended over the final rendered scene and fades out over the course of a couple of seconds. Rather than fading out linearly, we use a power curve which helps to sell the effect as being more complicated than it really is.
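A minimal sketch of that fade, assuming a cubic power curve (the article doesn't give the actual exponent):

```cpp
#include <cassert>
#include <cmath>

// Alpha of the full-screen damage overlay as it fades out. A linear fade
// would be (1 - t/duration); raising that to a power makes the red flash
// drop off quickly and then taper. The exponent 3 is illustrative.
float DamageOverlayAlpha(float elapsedSeconds, float durationSeconds)
{
    if (elapsedSeconds <= 0.0f) return 1.0f;
    if (elapsedSeconds >= durationSeconds) return 0.0f;
    float remaining = 1.0f - elapsedSeconds / durationSeconds;
    return std::pow(remaining, 3.0f);
}
```

Compared to a linear fade over the same couple of seconds, the flash dies off quickly and then tapers, which reads as more organic.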
Lastly we added in some damage particles. The particles themselves were created using a geometry shader. The vertex shader took in a series of points in world space and passed these points along to the geometry shader. The geometry shader expanded these points into two triangles by generating the missing vertices and applying the world-view-projection transformation matrix to transform the positions from world coordinates to homogeneous coordinates so that they can then be rasterized correctly by D3D and the resulting pixels passed along to the pixel shader. Once again we used a simple texture with alpha blending to simulate much more complicated geometry than we were actually drawing. In this case we also made use of a texture atlas (an image made up of smaller images) which, in conjunction with the randomizer we used to generate the initial vertices for the particles, allowed us to have several different particle textures. Like with the power curve for the damage texture, the texture atlas allowed us to make the particles seem more complex than they really were. It also let us show off the use of a geometry shader, a feature that was added in DirectX 10 and requires DirectX 10 or higher hardware.
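Selecting a particle's sub-image from the atlas is simple UV arithmetic. A sketch assuming a square atlas of equally sized tiles (the article doesn't give the actual layout):

```cpp
#include <cassert>

struct UvRect { float u0, v0, u1, v1; };

// Map a tile index (row-major) on an N x N atlas to its UV rectangle.
UvRect AtlasTileUv(int tileIndex, int tilesPerRow)
{
    float size = 1.0f / tilesPerRow;
    int col = tileIndex % tilesPerRow;
    int row = tileIndex / tilesPerRow;
    return { col * size, row * size, (col + 1) * size, (row + 1) * size };
}
```

The randomizer that seeds each particle can then pick a tile index, and the geometry shader emits these UVs along with the expanded quad.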
All audio was done using the XAudio2 API. Thankfully, we were able to get a huge head start by using some of the code from the sample project we started from. The audio engine sets up the very basics of XAudio2, and then wraps that with a simpler API for the rest of the application to call.
We don't have many sound effects, so on startup we load all sound effects and music cues into a std::map, keyed on a SoundCue enum. Sounds are loaded using the Media Foundation classes, and the resulting byte data of the sound (and some format information) are stored in our SoundEffect class.
void AudioEngine::Initialize()
{
    m_audio = ref new Audio();
    m_audio->CreateDeviceIndependentResources();

    m_mediaReader = ref new MediaReader();

    // Impacts
    m_soundMap[SoundCue::BallLaunch] = LoadSound("Sounds\\Impacts\\BallLaunch.wav");
    m_soundMap[SoundCue::Buzz]       = LoadSound("Sounds\\Impacts\\Buzz.wav");
    m_soundMap[SoundCue::Impact1]    = LoadSound("Sounds\\Impacts\\Impact1.wav");
    m_soundMap[SoundCue::Impact2]    = LoadSound("Sounds\\Impacts\\Impact2.wav");
    ...
}

SoundEffect^ AudioEngine::LoadSound(String^ filename)
{
    Array<byte>^ soundData = m_mediaReader->LoadMedia(filename);

    auto soundEffect = ref new SoundEffect();
    soundEffect->Initialize(m_audio->SoundEffectEngine(), m_mediaReader->GetOutputWaveFormatEx(), soundData);
    return soundEffect;
}
When the game needs to play a sound, it simply calls the PlaySound method, passing in the cue to play, and the volume to play it at. PlaySound keys into the sound map, getting the associated SoundEffect, and plays it.
void AudioEngine::PlaySound(SoundCue cue, float volume, bool loop)
{
    m_soundMap[cue]->Play(volume, loop);
}
To achieve the effect of seeing the opponent in stereoscopic 3D, we strapped two Axis M1014 network cameras side-by-side. Using Brian’s MJPEG Decoder library, with a special port to Windows Runtime, individual JPEG frames were pulled off each camera, and then applied to a texture at the back of the arena. The image from the left camera is drawn when DirectX renders the player’s left eye, and the frame from the right camera is drawn when DirectX renders the right eye. This is a cheap and simple way to pull off live stereoscopic 3D.
void MjpegCamera::Update(GameEngine^ engine)
{
    if(m_decoderLeft != nullptr)
        UpdateTexture(m_decoderLeft->CurrentFrame, &textureLeft);

    if(m_decoderRight != nullptr)
        UpdateTexture(m_decoderRight->CurrentFrame, &textureRight);

    Face::Update(engine);
}

void MjpegCamera::Render(_In_ ID3D11DeviceContext *context, _In_ ID3D11Buffer *primitiveConstantBuffer, _In_ bool isFirstPass, int eye)
{
    if(eye == 1 && textureRight != nullptr)
        m_material->SetTexture(textureRight.Get());
    else if(textureLeft != nullptr)
        m_material->SetTexture(textureLeft.Get());

    GameObject::Render(context, primitiveConstantBuffer, isFirstPass);
}
With the distance between the cameras being about the distance between human eyes (called the interaxial distance), the effect works pretty well!
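How much depth the viewer perceives comes down to the horizontal disparity between the two images. Under a simple pinhole-camera model, disparity is focal length (in pixels) times the camera baseline divided by depth; the numbers below are illustrative, not measurements of these cameras:

```cpp
#include <cassert>
#include <cmath>

// Pixel disparity between the left and right images for a point at
// depthMeters, with parallel cameras separated by baselineMeters.
float DisparityPixels(float focalLengthPx, float baselineMeters, float depthMeters)
{
    return focalLengthPx * baselineMeters / depthMeters;
}
```

Nearby objects produce larger disparity and therefore a stronger depth cue, which is why the effect is most convincing for the opponent standing close to the camera rig.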
The Tablet controller is the touch screen that lets the player control their 3D paddle in the Game Console app. For this part of the game system, there wasn't a reason to dive deep into DirectX and C++ since the controller is neither stereoscopic nor visually intense, so we kept things simple with C#.
Since the controller would also serve as our attract screen in the podium to entice potential players, we wanted to have the wait screen do something eye-catching. However, if you are moving from C# in WPF to C# and XAML in WinRT and are used to taking advantage of some of the more common "memory hoggish UX hacks" from WPF, you'll quickly find them absent in WinRT! For example, we no longer have OpacityMask, non-rectangular clipping paths or the ability to render a UIElement to a Bitmap. Our bag of UX tricks may be in need of an overhaul. However, what we do get in C# / XAML for WinRT is Z rotation, which is something we've had in Silverlight but I personally have been begging for in WPF for a long time.
Therefore, the opening animation in the controller is a procedurally generated effect that rotates PNG "blades" in 3D space, creating a very compelling effect. Here is how it works. The Blade user control is a simple canvas that displays one of a few possible blade images. The Canvas has a RenderTransform to control the scale and rotation and a PlaneProjection which allows us to rotate the blade graphic in X, Y and Z space.
<Canvas> <Canvas.RenderTransform> <ScaleTransform x: </Canvas.RenderTransform> <Canvas.Projection> <PlaneProjection x: </Canvas.Projection> <Image x: </Canvas>
Each Blade is added dynamically to the Controller when the Tablet application first loads, and stored in a List so that its Update() method can be called during the CompositionTarget.Rendering() loop.
protected override void OnNavigatedTo(NavigationEventArgs e)
{
    canvas_blades.Children.Clear();
    _blades.Clear();

    for (int i = 0; i < NumBlades; i++)
    {
        Blade b = new Blade { X = 950.0, Y = 530.0 };
        int id = _rand.Next(0, 5);
        b.SetBlade(id);
        b.Speed = .1 + id * .1;
        SeedBlade(b);
        _blades.Add(b);
        canvas_blades.Children.Add(b);
    }
}

void CompositionTarget_Rendering(object sender, object e)
{
    if(_inGame)
    {
        paddle.Update();
    }
    else if(_isClosing)
    {
        foreach (Blade b in _blades)
            b.UpdateExit();
    }
    else
    {
        foreach (Blade b in _blades)
            b.Update();
    }
}
Since each Blade has been assigned an individual speed and angle of rotation along all three axes, they have a very straightforward Update function. The reason we keep the rotation values between -180 and 180 during the spinning loop is to make it easier to spin them out to zero when we need them to eventually leave the screen.
public void Update()
{
    _rotX += Speed;
    _rotZ += Speed;
    _rotY += Speed;

    if (_rotX > 180) _rotX -= 360.0;
    if (_rotX < -180) _rotX += 360.0;
    if (_rotY > 180) _rotY -= 360.0;
    if (_rotY < -180) _rotY += 360.0;
    if (_rotZ > 180) _rotZ -= 360.0;
    if (_rotZ < -180) _rotZ += 360.0;

    projection.RotationX = _rotX;
    projection.RotationY = _rotY;
    projection.RotationZ = _rotZ;
}

public void UpdateExit()
{
    _rotX *= .98;
    _rotZ *= .98;
    _rotY += (90.0 - _rotY) * .1;

    projection.RotationX = _rotX;
    projection.RotationY = _rotY;
    projection.RotationZ = _rotZ;
}
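The wrap-to-(-180, 180] step is what makes the exit easing work without multi-revolution unwinding. In isolation (ported to C++ here to match the rest of the article's code samples, and generalized to a loop for large angles):

```cpp
#include <cassert>

// Keep an accumulating rotation angle within (-180, 180], mirroring the
// Blade.Update wrapping logic above.
double WrapDegrees(double angle)
{
    while (angle > 180.0)  angle -= 360.0;
    while (angle < -180.0) angle += 360.0;
    return angle;
}
```

With the angles always kept small in magnitude, UpdateExit's decay toward zero converges directly instead of spinning the blade through several extra revolutions.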
To continue the experiment of blending C# and C++ code, the network communication layer was written in C# as a Windows Runtime component. Two classes are key to the system: SocketClient and SocketListener. Player one’s game console starts a SocketListener to listen for incoming connections from each game controller, as well as player two’s game console. Each of those use a SocketClient object to make the connection.
In either case, once the connection is made, the client and the listener sit and wait for data to be transmitted. Data must be sent as an object which implements our IGamePacket interface. This contains two important methods: FromDataReaderAsync and WritePacket. These methods serialize and deserialize the byte data to/from an IGamePacket of whatever type is specified in the PacketType property.
namespace Coding4Fun.Maelstrom.Communication
{
    public enum PacketType
    {
        UserInputPacket = 0,
        GameStatePacket
    }

    public interface IGamePacket
    {
        PacketType Type { get; }
        IAsyncAction FromDataReaderAsync(DataReader reader);
        void WritePacket(DataWriter writer);
    }
}
The controllers write UserInputPackets to the game console, consisting of X,Y positions of the paddle, as well as whether the player has tapped the screen to begin.
public sealed class UserInputPacket : IGamePacket
{
    public PacketType Type { get { return PacketType.UserInputPacket; } }
    public UserInputCommand Command { get; set; }
    public Point3 Position { get; set; }
}
Player one’s game console writes a GameStatePacket to player two’s game console, which consists of the positions of each paddle, each ball, the score, and which tiles are lit for the ball-splitter power-up. Player two’s Update and Render methods use this data to draw the screen appropriately.
The hardware layer of this project is responsible for two big parts. One is a rumble effect that fires every time the player is hit, and the other is a lighting effect that changes depending on the game state.
As all good programmers do, we reused code from another project: we leveraged the proven web server from Project Detroit for our Netduino, with a few changes. Here, we had static class "modules" which knew how to talk to the physical hardware, and "controllers" which handled items like a player scoring, game-state animations, and taking damage. Because the modules are static classes, we can reference them in multiple classes without issue.
When a request comes in, we perform the requested operation and then return a newline character to acknowledge that we got the request. If you don't return any data, some clients will fire a second request, which can cause some odd behaviors. The flow is as follows:
private static void WebServerRequestReceived(Request request)
{
    var start = DateTime.Now;
    Logger.WriteLine("Start: " + request.Url + " at " + DateTime.Now);

    try
    {
        var data = UrlHelper.ParseUrl(request.Url);
        var targetController = GetController(data);

        if (targetController != null)
        {
            targetController.ExecuteAction(data);
        }
    }
    catch (Exception ex0)
    {
        Logger.WriteLine(ex0.ToString());
    }

    request.SendResponse(NewLine);
    Logger.WriteLine("End: " + request.Url + " at " + DateTime.Now +
                     " took: " + (DateTime.Now - start).Milliseconds);
}

public static IController GetController(UrlData data)
{
    if (data.IsDamage)
        return Damage;
    if (data.IsScore)
        return Score;
    if (data.IsGameState)
        return GameState;

    // can assume invalid
    return null;
}
We used a Sparkfun MP3 trigger board, a subwoofer amplifier, and bass rumble plates to create this effect. The MP3 board requires power, and two jumpers to cause the MP3 to play. It has an audio jack that then gets plugged into the amplifier which powers the rumble plates.
From here, we just needed to wire a ground to the MP3 player’s ground pin, and the target pin on the MP3 player to a digital IO pin on the Netduino. In the code, we declare it as an OutputPort and give it an initial state of true. When we get a request, we toggle the pin on a separate thread.
private static readonly OutputPort StopMusic = new OutputPort(Pins.GPIO_PIN_D0, true);
private static readonly OutputPort Track1 = new OutputPort(Pins.GPIO_PIN_D1, true);
// .. more pins

public static void PlayTrack(int track)
{
    switch (track)
    {
        case 1:
            TogglePin(Track1);
            break;
        // ... more cases
        default:
            // stop all, invalid choice
            TogglePin(StopMusic);
            break;
    }
}

public static void Stop()
{
    TogglePin(StopMusic);
}

private static void TogglePin(OutputPort port)
{
    var t = new Thread(() =>
    {
        port.Write(false);
        Thread.Sleep(50);
        port.Write(true);
    });
    t.Start();
}
For lighting, we used some RGB lighting strips. Each strip changes to a single color, driven by a PWM signal. This is different from the lighting we used in Project Detroit, which let us control each LED individually and communicated over SPI. We purchased an RGB amplifier to allow a PWM signal to power a 12-volt strip. We purchased ours from US LED Supply; the exact product was RGB Amplifier 4A/Ch for interfacing with a Micro-Controller (PWM/TTL Input).
We alter the Duty Cycle to shift the brightness of the LEDs and do this on a separate thread. Below is a stripped down version of the lighting hardware class.
public static class RgbStripLighting
{
    private static readonly PWM RedPwm = new PWM(Pins.GPIO_PIN_D5);
    private static readonly PWM GreenPwm = new PWM(Pins.GPIO_PIN_D6);
    private static readonly PWM BluePwm = new PWM(Pins.GPIO_PIN_D9);

    private const int ThreadSleep = 50;
    private const int MaxValue = 100;

    const int PulsePurpleIncrement = 2;
    const int PulsePurpleThreadSleep = 100;

    private static Thread _animationThread;
    private static bool _killThread;

    #region game state animations
    public static void PlayGameIdle()
    {
        AbortAnimationThread();
        _animationThread = new Thread(PulsePurple);
        _animationThread.Start();
    }
    #endregion

    private static void PulsePurple()
    {
        while (!_killThread)
        {
            for (var i = 0; i <= 50; i += PulsePurpleIncrement)
            {
                SetPwmRgb(i, 0, i);
            }
            for (var i = 50; i >= 0; i -= PulsePurpleIncrement)
            {
                SetPwmRgb(i, 0, i);
            }
            Thread.Sleep(PulsePurpleThreadSleep);
        }
    }

    private static void AbortAnimationThread()
    {
        _killThread = true;
        try
        {
            if (_animationThread != null)
                _animationThread.Abort();
        }
        catch (Exception ex0)
        {
            Debug.Print(ex0.ToString());
            Debug.Print("Thread still alive: ");
            Debug.Print("Killed Thread");
        }
        _killThread = false;
    }

    private static void SetPwmRgb(int red, int green, int blue)
    {
        // typically, 0 == off and 100 is on
        // things flipped however for the lighting so building this in.
        red = MaxValue - red;
        green = MaxValue - green;
        blue = MaxValue - blue;

        red = CheckBound(red, MaxValue);
        green = CheckBound(green, MaxValue);
        blue = CheckBound(blue, MaxValue);

        RedPwm.SetDutyCycle((uint)red);
        GreenPwm.SetDutyCycle((uint)green);
        BluePwm.SetDutyCycle((uint)blue);

        Thread.Sleep(ThreadSleep);
    }

    public static int CheckBound(int value, int max)
    {
        return CheckBound(value, 0, max);
    }

    public static int CheckBound(int value, int min, int max)
    {
        if (value <= min)
            value = min;
        else if (value >= max)
            value = max;

        return value;
    }
}
We built this experience over the course of around 4 to 5 weeks. It was our first DirectX application in a very long time, and our first C++ application in a very long time. However, we were able to pick up the new platform and language changes fairly easily and create a simple, yet fun game in that time.
This was a lot of fun to play at //Build. Great stuff.
@Ian2: @PhilN: glad you guys liked it
Answered
I have a project with mixed Java and Kotlin classes, and I figured I would unit test my Kotlin classes using Kotlin.
I’ve created a new Kotlin test class, but can’t figure out how to actually run it. It doesn’t run when I run all the tests in the project. I can’t seem to create a new JUnit run configuration pointing to the class.
import org.junit.Assert
import org.junit.Test
class CsrfFilterTest
@Test
fun simpleTest() {
Assert.assertEquals(2, 1 + 1)
}
How can I get IDEA to run this? Any pointers?
My pom.xml has the following in it…
Hi Tobin,
I think that IDEA doesn't support top-level test functions; please try to place it in a class (adding "{}").
Anna
Oh man, so stupid. I didn't even think about that since there weren't any compilation errors or anything.
Thanks! | https://intellij-support.jetbrains.com/hc/en-us/community/posts/205964724-Running-Kotlin-JUnit-tests?sort_by=created_at | CC-MAIN-2021-21 | refinedweb | 149 | 74.69 |
How to connect to MySQL in JSP?
How to connect to MySQL in JSP? Please share a code example, with complete project source code.

Thanks

Hi,

You can use standard Java JDBC code to connect to a MySQL database from a JSP page; see the tutorial "Connect JSP with mysql".
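A minimal, self-contained sketch of that JDBC approach (the database name, user, password, and driver class below are placeholders; adjust them for your own setup):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class MySqlConnect {
    public static void main(String[] args) {
        // Placeholder connection details; adjust for your own database.
        String url = "jdbc:mysql://localhost:3306/usermaster";
        try {
            // Load the MySQL JDBC driver (newer drivers use com.mysql.cj.jdbc.Driver).
            Class.forName("com.mysql.jdbc.Driver");
            try (Connection con = DriverManager.getConnection(url, "root", "password");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT 1")) {
                while (rs.next()) {
                    System.out.println("Connected: " + rs.getInt(1));
                }
            }
        } catch (ClassNotFoundException e) {
            System.out.println("MySQL JDBC driver not on classpath");
        } catch (SQLException e) {
            System.out.println("Could not connect: " + e.getMessage());
        }
    }
}
```

In a JSP page the same code typically lives in a scriptlet or, better, in a helper class called from the page.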
AWS Developer Tools Blog
Migrating to Boto3
Boto3, the latest version of the AWS SDK for Python, was released earlier this year. Since its release, we’ve seen more and more customers migrating to the latest major version of Boto. Boto3 provides many significant improvements over Boto:
- Faster availability of API updates and consistency in exposed interfaces
- Collections that provide an iterable interface to a collection of resources, including batch actions that can be used to perform an API operation on an entire collection
- Waiters that make it easier to poll resources for status changes
- Data-driven resource abstractions that provide an object-oriented API while still allowing a rapid release cadence with significantly reduced maintenance overhead
We understand that migrating to a new major version of a project can be a big undertaking. It can take significant developer time to migrate; it requires more testing to ensure the successful migration of your application; and it can often involve wrestling with your dependency graph to ensure everything is compatible.
Using Boto and Boto3 side-by-side
To make the process of migrating to Boto3 easier, we released Boto3 under the
boto3 namespace so that you can use Boto and Boto3 in the same project without conflicts. This allows you to continue to use Boto for legacy applications and Boto3 for new development.
pip install boto pip install boto3
import boto3
import boto.ec2

# For new development, use boto3. In this case, S3
s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)

# Feel free to use boto for legacy code. In this case, EC2
conn = boto.ec2.connect_to_region('us-west-2')
conn.run_instances('ami-image-id')
If your legacy applications or individual application components are currently running without issue, you might not have much motivation to migrate from Boto to Boto3. After all, if it ain’t broke, don’t fix it. However, if you have new applications or applications that need to use newer services and service features, you are strongly encouraged to upgrade to Boto3. Boto3 is the future of Boto. It is where most of our development will be focused.
Anything holding you back?
We want to make Boto3 as good as it can be. Your input and feedback is crucial in helping us decide how to allocate developer time and which features to develop. We’d like to know if there’s something holding you back from migrating or using Boto3 for new application development. Maybe there’s a feature you relied on in Boto that is not present in Boto3. Or perhaps you find something confusing in the documentation or the way a client is used. Please open an issue on the Boto3 issue tracker and let us know. We’ll apply the appropriate issue labels to make it easier to find and +1 existing issues.
Ready to migrate?
Ready to migrate now? Our migration guide covers some of the high-level concepts to keep in mind. And again, feel free to use the Boto3 issue tracker for questions or feature requests.
Simple Git version plugin for setuptools
Simple Git version plugin for setuptools.
Usage
Add setuptools-gitver to setup_requires in the setup.py of your project. Then, after each release, add the .post+gitver suffix to the version string.
For example, in setup.py:
import setuptools

if __name__ == '__main__':
    setuptools.setup(
        setup_requires=['setuptools-gitver'],
        name='example-package',
        version='1.2.3.post+gitver',
    )
This will then generate version numbers like 1.2.3.post0.dev7+ga1b2c3d where 7 is the number of commits since the v1.2.3 tag and a1b2c3d is commit id of the HEAD.
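As a rough illustration (this is not the plugin's actual implementation), a version string like that can be derived from `git describe --tags` output:

```python
def gitver(base: str, describe: str) -> str:
    """Turn `git describe --tags` output such as 'v1.2.3-7-ga1b2c3d'
    into a PEP 440 version such as '1.2.3.post0.dev7+ga1b2c3d'.

    Sketch only: the real plugin obtains this information from Git itself.
    """
    # "v1.2.3-7-ga1b2c3d" -> tag, commits-since-tag, abbreviated commit id
    tag, commits, commit_id = describe.rsplit("-", 2)
    if tag.lstrip("v") != base:
        raise ValueError("describe tag does not match the base version")
    return f"{base}.post0.dev{commits}+{commit_id}"

print(gitver("1.2.3", "v1.2.3-7-ga1b2c3d"))  # 1.2.3.post0.dev7+ga1b2c3d
```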
When creating a release, update the version and remove the .post+gitver suffix. When there is no +gitver suffix, the version won’t be touched by Setuptools Gitver. Also remember to tag the release in Git with git tag -a v1.2.3.
Navigation in a database table through JSP

When the user requests the next record, the application fetches the next record from the database; this is navigation in a database table. Before running this JSP code, first create the database the example uses.
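One common way to implement this kind of next/previous record navigation is to page through the table with a LIMIT/OFFSET query. A small sketch of the offset arithmetic (the table name is illustrative, not from the original tutorial):

```java
public class Pager {
    // Computes the OFFSET for a 1-based page number; pure arithmetic, no DB needed.
    static int offsetFor(int page, int pageSize) {
        return Math.max(0, (page - 1) * pageSize);
    }

    // Builds the paging query; "student" is an illustrative table name.
    static String queryFor(int page, int pageSize) {
        return "SELECT * FROM student LIMIT " + pageSize + " OFFSET " + offsetFor(page, pageSize);
    }

    public static void main(String[] args) {
        System.out.println(queryFor(2, 10)); // SELECT * FROM student LIMIT 10 OFFSET 10
    }
}
```

The JSP page then passes the requested page number as a parameter and renders the rows returned by this query.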
Thanks
Thanks This is my code.Also I need code for adding the information on the grid and the details must be inserted in the database.
Thanks in advance
Thanks for fast reply - Java Beginners
to fetch data from table in the data grid
I am using mysql database package...Thanks for fast reply Thanks for response
I am already use html...://
and this is the database connectivity code
jsf table
:
Display Database data in JSF table
Thanks... as jsp and back end as mysql.
I have a database named as mydatabase.it contains a table named as customer.
customer table has following columns :
customerid(varchar
How to use next and previous button(or href) for database table that is retrieved from MySQL DB using jsp,jstl,javascript
://
follow this code...How to use next and previous button(or href) for database table... to display previous 10records.
Database Query like this:
Select * from table
how to create database and table using jsp
how to create database and table using jsp hi frnds....,
i want to create database and table in mysql using jsp.... i have an registration form(name... table using jsp code... the table name should be the name of the person
Thanks - Java Beginners
Thanks Hi,
Thanks ur sending url is correct..And fullfill requirement..
I want this..
I have two master table and form vendor... form vend_name take the vendor_master table and in the product_master table
Uploading an image into the table - JSP-Servlet
into the table in java Hi friend,
Code to help in solving the problem... number and database name. Here machine name id localhost
and database name...";
/*declare a resultSet that works as a table resulted by execute a specified
thanks - Development process
thanks thanks for sending code for connecting jsp with mysql.
I have completed j2se(servlet,jsp and struts). I didn't get job then i have learnt.... please help me.
thanks in advance
Thanks - Java Beginners
Thanks Hi,
thanks
This is good ok this is write code but i... either same page or other page.
once again thanks
hai... data in the table corresponding to the typed name...
that code is for displayin
jsp with database.. - Development process
jsp with database.. Hello i need code for.....
I have a car... of that brand should be retrieved from database.i have created a table in database...
Thanks For the above code, create database table named
database through jsp
are there in the database?
thanks
Here is an example of jsp which retrieves data from database and display it in html table.
<%@ page import="java.sql.*" %>
<...database through jsp sir actually i want to retrieve the data from
UINavigationController hide navigation bar
UINavigationController hide navigation bar HI,
How to hide the navigation bar of UINavigationController? I have to hide navigation bar of UINavigationController on the click of a button.
Thanks
Hi,
Use following
problem on jsp, inserting data into table(mysql).
problem on jsp, inserting data into table(mysql). hello friends, I... a html page into a table.And the table name also is given by the user.My database... me... Thanks in advance
Problem in Jsp and database - Development process
to the database itself .
Thanks in advance. Hi Friend,
You can use... the table using jsp
Address...Problem in Jsp and database Hi, How can I reterive values
custom navigation bar in iphone
custom navigation bar in iphone How can i create a custom navigation bar in iPhone application? I wanted to add UIBarButton items as well as Search... it programmatically or if i can do it in Interface builder.
Thanks
Insert text into database table using JSP & jQuery
Insert text into database table using JSP & jQuery
In this tutorial , the text is inserted into database table using JSP &
jQuery. In the below...
for connectivity and fetching data from database table.
insert.jsp
<%@ page language
dynamic retrivel of data from mysql database in table format at jsp
the data from database and display it as table format in jsp... For example, i have...dynamic retrivel of data from mysql database in table format at jsp ... and also details of each member should be displayed in the table format in jsp
how to display values from database into table using jsp
how to display values from database into table using jsp I want to display values from database into table based on condition in query, how to display that? For example i have some number of books in database but i want
JSP Financial Year Table
JSP Financial Year Table Im trying to design a financial year table...wherein the user enters the data in the table and the same is updated in backend.i have successfully desinged the page but i have certain doubts in jsp
Time table generation
:
Generate TimeTable using JSP
Thanks for your reply
but still i have... informations we have to provide to generate time table?
pls make me clear.
Thanks... in database depends upon the number of courses we have in time table. Accordingly, we
database
database i am created one table in mysql database with one... you can register now.
thanks pls reply soon
The given code accepts... or not. If it is already exist in database, then show a message 'Already exists
creating instance of table in jsp
creating instance of table in jsp i face senario look kie as follows;
1)i write a code in jsp to retrieve the data from database.
2)the out put file is obviously a jsp page and shows the output in table manner.
3) now i want
WANNA IMPORT RAINBOW TABLE IN MYSQL DATABASE
to import rainbow table in mysql database
thanks...WANNA IMPORT RAINBOW TABLE IN MYSQL DATABASE hello,
i have a website and i have rainbow table uploaded on my server. this is multipart rainbow
jsp code for dynamic time table generation - JSP-Servlet
jsp code for dynamic time table generation hi
I am doing my academic project college automation.
I want to generate time table dynamically...
For the above code, we have created three database tables
1)btech:
CREATE TABLE
Dynamic retrieval od data from database and display it in the table at jsp
Dynamic retrieval od data from database and display it in the table at jsp ... the details of 20 members in a table format at jsp page by dynamically retrieving...; Here is a jsp code that retrieves the data from the database and display
Create a Table in Mysql database through SQL Query in JSP
Create a Table in Mysql database through
SQL Query in JSP...;
This is detailed java code to connect a jsp page to
mysql database and create a table of given... the
table creation.
1. welcome_to_database_query.jsp
2.create_table.jsp
1
java with jsp and dynamic database retrival for bar charts
java with jsp and dynamic database retrival for bar charts Hi,
I... dynamically for the table values logintime and userid from database.. can u please... database..
Thanks in advance
Upadate Database using updatable fields - JSP-Servlet
Upadate Database using updatable fields Hi,
Thanks for previous answers.They were of Great Help !!!
I want to create a jsp in which data... submits, the database table should get updated with these new entries he has made
table
input from oracle table(my database table..)
This is a very important table of my
Retrieving value from multiple table in database
Retrieving value from multiple table in database Hi fnds, I want to maintain the financial database of 20 users for 1 year and update the details in jsp page.. so i have decided to maintain the details based on month (because
How to know the selected row from table - JSP-Interview Questions
retriving data from database and i place that data into html table.in that table i place one table data as hyperlink.
now the problem here is how can i know the selected row in that table. Hi Friend,
Please clarify your
Re:Need Help for Editable Display table - JSP-Servlet
code, we have used following database table:
CREATE TABLE `student... in jsp to display editable display tag.
I able to show the datagrid in jsp... and edit it in the same grid..
Thanks in advance
Eswaramoorthy Hi
Inserting data on a database in servlet - JSP-Servlet
to a databse by a servlet.I used the example in "Inserting Data In Database table... of a table,that is none of the data which i tried to enter into database by using... the database config. you have used :
Database name :
Database Table Name :
Database
applet connected to table in MS Access database
applet connected to table in MS Access database i have connected my java code with the MS access database and this is my code, can anyone tell me how to show the table in an applet...pls
import java.sql.
JSP
JSP How to retrieve the dynamic html table content based on id and store it into mysql database?
How to export the data from dynamic html table content to excel?Thanks in Advance.. Plz help me its urgent
Table
Table Why u dont understand sir?? I want to make a table program which generate a table on showMessageDialog. I have learnt these methods until now... methods to be used.
Write a table program which use only and only these above methods
display of colors in a table - JSP-Servlet
display of colors in a table Hi,
If i have a table of 4 by 4 boxes... color and another column of another color?
Thanks! Hi Friend,
Try the following jsp code:
table.jsp:
Column1,Row1
Table
Table How i create table on showMessageDialog using JOptionpane and Integer.parseInt.
No other method to use. Pl make a program which generate
5X1...,buffer.toString());
}
}
Thanks
Storing records of a file inside database table in java
Storing records of a file inside database table in java Here is my... the records inside the database except headings (Here sid,sname,age are headings in student.csv file).Please help me in resolving this problem.
Thanks &
how to take input table name in jsp to create table in mysql?
how to take input table name in jsp to create table in mysql? how to take input table name in jsp to create table in mysql?
Hello Friend...="post">
Enter Table Name:<input type="text" name="table"><input type
problem in database
problem in database thanks for web site.
I want change this code to insert data into PostgreSql database using jsp,servlets.
but i getting...;
</web-app>
//Database name: postgres
CREATE TABLE sample
Updating Ms Access Database using jsp - JSP-Servlet
Updating Ms Access Database using jsp Hi
I am new to jsp and I am trying to update a record in an access database. I only want to update part... a table product(prod_id,prod_name,quantity,price) and try the following code
how to connect to MS access database in JSP?
how to connect to MS access database in JSP? how to connect to MS access database in JSP? Any seetings/drivers need to be set or installed before it? Please tell what needs to be done after creating a table with an example
How to get table row contents into next jsp page
How to get table row contents into next jsp page Hi,
I have a 30... contents in the next jsp page and display them
Thanks
Vikas
The given code retrieves data from the database and displays it in an HTML table. At each row
JSP Database Example
This example shows you how to develop JSP that connects to the database and
retrieves the data from database. The retrieved data is displayed on the
browser.
Read Example
JSP Database
Example
Thanks
i can not connect to database in servlet - JSP-Servlet
i can not connect to database in servlet Hi
I am following... to a database by a servlet. I used the example in "Inserting Data In Database table using Statement", but typing the following url(
How to Dragging and dropping HTML table row to another position(In Jsp) and save the new position (values) also in database(MySql)?
How to Dragging and dropping HTML table row to another position(In Jsp... have one Html table in jsp page and i am iterating values (in columns of html table)from Database, Now i am Dragging and dropping one HTML table row to another
Dragging and dropping HTML table row to another position(In Jsp) and save the new position (values) also in database
Dragging and dropping HTML table row to another position(In Jsp) and save... table in jsp page and i am iterating values (in columns of html table)from Database, Now i am Dragging and dropping one HTML table row to another position.I and Database access
JSP and Database access Hi,
Please help me with the following program. I am not able to update all the pa column values in my database.
csea.jsp...="post" action="csea.jsp">
<table border="1">
<tr><th>Reg
jsp
in hrid table to data base and at the same time to the next form . i will give u...;" import = "java.io.*" errorPage = "" %>
<jsp:useBean id = "formHandler...)
{
var table = document.getElementById(tbl_actionrequired);
var
Jsp Code to store date in database - JSP-Servlet
Jsp Code to store date in database
Hi, Can u give me jsp code to store current date in to database.
Thanks Prakash
JSP
In the above code, we have taken the database table student(rollNo, name, ...). How can the data be displayed in a ListView or GridView within JSP?
Hi Friend,
Try the following.
JSP textbox autopopulation on basis of SQL table values Hi,
I need... table is created in MySQL DB:
Problem type Status Responsible
LEGAL CONTROL NEW ABC
LEGAL Dept PENDING PQR
There are 2 list box on JSP , one
storing details in database on clicking submit button - JSP-Servlet
storing details in database on clicking submit button I am using JSP...:
JSP Page
ID:
Name:
Thanks... database on clicking submit button. I am unable to do this.Can u tell me how to code
insert data into database
and studentmaster is the database table name.
i am using same details.
Now give...insert data into database hi,thanks for reply
just i am doing student information project,frontend is jsp-servlet and backend is msaccess2007. i.
How to display data from MySQL database table in a JSP file by submitting a value in a combobox.
How to display data from MySQL database table in a JSP file by submitting a value in a combobox. I have the same problem. Please help me.
I have MySQL DataBase/DB Name:lokesh;
Table Name:TR_list;
columns:nodename,packageno,TR Delete Record From Table Using MySQL
In this tutorial you will learn how to write a JSP page for deleting a record from a
database table. To accomplish this task we
Connecting to Database from a hyperlink in JSP - JSP-Servlet
Connecting to Database from a hyperlink in JSP How can I connect to database by clicking on a hyperlink in a JSP Page.Can you please give me sample...://
Thanks.
http
Facing Problem to insert Multiple Array values in database - JSP-Servlet
Hi friend,
I am a beginner in JSP and am creating a shopping cart project, but I am facing a problem while inserting the data into the database.
I am using the MS Access database.
My database structure is like this: I am using two tables, CustomerDetails
display date to jsp from database
display date to jsp from database display date to jsp from database... not available in database field than show in green color and clickable.
NOTE :- Date...-clickable
TELL ME THIS TYPE OF CALENDER
arvind
thanks in ADVANCE
How To Insert A New Record to MS Access table database in GUI
How To Insert A New Record to MS Access table database in GUI Hello... that involves inserting a record into a 6-column table in my MS Access database table. I'm... events. I know I'm missing something, I just couldn't figure out where. Thanks
Accessing database from JSP
;
This will create a table "books_details" in database "books"
JSP Code... Accessing database from JSP
... take a example of Books database. This database
contains a table named books | http://www.roseindia.net/tutorialhelp/comment/90460 | CC-MAIN-2013-48 | refinedweb | 2,781 | 64.81 |
27 May 2011 11:11 [Source: ICIS news]
SINGAPORE (ICIS)--Spot polypropylene (PP) prices fell by $60-70/tonne (€42.60-49.70/tonne) in the Pakistani market this week.
On Friday, spot raffia prices were discussed at $1,640-1,660/tonne CFR (cost & freight)
Weak demand and softer prices in the key
Early this week, a major producer from the Gulf Cooperation Council (GCC) offered its June cargoes at $1,670/tonne CFR Karachi for raffia and $1,680/tonne CFR Karachi for injection, LC 90 days.
However, the producer reduced its offer to $1,650/tonne CFR Karachi for raffia, LC (letter of credit) 90 days, after a second GCC-based maker announced its offers for June shipments at $1,645/tonne
Two other producers from the GCC offered their June parcels at $1,660-1,670/tonne CFR Karachi for raffia, LC 90 days. Players are expecting them to make reductions next week in the presence of other lower offers.
A fifth maker from the GCC offered its June cargoes early this week at $1,700/tonne CFR Karachi for raffia, LC 90 days, but it planned to make a downward adjustment in line with the remaining GCC offers.
An Indian producer nominated offers for its June raffia lots at $1,690/tonne DAP (delivered at place) Attari, but this price level was firmly rejected by converters.
Despite several announcements of offers for June shipments, converters do not have urgency to settle business, as they are expecting prices to decrease further next week.
Exporters to
($1 = €0.71) | http://www.icis.com/Articles/2011/05/27/9463888/pakistans-spot-pp-prices-slump-in-sluggish-market.html | CC-MAIN-2015-22 | refinedweb | 260 | 54.15 |
Investors in Corning Inc (Symbol: GLW) saw new options begin trading today, for the October 25th expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the GLW options chain for the new October 25th contracts and identified one put and one call contract of particular interest.
The put contract at the $27.50 strike price has a current bid of 68 cents. If an investor was to sell-to-open that put contract, they are committing to purchase the stock at $27.50, but will also collect the premium, putting the cost basis of the shares at $26.82 (before broker commissions). To an investor already interested in purchasing shares of GLW, that could represent an attractive alternative to paying $28.47/share today.
Collecting that premium would represent a 2.47% return on the cash commitment, or 18.42% annualized — at Stock Options Channel we call this the YieldBoost.
Below is a chart showing the trailing twelve month trading history for Corning Inc, and highlighting in green where the $27.50 strike is located relative to that history:
Turning to the calls side of the option chain, the call contract at the $29.00 strike price has a current bid of 89 cents. If an investor was to purchase shares of GLW stock at the current price level of $28.47/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $29.00. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 4.99% if the stock gets called away at the October 25th expiration (before broker commissions). Of course, a lot of upside could potentially be left on the table if GLW shares really soar, which is why looking at the trailing twelve month trading history for Corning Inc, as well as studying the business fundamentals becomes important. Below is a chart showing GLW's trailing twelve month trading history, with the $29.00 strike highlighted in red:
Considering the fact that the call seller will also collect the premium, selling the covered call would drive a 3.13% boost of extra return to the investor, or 23.29% annualized, which we refer to as the YieldBoost.
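For readers who want to check these figures, here is a small Python sketch of the arithmetic behind the YieldBoost numbers quoted above. The 49-day horizon to the October 25th expiration is an assumption on my part (the article does not state the day count), chosen because it reproduces the annualized figures exactly:

```python
# Sketch of the YieldBoost arithmetic. days_to_expiry=49 is an assumption,
# not a number stated in the article.

def yield_boost(premium, commitment, days_to_expiry):
    """Premium collected as a % of cash committed, plus its annualized rate."""
    period = premium / commitment * 100
    annualized = period * 365 / days_to_expiry
    return round(period, 2), round(annualized, 2)

# Put side: sell the $27.50 put for $0.68; cost basis if assigned.
cost_basis = round(27.50 - 0.68, 2)                # 26.82
put_period, put_annual = yield_boost(0.68, 27.50, 49)

# Call side: buy stock at $28.47, sell the $29.00 call for $0.89.
call_period, call_annual = yield_boost(0.89, 28.47, 49)
total_if_called = round((29.00 - 28.47 + 0.89) / 28.47 * 100, 2)

print(cost_basis, put_period, put_annual)          # 26.82 2.47 18.42
print(call_period, call_annual, total_if_called)   # 3.13 23.29 4.99
```

The outputs match every percentage the article quotes, which is why 49 days is a reasonable guess for the horizon.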
The implied volatility in the put contract example is 34%, while the implied volatility in the call contract example is 38%.
Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 251 trading day closing values as well as today's price of $28.47) to be 30%.
CouchDB and Pylons: User Registration and Login
In the previous tutorial, we learned how to get CouchDB and Pylons up and running, as well as create a simple page counter. Now we are going to implement a simple user authentication system. This tutorial will teach you how to use formencode to validate forms and CouchDB to store our user data.
Let's start by creating a new pylons project and some controllers.
$ paster create -t pylons userdemo
$ cd userdemo
$ paster controller main
$ paster controller auth
Also, delete public/index.html.
For our controller main, we are going to add one action called index. This will be the main page for the site and will only be accessible for logged in users. If a user is not logged in, they will be redirected to the login page. Let's add some routes for the main page, login, logout, and registration. Open up config/routing.py.
map.connect('/', controller='main', action='index')
map.connect('/auth/login', controller='auth', action='login',
            conditions=dict(method=['GET']))
map.connect('/auth/login', controller='auth', action='login_post',
            conditions=dict(method=['POST']))
map.connect('/auth/logout', controller='auth', action='logout')
map.connect('/auth/register', controller='auth', action='register',
            conditions=dict(method=['GET']))
map.connect('/auth/register', controller='auth', action='register_post',
            conditions=dict(method=['POST']))
For checking to see if a user is logged in, we will store a user_id in the session. Let's make a decorator that we can add to our index action that will check to see if a user is logged in. If the user isn't logged in, it will redirect to the login page. Your controllers/main.py should look like this:
from decorator import decorator

def require_login(func, *args, **kwargs):
    """ Checks to see if user_id is in session """
    if not 'user_id' in session:
        redirect_to('/auth/login')
    return func(*args, **kwargs)
require_login = decorator(require_login)

class MainController(BaseController):
    @require_login
    def index(self):
        return 'You are logged in! Click <a href="/auth/logout">here</a> to logout.'
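To see the decorator mechanism in isolation, here is a framework-free Python sketch of the same idea. The Redirect exception and the dict-based session are illustrative stand-ins, not Pylons APIs:

```python
# Framework-free model of require_login: check a session for user_id and
# "redirect" (here: raise) when it is missing. Illustrative names only.

class Redirect(Exception):
    def __init__(self, location):
        self.location = location

def require_login(session):
    def wrap(func):
        def inner(*args, **kwargs):
            if 'user_id' not in session:
                raise Redirect('/auth/login')
            return func(*args, **kwargs)
        return inner
    return wrap

session = {}

@require_login(session)
def index():
    return 'You are logged in!'

try:
    index()
except Redirect as r:
    print(r.location)   # /auth/login

session['user_id'] = 42
print(index())          # You are logged in!
```

In Pylons the session comes from the framework rather than being passed in, but the control flow is the same: unauthenticated requests never reach the action body.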
If you start the server now and go to /, you will be redirected to /auth/login. Good. Let's get into CouchDB now...
Open up Futon, CouchDB's web admin interface, and create a new database called userdemo.
Now we are going to define our User schema in model/__init__.py. A schema describes a certain type of document, and in this case, it will be a User document. This example is very simple, so we only need username, password, and salt. Note the other field, type. This will specify that the document is for users, and it will be automatically set with the default parameter when we store a document. We are also going to create a simple helper function that will return an instance of our database (from couchdb-python).
from couchdb import Server
from couchdb import schema

def get_db():
    server = Server('http://localhost:5984/')
    return server['userdemo']

class User(schema.Document):
    """ Simple user document """
    username = schema.TextField()
    password = schema.TextField()
    salt = schema.TextField()
    type = schema.TextField(default='user')
Next, we are going to create a simple template for auth/login. This will prompt the user to login, or register if the user does not have an account. (I assume you are using mako for your template engine).
templates/login.mak:
<html>
<head><title>Login</title></head>
<body>
  <h1>Login</h1>
  <form action="/auth/login" method="post">
    <table>
      <tr><th>Username:</th><td>${h.text('username')}</td></tr>
      <tr><th>Password:</th><td>${h.password('password')}</td></tr>
    </table>
    <input type="submit" value="Login" />
  </form>
  % if c.invalid_user:
  <p>*** An invalid username or password was entered.</p>
  % endif
  <p>Don't have an account? Click <a href="/auth/register">here</a> to register.</p>
</body>
</html>
Notice that we used h.text() and h.password(). These helpers create our input boxes and will also display an error when we get into form validation. Make sure to import those functions in lib/helpers.py.
from webhelpers.html.tags import text, password
Now that we've created or login template, let's implement our action auth/login in controllers/auth.py.
class AuthController(BaseController):
    def login(self):
        return render('login.mak')
Okay, let's run the server and try to go to /auth/login. We are redirected to our new login page.
$ paster serve --reload development.ini
Okay, now let's implement our login action. This is under controllers/auth.py, and the action is login_post. The first thing we will do is add a formencode schema for our login form. This will make sure the username and/or password is not empty, and it will display an error accordingly. Here is what our new auth.py file might look like.
import formencode

class LoginForm(formencode.Schema):
    username = formencode.validators.String(not_empty=True)
    password = formencode.validators.String(not_empty=True)

class AuthController(BaseController):
    def login(self):
        return render('login.mak')

    def login_post(self):
        try:
            form_result = LoginForm().to_python(request.POST)
            try:
                user = authenticate_user(form_result['username'],
                                         form_result['password'])
                session['user_id'] = user.id
                session.save()
                redirect_to('/')
            except InvalidUser:
                c.invalid_user = True
                return render('login.mak')
        except formencode.Invalid, err:
            html = render('login.mak')
            return formencode.htmlfill.render(html, errors=err.error_dict)
This login_post action first checks to see if the form is valid, if not, it will display the login form with the appropriate errors. If the form is valid, we attempt to call the function authenticate_user, which will return a valid User document if the login is successful, or raise an Exception if the login information is invalid. If the login is valid, we save the user.id in our session. If not, we set the invalid_user context variable and return the login page.
Okay, so we have our login_post action defined, but we don't have an authenticate_user method yet. Let's implement that. But before we do that, we need to know a little bit more about CouchDB views. A view is a way to query data from our database. A view is defined by implementing map and reduce functions.
Let's start by adding a user view that will let us query user data. To create the view, open Futon and select the userdemo database. Go to Create Document. We are going to create a design document, which is where the views are defined. The document ID should be _design/user. The view will then be accessible via its URL. To define our view functions, open the newly created _design/user document and add a field called views. For now, just put {} for the value. Hit save document.
Each view function defines a map and optionally a reduce function. This lets us limit and control what documents our view will return. For now, we just want a simple view that allows us to query the database and get a user by the username. Remember that CouchDB returns data with a key/value. The key we want to return is the username.
Let's talk about map for a second. A map function is passed a CouchDB document, and then emits, or adds, key/value pairs. CouchDB uses javascript as the default view server. We will call our view function by_username, and it will return the username as the key and the user document as the value. Open your _design/user document and put this in for the views field.
{
  "by_username": {
    "map": "function(doc) { if(doc.type == 'user') emit(doc.username, doc); }"
  }
}
CouchDB should look like this:
Notice that we check doc.type. This is because all documents are stored under one namespace, so we need a way to differentiate between different document types. We do this by setting a type field for every user, with the value "user". You can access this view directly by requesting its URL from the database.
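To make the map semantics concrete, here is a pure-Python sketch of what CouchDB does with the by_username map function. The helper name and the sample documents are illustrative only, not part of the tutorial's code:

```python
# Illustrative only: mimics CouchDB applying the by_username map function
# to every document in the database, emitting (key, value) rows.

def map_by_username(doc):
    """Python translation of the JavaScript map function in _design/user."""
    if doc.get('type') == 'user':
        yield (doc['username'], doc)

docs = [
    {'_id': '1', 'type': 'user', 'username': 'alice',
     'password': 'hash', 'salt': 'abc'},
    {'_id': '2', 'type': 'page', 'url': '/index'},
]

rows = [row for doc in docs for row in map_by_username(doc)]
print([key for key, _ in rows])   # ['alice'] -- the non-user doc is skipped
```

Querying the view with key='alice' then simply selects the rows whose emitted key matches, which is exactly what authenticate_user relies on below.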
Alright, now that we have our view defined, let's implement the authenticate_user function. We also create a custom exception class that we use for an invalid user. We use hashlib to generate a sha256 hash of the user's password and a random salt. gen_hash_password is used later for our registration functions, for creating a new salt and getting a hash. Add this above your controller in controllers/auth.py.
from userdemo.model import get_db, User
import hashlib

class InvalidUser(Exception):
    pass

def hash_password(password, salt):
    m = hashlib.sha256()
    m.update(password)
    m.update(salt)
    return m.hexdigest()

def gen_hash_password(password):
    import random
    letters = 'abcdefghijklmnopqrstuvwxyz0123456789'
    p = ''
    random.seed()
    for x in range(32):
        p += letters[random.randint(0, len(letters)-1)]
    return hash_password(password, p), p

def authenticate_user(username, password):
    result = User.view(get_db(), '_view/user/by_username', key=username)
    if len(result) == 0:
        raise InvalidUser('bad username')
    user = result.__iter__().next()
    # check password
    if not hash_password(password, user.salt) == user.password:
        raise InvalidUser('bad password')
    return user
Okay, now our site can login a user. Let's get into registration. Create the following functions for our registration action in controllers/auth.py.
def register(self):
    return render('register.mak')

def register_post(self):
    try:
        form_result = RegisterForm().to_python(request.POST)
        user = User()
        user.username = form_result['username']
        pwd, salt = gen_hash_password(form_result['password'])
        user.password = pwd
        user.salt = salt
        user.store(get_db())
        return 'You are registered. Click <a href="/auth/login">here</a> to login.'
    except formencode.Invalid, err:
        html = render('register.mak')
        return formencode.htmlfill.render(html, errors=err.error_dict)
These functions are similar to the login ones, except here we store a new user into the database, using the User schema document class that we defined in model/__init__.py. We also need to implement the RegisterForm, along with a custom validator which checks to see if a username is already taken.
def user_exists(username):
    result = User.view(get_db(), '_view/user/by_username', key=username)
    if len(result) == 0:
        return False
    return True

class UsernameValidator(formencode.validators.String):
    def validate_python(self, value, state):
        if user_exists(value):
            raise formencode.Invalid('Username already taken', value, state)

class RegisterForm(formencode.Schema):
    username = UsernameValidator(not_empty=True)
    password = formencode.validators.String(not_empty=True)
And that's it! You can now register an account, login, and view the protected page(with the require_login decorator). Now you just need to add require_login to any function that is only for logged in users. In the next tutorial, I will go into some more advanced CouchDB topics and some cool map/reduce views.
You can download the pylons userdemo project here.
Tweet | http://www.chrismoos.com/2009/02/21/couchdb-and-pylons-user-registration-and-login/ | CC-MAIN-2014-15 | refinedweb | 1,715 | 52.87 |
The objective of this tutorial is to show how to deauthenticate all the stations connected to a network hosted by the ESP32, operating as soft AP. The tests from this tutorial were performed using an ESP32 board from DFRobot.
Introduction
The objective of this tutorial is to show how to deauthenticate all the stations connected to a network hosted by the ESP32, operating as soft AP. We will be using the Arduino core.
To illustrate this, we will be printing periodically the number of stations connected to the network. Then, if we receive any content on the serial port, we will call the deauth function, so the number of connections should become 0.
The tests from this tutorial were performed using an ESP32 board from DFRobot.
The code
We will start our code by the library includes. We will need the WiFi.h lib, to be able to setup the device to operate in soft AP mode, and the esp_wifi.h, which will make available the function we need to deauthenticate all the stations.
#include <WiFi.h> #include "esp_wifi.h"
In the Arduino setup function, we will start by opening a serial connection, so we can output the results of our program. We will also need the serial connection to read any incoming bytes, so we know when to perform the deauthentication.
After that, we will setup the network that will be hosted by the ESP32. We do this with a call to the softAP method on the WiFi extern variable, passing as input the name we want to assign to the network.
The full setup function can be seen below.
void setup() {
  Serial.begin(115200);
  WiFi.softAP("MyESP32AP");
}
We will write the rest of the code in the Arduino loop function. We will first check if there are any incoming bytes on the serial port, with a call to the read method on the Serial object.
This method will return the next byte available (if there is incoming data) or -1 if there is no data to read.
In our case, for simplicity, I’m assuming that as long as any content is sent to the serial port, then we want to deauth all stations. So, if the value returned by the read method is different from -1, them we will call the deauth function.
To deauth all the stations, we simply need to call the esp_wifi_deauth_sta function, passing as input the value 0.
Note that if we want to deauthenticate a particular station rather than all of them, we can alternatively pass as input of the function the id of that station. Nonetheless, we won’t be covering that scenario here.
if(Serial.read() != -1){
  esp_wifi_deauth_sta(0);
}
To finalize the loop function, we will print the total number of stations connected to the network. That way, we can confirm if our deauth had effect. The procedure to print the number of connected stations can be checked on this previous tutorial.
In short, we simply need to call the softAPgetStationNum method on the WiFi extern variable.
Serial.print("Stations connected: ");
Serial.println(WiFi.softAPgetStationNum());
The complete loop code can be seen below. We have added a 5 seconds delay between each iteration of the loop.
void loop() {
  if(Serial.read() != -1){
    esp_wifi_deauth_sta(0);
  }
  Serial.print("Stations connected: ");
  Serial.println(WiFi.softAPgetStationNum());
  delay(5000);
}
The final code can be seen below.
#include <WiFi.h>
#include "esp_wifi.h"

void setup() {
  Serial.begin(115200);
  WiFi.softAP("MyESP32AP");
}

void loop() {
  if(Serial.read() != -1){
    esp_wifi_deauth_sta(0);
  }
  Serial.print("Stations connected: ");
  Serial.println(WiFi.softAPgetStationNum());
  delay(5000);
}
Testing the code
To test the code, simply compile it and upload it to your ESP32, using the Arduino IDE. When the procedure finishes, open the IDE serial monitor and wait for the network setup to complete.
If you check the Arduino IDE serial monitor, it should be printing 0 stations connected to the network.
Then, connect one or more devices to the network. The number of stations printed to the serial monitor should increase.
Then, write a character and send it to the ESP32 using the Arduino IDE serial monitor. The next time the number of stations is printed, it should be 0, as all the previously connected ones should have been deauthenticated. This is shown in figure 1.
Note that, depending on the device that you had connected to the network, it might attempt to reconnect again after a while. Our code only contemplates deauthenticating all the devices at a given moment; it doesn't prevent them from rejoining the network later.
So, you might see the number of connected stations going to 0 and then increasing again without explicitly connecting the device again to the network. The behavior is normal.
Update from Friday 9/23: The SafeInt developers have already uploaded a new version that fixes the problems described in this post. Nice!
I have a minor obsession with undefined behaviors in C and C++. Lately I was tracking down some integer overflows in Firefox — of which there are quite a few — and some of them seemed to originate in the well-known SafeInt library that it uses to avoid performing unsafe integer operations. Next, I poked around in the latest version of SafeInt and found that while executing its (quite good) test suite there are 43 different places in the code where an undefined integer operation is performed. A substantial number of them stem from code like this:
bool negative = false;
if (a < 0) {
  a = -a;
  negative = true;
}
... assume a is positive ...
if (negative) {
  return -result;
} else {
  return result;
}
To see a real example of this, take a look at the code starting at line 2100 of SafeInt3.hpp. The problem here, of course, is that for the most common choices of implementation-defined behaviors in C++, negating INT_MIN is an undefined behavior.
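To see why two's complement negation maps INT_MIN to itself, here is a small illustrative Python model of 32-bit wraparound (not part of SafeInt):

```python
# Models two's complement negation of a 32-bit signed integer. On a C
# implementation that wraps, -INT_MIN comes back as INT_MIN, so the
# "a = -a" trick in the SafeInt pattern leaves a negative where the
# rest of the code assumes a positive value.

INT_MIN = -2**31
INT_MAX = 2**31 - 1

def neg32(x):
    """Two's complement negation with 32-bit wraparound."""
    r = (-x) & 0xFFFFFFFF            # negate, then keep the low 32 bits
    return r - 2**32 if r >= 2**31 else r

print(neg32(5))         # -5
print(neg32(INT_MIN))   # -2147483648 -- INT_MIN negates to itself
```

This is only what happens under the common wrapping behavior; in C and C++ the operation itself remains undefined, which is the whole point of the post.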
Now we have to ask a couple of questions. First, does this code operate properly if the compiler happens to implement -INT_MIN using 2’s complement arithmetic? I’m not sure about all of the overflows in SafeInt, but I believe this function actually does work in that case. The second question is, do compilers (despite not being required to do so) give 2’s complement semantics to -INT_MIN? Every compiler I tried has this behavior when optimizations are disabled. On the other hand, not all compilers have this behavior when optimizations are turned on. A simple test you can do is to compile this function with maximum optimization:
void bar (void);

void foo (int x) {
  if (x < 0)
    x = -x;
  if (x < 0)
    bar();
}
If the resulting assembly code contains no call to bar(), then the compiler has (correctly) observed that every path through this function either does not call bar() or else relies on undefined behavior. Once the compiler sees this, it is free to eliminate the call to bar() as dead code. Most of the compilers I tried — even ones that are known to exploit other kinds of integer undefined behavior — don’t perform this optimization. However, here’s what I get from a recent GCC snapshot:
[regehr@gamow ~]$ current-gcc -c -O2 overflow2.c
[regehr@gamow ~]$ objdump -d overflow2.o

0000000000000000 <foo>:
   0:	f3 c3	repz retq
Now, does this same optimization ever fire when compiling SafeInt code, causing it to return a wrong result? Here’s a bit of test code:
#include <cstdio>
#include <climits>
#include "SafeInt3.hpp"

void test (__int32 a, __int64 b) {
  __int32 ret;
  bool res = SafeMultiply (a, b, ret);
  if (res) {
    printf ("%d * %lld = %d\n", a, b, ret);
  } else {
    printf ("%d * %lld = INVALID\n", a, b);
  }
}

int main (void) {
  test (INT_MIN, -2);
  test (INT_MIN, -1);
  test (INT_MIN, 0);
  test (INT_MIN, 1);
  test (INT_MIN, 2);
  return 0;
}
Next we compile it with the recent g++ at a couple of different optimization levels and run the resulting executables:
[regehr@gamow safeint]$ current-g++ -O1 -Wall safeint_test.cpp
[regehr@gamow safeint]$ ./a.out
-2147483648 * -2 = INVALID
-2147483648 * -1 = INVALID
-2147483648 * 0 = 0
-2147483648 * 1 = -2147483648
-2147483648 * 2 = INVALID
[regehr@gamow safeint]$ current-g++ -O2 -Wall safeint_test.cpp
[regehr@gamow safeint]$ ./a.out
-2147483648 * -2 = INVALID
-2147483648 * -1 = INVALID
-2147483648 * 0 = 0
-2147483648 * 1 = INVALID
-2147483648 * 2 = INVALID
The first set of results is correct. The second set is wrong for the INT_MIN * 1 case, which should not overflow. Basically, at -O2 and higher gcc and g++ turn on optimization passes that try to generate better code by taking integer undefined behaviors into account. Let’s be clear: this is not a compiler bug; g++ is simply exploiting a bit of leeway given to it by the C++ standards.
What can we take away from this example?
- It’s a little ironic that the SafeInt library (a widely used piece of software, and not a new one) is itself performing operations with undefined behavior. This is not the only safe math library I’ve seen that does this — it is simply very hard to avoid running afoul of C/C++’s integer rules, particularly without good tool support.
- It’s impressive that G++ was able to exploit this undefined behavior. GCC is getting to be a very strongly optimizing compiler and also SafeInt was carefully designed to not get in the way of compiler optimizations.
If you have security-critical code that uses SafeInt to manipulate untrusted data, should you be worried? Hard to say. I was able to get SafeInt to malfunction, but only in a conservative direction (rejecting a valid operation as invalid, instead of the reverse) and only using a recent G++ snapshot (note that I didn’t try a lot of compilers — there could easily be others that do the same thing). Also, there are 42 more integer overflow sites that I didn’t look at in detail. One way to be safe would be to use GCC’s -fwrapv option, which forces 2’s complement semantics for signed integer overflows. Clang also supports this option, but most other compilers don’t have an equivalent one. Also unfortunately, SafeInt is structured as a header file instead of a true library, so it’s not like just this one file can be recompiled separately.
This issue has been reported here.
Update: I checked CERT’s IntegerLib (download here) and it is seriously broken. Its own test suite changes behavior across -O0 … -O3 for each of Clang, GCC, and Intel CC. The undefined behaviors are:
UNDEFINED at <add.c, (24:12)> : Op: +, Reason : Signed Addition Overflow
UNDEFINED at <add.c, (24:28)> : Op: +, Reason : Signed Addition Overflow
UNDEFINED at <add.c, (29:12)> : Op: +, Reason : Signed Addition Overflow
UNDEFINED at <add.c, (43:14)> : Op: +, Reason : Signed Addition Overflow
UNDEFINED at <add.c, (43:30)> : Op: +, Reason : Signed Addition Overflow
UNDEFINED at <add.c, (47:14)> : Op: +, Reason : Signed Addition Overflow
UNDEFINED at <add.c, (61:12)> : Op: +, Reason : Signed Addition Overflow
UNDEFINED at <add.c, (61:28)> : Op: +, Reason : Signed Addition Overflow
UNDEFINED at <add.c, (65:15)> : Op: +, Reason : Signed Addition Overflow
UNDEFINED at <mult.c, (52:13)> : Op: *, Reason : Signed Multiplication Overflow
UNDEFINED at <mult.c, (69:47)> : Op: *, Reason : Signed Multiplication Overflow
UNDEFINED at <sub.c, (24:23)> : Op: -, Reason : Signed Subtraction Overflow
UNDEFINED at <sub.c, (28:12)> : Op: -, Reason : Signed Subtraction Overflow
UNDEFINED at <sub.c, (42:23)> : Op: -, Reason : Signed Subtraction Overflow
UNDEFINED at <sub.c, (46:12)> : Op: -, Reason : Signed Subtraction Overflow
UNDEFINED at <sub.c, (60:23)> : Op: -, Reason : Signed Subtraction Overflow
UNDEFINED at <sub.c, (64:12)> : Op: -, Reason : Signed Subtraction Overflow
UNDEFINED at <unary.c, (28:9)> : Op: -, Reason : Signed Subtraction Overflow
UNDEFINED at <unary.c, (37:9)> : Op: -, Reason : Signed Subtraction Overflow
UNDEFINED at <unary.c, (46:9)> : Op: -, Reason : Signed Subtraction Overflow
I cannot imagine this library is suitable for any purpose if you are using an optimizing compiler.
Update from Friday 9/23: I looked into the problems in IntegerLib in more detail. The different output across different compilers and optimization levels isn’t even due to the integer problems — it’s due to the fact that a couple of functions have no “return” statement, and therefore return garbage. Do not use IntegerLib! Use SafeInt.
I appreciate your work on this, but I think the inescapable conclusion is that allowing simple arithmetic on signed integers to trigger undefined behavior is clearly unworkable.
In practice, it means that nobody can write safe code. Surely nobody is much better at writing careful integer code than the SafeInt authors. Even with stellar tool support, we will not find all the bugs—or even fix all the bugs that could be found.
I’ve read about the neato ways compilers can take advantage of this rule, but I have yet to see any example where the optimization actually seemed desirable. It usually amounts to taking code that is apparently intended to do one thing, and using the accidental undefined behavior as an excuse to rewrite the code to do something completely different and unexpected (but fast). The more you break the rule, the faster your code can do the wrong thing. Isn’t that something.
Ideally, in my view, you could direct your obvious talents to more deserving problems. But if the outcome here is that the C++ committee agrees enough is enough and specifies two’s complement, it will have been worth it!
Hi Jason, the examples people have told me about where undefined signed overflow is useful involve loop-intensive numerical codes where, for example, it is highly advantageous to be able to assume that an increasing induction variable never goes negative. Apparently 30%-50% speedups on certain loops are not unheard of.
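The canonical pattern is a counted loop whose termination argument depends on the no-wrap assumption. A sketch (illustrative; not taken from any particular benchmark):

```c
/* If signed overflow wrapped, then for m == INT_MAX the increment
   would wrap past INT_MAX and this loop would never terminate.
   Because overflow is undefined instead, the compiler may assume
   i <= m implies i + 1 does not wrap, conclude the trip count is
   exactly m, and then strength-reduce or vectorize the loop. */
int triangle(int m) {
    int sum = 0;
    for (int i = 1; i <= m; i++)
        sum += i;
    return sum;
}
```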
But of course you are right: this part of the C standard gives zero benefit to 99.9% of C/C++ developers and causes significant pain for some fraction of them. I don’t see the standards committees changing this — the languages will die first.
@regehr#2: The committee could usefully mandate “two’s-complement” while still allowing the result of INT_MAX+1 to be undefined. In my opinion, a two’s-complement mandate would include finally giving a definition to at least the following behaviors:
* Right-shifting a signed value (by 0<=n<width, at least).
* Performing bitwise operations on signed values.
They could mandate all that and just decide to leave the "school-arithmetic" operations undefined. Wouldn't really help with the problem at hand, but at least it would be helpful to people such as yourself who care about the semantics of the language.
And @post: It strikes me as just outrageous that SafeInt and IntegerLib exist at all, if their creators couldn’t be bothered to check for obvious overflow conditions. Jason’s takeaway was “SafeInt authors are experts, therefore it’s impossible to use C++ safely”. My takeaway is the contrapositive of that: “SafeInt seems like an obvious idea, but its authors somehow managed to screw up the implementation, therefore SafeInt’s authors are idiots.”
I took a quick look at the code, but it seemed way over-templatized. For someone who wanted a solid “safe integer” class, it seems easier to throw it out and start from scratch than to try fixing all the bugs that might exist in there.
(And yeah, I know about template metaprogramming… but SafeInt isn’t a template-meta- type of class; it’s a traditional class with overloaded operators and stuff. That doesn’t seem like it would actually fold away at compile time without an aggressive inliner, and if you have an aggressive inliner you can dispense with most of that template junk.)
g++ from gcc 4.6.1 does not exhibit the suggested behavior with -O2 optimization enabled.
Plan9 prints "X -2147483648" for:

void
main(void) {
	long x = -2147483648L;
	if(x < 0) x = -x;
	if(x < 0) print("X %ld\n", x);
}
AC, I agree that C/C++ would lose little by mandating 2’s complement. I’m guessing there are a few DSPs or similar that are not 2’s complement, but I have no idea if any such architectures are still actively targeted by C compilers.
Re. the templates, I’m happily ignorant about C++ design. The language fills no niche in my programming life. Efficiency wise, clang++, g++, and Intel C++ seem to see through all the templates.
Thanks for the insightful article.
As a SafeInt user, I tend to feel much like Orendorff (and disagree with Cowherd). I would even take it a bit further: the C++ committee and its abstract machine with unlimited precision, which never overflows, are a major part of the problem. It's absurd that math is well defined until a result overflows or wraps (after the fact!).
GCC aggravates the problem by complying with the standard but ignoring the practical issues of wrap and overflow. I.e., GCC does not offer primitives or intrinsics to detect overflow or wrap efficiently via hardware-supported mechanisms (i.e., FLAGS/EFLAGS in the x86 family, and the PSR on ARM).
Unlike Linus Torvalds, I don’t feel like “GCC is crap” [1]. But I would like something better to deal with the problem other than -fwrapv and -fno-strict-overflow [2].
JW, Baltimore, MD, US
[1]
[2] | http://blog.regehr.org/archives/593 | CC-MAIN-2014-52 | refinedweb | 2,075 | 62.17 |
Message Driven POJOs!
Of all the new Spring 2.0 features and improvements, I must admit that Message-Driven POJOs are one of my personal favorites. I have a feeling that a lot of other Spring users will feel the same way.
Here I am providing a quick introduction. There is a lot more to show, and I will follow this up with other posts. For now though - this should provide you with enough information to get up and running with some truly POJO-based asynchronous JMS! I hope you are as excited about that as I am ;)
Prerequisites:
You will need the following JAR files on your classpath. I've also listed the versions that I am using (any spring-2.x version should be fine. I just dropped RC3 in there about 2 minutes ago in fact):
- activemq-core-3.2.2.jar
- concurrent-1.3.4.jar
- geronimo-spec-j2ee-managment-1.0-rc4.jar
- commons-logging-1.0.4.jar
- log4j-1.2.9.jar
- jms-1.1.jar
- spring-2.0-rc3.jar
Setup the Environment
First, we need to set up the environment. I am going to be using ActiveMQ, but the impact of changing providers will be limited to modifications within this one file. I'm calling this file "shared-context.xml" since, as you will see shortly, I am going to be importing these bean definitions on both sides of the JMS communication. Here are the "shared" bean definitions: the connection factory and two queues (one for requests and one for replies):
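A minimal shared-context.xml along the lines described would define the ActiveMQ connection factory and the two queues. The bean names, broker URL, queue names, and ActiveMQ 3.x class names below are illustrative assumptions, not the post's original listing:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-2.0.xsd">

    <bean id="connectionFactory" class="org.activemq.ActiveMQConnectionFactory">
        <property name="brokerURL" value="tcp://localhost:61616"/>
    </bean>

    <bean id="requestQueue" class="org.activemq.message.ActiveMQQueue">
        <constructor-arg value="registration.requests"/>
    </bean>

    <bean id="replyQueue" class="org.activemq.message.ActiveMQQueue">
        <constructor-arg value="registration.replies"/>
    </bean>

</beans>
```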
As you see, I will be running ActiveMQ on tcp (I'm just running 'activemq' from the bin directory of the distribution). It is also possible to run embedded (with "vm://localhost" instead) - or you can run the main method of the org.activemq.broker.impl.Main class. If you want to grab the distribution, visit:.
The Example Domain
I'm keeping things intentionally simple here - the main goal is to demonstrate how the pieces fit together. One of the most important things I want to point out though is that these classes in my "domain" are POJOs. You will see no Spring or JMS dependencies at all.
Ultimately, we will accept input from the user (a "name" via stdin) and this will be transformed into a "registration request" for some unspecified event. The message will be sent asynchronously, but we will have another queue to handle replies. The ReplyNotifier will then write the confirmation (or a "not confirmed" message) to stdout.
I’m creating all of these classes in a “blog.mdp” package by the way.
The first class is the RegistrationRequest:
package blog.mdp;

import java.io.Serializable;

public class RegistrationRequest implements Serializable {

    private static final long serialVersionUID = -6097635701783502292L;

    private String name;

    public RegistrationRequest(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}
Next is the RegistrationReply:
package blog.mdp;

import java.io.Serializable;

public class RegistrationReply implements Serializable {

    private static final long serialVersionUID = -2119692510721245260L;

    private String name;

    private int confirmationId;

    public RegistrationReply(String name, int confirmationId) {
        this.name = name;
        this.confirmationId = confirmationId;
    }

    public String toString() {
        return (confirmationId >= 0)
                ? name + ": Confirmed #" + confirmationId
                : name + ": Not Confirmed";
    }
}
And the RegistrationService:
package blog.mdp;

import java.util.HashMap;
import java.util.Map;

public class RegistrationService {

    private Map registrations = new HashMap();

    private int counter = 100;

    public RegistrationReply processRequest(RegistrationRequest request) {
        int id = counter++;
        if (id % 5 == 0) {
            id = -1;
        } else {
            registrations.put(new Integer(id), request);
        }
        return new RegistrationReply(request.getName(), id);
    }
}
As you can see, this is merely providing an example. In reality, something would probably be done with the registrations map. Also, you see that 20% of registration attempts will be denied (given a -1 confirmationId) - not a very practical way to process registration requests, but it will provide some variety to the reply messages. Again, the important thing is that this service class has NO ties to Spring or JMS. Nevertheless, as you will see in just a moment, it is going to be handling the payload of messages sent via JMS. In other words, this RegistrationService IS the Message-Driven POJO.
Finally, create a simple class to log the reply messages:
package blog.mdp;

public class ReplyNotifier {

    public void notify(RegistrationReply reply) {
        System.out.println(reply);
    }
}
Configure the Message-Driven POJO
Now for the most important part. How do we use Spring to configure the POJO service so that it will receive JMS Messages? The answer comes in the form of 2 bean definitions (well, 3 if you count the service itself). In this next bean definition file, notice the "container" which actually receives the message and enables the use of an asynchronous listener. The container needs to be aware of the connectionFactory and the destination from which it receives messages. There are multiple types of containers available, but that is beyond the scope of this blog. Read the reference document for more information: Message Listener Containers.
The "listener" in this case is an instance of Spring's MessageListenerAdapter. It has a reference to the delegate (the POJO service) and the name of the handler method. In this case, we've also provided a defaultResponseDestination. For a void-returning method, you would obviously not need to do this. Also (and probably more likely in a production application), you can leave this out in favor of setting the "reply-to" property of the incoming JMS Message instead.
Now that we've discussed the various players, here are the bean definitions (I've named this file "server-context.xml"):
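A sketch of server-context.xml matching that description follows. The bean names here are assumptions; the listener adapter and container classes are Spring 2.0's:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-2.0.xsd">

    <import resource="shared-context.xml"/>

    <bean id="registrationService" class="blog.mdp.RegistrationService"/>

    <bean id="listener"
          class="org.springframework.jms.listener.adapter.MessageListenerAdapter">
        <property name="delegate" ref="registrationService"/>
        <property name="defaultListenerMethod" value="processRequest"/>
        <property name="defaultResponseDestination" ref="replyQueue"/>
    </bean>

    <bean id="container"
          class="org.springframework.jms.listener.DefaultMessageListenerContainer">
        <property name="connectionFactory" ref="connectionFactory"/>
        <property name="destination" ref="requestQueue"/>
        <property name="messageListener" ref="listener"/>
    </bean>

</beans>
```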
The last step here is to provide a bootstrap mechanism for running the service, since this is a simple standalone example. I've just created a trivial main method to start up an ApplicationContext with the relevant bean definitions and then block:
package blog.mdp;

import java.io.IOException;

import org.springframework.context.support.ClassPathXmlApplicationContext;

public class RegistrationServiceRunner {

    public static void main(String[] args) throws IOException {
        new ClassPathXmlApplicationContext("/blog/mdp/server-context.xml");
        System.in.read();
    }
}
Configure the Client
On the “client” side, we will send the registration requests and log the replies. First, I will list the bean definitions. After the previous section, you should understand the role of the “container” and “listener”. In this case, the delegate is the ReplyNotifier and since it has a void return type, it does not itself send replies (therefore, no ‘defaultResponseDestination’ property is present). I’ve named this file “client-context.xml”:
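A sketch of client-context.xml consistent with that description (again, the bean names are assumptions, not the original listing):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-2.0.xsd">

    <import resource="shared-context.xml"/>

    <bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
        <property name="connectionFactory" ref="connectionFactory"/>
        <property name="defaultDestination" ref="requestQueue"/>
    </bean>

    <bean id="replyNotifier" class="blog.mdp.ReplyNotifier"/>

    <bean id="listener"
          class="org.springframework.jms.listener.adapter.MessageListenerAdapter">
        <property name="delegate" ref="replyNotifier"/>
        <property name="defaultListenerMethod" value="notify"/>
    </bean>

    <bean id="container"
          class="org.springframework.jms.listener.DefaultMessageListenerContainer">
        <property name="connectionFactory" ref="connectionFactory"/>
        <property name="destination" ref="replyQueue"/>
        <property name="messageListener" ref="listener"/>
    </bean>

</beans>
```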
There is another bean defined there - an instance of Spring’s “jmsTemplate”. We will use that to send the registration request messages to its defaultDestination. With the simple convertAndSend(..) methods that Spring provides, the sending of JMS messages is trivial. I’ve created a class that takes user input and then sends the message by using this “jmsTemplate”:
package blog.mdp;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.jms.core.JmsTemplate;

public class RegistrationConsole {

    public static void main(String[] args) throws IOException {
        ApplicationContext context =
                new ClassPathXmlApplicationContext("/blog/mdp/client-context.xml");
        JmsTemplate jmsTemplate = (JmsTemplate) context.getBean("jmsTemplate");
        BufferedReader reader = new BufferedReader(new InputStreamReader(System.in));
        for (;;) {
            System.out.print("To Register, Enter Name: ");
            String name = reader.readLine();
            RegistrationRequest request = new RegistrationRequest(name);
            jmsTemplate.convertAndSend(request);
        }
    }
}
Running the Example
Now for the fun part. Startup the ActiveMQ broker (as briefly discussed in the “Setup the Environment” section). Run the main(..) method of the RegistrationServiceRunner. Run the main(..) method of the RegistrationConsole. Enter a name, and you should see a reply in that same console.
Further Resources
Hopefully, that’s enough to give you an idea of what Spring’s new Message-Driven POJO support is about. However, as I mentioned, there is quite a bit more involved - different container types, transaction support, configuration of consumer threading, pluggable message conversion strategies, etc. Stay tuned to the Interface21 Team Blog for more examples and information about those features. In the meantime, you can check out the Spring Reference Documentation on JMS. Also, be sure to visit the “Remoting and JMS” section of the Spring Support Forums as you begin to explore this exciting new functionality. | http://spring.io/blog/2006/08/11/message-driven-pojos/ | CC-MAIN-2018-26 | refinedweb | 1,382 | 50.53 |
Type: Posts; User: eBooster
You were indeed very close: the bug came from the second line after the one you highlighted.
Yes, I have checked this, but I think it comes from the source you said in post #13.
I tried your advice in post #13 without success.
To be honest, I cannot say, since the whole code does not come from me. But I am still working on this matter.
Yes, I discovered that based on your previous advice. Now I am trying to see why an index may be out of range.
I will now try to separate the file as described above.
However, please find attached the files I am trying to run.
I have done this and I am also redirected to the file at C:\Program Files\Microsoft Visual Studio 10.0\VC\include\vector and exactly at this part :
The CallStack indicates the following :
I believe the bug arises from 'const char* file = "C:\\WAR ROOM\\FINANCE\\Cpp\\Test\\Test\\swapData.txt";' and the program does not enter the loop 'if (fin.good())': this means that the program is not...
I added the statement using namespace std but the result remains the same. I need to take the values from a file.
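For what it's worth, a defensive version of that file-reading step looks like this (the path, element type, and function name are illustrative, not the poster's actual code):

```cpp
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Read whitespace-separated numbers from a file.  If the file cannot be
// opened, fail loudly and return an empty vector instead of silently
// skipping the read loop (which later leads to out-of-range indexing
// when code assumes the vector was filled).
std::vector<double> readValues(const std::string& path) {
    std::vector<double> values;
    std::ifstream fin(path.c_str());
    if (!fin) {
        std::cerr << "cannot open " << path << '\n';
        return values;               // caller must check values.size()
    }
    double v;
    while (fin >> v)                 // stops at EOF or a malformed token
        values.push_back(v);
    return values;
}
```

Indexing the result with values.at(i) rather than values[i] would also turn a bad index into a catchable exception instead of the debug assertion reported above.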
The swap.h file is as below :
#ifndef _SWAP_H__
#define _SWAP_H__
...
Unfortunately I still have the same error message as above using 1).
Using <sstream> as advised in 2) makes the program fail to compile.
Hi all,
I am trying to run the code below but I receive the following error message :
Thanks in advance for your help.
#include <fstream>
Thanks, this works fine for me (I have used using namespace std; everywhere instead).
Hi all,
I am running a cpp file that uses the file DATECL.H below but I receive the following error messages :
Thanks in advance for your help.
Hi all,
I am trying to publish a C# Project in Visual Studio 2010 express but I receive the error/warning messages below :
I checked the path C:\Program Files\Microsoft... | http://forums.codeguru.com/search.php?s=166d1cf44b08e5e4f6ab47e721159956&searchid=7651051 | CC-MAIN-2015-35 | refinedweb | 341 | 82.65 |
#include "petscmat.h" PetscErrorCode MatSetSizes(Mat A, PetscInt m, PetscInt n, PetscInt M, PetscInt N)Collective on Mat
If PETSC_DECIDE is not used for the arguments 'm' and 'n', then the user must ensure that they are chosen to be compatible with the vectors. To do this, one first considers the matrix-vector product 'y = A x'. The 'm' that is used in the above routine must match the local size used in the vector creation routine VecCreateMPI() for 'y'. Likewise, the 'n' used must match that used as the local size in VecCreateMPI() for 'x'.
You cannot change the sizes once they have been set.
The sizes must be set before MatSetUp() or MatXXXSetPreallocation() is called. | https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/Mat/MatSetSizes.html | CC-MAIN-2019-51 | refinedweb | 117 | 69.31 |
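A typical call sequence, sketched from the notes above (error-code checking omitted; the 100 x 100 global size is arbitrary):

```c
Mat A;
MatCreate(PETSC_COMM_WORLD, &A);
/* Let PETSc choose the local row/column split; fix the global size. */
MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, 100, 100);
MatSetFromOptions(A);
MatSetUp(A);    /* sizes can no longer be changed after this call */
```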
Reminds me of the 169.x.x.x address assigned when an IP address cannot be
obtained from a DHCP server. As John said, we should discuss this more.
Lawrence Mandel
Software Developer
IBM Rational Software
Phone: 905 - 413 - 3814 Fax: 905 - 413 - 4920
lmandel@ca.ibm.com
John Kaputin <KAPUTIN@uk.ibm.com>
10/13/2005 05:33 AM

Please respond to: woden-dev
To: woden-dev@ws.apache.org
Subject: Re: Plan to handle parsing errors and validation.
The empty string namespace in QName was just a way for the Reader to
capture the namespace error internally at parse-time, so the validator can
recognize it and decide how to handle it. A suitable error message would
always be reported as well.
We could certainly make the parse-time error more explicit to the
validator
by using a namespace string that reflects the error. Especially if there
are different types of parse-time QName namespace errors we need to
distinguish or if we decide we want the validator to report the error
message rather than report it at parse-time in the Reader as we do
currently.
Not sure if we want to externalize such a namespace string in QName to the
user application though. My thinking was that a suitable error message and
the wsdl model with the offending QName containing the original prefix and
an empty string namespace would be sufficient for the user. That is, the
error message makes the problem explicit to the user and the empty string
namespace indicates that we don't have any value for this property yet.
However, your suggestion to externalize the namespace error string in the
QName may be useful in a scenario where the calling application needs to
handle this error programatically (i.e. a runtime application where there
is no human user reading the error message).
Will give it some thought and confer with the Woden team.
thanks,
John Kaputin
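That externalized-error-namespace idea amounts to something like the following. The sentinel URI and helper class here are hypothetical illustrations, not Woden code:

```java
import javax.xml.namespace.QName;

public class UnresolvedQNames {

    // Hypothetical sentinel namespace; NOT part of the Woden API.
    static final String ERROR_NS = "http://example.org/unresolved-prefix";

    // Built by the reader when a prefix cannot be resolved at parse time;
    // the original prefix is preserved for error reporting.
    public static QName unresolved(String prefix, String localPart) {
        return new QName(ERROR_NS, localPart, prefix);
    }

    // A validator (or a runtime caller with no human reading messages)
    // can recognize the sentinel programmatically.
    public static boolean isUnresolved(QName qname) {
        return ERROR_NS.equals(qname.getNamespaceURI());
    }
}
```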
Kihup Boo <kboo@ca.ibm.com>
12/10/2005 15:37

Please respond to: woden-dev
To: woden-dev@ws.apache.org
Subject: Re: Plan to handle parsing errors and validation.
What about making the namespace error more explicit in the QName?
For example, using something like
"" instead of using "".
This way, users of the model might be able to make a better guess when the
error happens without learning the empty string namespace.
- Kihup
John Kaputin <KAPUTIN@uk.ibm.com>
10/12/2005 05:54 AM

Please respond to: woden-dev
To: woden-dev@ws.apache.org
Subject: Plan to handle parsing errors and validation.

Yesterday
An improvement of Django-PJAX: The Django helper for jQuery-PJAX.
What’s PJAX?What’s PJAX?
PJAX is essentially AHAH, except with real permalinks and a working back button. It lets you load just a portion of a page (so things are faster) while still maintaining the usability of real links.
A demo makes more sense, so check out the one defunkt put together.
Credits
This project is an extension of Django-PJAX and all credits from the original version goes to Jacob Kaplan-Moss.
About

Compatibility
- Python 2.6+ or 3.2+
- PyPy or PyPy3
- CPython
- Django 1.3+
Not all Django versions work with every Python, PyPy, or CPython version. See the Django docs to learn more about supported versions.
Install
Just run:
pip install django-pjax
Usage

To serve different templates for different PJAX containers, decorate the view with @pjax:
from djpjax import pjax

@pjax(pjax_template="pjax.html",
      additional_templates={"#pjax-inner-content": "pjax_inner.html"})
def my_view(request):
    return TemplateResponse(request, "template.html", {'my': 'context'})
Class-based views
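The upstream Django-PJAX project also supports class-based views via a mixin. A sketch along those lines (the attribute names follow the upstream README and may differ in this fork):

```python
from django.views.generic import TemplateView
from djpjax import PJAXResponseMixin

class MyView(PJAXResponseMixin, TemplateView):
    template_name = "template.html"        # served for normal requests
    pjax_template_name = "pjax.html"       # served for PJAX requests
```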
Testing

Install dependencies:
pip install -r requirements.txt
Run the tests:
python tests.py | https://libraries.io/pypi/django-pjax | CC-MAIN-2018-26 | refinedweb | 169 | 53.17 |