Hi everyone,
I've detected a strange issue recently. Let's consider the following code inside a managed bean:
@ManagedBean
@ViewScoped
public class RecordUI implements Serializable {
    ...
    @PostConstruct
    public void initializeView() {
        if (FacesContext.getCurrentInstance().getExternalContext()
                .getFlash().get("someRecordIdParameter") == null) {
            FacesContext.getCurrentInstance().getApplication().getNavigationHandler()
                .handleNavigation(FacesContext.getCurrentInstance(), null,
                    "/pages/someRecordsManagerPage.faces?faces-redirect=true");
        } else {
            ...
        }
    }
The idea is to check, every time I load a record-specific page, whether the record id parameter was sent in the flash context, and if not, redirect the user to some records manager page.
So the steps to reproduce the issue are:
1. Load the page sending the parameter in the flash.
2. Hit F5 to reload the page. At this moment you should be redirected to the page specified in the navigation handler method.
3. Now if you check the OS's process monitor, it should show WildFly's java process consuming 100% of one CPU core.
The issue is cumulative; I can repeat the process to eat up all of the CPU cores.
I could avoid the issue by moving the parameter check and redirection rule to a phase listener but:
1. It would require pretty custom rules (and I have a lot of pages with similar logic).
2. I'd really like to understand why this is happening, so I can overcome the problem with a more elegant approach than page rules inside a phase listener, if that can be avoided at all.
Any ideas?
Thanks and Regards
Anyone?
Hi!
Which container version do you use? Which version of Mojarra? Is it javax.faces.bean.ManagedBean or javax.annotation.ManagedBean? What about using CDI?
Source: https://developer.jboss.org/message/871920?tstart=0
This is my little program. It's basic, but it's something.
#include <stdio.h>
#include <allegro5/allegro.h>
#include <stdlib.h>

main()
{
    al_init();
    extern char allegro_id[];
    printf("%c", allegro_id);
    exit(0);
}
And here is my problem, which I can't solve:
[Linker error] undefined reference to `allegro_id' ld returned 1 exit status
Do you know what the problem could be?
I'm not sure that Allegro 5 has an allegro_id. Searches all reference Allegro 4, a different, incompatible version of the library. There is an al_get_allegro_version function that returns a packed integer; I assume that would be similar in purpose. What is it that you wanted allegro_id for?
Because you declared it extern. EDIT: wait, no. Don't know.
"Code is like shit - it only smells if it is not yours"Allegro Wiki, full of examples and articles !!
The problem is that I don't know what allegro_id is, whether it is a file that is missing or something that doesn't exist. I tried with an example downloaded from the web and it gave me the same problem.
allegro_id does not exist in Allegro 5. Remove those two lines that reference it and move on.
OK, I am going to try it.
[EDIT 1]
Okay, I tried with another example but the error is the same. What do you recommend me to do?
Thanks for your help and attention.
Your examples are for Allegro 4. Where are you getting them?
Allegro 5 has some tutorials here:
"For in much wisdom is much grief: and he that increases knowledge increases sorrow."-Ecclesiastes 1:18[SiegeLord's Abode][Codes]:[DAllegro5]:[RustAllegro]
Source: https://www.allegro.cc/forums/thread/615994/1019092
A Stream in Java 8 can be defined as a sequence of elements from a source. Streams support aggregate operations on the elements. The source of elements here refers to a Collection or Array that provides data to the Stream.
Stream keeps the ordering of the elements the same as the ordering in the source. The aggregate operations are operations that allow us to express common manipulations on stream elements quickly and clearly.
Intermediate stream operations return a stream themselves. This helps us to create a chain of stream operations; this is called pipelining.
1. Java Stream vs Collection
All of us have watched online videos on YouTube. When we start watching a video, a small portion of the file is first loaded into the computer and starts playing; we don't need to download the complete video before we start playing it. This is called streaming.
At a very high level, we can think of that small portions of the video file as a stream, and the whole video as a Collection.
At the granular level, the difference between a Collection and a Stream has to do with when things are computed. A Collection is an in-memory data structure that holds all the values the data structure currently has; every element in the Collection has to be computed before it can be added to the Collection. A Stream, in contrast, is conceptually a pipeline in which elements are computed on demand.
This concept of on-demand computation leads to two kinds of stream operations: terminal operations return a result of a certain type, while intermediate operations return the stream itself, so we can chain multiple methods in a row to perform the operation in multiple steps.
Streams are created on a source, e.g. a java.util.Collection like List or Set. Map is not supported directly, but we can create a stream of map keys, values, or entries.
Stream operations can be executed either sequentially or in parallel; when performed in parallel, the stream is called a parallel stream.
Based on the above points, if we list down the various characteristics of Stream, they will be as follows:
- Not a data structure
- Designed for lambdas
- Do not support indexed access
- Can easily be aggregated as arrays or lists
- Lazy access supported
- Parallelizable
2. Creating Streams
The ways given below are the most popular ways to build streams from collections.
2.1. Stream.of()
In the given example, we are creating a stream of a fixed number of integers.
public class StreamBuilders {
    public static void main(String[] args) {
        Stream<Integer> stream = Stream.of(1, 2, 3, 4, 5, 6, 7, 8, 9);
        stream.forEach(p -> System.out.println(p));
    }
}
2.2. Stream.of(array)
In the given example, we are creating a stream from the array. The elements in the stream are taken from the array.
public class StreamBuilders {
    public static void main(String[] args) {
        Stream<Integer> stream = Stream.of(new Integer[]{1, 2, 3, 4, 5, 6, 7, 8, 9});
        stream.forEach(p -> System.out.println(p));
    }
}
2.3. List.stream()
In the given example, we are creating a stream from the List. The elements in the stream are taken from the List.
public class StreamBuilders {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<Integer>();
        for (int i = 1; i < 10; i++) {
            list.add(i);
        }
        Stream<Integer> stream = list.stream();
        stream.forEach(p -> System.out.println(p));
    }
}
2.4. Stream.generate() or Stream.iterate()
In the given example, we are creating a stream from generated elements. This will produce a stream of 20 random numbers. We have restricted the elements count using
limit() function.
public class StreamBuilders {
    public static void main(String[] args) {
        Stream<Integer> randomNumbers = Stream
            .generate(() -> (new Random()).nextInt(100));
        randomNumbers.limit(20)
            .forEach(System.out::println);
    }
}
2.5. Stream of String chars or tokens
In the given example, first, we are creating a stream from the characters of a given string. In the second part, we are creating a stream of the tokens received from splitting a string. There are a few more ways as well, such as using Stream.Builder or using intermediate operations. We will learn about them in separate posts from time to time.
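The code for this example did not survive here, so the following is my own reconstruction of the idea, not the author's original listing: chars() yields an IntStream of the characters, and splitting the string yields the tokens.

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class StringStreams {
    // Stream of the individual characters of a string
    static List<String> chars(String s) {
        return s.chars()                                 // IntStream of char codes
                .mapToObj(c -> String.valueOf((char) c)) // back to one-char strings
                .collect(Collectors.toList());
    }

    // Stream of the tokens obtained by splitting on a separator
    static List<String> tokens(String s, String sep) {
        return Stream.of(s.split(sep)).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(chars("abc"));         // [a, b, c]
        System.out.println(tokens("A,B,C", ",")); // [A, B, C]
    }
}
```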
3. Stream Collectors
After performing the intermediate operations on elements in the stream, we can collect the processed elements again into a Collection using the stream Collector methods.
3.1. Collect Stream elements to a List
In the given example, first, we are creating a stream on the integers 1 to 10. Then we are processing the stream elements to find all even numbers.
At last, we are collecting all even numbers into a List.
3.2. Collect Stream elements to an Array
The given example is similar to the first example shown above. The only difference is that we are collecting the even numbers in an array. In the same way, we can collect stream elements into a Set, a Map, or in multiple other ways. Just go through the Collectors class and try to keep them in mind.
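The collecting code itself is missing above; a sketch of both variants (even numbers 1..10 into a List and into an array) could look like this:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class CollectDemo {
    // Collect the even numbers from 1..10 into a List
    static List<Integer> evensAsList() {
        return IntStream.rangeClosed(1, 10)
                .filter(i -> i % 2 == 0)
                .boxed()
                .collect(Collectors.toList());
    }

    // Collect the same even numbers into an array
    static Integer[] evensAsArray() {
        return IntStream.rangeClosed(1, 10)
                .filter(i -> i % 2 == 0)
                .boxed()
                .toArray(Integer[]::new);
    }

    public static void main(String[] args) {
        System.out.println(evensAsList()); // [2, 4, 6, 8, 10]
    }
}
```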
4. Stream Operations
Stream abstraction has a long list of useful functions. Let us look at a few of them.
Before moving ahead, let us build a List of strings beforehand. We will build our examples upon this list.
4.1. Intermediate operations
Intermediate operations return the stream itself, so we can chain multiple method calls in a row. Let's learn the important ones.
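The list-building code was lost in extraction; judging from the sample outputs shown in the following sections, it was something like this (the names and their order are inferred, so treat this as an assumption):

```java
import java.util.ArrayList;
import java.util.List;

public class MemberNamesDemo {
    // Build the example list; names and order inferred from the sample outputs
    static List<String> build() {
        List<String> memberNames = new ArrayList<>();
        memberNames.add("Amitabh");
        memberNames.add("Shekhar");
        memberNames.add("Aman");
        memberNames.add("Rahul");
        memberNames.add("Shahrukh");
        memberNames.add("Salman");
        memberNames.add("Yana");
        memberNames.add("Lokesh");
        return memberNames;
    }

    public static void main(String[] args) {
        // With this list, filtering names starting with "A" prints Amitabh, Aman
        build().stream()
               .filter((s) -> s.startsWith("A"))
               .forEach(System.out::println);
    }
}
```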
4.1.1. Stream.filter()
The filter() method accepts a Predicate to filter all elements of the stream. This operation is intermediate, which enables us to call another stream operation (e.g. forEach()) on the result.
memberNames.stream().filter((s) -> s.startsWith("A")) .forEach(System.out::println);
Program Output:
Amitabh Aman
4.1.2. Stream.map()
The map() intermediate operation converts each element in the stream into another object via the given function.
The following example converts each string into an UPPERCASE string, but we can use map() to transform an object into another type as well.
memberNames.stream().filter((s) -> s.startsWith("A")) .map(String::toUpperCase) .forEach(System.out::println);
Program Output:
AMITABH AMAN
4.1.3. Stream.sorted()
The sorted() method is an intermediate operation that returns a sorted view of the stream. The elements in the stream are sorted in natural order unless we pass a custom Comparator.
memberNames.stream().sorted() .map(String::toUpperCase) .forEach(System.out::println);
Program Output:
AMAN AMITABH LOKESH RAHUL SALMAN SHAHRUKH SHEKHAR YANA
Please note that the sorted() method only creates a sorted view of the stream without manipulating the ordering of the source Collection. In this example, the ordering of the strings in memberNames is untouched.
4.2. Terminal operations
Terminal operations return a result of a certain type after processing all the stream elements.
Once the terminal operation is invoked on a Stream, the iteration of the Stream and any of the chained streams will get started. Once the iteration is done, the result of the terminal operation is returned.
4.2.1. Stream.forEach()
The forEach() method helps in iterating over all elements of a stream and performing some operation on each of them. The operation to be performed is passed as a lambda expression.
memberNames.forEach(System.out::println);
4.2.2. Stream.collect()
The collect() method is used to receive elements from a stream and store them in a collection.
List<String> memNamesInUppercase = memberNames.stream()
    .sorted()
    .map(String::toUpperCase)
    .collect(Collectors.toList());
System.out.print(memNamesInUppercase);
Program Output:
[AMAN, AMITABH, LOKESH, RAHUL, SALMAN, SHAHRUKH, SHEKHAR, YANA]
4.2.3. Stream.match()
Various matching operations can be used to check whether a given predicate matches the stream elements. All of these matching operations are terminal and return a boolean result.
Program Output:
true false false
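The example code was lost here; the three matching methods (anyMatch(), allMatch(), noneMatch(), each taking a Predicate) plausibly looked like the sketch below, which prints the output shown above:

```java
import java.util.Arrays;
import java.util.List;

public class MatchDemo {
    static final List<String> memberNames = Arrays.asList(
            "Amitabh", "Shekhar", "Aman", "Rahul",
            "Shahrukh", "Salman", "Yana", "Lokesh");

    public static void main(String[] args) {
        // anyMatch(): true because at least one name starts with "A"
        System.out.println(memberNames.stream().anyMatch(s -> s.startsWith("A")));
        // allMatch(): false because not every name starts with "A"
        System.out.println(memberNames.stream().allMatch(s -> s.startsWith("A")));
        // noneMatch(): false because some names do start with "A"
        System.out.println(memberNames.stream().noneMatch(s -> s.startsWith("A")));
    }
}
```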
4.2.4. Stream.count()
The count() method is a terminal operation returning the number of elements in the stream as a long value.
long totalMatched = memberNames.stream()
    .filter((s) -> s.startsWith("A"))
    .count();
System.out.println(totalMatched);
Program Output:
2
4.2.5. Stream.reduce()
The reduce() method performs a reduction on the elements of the stream with the given function. The result is an Optional holding the reduced value.
In the given example, we are reducing all the strings by concatenating them with a '#' separator.
Optional<String> reduced = memberNames.stream()
    .reduce((s1, s2) -> s1 + "#" + s2);
reduced.ifPresent(System.out::println);
Program Output: the names joined by the '#' separator.
5. Stream Short-circuit Operations
In external iterations, we would do such checks with an if-else block. In internal iterations such as in streams, there are certain methods we can use for this purpose.
5.1. Stream.anyMatch()
The anyMatch() method will return true as soon as the predicate passed to it is satisfied. Once a matching value is found, no more elements are processed in the stream.
In the given example, as soon as a String starting with the letter 'A' is found, the stream ends and the result is returned.
boolean matched = memberNames.stream() .anyMatch((s) -> s.startsWith("A")); System.out.println(matched);
Program Output:
true
5.2. Stream.findFirst()
The findFirst() method returns the first element from the stream and then does not process any more elements.
String firstMatchedName = memberNames.stream()
    .filter((s) -> s.startsWith("L"))
    .findFirst()
    .get();
System.out.println(firstMatchedName);
Program Output:
Lokesh
6. Parallelism in Java Streams
With the Fork/Join framework added in Java SE 7, we have efficient machinery for implementing parallel operations in our applications.
But implementing the fork/join framework is itself a complex task, and if not done right, it is a source of complex multi-threading bugs with the potential to crash the application. With the introduction of internal iterations, we got the possibility of executing operations in parallel more efficiently.
To enable parallelism, all we have to do is create a parallel stream instead of a sequential one, and surprisingly, this is really easy.
In any of the stream examples listed above, anytime we want a particular job done with multiple threads on parallel cores, all we have to do is call the parallelStream() method instead of stream().
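As a minimal sketch of that one-word change (my own example, not the author's):

```java
import java.util.Arrays;
import java.util.List;

public class ParallelDemo {
    static long countAs(List<String> names) {
        // parallelStream() instead of stream() is the only change needed;
        // the pipeline itself is written exactly as before
        return names.parallelStream()
                .filter(s -> s.startsWith("A"))
                .count();
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("Amitabh", "Aman", "Lokesh");
        System.out.println(countAs(names)); // 2
    }
}
```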
Happy Learning !!
Read More:
Stream Operations
Intermediate Operations
Terminal Operations
- forEach()
- forEachOrdered()
- toArray()
- reduce()
- collect()
- min()
- max()
- count()
- anyMatch()
- allMatch()
- noneMatch()
- findFirst()
- findAny()
Java Stream Examples
Java 8 – Stream map() vs flatMap()
Java 8 – Infinite Stream
Java 8 – Stream Max and Min
Java 8 – Stream of Random Numbers
Java 8 – Stream Count of Elements
Java 8 – Get the last element of Stream
Java 8 – Find or remove duplicates in Stream
Java 8 – IntStream
Java 8 – IntStream to Collection
Java 8 – IntPredicate
Java 8 – Convert Iterable or Iterator to Stream
Java 8 – Sorting numbers and strings
Java 8 – Sorting objects on multiple fields
Java 8 – Join stream of strings
Java 8 – Merge streams
Java 9 – Stream API Improvements
Feedback, Discussion and Comments
Shatakshi
It's an informative post.
But I have one query: in terms of performance, are streams better or for-loops? From what I have read online, the answer is for-loops, so why should we prefer streams?
Lokesh Gupta
The gains are not that significant for most real-world examples. In most applications, we will not iterate over collections having 1 million or more records; if that is ever needed, it is done on the database side.
The lambda expressions are amazing in writing complex logic in a single line.
Rajanikant
Hi,
How do I break out of the loop in 2.4?
I thought of applying a break statement, but in a consumer we can't, as it's not a loop statement.
Or should the stream have only a fixed size?
Source: https://howtodoinjava.com/java8/java-streams-by-examples/
This is my code:
import datetime
today = datetime.date.today()
print today
import datetime
mylist = []
today = datetime.date.today()
mylist.append(today)
print mylist
[datetime.date(2008, 11, 22)]
In Python, dates are objects. Therefore, when you manipulate them, you manipulate objects, not strings, nor timestamps, nor anything else.
Any object in Python has TWO string representations:
The regular representation, used by "print", can be obtained with the str() function. It is most of the time the most common human-readable format, used to ease display. So str(datetime.datetime(2008, 11, 22, 19, 53, 42)) gives you '2008-11-22 19:53:42'.
The alternative representation is used to represent the object's nature (as data). It can be obtained with the repr() function and is handy for knowing what kind of data you're manipulating while developing or debugging. repr(datetime.datetime(2008, 11, 22, 19, 53, 42)) gives you 'datetime.datetime(2008, 11, 22, 19, 53, 42)'.
What happened is that when you printed the date using "print", it used str() so you could see a nice date string. But when you printed mylist, you printed a list of objects, and Python tried to represent the set of data using repr().
Well, when you manipulate dates, keep using the date objects all along the way. They have thousands of useful methods, and most of the Python API expects dates to be objects.
When you want to display them, just use str(). In Python, the good practice is to explicitly cast everything. So, just when it's time to print, get a string representation of your date using str(date).
One last thing. When you tried to print the dates, you printed
mylist. If you want to print a date, you must print the date objects, not their container (the list).
E.g., you want to print all the dates in a list:
for date in mylist : print str(date)
Note that in that specific case, you can even omit
str() because print will use it for you. But it should not become a habit :-)
import datetime
mylist = []
today = datetime.date.today()
mylist.append(today)
print mylist[0] # print the date object, not the container ;-)
2008-11-22

# It's better to always use str() because:
print "This is a new day : ", mylist[0] # will work
This is a new day : 2008-11-22
print "This is a new day : " + mylist[0] # will crash
cannot concatenate 'str' and 'datetime.date' objects
print "This is a new day : " + str(mylist[0])
This is a new day : 2008-11-22
Dates have a default representation, but you may want to print them in a specific format. In that case, you can get a custom string representation using the strftime() method, which expects a string pattern explaining how you want to format your date.
E.G :
print today.strftime('We are the %d, %b %Y')
'We are the 22, Nov 2008'
Every letter after a "%" represents a format for something:
%d is the day number
%m is the month number
%b is the month abbreviation
%y is the last two digits of the year
%Y is the full year
etc.
Have a look at the doc, you can't know them all.
Since PEP 3101, every object can have its own format, used automatically by the format() method of any string. In the case of datetime, the format codes are the same as in strftime. So you can do the same as above like this:
print "We are the {:%d, %b %Y}".format(today)
'We are the 22, Nov 2008'
The advantage of this form is that you can also convert other objects at the same time.
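For instance (my own illustration; the fixed date and the extra values are arbitrary, and the parenthesized print also runs on Python 3):

```python
import datetime

today = datetime.date(2008, 11, 22)  # fixed date so the output is reproducible
# One format() call can mix a date (with strftime codes) and other objects:
msg = "Day {0:%Y-%m-%d}, entry #{1} by {2}".format(today, 42, "sam")
print(msg)  # Day 2008-11-22, entry #42 by sam
```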
Dates can automatically adapt to the local language and culture if you use them the right way, but it's a bit complicated. Maybe for another question on SO(Stack Overflow) ;-)
Source: https://codedump.io/share/s28zQMuvGpbm/1/how-to-print-date-in-a-regular-format-in-python
Below I have a class with two int variables, from which I have created a number of objects that I have stored in a linked list. Now I want to loop through each object and compare THEIR FIELDS from the class with other int values. In other words, I want to loop through the fields of the objects and not the objects themselves. Any help?
Here's the code
import java.util.*;
import java.io.*;

public class Obj {
    int n;
    int c;

    public Obj(int nn) {
        n = nn;
        c = 0;
    }

    public static void main(String argv[]) throws IOException {
        LinkedList<Obj> list = new LinkedList<Obj>();
        int i = 7;
        Obj element = new Obj(i);
        // I may add further objects..
        list.add(element);
        // then I want to iterate through the linked list of objects and get each object
        // and compare its n or c field values with something else
        // It should be sth like the below, which I found on the web, but I don't get how it works
        for (Obj elementf : list) {
            // Process each object inside of object here.
        }
    }
}
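For what it's worth, a minimal sketch of how that enhanced for loop reaches the fields (reusing the question's own Obj class; the threshold 5 is just an illustrative choice):

```java
import java.util.LinkedList;

public class FieldLoop {
    static class Obj {
        int n;
        int c;
        Obj(int nn) { n = nn; c = 0; }
    }

    // Count how many stored objects have an n field greater than the threshold
    static int countGreaterThan(LinkedList<Obj> list, int threshold) {
        int count = 0;
        for (Obj element : list) {       // the loop hands us each Obj in turn...
            if (element.n > threshold) { // ...and we compare its fields directly
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        LinkedList<Obj> list = new LinkedList<>();
        list.add(new Obj(7));
        list.add(new Obj(3));
        System.out.println(countGreaterThan(list, 5)); // 1
    }
}
```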
Source: http://www.javaprogrammingforums.com/whats-wrong-my-code/34372-loop-through-linked-list-objects-their-fields.html
I am trying to write a simple program that picks a number between 1 and 100 and then lets the user guess what the number is. The program should tell the user if their guess is too high, too low, or correct.
The problem I am having is with the program I wrote: every time I input a number, it tells me I am correct, even if that number is larger than 100.
I don't know what the problem is. Could someone please help?
randexample.cpp:
#include <iostream>
#include <cstdlib>
#include <ctime>
using namespace std;

int rand_range(int low, int high)
{
    return rand() % (high - low + 1) + low;
}

int main()
{
    int input;
    srand(time(NULL));
    rand_range(1,100);
    cout << "pick a number between 1 and 100 \n";
    cin >> input;
    if(input=rand_range(1,100))
    {
        cout << "correct!\n";
    }
    else if( input != rand_range(1,100))
    {
        cout<< "wrong!\n";
    }
}
Source: http://cboard.cprogramming.com/cplusplus-programming/161199-i-need-help-random-number-generator-program.html
Set 8
Questions
Question 1: The void specifier is used if a function does not have a return type.
a. True
b. False
Question 2: You must specify void in the parameters if a function does not have any arguments.
a. True
b. False
Question 3: The type specifier is optional when declaring a function.
a. True
b. False
Question 4: Study the following piece of code and choose the best answer.
int x=5, y=3;
a = addition(x, y);
a. The function addition is called by passing the values
b. The function addition is called by passing references
Question 5: In the case of arguments passed by value when calling a function such as z = addition(x, y):
a. Any modifications to the variables x & y from inside the function will not have any effect outside the function.
b. The variables x and y will be updated when any modification is done in the function.
c. The variables x and y are passed to the function addition.
d. None of the above are valid.
Question 6: If the type specifier of a function's parameters is followed by an ampersand (&), that function call is:
a. pass by value
b. pass by reference
Question 7: In the case of pass by reference:
a. The values of those variables are passed to the function so that it can manipulate them.
b. The location of the variable in memory is passed to the function so that it can use the same memory area for its processing.
c. The function declaration should contain an ampersand (&) in its type declaration.
d. All of the above.
Question 8: Overloaded functions are:
a. Very long functions that can hardly run.
b. One function containing one or more other functions inside it.
c. Two or more functions with the same name but a different number or type of parameters.
d. None of the above.
Question 9: Functions can be declared with default values in parameters. We use the default keyword to specify the value of such parameters.
a. True
b. False
Question 10: Examine the following program and determine the output.
#include <iostream>
using namespace std;
int operate (int a, int b)
{
    return (a * b);
}
float operate (float a, float b)
{
    return (a / b);
}
int main()
{
    int x=5, y=2;
    float n=5.0, m=2.0;
    cout << operate(x, y) << "\t";
    cout << operate(n, m);
    return 0;
}
a. 10  5.0
b. 5.0  2.5
c. 10.0  5.0
d. 10  2.5
Answers
1. a. True
2. b. False [parameters can be empty without void too!]
3. b. False
4. a. The function addition is called by passing the values
5. a. Any modifications to the variables x & y from inside the function will not have any effect outside the function
6. b. pass by reference
7. b. The location of the variable in memory is passed to the function so that it can use the same memory area for its processing
8. c. Two or more functions with the same name but a different number or type of parameters
9. b. False
10. d. 10  2.5

Set 7
Questions
Question 1: Find out the error in the following block of code.
If (x = 100)
Cout << x is 100 ;
a. 100 should be enclosed in quotations
b. There is no semicolon at the end of the first line
c. Equals to operator mistake
d. Variable x should not be inside quotation
Question 2: Looping in a program means:
a. Jumping to the specified branch of program
b. Repeat the specified lines of code
c. Both of above
d. None of above
Question 3: The difference between the while structure and the do structure for looping is:
a. In the while statement the condition is tested at the end of the first iteration
b. In the do structure the condition is tested at the beginning of the first iteration
c. In the while structure the condition is tested before executing the statements inside the loop, whereas in the do structure the condition is tested before repeating the statements inside the loop
d. The do structure decides whether to start the loop code or not, whereas the while statement decides whether to repeat the code or not
Question 4: Which of the following is not a looping statement in C?
a. while
b. until
c. do
d. for
Question 5: Which of the following is not a jump statement in C++?
a. break
b. goto
c. exit
d. switch
Question 6: Which of the following is a selection statement in C++?
a. break
b. goto
c. exit
d. switch
Question 7: The continue statement:
a. resumes the program if it is hanged
b. resumes the program if break was applied
c. skips the rest of the loop in the current iteration
d. all of above
Question 8: Consider the following two pieces of code and choose the best answer.
Code 1:
switch (x) {
  case 1: cout << "x is 1"; break;
  case 2: cout << "x is 2"; break;
  default: cout << "value of x unknown";
}
Code 2:
if (x==1) {
  cout << "x is 1";
}
else if (x==2) {
  cout << "x is 2";
}
else {
  cout << "value of x unknown";
}
a. Both of the above code fragments have the same behaviour
b. Both of the above code fragments produce different effects
c. The first code produces more results than the second
d. The second code produces more results than the first
Question 9: Observe the following block of code and determine what happens when x=2.
switch (x) {
  case 1:
  case 2:
  case 3:
    cout << "x is 3, so jumping to third branch";
    goto thirdBranch;
  default:
    cout << "x is not within the range, so need to say Thank You!";
}
a. Program jumps to the end of the switch statement since there is nothing to do for x=2
b. The code inside default will run since there is no task for x=2
c. Will display "x is 3, so jumping to third branch" and jumps to thirdBranch
d. None of above
Question 10: Which of the following is false for the switch statement in C++?
a. It uses labels instead of blocks
b. We need to put a break statement at the end of the group of statements of a condition
c. We can put a range for a case, such as case 1..3
d. None of above
Answers
1. c. Equals to operator mistake
2. b. Repeat the specified lines of code
3. c. In the while structure the condition is tested before executing the statements inside the loop, whereas in the do structure the condition is tested before repeating the statements inside the loop
4. b. until
5. d. switch
6. d. switch
7. c. skips the rest of the loop in the current iteration
8. a. Both of the above code fragments have the same behaviour
9. c. Will display "x is 3, so jumping to third branch" and jumps to thirdBranch
10. c. We can put a range for a case, such as case 1..3

++++++++++++++++++++++++++++++
Question 1: cin extraction stops execution as soon as it finds any blank space character.
a. true
b. false
Question 2: Observe the following statements and decide what they do.
string mystring;
getline(cin, mystring);
a. reads a line of string from cin into mystring
b. reads a line of string from mystring into cin
c. cin can't be used this way
d. none of above
Question 3: Regarding stringstream, identify the invalid statement.
a. stringstream is defined in the header file <sstream>
b. It allows string-based objects to be treated as streams
c. It is especially useful to convert strings to numerical values and vice versa
d. None of above
Question 4: Which of the following header files must be included to use stringstream?
a. <iostream>
b. <string>
c. <sstring>
d. <sstream>
Question 5: Which of the following header files does not exist?
a. <iostream>
b. <string>
c. <sstring>
d. <sstream>
Question 6: If you use the same variable for two getline statements:
a. Both of the inputs are stored in that variable
b. The second input overwrites the first one
c. The second input attempt fails since the variable already got its value
d. You cannot use the same variable for two getline statements
Question 7: The "return 0;" statement in the main function indicates:
a. The program did nothing; completed 0 tasks
b. The program worked as expected without any errors during its execution
c. not to end the program yet
d. None of above
Question 8: Which of the following is not a reserved keyword in C++?
a. mutable
b. default
c. readable
d. volatile
Question 9: The size of which of the following variables is not 4 bytes on 32-bit systems?
a. int
b. long int
c. short int
d. float
Question 10: Identify the correct statement regarding the scope of variables.
a. Global variables are declared inside a function and accessible from anywhere in the program.
b. Global variables are declared in a separate file and accessible from any program.
c. Local variables are declared in the main body of the program and accessible only from functions.
d. Local variables are declared inside a function and accessible within the function only.
Answers
1. a. true
2. a. reads a line of string from cin into mystring
3. d. None of above
4. d. <sstream>
5. c. <sstring>
6. b. The second input overwrites the first one
7. b. The program worked as expected without any errors during its execution
8. c. readable
9. c. short int
10. d. Local variables are declared inside a function and accessible within the function only

Question 1: Streams are:
a. Abstractions to perform input and output operations in sequential media
b. Abstractions to perform input and output operations in direct access media
c. Objects where a program can either insert or extract characters to and from it
d. Both a and c
Question 2: Which of the following is known as the insertion operator?
a. ^
b. v
c. <<
d. >>
Question 3: Regarding the use of the new line character (\n) and the endl manipulator with a cout statement:
a. Both ways are exactly the same
b. Both are similar, but endl additionally performs flushing of the buffer
c. endl can't be used with cout
d. \n can't be used with cout
Question 4: Which of the following is an output statement in C++?
a. print
b. write
c. cout
d. cin
Question 5: Which of the following is an input statement in C++?
a. cin
b. input
c. get
d. none of above
Question 6: By default, the standard output device for C++ programs is:
a. Printer
b. Monitor
c. Modem
d. Disk
Question 7: By default, the standard input device for C++ programs is:
a. Keyboard
b. Mouse
c. Scanner
d. None of these
Question 8: Which of the following statements is true regarding the cin statement?
a. A cin statement must contain a variable preceded by the >> operator
b. cin does not process the input until the user presses the RETURN key
c. You can request more than one datum input from the user by using cin
d. all of above
Question 9: Which of the following is the extraction operator in C++?
a. ^
b. v
c. <<
d. >>
Question 10: When requesting multiple datums, the user must separate each by using:
a. a space
b. a new line character
c. a tab character
d. all of above
Answers
1. d. Both a and c
2. c. <<
3. b. Both are similar, but endl additionally performs flushing of the buffer
4. c. cout
5. a. cin
6. b. Monitor
7. a. Keyboard
8. d. all of above
9. d. >>
10. d. all of above

Set 4
Questions
Question 1: In an assignment statement a=b, which of the following statements is true?
a. The variable a and the variable b are equal.
b. The value of b is assigned to variable a, but later changes to variable b will not affect the value of variable a.
c. The value of b is assigned to variable a, and later changes to variable b will affect the value of variable a.
d. The value of variable a is assigned to variable b and the value of variable b is assigned to variable a.
Question 2: All of the following are valid expressions in C++:
a = 2 + (b = 5);
a = b = c = 5;
a = 11 % 3;
a. True
b. False
Question 3: To increase the value of c by one, which of the following statements is wrong?
a. c++;
b. c = c + 1;
c. c + 1 => c;
d. c += 1;
Question 4: When the following piece of code is executed, what happens?
b = 3;
a = b++;
a. a contains 3 and b contains 4
b. a contains 4 and b contains 4
c. a contains 4 and b contains 3
d. a contains 3 and b contains 3
Question 5: The result of a relational operation is always:
a. either True or False
b. is less than or is more than
c. is equal or less or more
d. All of these
Question 6: Which of the following is not a valid relational operator?
a. ==
b. =>
c. >=
d. <=
Question 7: What is the final value of x when the code for(x=0; x<10; x++) {} is run?
A. 10
B. 9
C. 0
D. 1
Question 8: When does the code block following while(x<100) execute?
A. When x is less than one hundred
B. When x is greater than one hundred
C. When x is equal to one hundred
D. While it wishes
Question 9: Which is not a loop structure?
A. for
B. do while
C. while
D. repeat until
Question 10: How many times is a do while loop guaranteed to loop?
A. 0
B. Infinitely
C. 1
D. Variable
Answers
1. b. The value of b is assigned to variable a, but later changes to variable b will not affect the value of variable a
2. a. True
3. c. c + 1 => c
4. a. a contains 3 and b contains 4
5. a. either True or False
6. b. =>
7. A. 10
8. A. When x is less than one hundred
9. D. repeat until
10. C. 1

Set 3
Questions
Question 1: A variable is:
a. A string that varies during program execution
b. A portion of memory to store a determined value
c. Those numbers that are frequently required in programs
d. None of these
Question 2: Which of the following cannot be used in identifiers?
a. Letters
b. Digits
c. Underscores
d. Spaces
Question 3: Which of the following identifiers is invalid?
a. papername
b. writername
c. typename
d. printname
Question 4: Which of the following cannot be used as a valid identifier?
a. bitand
b. bittand
c. biand
d. band
Question 5: The difference between x and 'x' is:
a. The first one refers to a variable whose identifier is x and the second one refers to the character constant x
b. The first one is a character constant x and the second one is the string literal x
c. Both are the same
d. None of the above
Question 6: Which of the following is not a valid escape code?
a. \t
b. \v
tabulators. If we want the string literal to explicitly made of wide characters. You can also concatenate several string constants separating them by one or several blank spaces. newline or any other valid blank character c. All of above Question 8 . \w Question 7 Which of the following statement is true? a. String Literals can extend to more than a single line of code by putting a backslash sign at the end of each unfinished line. \f d. b. we can precede the constant with the L prefix d.c.
None of the above Question 9 Regarding following statement which of the statements is true? const int pathwidth=100. Declares a variable pathwidth with 100 as its initial value b. It is a C++ statement that declares a constant in C++ d.Regarding #difine which of the following statement is false? a. a. This does not require a semicolon at the end of line c. It is not C++ statement but the directive for the preprocessor b. Declares a construction pathwidth with 100 as its initial value .
All of above .c. a variable. The lvalue must always be a variable b. Declares a constant pathwidth whose value will be 100 d. The assignment always takes place from right to left and never the other way d. Constructs an integer type variable with pathwidth as identifier and 100 as value Question 10 In an assignment statement a. an expression or any combination of these c. The rvalue might be a constant.
b.Answers 1. d. Typename 4. c. The first one refers to a variable whose identifier is x and the second one refers to the character constant x 6. Spaces 3. A portion of memory to store a determined value 2. d. a. a. Bitand 5. \w .
All lines beginning with two slash signs are considered comments. c.7. It is a C++ statement that declares a constant in C++ 9. All of above 8. Programmer can use comments to include short explanations within the source code itself. All of above Set 2 Questions Question 1 Identify the correct statement a. d. Comments very important effect on the behaviour of the program d. b. both . Declares a constant pathwidth whose value will be 100 10. c. c. d.
Ampersand symbol (& Two Slashes (//) Number Sign (#) Less than symbol (< Question 3 The file iostream includes a.Question 2 The directives for the preprocessors begin with a. d. c. d. b. c. b. Start() Begin() Main() Output() Question 5 Every function in C++ are followed by . The declarations of the basic standard input-output library. c. b. d. The streams of includes and outputs of program effect. Both of these None of these Question 4 There is a unique function in C++ program by where all C++ programs start their execution a.
These are lines read and processed by the preprocessor They do not produce any code by themselves . c.a. b. c. Parameters Parenthesis Curly braces None of these Question 6 Which of the following is false? a. d.) A Comma (. Cout represents the standard output stream in c++. b. b. b. c. A full stop (.) A Semicolon ( A colon ( Question 8 Which of the following statement is true about preprocessor directives? a. d. Cout is declared in the iostream standard file Cout is declared within the std namespace None of above Question 7 Every statement in C++ program should end with a. d.
c. d. b. Starting every line with double slashes (//) Starting with /* and ending with */ Starting with //* and ending with *// Starting with <!. Use code and /* comment on the same line Use code and // comments on the same line Use code and //* comments on the same line Use code and <!. c.comments on the same line . b. d. d. These must be written on their own line They end with a semicolon Question 9 A block comment can be written by a.c.and ending with -!> Question 10 When writing comments you can a.
7. None of above c. 10. b. A semicolon d. All lines beginning with two slash signs are considered comments. Starting with /* and ending with */ b. 8. Number Sign (#) a. Parenthesis d. Use code and // comments on the same line Set 9 Questions Question 1: A function can not be overloaded only by its return type. 3. 5. 9. True . c.Answers 1. a. The declarations of the basic standard input-output library. They end with a semicolon b. 4. 2. 6. Main() b. c.
Overloaded Function d. a. True b. False Question 2: A function can be overloaded with a different return type if it has all the parameters same. Nested Function c. False Question 3: Inline functions involves some additional overhead in running time. False Question 4: A function that calls itself for its processing is known as a. double . True b. Recursive Function Question 5: We declare a function with ______ if it does not have any return type a. a. Inline Function b.b. long b.
Variable b is of integer type and will always have value 2 b. None of these Question 7: Variables inside parenthesis of functions declarations have _____ level access. Variable a and b are of int type and the initial value of both variables is 2 c. comma (. colon ( d. Variable b is international scope and will have value 2 . Module d. Universal Question 8: Observe following function declaration and choose the best answer: int divide ( int a. a. Global c.) b. int b = 2 ) a. int Question 6: Arguments of a functions are separated with a. Local b. semicolon ( c.c. void d.
a. a. Ends current line and starts a new line in cout statement. \t c. True 4. The last index of it contains the null-terminated character a. Recursive Function . True 2. False 3.d. b. Ends the output in cout statement c. \0 d. \n b. \1 Answers: 1. Variable b will have value 2 if not specified when calling function Question 9: The keyword endl a. There can be no statements after endl d. d. Ends the execution of program where it is written b. Question 10: Strings are character arrays. Ends the line in program.
Comma (. c. Variable b will have value 2 if not specified when calling function 9. Ends current line and starts a new line in cout statement 10.5. a.) 7. c. d. Void 6. d. a. Local 8. \0 .
https://www.scribd.com/document/62508947/c
**Contents** [[!toc levels=2]]

# Introduction

The purpose of this document is to guide you to create a [RAM disk]() image and a custom kernel in order to boot your mini NetBSD off a [Compact Flash]() or use it to debug it in an emulated environment, such as [qemu](). The ramdisk image will have to be inserted into your kernel and then extracted to memory of your embedded device (or your emulator) and used as the root file system, which then can be used by your kernel as if it resided on a "normal" storage media like a hard drive. The steps below were tested in a NetBSD 4.99.20 i386 box.

# Create the ramdisk

First we need to create the ramdisk that will get embedded into the kernel. The ramdisk contains a filesystem with whatever tools are needed, usually [[basics/init|init(8)]] and some tools like sysinst, ls(1), etc.

## Standard ramdisk

To create the standard ramdisk (assuming your source tree lies at `/usr/src` and you have `/usr/obj` and `/usr/tools`):

    # cd /usr/src
    # ./build.sh -O ../obj -T ../tools -u tools
    # cd /usr/src/etc/
    # make MAKEDEV
    # cd /usr/src/distrib/i386/ramdisks/ramdisk-big
    # make TOOLDIR=/usr/tools

## Custom ramdisk

If you want to customize the contents of the filesystem, customize the `list` file. Let's say for example that we need the [[basics/uname|uname(1)]] utility to be included in the ramdisk, which is not by default.

    # cd /usr/src
    # ./build.sh -O ../obj -T ../tools -u tools
    # cd /usr/src/etc/
    # make MAKEDEV
    # cd /usr/src/distrib/i386/ramdisks/ramdisk-big
    # cp list list.old

Then we edit the `list` file, by adding the following line:

    PROG bin/uname

And after having done it:

    # make TOOLDIR=/usr/tools

Either way, you will get something like this:

    # create ramdisk-big/ramdisk-big.fs
    Calculated size of `ramdisk-big.fs.tmp': 5120000 bytes, 65 inodes
    Extent size set to 4096
    ramdisk-big.fs.tmp: 4.9MB (10000 sectors) block size 4096, fragment size 512
    using 1 cylinder groups of 4.88MB, 1250 blks, 96 inodes.
    super-block backups (for fsck -b #) at: 32,
    Populating `ramdisk-big.fs.tmp'
    Image `ramdisk-big.fs.tmp' complete

And verify with:

    $ ls -lh ramdisk-big.fs
    -rwxr-xr-x 1 root wheel 4.9M Jun 19 08:33 ramdisk-big.fs

# Build the kernel

Next we shall build our custom kernel with ramdisk support. We may choose any INSTALL* kernel configuration. Here, we will use INSTALL_TINY:

    # cd /usr/src/sys/arch/i386/conf
    # cp INSTALL_TINY MY_INSTALL_TINY

Then we edit the `MY_INSTALL_TINY` file, and we go to the section:

    # Enable the hooks used for initializing the root memory-disk.
    options MEMORY_DISK_HOOKS
    options MEMORY_DISK_IS_ROOT    # force root on memory disk
    options MEMORY_DISK_SERVER=0   # no userspace memory disk support
    options MEMORY_DISK_ROOT_SIZE=3100 # size of memory disk, in blocks

The size of `MEMORY_DISK_ROOT_SIZE` must be equal to or bigger than the size of your image. To calculate the kernel value you can use the following rule:

    MEMORY_DISK_ROOT_SIZE=10000 would give 10000*512/1024 = 5000 kb

We check that the following lines are un-commented:

    pseudo-device md 1      # memory disk device (ramdisk)
    file-system MFS         # memory file system

Once we are done with the configuration file, we proceed with building our kernel:

    # cd /usr/src
    # ./build.sh -O ../obj -T ../tools -u kernel=MY_INSTALL_TINY

# Insert ramdisk to kernel

Having built our kernel, we may now insert the ramdisk into the kernel itself:

    # cd /usr/src/distrib/i386/instkernel
    # cp Makefile Makefile.old

Then we edit the `Makefile`, to make sure that the `RAMDISKS` and `MDSETTARGETS` variables are set properly. After the modifications, mine looks like this:

    # $NetBSD: how_to_create_bootable_netbsd_image.mdwn,v 1.1 2011/11/20 20:55:21 mspo Exp $

    .include <bsd.own.mk>
    .include "${NETBSDSRCDIR}/distrib/common/Makefile.distrib"

    # create ${RAMDISK_*} variables
    #
    RAMDISKS= RAMDISK_B ramdisk-big

    .for V F in ${RAMDISKS}
    ${V}DIR!= cd ${.CURDIR}/../ramdisks/${F} && ${PRINTOBJDIR}
    ${V}= ${${V}DIR}/${F}.fs
    .endfor

    MDSETTARGETS= MY_INSTALL_TINY ${RAMDISK_B} -
    MDSET_RELEASEDIR= binary/kernel

    .include "${DISTRIBDIR}/common/Makefile.mdset"
    .include <bsd.prog.mk>

Next write:

    make KERNOBJDIR=/usr/obj/sys/arch/i386/compile TOOLDIR=/usr/tools

Should you encounter errors of the following form, try increasing the `MEMORY_DISK_ROOT_SIZE` in your kernel configuration file:

    i386--netbsdelf-mdsetimage: fs image (5120000 bytes) too big for buffer (1587200 bytes)
    *** Error code 1

Provided that everything went ok, you will have the following files in your current directory:

    # ls -lh netbsd*
    -rwxr-xr-x 1 root wheel 21M Jun 20 11:07 netbsd-MY_INSTALL_TINY
    -rw-r--r-- 1 root wheel 1.5M Jun 20 11:07 netbsd-MY_INSTALL_TINY.gz
    -rw-r--r-- 1 root wheel 58K Jun 20 11:07 netbsd-MY_INSTALL_TINY.symbols.gz

# Make image bootable

Finally:

    # cd /usr/src/distrib/i386/floppies/bootfloppy-big
    # cp Makefile Makefile.old

Edit the Makefile if your kernel config has a different name than INSTALL*. Replace `FLOPPYKERNEL= netbsd-INSTALL.gz` with `FLOPPYKERNEL= netbsd-MY_INSTALL_TINY.gz` (where MY_INSTALL_TINY is of course the name of the custom kernel).

    # make TOOLDIR=/usr/tools
    [...]

Final result:

    -rw-r--r-- 1 root wheel 2949120 Jun 20 11:15 boot-big1.fs

Now you are ready to test your image, with qemu for example:

    $ cd /usr/src/distrib/i386/floppies/bootfloppy-big
    $ qemu boot-big1.fs

# Screenshots

(Screenshots from the original wiki page are omitted here.)

# Additional related links

1. [Embedded NetBSD]()
2. [Creating a custom install/boot floppies for i386]()
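The `MEMORY_DISK_ROOT_SIZE` sizing rule described earlier can be sanity-checked with a quick shell calculation. This is an informal sketch (the 5120000-byte figure is the `ramdisk-big.fs` size from the example build output above, and the variable names are mine):

```shell
# Compute the minimum MEMORY_DISK_ROOT_SIZE (in 512-byte blocks) for a ramdisk image.
fs_bytes=5120000                      # size of ramdisk-big.fs in the example above
blocks=$(( (fs_bytes + 511) / 512 ))  # round up to whole 512-byte blocks
kb=$(( blocks * 512 / 1024 ))         # same rule as in the kernel config comment
echo "MEMORY_DISK_ROOT_SIZE=$blocks   # = $kb kb"
```

For the 5120000-byte image this prints a value of 10000 blocks (5000 kb), which is why the example kernel's setting of 3100 blocks triggers the "fs image too big for buffer" error shown above.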
https://wiki.netbsd.org/cgi-bin/cvsweb/wikisrc/tutorials/how_to_create_bootable_netbsd_image.mdwn?rev=1.1;content-type=text%2Fx-cvsweb-markup
Conflicts Mode
The Conflicts mode displays files in your workspace whose changes conflict with another version of that file elsewhere in the stream hierarchy — often, but not always, the version in the parent stream. Overlaps, deep overlaps, and twins are all examples of the types of conflicts displayed in this mode. Note that while other File Browser modes also display conflicts, only the Conflicts mode can show elements with deep overlap conflicts.
See File Browser Modes for more information on other modes.
The Explorer and Details Panes
The Explorer pane in Conflicts mode shows only those directories that are themselves in conflict or that contain one or more files that are in conflict with other versions elsewhere in the stream hierarchy. For example, if a file in the \flounder directory conflicts with another version, the \flounder directory is displayed in the Explorer pane.
To display all conflicts, click the root node. The In Folder column displays the relative path of the conflicted element to help you quickly identify the element’s location in the stream structure; this can be a useful feature when several directories contain conflicted elements.
To display only the conflicts occurring within a specific directory, select that directory.
You can format columns, change the sort order, and filter displayed objects in the Details pane as you would with any other table in AccuRev. See Working with Tables for more information. You can also save any changes you make to the default layout as your preference. See Saving Layout and Other Usage Preferences for more information.
Note on the Timestamp Optimization option: By default, AccuRev uses cached server information to identify which files in the workspace should have a status of (modified). To learn more about this option and when you might want to turn it off, see Timestamp Optimization (TSO) in the AccuRev Administrator’s Guide. See AccuRev Element Status Reference for a discussion of (modified) and other statuses.
Details Pane Toolbar Reference
The following table summarizes the tools available on the Details pane toolbar and where to find more information.
The Diff Pane
When you select an element in the Details pane, AccuRev populates the Diff pane. The panel on the right displays the current workspace content whose changes conflict with a version elsewhere in the stream hierarchy; that version is displayed on the left.
The Diff pane is read only, but the coloring, changes navigation, and other features it uses are similar to those of the AccuRev Diff tool. See Diff Tab Layout for more information.
Tip: You can hide the Diff pane by clicking the Hide Diff Pane button.
Example
In this example, lines 3, 4, and 6 were edited in the workspace version, creating a conflict with the version in the workspace’s parent stream. The Version Browser depicts the relationship between these two versions like this:
In this example, note that the Diff pane depicts the conflict between the workspace content and its parent, the dev stream, but that the Version Browser shows the stream where the change originated, int. This is because the int stream is the stream from which the workspace version’s parent stream (dev) inherited the change.
Tip: Placing the pointer on a version icon displays a tooltip with detailed information about that version.
See The Version Browser to learn more about the Version Browser and its features.
Types of Conflicts Displayed in Conflicts Mode
The Conflicts mode displays files with the following types of conflicts. Except where noted, conflicts can appear in other File Browser modes as well.
- Overlap. An (overlap) status occurs when the element has changed both in the backing stream and in your workspace. (In more technical terms, the current version in the parent stream is not an ancestor of the workspace version.) An (overlap) status can indicate content changes or namespace changes in the parent stream version that are not present in the selected workspace version.
- Deep Overlap. A deep (overlap) status occurs when an overlap condition exists farther up the stream hierarchy than the current parent stream — in the parent stream, in the grandparent stream, and so on, all the way up the depot's stream hierarchy. Because searching the entire stream hierarchy for overlap conditions can take more time, the Conflicts mode does not display deep overlaps by default. See Displaying Deep Overlaps below for more information.
- Twin. A (twin) status occurs when two or more elements in a dynamic stream have the same pathname. This can occur in the following scenario, for example: (1) an element is defuncted in a workspace, (2) the element is promoted to the backing stream, (3) another element is created at the same pathname in the same workspace or a sibling workspace, (4) the new element is promoted to the backing stream.
Displaying Deep Overlaps
To display deep overlaps in the Conflicts mode, click the Include Deep Overlaps check box. When you do this, AccuRev refreshes the Details pane to include any elements with deep (overlap) status. In addition, the name of the stream associated with the conflicting version appears in the Overlap Stream column in the Details pane.
Depending on the depth and complexity of your stream structure, the search for elements with deep (overlap) status can take more time than a search for simple overlaps. To make the search more efficient, the Conflicts mode uses Deep Overlap Optimization, which stops searching for overlaps beyond any time basis stream in the workspace’s stream hierarchy. This optimization improves performance, and also simplifies the display by not showing elements that are not relevant. If you want AccuRev to search the workspace’s entire stream hierarchy for deep overlaps, clear the Deep Overlap Optimization check box. When you do this, any deep overlap elements beyond a time basis stream are displayed with orange instead of yellow highlighting to distinguish them from other conflict types.
Tip: The Deep Overlap Optimization check box setting is saved as a user preference in AccuRev’s preferences.xml file. The setting is applied to all workspaces and streams you subsequently open in the File Browser, though you can change it any time you wish.
Resolving Conflicts
Generally speaking, the typical conflict that occurs in a workspace or a stream is an overlap — the current version in the parent stream is not an ancestor of the version in the current workspace or stream. This can happen, for example, when users in separate workspaces are working on the same file and one of them promotes his or her changes to the workspaces’ parent stream.
To resolve overlap and deep overlap conditions, you must merge your changes with the conflicting version and keep them in the workspace before you can promote your version. If the conflict is in a dynamic stream, you must perform the merge in the Change Palette. See Diff, Merge, Patch, and the Change Palette for more information.
Twin elements can be resolved in the same way as overlaps — by manually merging, keeping, and then promoting the merged version — but the process can be a little more complicated. To help, AccuRev provides a twin resolution wizard that speeds and simplifies the twin resolution process. See Resolving (twin) Status for more information.
https://admhelp.microfocus.com/accurev/en/latest/online/Content/AccuRev-OnLine-Help/fb_conflicts_mode.html
CHAPTER 4: FINANCIAL ANALYSIS OVERVIEW
In the financial analysis examples in this book, you are generally given all of the data you need to analyze the problem. In a real-life situation, you would need to frame the question, determine the type of analysis to do, and collect the data yourself. Only then can you apply the procedures you have learned in the previous chapters.

The first section in this chapter discusses the overall process of conducting a financial analysis, of which calculating a present or future value is only a small part. Selecting an interest rate to use in a financial analysis can be one of the most difficult steps, usually with no clear-cut right or wrong choice. The second section of the chapter provides some guidelines to consider in selecting an interest rate. Chapters 2 and 3 focused largely on the details of how to discount in a variety of different situations. We have not focused much on how the results of a financial analysis should be interpreted and how they should be used in decision making. The third section in this chapter discusses three criteria that are used for assessing the financial merits of projects: the net present value, the benefit/cost ratio, and the internal rate of return. In general, the net present value is the preferred criterion. However, each can provide useful information. A final section of this chapter discusses some ethical concerns that have been expressed regarding discounting and its implications for long-term investments, such as those that are often required in forest management.

1. Steps in Financial Analysis

In a real-life management situation, conducting a financial analysis involves far more than simply calculating a net present value. Generally, you will need to identify the data to use in the analysis. Often, you will have to decide for yourself which data are relevant and should be included. Part of the analysis process is sifting through all of the potentially relevant information to identify what is most important. Sometimes it is not even clear what the question is. The following is a list of general steps typically involved in a financial analysis. It is intended to give you an idea of how you might work your way through the overall process.

1. Identify exactly what the question is. This step is often called "framing the question." It may seem like a trivial step; however, it is perhaps the most important. Usually, a financial analysis is done to provide input into some decision-making process. Here are some questions that should be asked:

FOREST RESOURCE MANAGEMENT 67
- What is the decision that needs to be made? Was there a problem which precipitated the need for a decision? What issues need to be addressed by the decision?
- Who will be affected by the decision? Have all potential stakeholders been considered?
- How will the results of the financial analysis be used in making the decision? Have all of the possible alternatives been considered?

2. Establish the scope of the financial analysis problem. It is generally not necessary, or even desirable, to consider everything that might affect a project. Part of the analysis should involve identifying the key aspects of the project. The following are some basic questions that should be considered before initiating any financial analysis problem.

- Which impacts will be included in the analysis? Will the analysis consider only timber-related impacts, or will it include wildlife, water quality, recreation, aesthetic, or other impacts? Will the analysis consider only impacts that affect the owner of the forest land, or should the analysis also account for impacts on neighboring lands? What about the general public?
- When does the project end? Does it have an end? How far out should you go in considering impacts? What is the time horizon of the analysis?

3. Identify the schedule of events associated with the project.

- When are activities expected to happen? When do benefits occur? When are goods produced? When are services provided? When do costs occur?
- Are there any other significant events that should be considered?

4. Quantify and value events wherever possible. For each good, service, cost or benefit that occurs, three pieces of information are needed: 1) the quantity, 2) the value (generally, this would be the quantity times its price), and 3) the timing of the good, service, cost or benefit (this was determined in step 3). This raises many questions:

- How will the impacts of the project be predicted? What sources of data are available? Are there published data that can be used? If existing data are not available, will original data be collected? To what extent can or should expert opinion be used? Are there models that apply to the situation? For example, how will future prices be predicted? What price data are available? How will future timber yields be predicted? How might impacts on wildlife, water, recreation, or aesthetics be predicted?
- Sometimes items are relatively easy to quantify but difficult to value; for example, the number of acres of a particular type of grouse habitat may be easy to measure, but what is its value?
- Sometimes both quantifying and valuation are difficult; e.g., aesthetics.

5. Select an alternate rate of return and calculate the project's net present value.

- Selecting the alternate rate of return is discussed in the next section of this chapter.
- Chapters 2 and 3 covered the mechanics of calculating the net present value.

There is an amusing story of a drunk who was found, late at night, looking for something under a street lamp. The person who found the drunk asked him what he had lost. The drunk replied, "My keys." The person then asked: "Is this where you lost them?" Again, the drunk replied, "No, but the light is better here." Often, the same thing happens with financial analyses. Wildlife, water, aesthetics and recreation may have very high values, but such values are difficult to quantify. On the other hand, we typically have relatively good information about timber. Thus, we end up analyzing the timber values, ignoring the potentially larger, but less quantifiable values associated with other forest resources. This book also falls into this trap, partly because that's where the light is. Many of these other values can be quantified and valued, but it is difficult, and foresters and forest economists are still learning how.

2. Selecting an Interest Rate

In this book, you are generally given the interest rate to use in solving each problem. As a forest manager or consultant, you may have to decide for yourself what rate you should use when conducting financial analyses. Often, in a real-world situation, there is no clear right or wrong interest rate to use. However, there are some basic principles that you should consider in selecting an interest rate. Recall one of the terms that is commonly used to describe the discount rate: the alternate rate of return (ARR). This term reflects the first rule in selecting a discount rate:

- The applicable discount rate for a financial analysis is the rate the investor (you, your client, your company, the government, etc.) can earn in their best comparable alternative investment.

The word "comparable" is important here because some alternatives are not really equivalent. A comparable investment will be similar in terms of the five factors discussed in Chapter 3 that affect the real rate of return, namely: risk, liquidity, transactions costs, taxes and the time period of the investment. It is generally impossible to find a perfectly comparable alternative investment, so some judgement will usually be required. Thus, if the risk associated with the alternative investment is not similar to the risk associated with the investment under consideration, then the rate of return on that alternate investment may need some adjustment before it is used as an alternate rate of return.
A key consideration when selecting a discount rate is the financial position of the person or company for whom the analysis is being done.

- If the person (or company) is going to borrow money to carry out the project, then the rate of interest on the loan is usually the best discount rate.
- If the person (or company) is going to invest their own money in the project, then the ideal discount rate would be the rate of return on the investment that the money would be used for if the project was not pursued.

Finally, many organizations have a set discount rate that they require you to use for any financial analyses you conduct for them. If the organization you are working for has a specified rate, then obviously that is the rate you should use. Again, be sure you know whether the rate you choose is a real rate or a nominal rate, and make sure you project real future values if and only if you are going to use a real discount rate, and nominal future values if and only if you are going to use a nominal discount rate. This is an extremely important point. If you apply the wrong kind of rate, your analysis will be worse than no analysis at all!

3. Alternative Financial Criteria for Project Evaluation

The primary purpose of doing a financial analysis of a project is to evaluate the project's profitability or cost-effectiveness relative to some alternative project or investment. Frequently, the results of the financial analysis are used to compare alternative projects to select which ones should be implemented. Sometimes projects are mutually exclusive, such as alternate prescriptions for a stand. In this case, only one project will be selected and the task is solely to determine which of the choices is best. In other cases, any or all of the projects can be implemented, and the task is to identify all of the projects which should be pursued. Several different financial criteria have been proposed for comparing different projects. This section reviews three that are commonly used. Most economists agree that the net present value is the best, but all have some value. Of course, financial criteria will generally not be the only criteria used in deciding which project or projects to select.

Net Present Value (NPV)

The NPV is the sum of all of the discounted net benefits (benefits minus costs) associated with a project. It is the most widely accepted criterion for selecting between projects.

    NPV = \sum_{t=0}^{T} \frac{Revenue_t - Cost_t}{(1+i)^t}
5 The criterion for project acceptability is NPV > 0. A NPV > 0 indicates that the project will be able to pay interest on all of the capital invested in the project, plus earn an excess return (or true profit) equal to the NPV. As a general rule, all projects with a positive NPV should be pursued. If all non-mutually exclusive projects with a positive NPV cannot be pursued due to limited capital, then capital is really more scarce than implied by the interest rate, and the alternate rate of return used in the calculations does not reflect the true cost of capital. In this case, a higher discount rate should be used. In general, for mutually exclusive projects, a project with a higher NPV is better than a project with a lower NPV. This is not a rule that should be applied blindly, however. One project may have a higher NPV simply because it is a bigger project, with proportionally large investment requirements. The Benefit/Cost Ratio is also useful because it takes into account the relative size of the investment. Benefit/Cost Ratio (B/C) The B/C is the ratio of the discounted benefits over the discounted costs. It measures the size of the benefits of a project relative to the costs of the project. B / C = T T Revenuet Costt t ( 1+ i) ( 1+ i) t = 0 t = 0 t The criterion for project acceptability is B/C > 1; that is, the discounted project benefits should be greater than the discounted project costs. As with the NPV, all non-mutually exclusive projects meeting this criterion should be pursued. Note that all of the projects with a B/C > 1 will also have NPV > 0. However, the ranking of projects may be quite different under the two criteria. What should be done when the NPV and the B/C result in conflicting recommendations for choosing among a set of mutually exclusive projects? In other words, what if two mutually exclusive projects are ranked differently by these two criteria one having the higher NPV and the other having the higher B/C ratio? 
The answer will generally be the project with the higher NPV. The project with the higher NPV will generally have higher capital requirements, but, assuming that the cost of this capital has been properly accounted for, the capital required by this project will be well invested. Keep in mind, however, that the financial analysis seldom captures all of the relevant information about the projects under consideration, and these financial criteria are usually not the sole factor in selecting a preferred alternative. In ambiguous cases where different financial criteria point toward different conclusions, the factors not included in the financial analysis may tip the balance toward one project or the other.
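The B/C ratio can be sketched the same way as the NPV, except that the benefit and cost streams are discounted separately rather than netted (illustrative names; not from the text):

```python
def bc_ratio(revenues, costs, rate):
    """Discounted benefits over discounted costs for streams indexed t = 0..T."""
    pv = lambda stream: sum(x / (1 + rate) ** t for t, x in enumerate(stream))
    return pv(revenues) / pv(costs)

# Example: a project with a $100 cost in year 0 and $60 of revenue at the
# end of each of the next two years, discounted at 6%.
print(round(bc_ratio([0, 60, 60], [100, 0, 0], 0.06), 2))  # ratio just above 1
```

As the text notes, B/C > 1 holds exactly when the project's NPV is positive, so the two acceptability criteria always agree even though the rankings they produce may differ.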
Internal Rate of Return (IRR)

The IRR is the discount rate for which the NPV of a project is 0. The criterion for project acceptability is that IRR > ARR. It is widely used, largely because it seems easy to interpret. The IRR is not generally a good way to evaluate investment alternatives.

The IRR assumes that all of the profits (net revenues after accounting for costs) from a project should be counted as a return to the capital used in the project. However, if the correct alternate rate of return is known, then this rate is the only payment that should be attributed to the capital used in the project. For example, if all of the capital used for a given project is borrowed at a given interest rate, then the interest paid on the borrowed capital is the true and total cost of the capital. It is not necessary to pay any more than this for the use of the capital. If a project has a positive net present value when the appropriate alternate rate of return is used, then this positive net present value should be interpreted as true profits. Generally, the true profits of a project should be attributed to the skill of the designers and managers of the project. (Sometimes these true profits are attributed to the land, as when we calculate a land expectation value. This makes some sense when we are trying to assess the value of a piece of forest land. However, if we know what the land is worth, or what it would rent for, then attributing the excess profits to the land would also be incorrect.)

Another problem with the IRR has to do with how intermediate costs or returns are treated. The IRR assumes that funds to cover intermediate costs are borrowed at the IRR and that intermediate returns can be reinvested at the IRR. This is perhaps the most serious shortcoming of the IRR, as the borrowing rate and the reinvestment rate will usually be independent of the project under consideration.
To properly calculate an IRR, therefore, you need to explicitly account for the rate paid for borrowed money and the rate received on reinvested intermediate returns. However, if you know what these rates are, then you already have all the information you need to select an appropriate alternate rate of return to use in calculating the NPV. You will sometimes get different results from the maximum NPV criterion than from the maximum IRR criterion. So, which investment is the best: the one that maximizes the NPV or the one that maximizes the IRR? The answer is usually the one that maximizes the NPV.

Even though the IRR should generally not be used to decide which of two or more projects is best, it does give some useful information about a project, and you should know what it is and how to calculate it. Frequently there is no direct way to calculate an IRR, so in practice the IRR is calculated iteratively by calculating a net present value with different interest rates until the resulting net present value is approximately zero. A solve-for function (Tools > Goal Seek in Microsoft Excel) is available in most spreadsheet packages and can be very useful for finding a precise IRR. Most spreadsheets also have an IRR function that can be used to solve directly for the IRR of a stream of costs and revenues.
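The iterative search described above can be sketched as a bisection on the discount rate. This is an illustrative implementation; the bracketing rates and tolerance are assumptions, and it presumes the NPV falls as the rate rises (true for a conventional cost-now, revenue-later project):

```python
def irr(npv_at, lo=1e-4, hi=1.0, tol=1e-7):
    """Find the rate where NPV crosses zero by repeated halving of the bracket."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv_at(mid) > 0:
            lo = mid   # NPV still positive: the IRR is higher than mid
        else:
            hi = mid   # NPV negative or zero: the IRR is no higher than mid
    return (lo + hi) / 2

# Sanity check: $100 out today, $110 back in one year has an IRR of 10%.
print(round(irr(lambda i: -100 + 110 / (1 + i)) * 100, 2))  # → 10.0
```

A spreadsheet's solve-for or IRR function does essentially this search for you.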
Example: NPV, IRR and B/C Ratio

Consider two alternative investments. Option 1 is an example of low-intensity forest management. With Option 1 you make a minimal investment of $50 per acre to establish the stand. At the end of each year, for 30 years, you will incur an annual management expense of $25 per acre with Option 1. At the end of 30 years, you expect to receive $4,000 per acre in stumpage fees. Option 2 is a more intensive management option. With Option 2, your initial stand establishment investment is $400 per acre, and your annual management expenses are projected to be $50 per acre. With Option 2, however, you expect to receive considerably more in stumpage fees: $8,000 per acre. Table 4.1 summarizes the cash flows for the two options.

Table 4.1. Cash flow summary for alternative investments.

    Item                    Year   Option 1   Option 2
    Initial investment      0      $50        $400
    Annual mgmt. expense    All    $25        $50
    Final expected return   30     $4,000     $8,000

a. Calculate the NPV for these two investments using four interest rates: 4%, 6%, 8%, and 10%.

Answer: The general formula for the net present value in this example is:

    NPV = -E - R [(1 + i)^30 - 1] / [i (1 + i)^30] + V_30 / (1 + i)^30

where E = the initial (establishment) investment, R = the annual management expense, and V_30 = the final expected revenue in year 30. Table 4.2 shows the NPVs for the two options using the four interest rates, and Figure 4.1 below shows a graph of the results.
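The formula above can be checked with a short script (a sketch; the function name is mine, and the figures are per acre):

```python
def stand_npv(E, R, V30, i, T=30):
    """NPV per acre: -E, minus the annuity of annual expenses R,
    plus the final revenue V30 discounted from year T."""
    annuity = ((1 + i) ** T - 1) / (i * (1 + i) ** T)
    return -E - R * annuity + V30 / (1 + i) ** T

for label, (E, R, V) in [("Option 1", (50, 25, 4000)),
                         ("Option 2", (400, 50, 8000))]:
    print(label, [round(stand_npv(E, R, V, i)) for i in (0.04, 0.06, 0.08, 0.10)])
```

Evaluating at the four interest rates reproduces the pattern discussed below: both options are profitable at low rates, and both turn negative as the rate rises.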
Figure 4.1. Net present value of alternative investments at several interest rates.

Table 4.2. NPVs for alternative investments under several discount rates.

    Interest Rate   Option 1   Option 2
    4%              $751       $1,202
    6%              $302       $305
    8%              $66        -$168
    10%             -$56       -$413

b. Calculate the B/C ratio for these two investments using four interest rates: 4%, 6%, 8%, and 10%.

Answer: The general formula for the benefit-cost ratio in this example is:

    B/C = [V_30 / (1 + i)^30] / [E + R ((1 + i)^30 - 1) / (i (1 + i)^30)]

where the variable definitions are the same as in part a.
Figure 4.2. Benefit-cost ratios of alternative investments at several interest rates.

Table 4.3 shows the B/C ratios for the two options using the four interest rates, and Figure 4.2 shows a graph of the B/C ratios.

Table 4.3. B/Cs for alternative investments under several discount rates.

    Interest Rate   Option 1   Option 2
    4%              2.56       1.95
    6%              1.77       1.28
    8%              1.20       0.83
    10%             0.80       0.53

c. Estimate the internal rate of return for the two investments to the nearest hundredth of a percent.

Answer: Recall that the IRR is the interest rate at which the net present value of the investment equals zero. Looking at Figure 4.1, it is pretty clear that the IRR of Option 1 is about 9% and the IRR of Option 2 is a little over 7%. In order to calculate the IRR more precisely than this, it is usually necessary to use an iterative process such as the following:
1. Start with an initial estimate of the IRR.
2. Calculate the NPV with the current estimate of the IRR.
3. If the NPV is close enough to zero, stop. If the NPV > 0, increase the estimate of the IRR and return to step 2. If the NPV < 0, decrease the estimate of the IRR and return to step 2.

This can be a laborious process if done with a calculator. However, most spreadsheets have built-in functions for calculating IRRs. Also, most spreadsheets have some kind of solve-for function, which allows you to use the computer to find the value of a variable that results in a function taking on some target value. I used Quattro Pro's solve-for function to identify the IRRs of these two investments. The IRRs are given in Table 4.4.

Table 4.4. IRRs for alternative investments.

           Option 1   Option 2
    IRR    8.91%      7.13%

There are a few things you should note about this example. First, Option 1 has a higher B/C ratio at all of the interest rates considered. Furthermore, Option 1 has a higher IRR. In addition, if one's ARR is greater than 6%, Option 1 has a higher NPV. In spite of the fact that Option 1 outperforms Option 2 by all of these criteria, however, if the true cost of capital is less than or equal to 6%, then Option 2 is the better investment. This is because the NPV of Option 2 is higher than the NPV of Option 1 when the interest rate is less than or equal to 6%.

4. Is Discounting Unfair to Future Generations?

Some people believe that discounting future values is wrong, even immoral. At least one major religion, Islam, forbids paying and charging interest. Yet the American economic system is predicated on the borrowing and lending of money. Is charging and earning interest good or bad? Is charging interest acceptable in some situations and not others? Islam's ban on charging interest was a reaction to the usurious interest rates that were common in ancient cultures. (Usury is charging an exorbitant amount of interest; imagine a loan shark.)
A more modern argument against discounting (and implicitly against charging interest) is that it is unfair to future generations. The discount rate is determined by the current generation without any input from future generations. Since the discount rate can be interpreted as an exchange rate for the value of goods tomorrow versus the value of goods today (see Chapter 2, Section 2), there is a tradeoff between how we value the well-being of current and future generations that is implicit in the interest rate. Interest rates are like a price set in a market where one group of traders (the current generation) gets to set the price and everyone else (all future generations) has to live with that price. Some people suggest that, while it is necessary to discount, lower interest rates should be used for investments with particularly desirable social benefits.

I am not convinced by these arguments. First, it is not clear that low interest rates are always better for future generations. Not discounting at all is the same thing as using an interest rate of zero. When someone says they don't believe in discounting, or that they believe that discounting is unfair and should not be done, they are essentially suggesting that interest rates should be zero. However, if interest rates were zero, there would be no incentive for anyone to save money. If no one saved any money, there would be no investment, and that would not be good for future generations at all. Thus, a discount rate of zero would not necessarily be good for future generations.

So, would positive but lower interest rates be good for future generations? Low interest rates do benefit future generations in some ways. For example, with low interest rates more investment projects that benefit future generations would be able to pay the interest on the capital used in the project and still show a profit. Using a lower interest rate for projects which are particularly beneficial to future generations is a way of biasing the analysis in favor of those projects. However, with low interest rates fewer people would be willing to invest their money at all, so there would be less money available for investments in general. In order to raise the capital to fund all the projects that would be profitable at lower-than-market interest rates, the government would have to subsidize these investments.
This would represent a transfer from the current generation to future generations. A good argument can be made that current generations are better off than earlier generations and that future generations are likely to be even better off than we are. In that case, why should the current generation subsidize future generations? Furthermore, if these subsidies are funded by deficit spending, as is usually the case for modern governments, future generations will end up paying for the subsidies anyway. It is best to think of an interest rate as an equilibrium price where the supply of investment money is just high enough to satisfy the need for funds from projects that can meet the given interest rate.

One reason suggested for using lower-than-market interest rates for some projects is that there are benefits associated with those projects that are not accounted for in the cost-benefit analysis. This may be true; however, we should not arbitrarily bias the financial analyses of such projects for this reason. A better approach is to use financial analysis as only one of the criteria for evaluating projects. Benefits that are difficult to quantify in a financial analysis should be addressed by the other criteria used to evaluate projects.

It is probably true that an equilibrium interest rate that is lower will benefit future generations more than an equilibrium rate that is higher. But too low an interest rate may not be fair to
current generations. What, then, is the best interest rate: one that is fair to all generations? What mechanism should be used to lower interest rates if society decides interest rates should be lower? How should interest rates be lowered if the ideal interest rate were lower than the current market interest rate? These questions, while very interesting, are beyond the scope of this course.

5. Study Questions

1. What are the basic steps in performing a financial analysis? What kinds of questions should you consider? What kinds of information will you need?
2. What kind of questions should you ask yourself when considering the scope of a financial analysis?
3. What three basic pieces of information are usually needed for each good, service, cost or revenue for a financial analysis of a project?
4. Give an example of a good or service produced by forests that is easy to quantify but difficult to value. Give an example of a good or service produced by forests that is difficult to quantify and to value.
5. Why is the discount rate sometimes called the alternate rate of return?
6. What basic principle should always be considered when selecting an interest rate?
7. How does the financial position (as a lender or borrower) of the investor affect the choice of the discount rate for a financial analysis?
8. What is the NPV?... the B/C?... the IRR?
9. What is the criterion for project acceptability for the NPV?... for the B/C?... for the IRR?
10. How is the IRR of a project generally calculated?
11. Why is the IRR generally not a good way to evaluate investment alternatives?
12. How can higher interest rates hurt future generations? How can higher interest rates help future generations?
13. Explain why not discounting at all is the same thing as using an interest rate of zero.
6. Exercises

*1. Consider the following investment. You invest $500 per acre to establish a stand of trees. At the end of each year for 30 years, you incur an annual management expense of $10. At the end of 30 years, you receive $5,000 per acre in stumpage fees.

a. Calculate the NPV and B/C ratio for this investment.
b. Calculate the internal rate of return on this investment to the nearest tenth of a percent.
Details
Description
I've been working on this off-and-on for awhile, but it's currently in a state where I feel like it's worth sharing: I came up with an implementation of the Crunch APIs that runs on top of Apache Spark instead of MapReduce.
My goal for this is pretty simple; I want to be able to change any instances of "new MRPipeline(...)" to "new SparkPipeline(...)", not change anything else at all, and have my pipelines run on Spark instead of as a series of MR jobs. Turns out that we can pretty much do exactly that. Not everything works yet, but lots of things do-- joins and cogroups work, the PageRank and TfIdf integration tests work. Some things that do not work that I'm aware of: in-memory joins and some of the more complex file output handling rules, but I believe that these things are fixable. Some thing that might work or might not: HBase inputs and outputs on top of Spark.
This is just an idea I had, and I would understand if other people don't want to work on this or don't think it's the right direction for the project. My minimal request would be to include the refactoring of the core APIs necessary to support plugging in new execution frameworks so I can keep working on this stuff.
Activity
Gabriel Reid that error makes sense to me, actually-- the CombineFn generic definition has more constraints on it than the DoFn<S, Pair<K, V>> has at that point (i.e., there's a constraint that S must be Pair<K, Iterable<V>>). I'm fine with that fix.
Re: caching, I agree with you. I hesitated at first, but then I realized that a lot of the use of materialize() is really to signal a split point in a pipeline, and that returning the PCollection instead of an Iterable will be more literate.
Looks good to me in general, although I'm running into one small issue – it's not compiling for me directly under maven (jdk 1.7.0_40 on Mac OS X), although it is fine compiling within Eclipse.
The issue is line 42 in BaseDoTable.java. Replacing that line with
return (CombineFn) fn;
resolves the issue for me, although looking at that change that's required to get it to compile and the generics info that's being thrown away, I wonder if there is something else to worry about there.
At first I wasn't so sure about the new cache() methods on PCollection, but thinking about it more I think it's actually even a more logical naming for the way that materialize() is currently used to cache results of a computation on disk, so I'm all for it.
Gabriel Reid this is my (hopefully!) last stab at this; I added support for writing multiple PCollections to the same output target and also created cache() and cache(CachingOptions) methods on PCollection to expose Spark's caching functionality to the client (in MR mode, cache() is just syntactic sugar for a materialize call where you don't care about the outputs.) I feel ready to commit.
Adds support for requireSortedKeys() and the trick for applying combine functions in Spark that we discussed on the list. (Thanks to Gabriel Reid for that.)
Yeah, that sounds right. I'm thinking that this is basically just a matter of adding a sortedKeys() or similar method to GroupingOptions and GroupingOptions.Builder, right?
Off the top of my head I would expect that this is probably only needed in the sorting and joining library code (or am I totally underestimating the impact that this would have)?
Okay-- so are you cool w/me editing the code in the o.a.c.lib packages to explicitly turn on sorting where it is necessary?
Cool!
About the sorting issue, I think that turning it off by default and adding the option in GroupingOptions to turn it on makes the most sense. As far as I understand, the main point of Spark is improved performance, so it would seem wrong to lose a portion of the speed improvement for something that can be handled as easily as just only turning it on when needed.
Latest and greatest: passes all of the sorting and mapside join integration tests.
It turns out that our sorting library (in a completely unsurprising fashion) expects that keys will be sorted in the reducer. But there's no reason that has to be the case in Spark, and I'm sure that there are a lot of use cases where turning off sorting would be a welcome speed improvement. So here's my question: do we turn off sorting in Spark by default and add an option in GroupingOptions to turn it on, or do we leave it on by default in Spark and add an option to turn it off?
Here's a new patch that integrates the changes in CRUNCH-294.
I went through the patch, and it looks good to me. I haven't got any experience with Spark, so I can't say much about the Spark stuff, but the refactoring in crunch-core looks good, and looks like it should fit well if we want to expand to Tez or something else in the future as well. I also like the speed that the Spark integration tests run at.
+1 on the idea. Still need to review the patch.
Excited for this.
Thanks Tom. My goal for the project is to be useful to MapReduce developers, and I suspect that many MapReduce developers are going to become Tez/Spark developers in the coming years. I think that anything we can do to smooth those transitions and ensure that they can easily select the right framework for the job at hand is a worthwhile goal for this community.
This sounds like a great addition!
Regarding whether it fits in the project, I think it does. MapReduce is the workhorse, and I can't see it going away, but Spark and Tez (both in the Apache Incubator) can be more efficient for certain types of pipelines, so it makes sense to support them as alternative execution engines. For comparison, work is currently underway to make Hive and Pig both take advantage of the more flexible DAGs that Tez supports, so it's natural to do something similar in Crunch.
Looks and sounds very interesting – I'm definitely looking forward to taking a closer look at this and playing around with it.
I think it's worth considering where we want to go with this (and/or where we don't want to go with it), as it is straying away from the tagline of "Simple and Efficient MapReduce Pipelines". That being said, as long as this doesn't get in the way of working with MapReduce (assuming that's what the intention of Crunch will remain), then I'm all for it.
The gigantic patch.
Ridiculously huge patch committed.
https://issues.apache.org/jira/browse/CRUNCH-296
Managed Metadata fields have always been slightly painful for SharePoint developers, but if you did any kind of site templating or Feature development in SharePoint 2010, chances are that you did some research, read some blog articles and came to understand the solution. Here I’d like to talk about it for Office 365, and show a little customization I built to help with the process we currently use. This article became long, so I split it over two articles:
- Provisioning Managed Metadata fields in Office 365 – dealing with multiple environments [this article]
- Provisioning Managed Metadata fields in Office 365 – building WSP packages for a specific environment/tenancy
So, back on the SharePoint 2010 situation - the deal is that some Managed Metadata details change between SharePoint environments – so unlike other fields, the same provisioning XML could not be used across dev/test/UAT/production. To be specific, the IDs of the Term Store, Group and Term Set all change between environments. As a reminder, broadly the solution was to:
- Use XML to define the “static” details of the field
- Use server-side code to “finish the job” – i.e. use the API to ask SharePoint what the IDs are (for the current environment), and then update the field config with that value
Without the 2nd step, the field will be created but will be broken – it will be grayed out and users cannot use it (SP2010 example shown here):
Posts by Ari Bakker (whose image I’m using above, thanks Ari), Wictor Wilen and Andrew Connell were popular in discussing the steps to solve this.
It’s the same deal in SharePoint 2013/Office 365 – but it turns out we need different techniques.
Why the approach doesn’t work for Office 365/SharePoint Online
Well firstly, code in sandboxed solutions is deprecated. Full-stop. [RELATED - did you hear? Microsoft are starting to clarify that sandboxed solutions without code aren’t really deprecated after all (and hopefully MSDN will reflect this soon), but CODE in sandboxed solutions is deprecated and could be phased out in future versions. Clearly this is a very important distinction.]
But even if we were happy to use sandboxed code - in Office 365/SharePoint Online, we cannot use the Microsoft.SharePoint.Taxonomy namespace in server-side code anyway – the net result is that we are unable to “finish the job” in this way to ensure the field is correctly bound to the Term Store. This is a problem! Even worse, whilst it is possible in the CSOM API to bind the field, having this execute in the provisioning process (e.g. as a site is being created from the template) is challenging, maybe impossible. Maybe you could come up with some imaginative hack, but that’s probably what it would be. And what happens if this remote code (e.g. a Remote Event Receiver) fails?
Possible solutions
A colleague of mine, Luis Mañez, did some great research – I’ll give you a quick summary here, but I strongly recommend reading his article - Deploying Managed Metadata Fields declaratively in SharePoint 2013 Online (Office 365). Here’s a summary:
In fact, it IS possible to provision Managed Metadata fields without any code, if you are willing to accept a big trade-off – you can declaratively specify the key details (such as the Term Store ID (also known as the SspId), the Group ID, the Term Set ID etc.) into your XML field definitions. Wictor alluded to this possibility in his post. But remember, these details change between environments!
So in other words, the trade-off is that you would need to rebuild your WSPs for each environment.
This is tricky for us, because on this project we choose to run multiple Office 365 tenancies, for development/test/production (something I’ll talk about in future posts) – just like a traditional mature process. So at first we said “No way! That’s against all of our ALM principles! The exact same packages MUST move between the environments!”. But then we rationally looked at the alternatives we could see:
- Option 1 - Some elaborate “remote code” solution, perhaps involving code running separately AFTER the site has been provisioned. Until this code executed, it would not be possible to upload documents to libraries with MM fields within the sites (and similarly if this remote call actually fails for any reason, these libraries would not function correctly until an administrator intervenes).
- Option 2 - The client needs to fix-up any Managed Metadata fields manually – across all 5000 sites we were expecting. In every list and library. Knowing that some lists/libraries have up to 5 such fields. Yeah….
Since neither of these was attractive, we continued looking at this idea of a 100% declarative definition of Managed Metadata fields. And then we realized that..
..if you do things in a certain way, you can get to the point where ONLY THE TERM STORE ID (SspId) CHANGES BETWEEN ENVIRONMENTS. That’s kinda interesting. It means that just one find/replace operation is all that’s needed – assuming you’re happy to accept the overall trade-off. Of course, having to replace the SspId is still sub-optimal, error-prone and less than awesome. But maybe we could work on that too – and that’s what these posts are really about - to show a Visual Studio customization we made to simplify this process, and make it less prone to human-error. If you want to skip ahead to this, see Provisioning Managed Metadata fields in Office 365 – Part 2: building WSP packages for a specific environment/tenancy.
But first, let’s talk about that “if you do things in a certain way” thing (which means that the only the SspId changes between environments)..
The full recipe for Managed Metadata fields
Taking a step back for a second, if you are in “development mode” (e.g. creating a template for SharePoint sites), then successful provisioning actually involves more than just provisioning the field itself in a certain way. Effectively you should seek to provision both the Term Sets AND the fields. Do not allow administrators to create new Term Sets in the admin interface. This is because:
- This way, you can control the ID of all your Term Sets – rather than let SharePoint generate that GUID
- Because this is a static “known” ID, we can reference it elsewhere
Here’s what needs to happen:
- Term Sets are provisioned into the Term Store with “known” IDs
- We create Term Sets (often with some default Terms) in Office 365 using PowerShell and CSOM, instead of through the admin interface. In the article I referenced earlier, Luis provides you with the core PowerShell to do “taxonomy provisioning” in this way. In our project, we basically hook this up to an XML file which defines the desired Term Store structure (e.g. Groups, Term Sets etc.) – and this XML file defines the GUIDs of such objects; WE generated them – not SharePoint/Office 365
- The “known IDs” are then used in the XML definition of the fields
- The code sample below is an example of a Managed Metadata field being provisioned the 100% declarative way. Notice all the properties being specified in the ‘Customization’ section (something a field using the combined declarative + code approach does not have):
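As a sketch of what that declarative definition looks like — every GUID below is a placeholder and the field name is illustrative, not taken from a real tenancy — the key point is the extra properties in the 'Customization' section:

```xml
<!-- Illustrative sketch only: all GUIDs are placeholders -->
<Field ID="{11111111-1111-1111-1111-111111111111}"
       Name="COBDocumentType"
       DisplayName="Document Type"
       Type="TaxonomyFieldType"
       ShowField="Term1033"
       Required="FALSE"
       Group="Custom Columns">
  <Customization>
    <ArrayOfProperty>
      <Property>
        <Name>SspId</Name>
        <!-- Term Store ID: the ONE value which differs between environments/tenancies -->
        <Value xmlns:q1="http://www.w3.org/2001/XMLSchema" p4:type="q1:string"
               xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">22222222-2222-2222-2222-222222222222</Value>
      </Property>
      <Property>
        <Name>TermSetId</Name>
        <!-- A "known" ID, because we provisioned the Term Set ourselves -->
        <Value xmlns:q2="http://www.w3.org/2001/XMLSchema" p4:type="q2:string"
               xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">33333333-3333-3333-3333-333333333333</Value>
      </Property>
      <Property>
        <Name>TextField</Name>
        <!-- ID of the hidden note field which backs the taxonomy field -->
        <Value xmlns:q3="http://www.w3.org/2001/XMLSchema" p4:type="q3:string"
               xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">{44444444-4444-4444-4444-444444444444}</Value>
      </Property>
    </ArrayOfProperty>
  </Customization>
</Field>
```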
If you do this, your Managed Metadata fields will work just fine:
Great. It’s certainly very valuable to know this is possible for Office 365. So now we have things so that only the SspId value needs to change between environments. But that’s still a nasty find/replace operation – how could we make this situation better?
I describe the mechanism we use in the next post - Provisioning Managed Metadata fields in Office 365 – Part 2: building WSP packages for a specific environment/tenancy.
https://www.sharepointnutsandbolts.com/2013/09/provisioning-managed-metadata-fields-in-Office-365.html
(1) work out the desired probability distribution function such that the area under a portion of the curve is equal to the probability of a value being randomly generated in that range, then
(2) integrate the probability distribution to determine the cumulative distribution, then
(3) invert the cumulative distribution to get the quantile function, then
(4) transform your uniformly distributed random data by running it through the quantile function,
and you’re done. Piece of cake!
Next time: a simple puzzle based on an error I made while writing the code above.
(*) A commenter correctly points out that the set of real values representable by doubles is not uniformly distributed across the range of 0.0 to 1.0, and that moreover, the Random class is not documented as guaranteeing a uniform distribution. However, for most practical applications it is a reasonable approximation of a uniform distribution.
(**) Using the awesome Microsoft Chart Control, which now ships with the CLR. It was previously only available as a separate download.
"This graph has exactly the same information content as the "histogram limit" probability distribution; its just a bit easier to read."
I don't get this comment, not in its current absolute form.
Yes, if the information you're trying to get at is the probability of getting a value less than some given value, the integral graph is easier to read. But if the information you're trying to get at is the relative predominance of any given value, the original bell-curve graph is easier to read.
That's the whole point of graphing/visualization. It reveals the data in ways that can be intuitively understood. But many types of data (including random samples) have a variety of interesting facets that are useful to understand. We select a type of graph of the data that best shows us the facet of interest. Each type of graph is only "easier to read" inasmuch as it fits the facet of interest. Pick a different facet, and a different type of graph is "easier to read".
Claiming that the integral graph is easier to read than the bell-curve graph presupposes that what we care about is the cumulative distribution.
If in the article we had started with that statement — that we care about the cumulative distribution — then the comment in question could have been made in a context-dependent way. But in its current form, with the absolute generalization, it seems prejudiced against bell-curve graphs. 🙁
Good post Eric, I enjoyed reading it
"In fact, no matter how many buckets of uniform size you make, if you have a large enough sample of random numbers then each bucket will end up with approximately the same number of items in it."
There is a theorem called the law of large numbers, which says that the average result of an experiment will go closer to the expected value if one performs more trials. In other words if I do just a few experiments the histogram of a uniform distribution will look like a curve with several sharp bends. If I do infinitely many experiments the histogram will look like a perfect straight line parallel to a horizontal axis.
That settles it – next time I'm in near Seattle, I'll definitely have to get you some of these:
flowingdata.com/…/plush-statistical-distribution-pillows
This is standard work in applying a lookup transformation (LUT) to a grayscale image. LUT's have been in image processing since the 1980s. Just about any colored image of Saturn uses one since the original images are in grayscale.
Applying the inverse CDF of the distribution of interest to uniform variates is a very generic method, quite robust. Unfortunately, some quite common distributions have inverse CDFs that are expensive to compute. In these cases, "acceptance methods" (aka acceptance-rejection) can sometimes be faster, although they sound inefficient. In fact, the familiar Marsaglia / Box-Muller methods of generating standard normal variates from uniform ones are acceptance methods. See Wikipedia for details.
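A minimal sketch of the acceptance-rejection idea the commenter mentions — this toy example (sampling the semicircle density by rejecting from a bounding box) is mine, not something from the Random class:

```python
import math
import random

def semicircle(rng=random.random):
    """Accept/reject sampling from pdf f(x) = (2/pi) * sqrt(1 - x^2) on [-1, 1],
    using the uniform box [-1, 1] x [0, 2/pi] as the envelope."""
    peak = 2.0 / math.pi                      # maximum height of the pdf
    while True:
        x = 2.0 * rng() - 1.0                 # candidate drawn from the envelope
        y = rng() * peak                      # uniform height under the envelope
        if y <= peak * math.sqrt(1.0 - x * x):
            return x                          # accepted: point falls under the pdf
        # otherwise rejected: try again

random.seed(1)
xs = [semicircle() for _ in range(50_000)]
mean = sum(xs) / len(xs)                      # symmetric pdf, so the mean is near 0
```

Despite the retry loop, for densities with a tight envelope the expected number of candidates per accepted sample is small, which is why these methods can beat an expensive inverse CDF.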
The proposal to simulate Cauchy variables sounds dodgy to me. The Cauchy distribution has no defined moments – from the mean on up, they don't exist. But of course, any finite sample from the Cauchy distribution does not have this property. Speaking generally, simulations based on the Cauchy distribution will tend not to be very similar when started from different seeds.
The Microsoft Codename "Cloud Numerics" Lab (…/numerics.aspx) has a good set of statistical functions for .NET including the inverse cumulative density function for many distributions. Despite the name, you don't have to run it in the cloud.
I'm glad I can just #include <random> in C++11 now for all this. Is there not already something like a System.Numerics.Random namespace? Consider me surprised that C++'s anemic standard library has something the BCL doesn't if so.
@Simon Personally I'd think anyone needing say a cauchy distribution, will almost certainly need some additional statistical libraries anyhow – and then these functions could easily be added there. It just doesn't seem like an especially "standard" thing to do.
Now I'm certainly not against vast standard libraries, but only the c++ guys could add support for half a dozen different distributions, but still not include a XML parser.
@voo: Not to get too far off topic, but unlike the BCL, the C++ standard is entirely maintained by volunteers. If you'd like to volunteer to specify an XML parser and represent your ideas at standard committee meetings for 5+ years to make sure it gets into the standard, please do so. Unfortunately, this time around, no one was willing to do that. There is an effort underway to build a much larger set of standard libraries for C++. Herb Sutter talked about this a bit in his Day 2 keynotes at the recent Going Native 2012 conference: channel9.msdn.com/…/C-11-VC-11-and-Beyond. Take a look at the video of his day 2 session.
»If I do infinitely many experiments the histogram will look like a perfect straight line parallel to a horizontal axis.«
Only with very high probability. 😉
I was surprised to hear you say that the .NET pseudo-random generator creates a unifrom distribution of doubles. Neither the Random.NextDouble() online documentation (msdn.microsoft.com/…/system.random.nextdouble.aspx), nor the intellisense comments make this guarantee. Since the valid values for a double type are _not_ uniformly distributed, I saw no reason for Random.NextDouble() to be. If developers can rely on a uniform distribution, it would be nice for the documentation to reflect that.
Cool, Thank you!!
@Mashmagar
The documentation for the constructor of `Random` (don't ask me why they put it there, instead of on the method itself) states:
> The distribution of the generated numbers is uniform; each number is equally likely to be returned.
But the documentation only describes what the implementers wanted to do. The actual implementation is so bad, that depending on what you do, you get significant biases.
* NextDouble has only 2 billion(2^31) different return values. There are around 2^53 (too lazy to look up the exact value, might be wrong by a factor of 2) different doubles in the interval between 0 and 1.
* Next(maxValue) For some carefully chosen maxValues, it's *very* biased.
For the full story, see my rant at stackoverflow.com/…/445517
@Code in Chaos:
I did analyze the implementation. It is almost a correct implementation of an algorithm designed by Donald Knuth. (Actually Knuth's algorithm was a block-based algorithm, which was (correctly) converted to a stream random generator by the book Numerical Recipes in C (2nd Ed.)
There is one mistake in the implementation which will drastically reduce the period of the RNG. They also threw away 56 more values than Knuth specified for algorithm startup, but that is harmless.
The algorithm proper returns an integer in the range [0,int.MaxValue]. To get a Double they multiply it by 1/int.MaxValue. That creates a fairly reasonable approximation to a uniform double in the range [0,1].
The problem is how they generate integers. They literally do `(int)(Sample()*maxValue)` where Sample() is equivalent to NextDouble(). That is pretty obviously a terrible way to generate a random Integer, and is what results in most of the bias.
Finally, the original implementation of Random was technically buggy, since NextDouble (and Next) were specified to return [0,1) and [0,maxValue) respectively, but they actually returned [0,1] and [0,maxValue].
Note that the maxValue return for int only occurred very rarely, 1 in int.MaxValue times on average.
However the way they fixed this bug results not only in additional bias, (although very slight), but also completely ruins the period of the generator.
That was especially boneheaded, since there is a trivial change they could have made instead (requiring the addition of only 4 characters to the source code, not counting whitespace) that would cause neither bias nor damage to the generator's period.
https://blogs.msdn.microsoft.com/ericlippert/2012/02/21/generating-random-non-uniform-data-in-c/
Add A New Dimension To The End Of A Tensor In PyTorch
Add a new dimension to the end of a PyTorch tensor by using None-style indexing
< > Code:
Transcript:
This video will show you how to add a new dimension to the end of a PyTorch tensor by using None-style indexing.
First, we import PyTorch.
import torch
Then we print the PyTorch version we are using.
print(torch.__version__)
We are using PyTorch 0.4.0.
Let’s now create a PyTorch tensor of size 2x4x6x8 using the PyTorch Tensor operation, and we want the dimensions to be 2x4x6x8.
pt_empty_tensor_ex = torch.Tensor(2,4,6,8)
This is going to return to us an uninitialized tensor which we assign to the Python variable pt_empty_tensor_ex.
Let’s check what dimensions our pt_empty_tensor_ex Python variable has.
print(pt_empty_tensor_ex.size())
We see that it is a 2x4x6x8 tensor.
What we want to do now is we want to add a new axis to the end of this tensor.
So it’ll be 2x4x6x8x1.
The way we’re going to do this is we’re going to use the None-style indexing.
So we pass in our initial tensor, pt_empty_tensor_ex, and then we’re going to do indexing to specify what it is that we want.
pt_extend_end_tensor_ex = pt_empty_tensor_ex[:,:,:,:,None]
So for the first index, we pass in a colon to specify that we want everything in the already existing first dimension.
For the second index, we use a colon to specify that we want everything in the already existing second dimension.
For the third index, we use a colon to specify that we want everything in the already existing third dimension.
For the fourth index, we specify with a colon that we want everything in the already existing fourth dimension.
Then finally, because we had four dimensions and now we’re going to add something new, we are going to use the None-style indexing with a capital N.
What we are specifying here is that we want to create a new axis right at the end of the tensor.
This tensor is then going to be assigned to the Python variable pt_extend_end_tensor_ex.
Let’s check the dimensions by using the PyTorch size operation to see if PyTorch did create a new axis for us at the end of the tensor.
print(pt_extend_end_tensor_ex.size())
Bingo! We see that we now have a new axis at the end.
Before, it was 2x4x6x8.
Now, it’s 2x4x6x8x1.
Perfect - We were able to add a new dimension to the end of a PyTorch tensor by using None style indexing.
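For reference, the same trailing axis can be added a couple of other ways — a sketch using a recent PyTorch (where torch.empty is preferred over the uninitialized torch.Tensor constructor shown in the video):

```python
import torch

t = torch.empty(2, 4, 6, 8)      # uninitialized tensor, like torch.Tensor(2, 4, 6, 8)

a = t[:, :, :, :, None]          # the None-style indexing from the video
b = t[..., None]                 # Ellipsis shorthand: "all existing dims", then a new axis
c = t.unsqueeze(-1)              # unsqueeze(-1) also appends a new dimension at the end

print(a.shape, b.shape, c.shape) # each is torch.Size([2, 4, 6, 8, 1])
```

All three produce views with the same 2x4x6x8x1 shape, so which one to use is a matter of taste.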
https://aiworkbox.com/lessons/add-a-new-dimension-to-the-end-of-a-tensor-in-pytorch
I'm not sure it is absolutely intended, but it is at least expected behavior: when you write f(x) = <expr>, SageMath actually creates a "callable symbolic expression" from <expr> (in which x is considered a symbolic variable). This means that <expr> itself must initially be a symbolic expression (or anything that can be transformed into a symbolic expression). But this is not the case for divisors(n), since divisors is not a symbolic function applicable to a symbolic variable. Thus you must use the def ... return ... construction, or the equivalent lambda expression f = lambda n: divisors(n).

In other words, there is a difference between a symbolic function, which is a Sage object made to represent mathematical functions (thus you can work with it, for instance differentiate it, integrate it, etc.), and a Python function, which is a function in the computer-science sense, that is, a subroutine. The shorthand f(x) = <expr> is a construction for symbolic functions, not for Python functions.
https://ask.sagemath.org/answers/47675/revisions/
Liberty Series Release Notes¶
12.2.5¶
The upgrade playbook nova-flavor-migration.yml will perform a migration of nova flavor data. This will need to be completed prior to upgrading to Liberty. It is recommended that Kilo be deployed from the eol-kilo tag prior to upgrading to Liberty to ensure that this task is completed successfully.
This upgrade task is related to bug 1594584.
12.2.2¶
Known Issues¶
- For OpenStack-Ansible Liberty releases earlier than 12.2.
12.2.0¶
Upgrade Notes¶
- During a kilo to liberty upgrade,.
Bug Fixes¶
- The repo_build role now correctly applies OpenStack requirements upper-constraints when building Python wheels. This resolves
12.1.0¶
Upgrade Notes¶
-.
12.0.16¶
New Features¶
The audit rules added by the security role now have key fields that make it easier to link the audit log entry to the audit rule that caused it to appear.
Upgrade Notes¶
- During the upgrade from Kilo to Liberty, this change deletes the repo containers and recreates them to fix an upgrade issue with dependencies.
Bug Fixes¶
The role previously did not restart the audit daemon after generating a new rules file. The bug has been fixed and the audit daemon will be restarted after any audit rule changes.
The security role now handles ssh_config files that contain Match stanzas. A marker is added to the configuration file and any new configuration items will be added below that marker. In addition, the configuration file is validated for each change to the ssh configuration file.
12.0.15¶
New Features¶
Deployers can now blacklist certain Nova extensions by providing a list of such extensions in the horizon_nova_extensions_blacklist variable, for example:

horizon_nova_extensions_blacklist:
  - "SimpleTenantUsage"
Added the horizon_apache_custom_log_format tunable to the os-horizon role for changing the CustomLog format. Default is "combined".
Added keystone_apache_custom_log_format tunable for changing CustomLog format. Default is “combined”.
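Both log-format tunables would typically be set as overrides in /etc/openstack_deploy/user_variables.yml — a sketch, where the Apache LogFormat string is just an illustrative value:

```yaml
# user_variables.yml -- illustrative override values
horizon_apache_custom_log_format: "combined"
keystone_apache_custom_log_format: "%h %l %u %t \"%r\" %>s %b"
```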
Upgrade Notes¶
The Kilo to Liberty upgrade playbook glance-db-storage-url-fix.yml will migrate all existing Swift-backed Glance images inside the image_locations database table from a Keystone v2 API URL to a v3 URL. This will force the Swift client to operate against a v3 Keystone URL. A backup of the old image_locations table is stored inside a new database table image_locations_keystone_v3_mig_pre_liberty and can be safely removed after a successful upgrade to Liberty.
This upgrade task is related to bug 1582279.
Bug Fixes¶
-.
12.0.14¶
New Features¶
-.
Known Issues¶
- Ceilometer does not support V3 endpoints in Liberty, which are the flavor created by OSA. To deploy Ceilometer, some endpoints in the Keystone service catalog must be removed and replaced with V2 endpoints. This is necessary, for example, to use the Swift pollster to collect metrics for Swift storage via the Swift endpoint. For detailed instructions on the steps for these changes to the service catalog, see the OpenStack Liberty Install Guide.
Upgrade Notes¶
- A new nova admin endpoint will be registered with the suffix /v2.1/%(tenant_id)s. The nova admin endpoint with the suffix /v2/%(tenant_id)s may be manually removed.
12.0.13¶
New Features¶
-.
Security Issues¶
- A sudoers entry is added to the repo_servers to allow the nginx user to stop and start NGINX from the init script. This ensures that the repo sync process can shut off NGINX while synchronizing data from master to slaves.
12.0.12¶
12.0.11¶
12.0.10¶
New Features¶
- The haproxy-install.yml playbook will now be run as a part of setup-infrastructure.yml.
- LBaaS v2 is available for deployment in addition to LBaaS v1. Both versions are mutually exclusive and cannot be running at the same time. Deployers will need to re-create any existing load balancers if they switch between LBaaS versions. Switching to LBaaS v2 will stop any existing LBaaS v1 load balancers.
- New rabbitmq-server role override rabbitmq_async_threads defaults to 128 threads for IO operations inside the RabbitMQ erlang VM. This setting doubled the threads for IO operations.
- New rabbitmq-server role override rabbitmq_process_limit defaults to 1048576 for number of concurrent processes inside the erlang VM. Each network connection and file handle does need its own process inside erlang.
- Services deploy into virtual environments by default when the service relies on Python. Find the virtualenv for each service under /openstack/venvs/ on the host or in the container where the service is deployed. Disable the use of virtualenv by overriding the service-specific variable (for example cinder_venv_enabled), which defaults to True.
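For example (an illustrative override, not taken from the release notes themselves), a deployer opting cinder out of the per-service virtualenv might add the following to /etc/openstack_deploy/user_variables.yml, using the variable named in the note above:

```yaml
# /etc/openstack_deploy/user_variables.yml
# Opt cinder out of the per-service virtualenv (defaults to True).
cinder_venv_enabled: False
```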
Known Issues¶
- Depending on when the initial Kilo deployment was done, it is possible the repository servers have a pip.conf locking down the environment which limits the packages available to install. If this file is present it will cause build failures as the repository server attempts to build Liberty packages.
- Services deploy into virtual environments by default when the service relies on Python. On upgrade any Python packages installed on the host or container are not upgraded with the release unless the virtualenv for that service is disabled. There might be older and possibly broken packages left on the system outside of the virtualenv, which can cause confusion for those who attempt to use Python-based tools or services without using the virtualenv. These left over packages can be manually removed at the operator’s discretion.
Upgrade Notes¶
- Existing LBaaS v1 load balancers and agents will not be altered by the new OpenStack-Ansible release.
- When upgrading from early Kilo versions of OpenStack-Ansible, the RabbitMQ minor version may need to be upgraded during the upgrade process. This is noted in both the manual steps and the run-upgrade.sh script.
- To fix this issue the pip.conf file needs to be removed from all repository servers. The upgrade playbook repo-server-pip-conf-removal.yml will remove the pip.conf file from the repository servers if it's found.
12.0.9¶
Known Issues¶
- For OpenStack-Ansible Liberty versions <12.0.9 and Kilo versions <11.2.12, the package pywbem will fail to build due to the update to v0.8.0 including new requirements which are not met by the repo server. This issue has been resolved in 12.0.9. A workaround for this is to set pywbem<0.8.0 in the file global-requirement-pins.txt.
- For OpenStack-Ansible Liberty versions >12.0.7,<12.0.9 the wheel version pinned in OpenStack-Ansible (0.29.0) is higher than the OpenStack upper-constraint (0.26.0). This causes an issue where the repo-server install may fail because it cannot find a version of wheel to install that meets the requirements of <0.26.0 and ==0.29.0. A workaround for this issue is to change the wheel package pin to wheel==0.26.0 in the following files:
playbooks/inventory/group_vars/hosts.yml
requirements.txt
12.0.8¶
New Features¶
All apt repository components are used by default. If deployers wish to change this to reduce the components configured, then the variable lxc_container_template_apt_components may be set in /etc/openstack_deploy/user_variables.yml with the full list of desired components.
A new variable called lxc_container_cache_files has been implemented, which contains a list of dictionaries that specify files on the deployment host which should be copied into the LXC container cache and what attributes to assign to the copied file.
Known Issues¶
- There is a bug in the version of keepalived which ships with Ubuntu 14.04 which results in all backup nodes having the same priority. This causes the automatic failover to fail when more than two keepalived hosts are configured. To work around this issue it is recommended that deployers limit the number of keepalived hosts to no more than two, or that each host is configured with different priorities.
- Neutron currently does not support enabling the port_security extension driver cleanly for existing networks. If networks are created and the plugin is enabled afterwards, VMs connected to those networks will not start. See bug
Upgrade Notes¶
- During the upgrade process new secrets, such as passwords and keys, will be generated and added to
/etc/openstack_deploy/user_secrets.yml. Existing values will not be changed.
- The Heat cache directory moves from /var/cache/heat to /var/lib/heat/cache/heat. This only applies to heat deployments that use PKI tokens.
- When upgrading from Kilo to Liberty, the port_security extension driver will not be configured due to the known issues with enabling it after creating networks.
- Some variable names have been changed to reflect upstream design decisions (such as Nova's default API version), or to provide clarity. These require updating in /etc/openstack_deploy/user_*.yml for any overrides to continue to work. See the upgrade documentation <> for details.
Deprecation Notes¶
- The Nova 2.1 variables (nova_v21_<variable>), Heat name variables (heat_project_domain_name, heat_user_domain) and Galera SST method (galera_sst_method) variables have changed. See the upgrade documentation <> for details.
Bug Fixes¶
- Fix bug by ensuring that the --insecure flag is passed to the cinder CLI tool during task execution
-)
12.0.7¶
New Features¶
- Keystone’s v3 API is now the default for all services.
- MariaDB version 10.x is now the default in OpenStack-Ansible.
- The percona-xtrabackup repository is now enabled in OpenStack-Ansible and it allows deployers to install and use Percona’s XtraBackup project to perform online backups of data stored in MariaDB.
- Deployers now have the option to set the wsrep SST method via the galera_wsrep_sst_method variable.
- Deployers can specify the authentication credentials to be used with wsrep by configuring galera_wsrep_sst_auth_user and galera_wsrep_sst_auth_password.
- The Galera installation process has been optimized and takes less time to complete.
- Each service using RabbitMQ now has a separate vhost and user.
Upgrade Notes¶
The ceilometer alarming functionality has been moved into aodh. The ceilometer_alarm_notifier and ceilometer_alarm_evaluator entries are removed from the /etc/openstack_deploy/env.d/ceilometer.yml file.
aodh.yml and haproxy.yml will be copied into
/etc/openstack_deploy/env.d. LBaaS agent information will be added to
/etc/openstack_deploy/env.d/neutron.yml.
When Glance is configured to use a swift store backend, it will use Keystone v3 authentication by default via the glance_swift_store_auth_version variable.
Two new options were added for handling authentication with Swift storage backends - glance_swift_store_user_domain and glance_swift_store_project_domain. Both are set to default and can be adjusted if deployers use a different Keystone domain to authenticate to swift.
The Keystone configuration has been updated for liberty. Several variables that may appear in the user_config.yml file may need to be updated. Those variables include:
- keystone_identity_driver
- keystone_token_driver
- keystone_token_provider
- keystone_revocation_driver
- keystone_assignment_driver
- keystone_resource_driver
- keystone_ldap_identity_driver
Deployers should review the defaults provided in playbooks/os_keystone/defaults/main.yml and adjust any variables in user_variables.yml if they exist there.
Deployers can optionally remove the Keystone v2 endpoints from the database. Those endpoints will not be removed by the upgrade process.
The max connections setting for Galera is now determined automatically by taking the number of vCPUs available and multiplying it by 100. Deployers may override this default via the galera_max_connections variable.
The upstream MariaDB init script has replaced the custom init script that was provided by OpenStack-Ansible in previous versions.
The galera_upgrade variable is now provided to allow the MariaDB role to update existing installs.
The neutron_driver_network_scheduler variable default has changed from ChanceScheduler to WeightScheduler to match the new Neutron defaults.
The neutron_driver_quota variable default has changed slightly to match the new upstream driver paths.
The LinuxBridge configuration that was in plugins/ml2/ml2_conf.ini is now found in plugins/ml2/linuxbridge_agent.ini.
Two Neutron variables have been deprecated and are now removed from OpenStack-Ansible - neutron_l3_router_delete_namespaces and neutron_dhcp_delete_namespaces.
The Nova project has set the v2.1 API as the default and those configuration variables have changed. Variables that began with nova_v21_* in the Kilo release are now renamed to nova_*. All new Liberty deployments will have only the v2.1 API registered in the service catalog.
The S3, v3, and EC2 APIs have been deprecated by the Nova project in the liberty release. Those variables have been removed. They include variables that begin with nova_s3_*, nova_ec2_*, and nova_v3_*.
The variables beginning with openstack_host_systat_ in the openstack_hosts role have been renamed to openstack_host_sysstat_. This was done to better reflect their dependency on sysstat.
Each service using RabbitMQ now has a separate vhost and user. The shared / vhost is cleaned up so that it contains only the default data. The shared user ‘openstack’ is removed.
Nova now utilizes version 2 of the Cinder API. Tempest is now configured to use the v2 Cinder API as well.
The upgrade process will backup and re-configure the /etc/openstack_deploy directory. This includes inserting new environment details, updating changed variable names, and generating newly added secrets.
Security Issues¶
- The glance_digest_algorithm has changed from sha1 to sha256, and this improves integrity verification of stored images.
Bug Fixes¶
-.
12.0.6¶
New Features¶
- Keystone can now be configured for multiple LDAP or Active Directory identity back-ends. Configuration of this feature is documented in the Keystone Configuration section of the Install Guide.
Upgrade Notes¶
-.
http://docs.openstack.org/releasenotes/openstack-ansible/liberty.html
Most exam boards offering Computer Science GCSE and A Level courses allow students a choice of which programming language to use. This usually includes Python along with other options like Java, C# or Visual Basic.
In some ways Python is less strict than the others, and this can cause problems for students. For example in Python, there is no need to specify the type of variables, including function arguments. There is also no need to specify the return type from a function. However these details are expected by examiners. Compare the following code snippets:
Python
def fib(n):
    a = 0
    b = 1
    for i in range(n):
        temp = a + b
        b = a
        a = temp
    return a
C#
public static int Fibonacci(int n)
{
    int a = 0;
    int b = 1;
    for (int i = 0; i < n; i++)
    {
        int temp = a + b;
        b = a;
        a = temp;
    }
    return a;
}
You can see that the Python version has a lot less information about data types. One solution to giving the examiner what they want when writing program code solutions to questions in Python is to use comments to explicitly name the type of a variable. However, with Python 3’s type annotations there is a better way. Take a look a this code:
def fib(n: int) -> int:
    a: int = 0
    b: int = 1
    i: int
    for i in range(n):
        temp: int = a + b
        b = a
        a = temp
    return a
Can you see how the type hints are now built into the code like in other languages? There is one detail that you may have noticed – since you can’t use :
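One point worth stressing for exam candidates (my addition, not part of the original article): Python annotations are hints for tools such as mypy; the interpreter records them but does not enforce them at runtime. A quick sketch:

```python
def fib(n: int) -> int:
    a: int = 0
    b: int = 1
    for i in range(n):
        temp: int = a + b
        b = a
        a = temp
    return a

# The parameter and return annotations are stored as metadata...
print(fib.__annotations__)  # {'n': <class 'int'>, 'return': <class 'int'>}
# ...but Python itself does not check them when the function runs.
print(fib(10))  # 55
```

Running mypy over a file like this is what actually checks the types; plain CPython ignores them.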
https://compucademy.net/python-type-annotations-for-gcse-and-a-level-computer-science/
Just What Do You Think You Are Doing, Dave?
In my adventures as an ASP.NET web-slinger, I have grown very fond of the GridView control. Right out of the box, it will sort, page, and edit your data. I am reasonably sure that it also implements the IJulienneFries interface. However, it does not come prepared to mitigate that tickly sensation in the pit of your stomach when you click on a Delete button and data vanishes without warning.
A computer questioning the intentions of its human user is a time-honored tradition. Remember HAL 9000? No slouch on user command validation there. Sadly, our beloved GridView does fall short in that department. Well, take heart; we’re going to add a JavaScript confirm dialog to the auto-generated delete buttons on your GridView and bring peace to the stomach pits of users everywhere.
Extending the GridView
Now, this functionality can be achieved by simply handling an event in the code-behind of the page that hosts your GridView. Sadly for code-behind event handlers everywhere, though, I find it way snazzier to extend the GridView control itself. This way, the modification becomes portable and reusable.
Let’s begin by creating a new class file in the App_Code folder of your application. Name it “SafeGridView.cs” (yes, we’re using C#). Add the class to a namespace–I used Aptera.BlogSamples. This will become important when we are ready to use our control later. Now, have it inherit from System.Web.UI.WebControls.GridView. Also, we won’t need the constructor–go ahead and remove the one that Visual Studio so helpfully generates.
using System.Web.UI.WebControls;

namespace Aptera.BlogSamples
{
    public class SafeGridView : GridView
    {
    }
}
The ConfirmDeleteText Property
The first thing we need to add to our SafeGridView is a property to contain the delete button confirmation text. We’ll call it ConfirmDeleteText:
public string ConfirmDeleteText { get; set; }
This property should be editable in design view. As is, it will show up on the properties grid for the control at design time. However, it will be unceremoniously relegated to the “Misc.” category, which, as we all know, is the support group for properties whose daddies did not hug them enough. We want our property to occupy some prime real estate in the “Behavior” category, and come with a handy description for our UI developers, so they know right away that this property means business. To accomplish this, we need to decorate it with the System.ComponentModel.CategoryAttribute and the System.ComponentModel.DescriptionAttribute:
using System.ComponentModel;

// snip

[Category("Behavior")]
[Description(@"Gets or sets the text to be displayed in the delete confirmation dialog. If this is left blank, no confirmation dialog is shown.")]
public string ConfirmDeleteText { get; set; }
Modify the Delete Buttons On RowDataBound
Now we want to hook into the GridView’s RowDataBound event, because it is a reasonable assumption that when a row has been bound to data, it will also have a delete button rendered in it. In the overridden OnRowDataBound method, we will locate the delete button and add the javascript needed to raise a confirm dialog to its OnClientClick property:
protected override void OnRowDataBound(GridViewRowEventArgs e)
{
    // do we have confirmation text?
    if (!string.IsNullOrEmpty(this.ConfirmDeleteText))
    {
        // find the row's delete button
        LinkButton deleteButton = FindDeleteButton(e.Row);
        if (deleteButton != null)
        {
            // add the confirm call with our text
            deleteButton.OnClientClick = string.Format("return confirm('{0}');", this.ConfirmDeleteText);
        }
    }

    // do any other RowDataBound event handlers
    base.OnRowDataBound(e);
}
By checking the ConfirmDeleteText for a value before adding confirmation, we leave the UI developer the option of not using our feature, or of changing its use at runtime, perhaps based on some business logic.
It’s worth noting at this point that a GridView can generate regular Button command buttons, rather than LinkButton command buttons. However, the default option is for them to be LinkButtons, and for simplicity’s sake, I will leave it as an exercise for the reader to have this control accommodate both button types.
Find The Delete Buttons
The most challenging part of this whole modification is actually locating the automatically generated delete buttons. This is done in our FindDeleteButton method, which starts out like this:
protected virtual LinkButton FindDeleteButton(GridViewRow row)
{
    // default to null
    LinkButton button = null;

    //TODO: find the delete button here!

    return button;
}
The delete button itself is a child control of the first cell of the row that we are passing in to this function, so our first step in putting flesh on the bones of this method is to get a reference to the first cell:
TableCell commandCell = row.Cells[0];
As is his/her station in life, our UI developer could make things difficult and move the CommandField that contains the delete button to a position other than the first cell. Again, however, I will leave the handling of this non-default to case to the reader’s boundless imagination.
At this point, we can query the cell’s LinkButtons using LINQ to find the one that issues the “Delete” command:
using System.Linq;

// snip

button = (from LinkButton b in commandCell.Controls.OfType<LinkButton>()
          where b.CommandName == DataControlCommands.DeleteCommandName
          select b).FirstOrDefault();
So our final FindDeleteButton method looks like this:
protected virtual LinkButton FindDeleteButton(GridViewRow row)
{
    // default to null
    LinkButton button = null;

    // get the first cell
    TableCell commandCell = row.Cells[0];

    // Query the cell's LinkButtons for the delete button
    button = (from LinkButton b in commandCell.Controls.OfType<LinkButton>()
              where b.CommandName == DataControlCommands.DeleteCommandName
              select b).FirstOrDefault();

    return button;
}
Configuration
And so, our SafeGridView control is complete. Before we can use it, though, there is one more step we need to follow to make the control visible to our ASP.NET pages. Open up your web.config file, and add the following line under configuration/system.web/pages/controls:
<add tagPrefix="aptera" namespace="Aptera.BlogSamples"/>
Of course, if you used a different namespace in your SafeGridView definition, or if you prefer that the tag prefix of your control be “metallica”, have at it. I mean, they really did redeem themselves on Death Magnetic. Just make sure that this config setting reflects your choices.
Using the SafeGridView
Now we can add our SafeGridView control to any page, like so:
<aptera:SafeGridView ID="MySafeGrid" runat="server" />
Switch over to design view, assign a data source to your SafeGridView. (I will spare you the details on this operation, but if you must know, I used Northwind’s Employee table and a SqlDataSource component for this example.) Enable Editing and Deleting, and then visit the control’s properties grid.
Find our immaculately categorized and described ConfirmDeleteText property, and set it to whatever verbiage you may wish to accost our beloved delete button users with. At this point, we can run the website, and witness the fruits of our labor borne upon clicking any of the grid's delete buttons.
I feel safer already.
https://code.jon.fazzaro.com/2008/09/11/adding-delete-confirmation-to-the-asp-net-gridview/
[following up on some compile options]

hi there all... :)

> A great idea. Are you familiar with the automake / autoconf tools?
> If yes, why don't you go ahead and create the necessary files?
> This upgrade would greatly increase the usability of sword.

hmmm, just had a quick look (I'm not familiar with autoconf, etc), and unfortunately I don't have time to get to know it before 1.5.2, but perhaps I'll make myself familiar with it? unless someone else already understands how to get it all up and running in 5 seconds?

but a quick hack that may work (not sure?): I added the lines:

UNAME_ARCH = `uname -m`
UNAME_CPU = ${UNAME_ARCH}

under the "#compiler" section in the Makefile.cfg and changed the intel check from -m486 to be:

ifeq ($(system),intel)
CFLAGS += -mcpu=${UNAME_CPU} -march=${UNAME_ARCH}
endif

NOW, I've only tested this under Mandrake 7.2, so have no idea what this shall do to other systems... ;) but that's my 2 cents of effort with the amount of time I have free this week...

what it's doing is this: The string "`uname -m`" is being passed to the command line, and so each time gcc is being called, uname is being called to figure out the arch of the machine...

Anyway, warned you it was a hack... :)

nic... :)

ps: thanks for that info on WINE... just thought it might be easier to combine all frontend efforts into a single project, and cut down duplicated effort... :)

--
"Morality may keep you out of jail, but it takes the blood of Christ to keep you out of Hell." -- Charles Spurgeon

-------------------------------------------------
Interested in finding out more about God, Jesus, Christianity and "the meaning of life"? Visit
-------------------------------------------------
http://www.crosswire.org/pipermail/sword-devel/2001-June/012130.html
Number of times characters of a string is present in another string
This problem is one of the classic competitive questions dealing with strings. We are given two strings: the main string and the sample string. We need to count how many times each character of the main string appears in the other string, and finally return the total count. Before we move on to the implementation, note that every character of the first string should be unique; duplicates would simply double the count for no reason.
Example :
Input String 1: aA String 2: aaaAAA Output : 6 # 'a' and 'A' occurs 3 times each in string 2 Input String 1: abBAc String 2: cZZz Output : 1 # 'c' occurs 1 time in string 2, others never occur.
Solution Approach
The solution for this problem can be obtained by comparing each character in string 1 with all characters in string 2, updating the count value every time they match, and at last displaying the total sum of all occurrences.
Steps
- Iterate through each character in string 1
- For each character of string 1, iterate through all the characters in string 2
- For every match/occurrence of the characters, i.e. when both are the same, increment the count value
- Repeat this process until all characters in string 1 are iterated
- Finally display the total value.
Explanation
Iterating through the whole of string 1 can be done with a simple for loop running to the end, i.e. the size/length of the string, but we need to nest another loop inside it to iterate through all the characters in string 2 as well.
Comparing the characters is simple: we compare them directly using ' == ', so a match is counted only when the characters are exactly the same (no case conversion is done, so capital and small letters are distinct).
Finally, while iterating we store the count value in a separate variable, increment it each time there is a match, and print it at the end as our answer.
Pseudocode
- Start
- Read String 1 and store it.
- Read String 2 and store it.
- pass both the strings to a function/loop to start comparing
- Initialize a separate variable to store the count value
- Now start iterating through string 1 and compare with all the characters in string 2.
- For every match increment the count value
- Repeat this for all characters in string 1
- At last print the count value
- Exit.
Program in C++
#include <iostream>
using namespace std;

int occur(string a, string b)
{
    int count = 0;
    for (int i = 0; i < a.size(); i++)
    {
        for (int j = 0; j < b.size(); j++)
        {
            if (a[i] == b[j])
                count++;
        }
    }
    return count;
}

int main()
{
    string a, b;
    cout << "Enter String 1" << endl;
    cin >> a;
    cout << "Enter String 2" << endl;
    cin >> b;
    int answer = occur(a, b);
    cout << answer;
    return 0;
}
Example Step by Step Explanation
Input
- Initial steps: read both the strings and store them separately
string1 = zoe string2 = opengenus
pass both the string to a function
First we initialize count = 0 meant to store the count value
iteration starts
for every character in string 1 we iterate through all the characters in string 2
increment the count value if they match
count = 0; string1 = zoe; string2 = opengenus;
z
z == o, z == p, z == e, z == n, z == g, z == e, z == n, z == u, z == s.
no matches, no increment
count = 0
o
o == o ++, o == p, o == e, o == n, o == g, o == e, o == n , o == u, o == s.
one match count++
count = 1
e
e == o, e == p, e == e ++, e == n, e == g, e == e ++, e == n, e == u, e == s.
two matches count += 2
count = 3
all characters in string 1 have been iterated
display the count value
count = 3
End
Thoughts and different approaches
This is a standard string comparison problem. It is a bit easy as there are no constraints or conversions and all characters are compared directly. You should now have a solid grasp of how to find the number of times the characters of one string are present in another. Enjoy.
https://iq.opengenus.org/number-of-common-characters/
Well, everyone, here I am again. (I'm becoming a familiar face, am I???):confused:
I am working on an program in Borland C++ Builder 6 in which I have to create a program that asks the user for their name, what package they choose, and how many hours did they use. (Keep in mind that this program is not completed.)
Well, I have the program, and here it is.
#include <iostream>
#include <iomanip>   //Needed for the showpoint and setprecision commands
#include <fstream>   //Needed to use files
#include <string>
#include <conio>     //Needed to show black output screen
using namespace std;

int main()
{
    char choice;
    int hours;
    string name;
    double charges;

    // Displays the menu choices on the screen.
    cout << "\t\tInternet Service Provider\n";
    cout << "\t\tSubscription Packages\n\n";
    cout << "A: For $9.95 per month, can get 10 hours of\n";
    cout << "   access. Additional hours are $2.00 per hour. \n\n";
    cout << "B: For $14.95 per month, you can get 20 hours\n";
    cout << "   of access. Additional hours are $1.00 per hour. \n\n";
    cout << "C: For $19.95 per month unlimited access is provided.\n\n";

    cout << "Please enter your name. ";
    getline(cin, name);
    cout << "Which subscription package would you like?\n";
    cin.get(choice);

    cout << fixed << showpoint << setprecision(2);

    if (choice == 'A' || 'a')
    {
        charges = 9.95 + (
        cout << "Your charges are $ " << charges << endl;
    }
    else if (choice == 'B' || 'b')
    {
        charges =
        cout << "Your charges are $ " << charges << endl;
    }
    else if (choice == 'C' || 'c')
    {
        charges =
        cout << "Your charges are $ " << charges << endl;
    }
    else if (choice > 'C' || > 744)
    {
        cout << "You must choose packages A, B, or C. Also, your hours\n";
        cout << "must not exceed 744.\n";
    }
    getch();
    return 0;
}
Here is my problem:
I am trying to find out what calculations to use on the charges line.
The choices are:
A- $9.95 a month 10 hours are provided. Additional hours are $2.00 per hour.
B. $14.95 a month 20 hours are provided. Additonal hours are $1.00 per hour.
C. $19.95 per month. Unlimted access is provided.
For example, if a user chooses package A, and they use 15 hours that month, then that user would pay the $9.95 charge (for the 10 hours) plus an additional $10.00 ($2.00 × 5 extra hours), which would be $19.95 total. I know how to calculate it, I just need help figuring out how to write it in C++.
Any input is appreciated. :p :cool:
https://www.daniweb.com/programming/software-development/threads/74747/need-help-with-calculations
0. I believe that the `dict` behavior needs to be frozen. The change will break a lot of existing code; it's too much damage.
0.1. Yes, `keys` is not a good name for internal use, but that's okay.
0.2. If I want to make a class look like a `dict`, I understand that I will get `keys`, `items`... This is what I expect.
0.3. When I work with dicts in my code, I have a choice: I can use the default `dict`, or I can create my own dict-like class and implement different behavior.
0.4. `other[k] for k in other.keys()` default behaviour in `dict.update(other: Dict)` is a different big question about the degree of damage. Basically I can use `dict.update(dict.items())`.
Back to the stars:
1. `*`, `**` are operators, but behaviorally they are methods or functions. I think this is the main problem.
1.1. Python operators (mostly?) use their dunder methods to control their behavior.
1.2. Unpack operators are nailed to specific objects and their behavior, like a function or method. As a result, we lose control over them.
2. `*` nailed to `Iterable`; not so bad.
2.1. It uses the `__iter__` method. I can implement any behaviour.
2.2. I only see one problem: I can't realize different behaviors for iterating and unpacking inside a custom class.
2.3. A new special method for unpacking is a better idea. By default, this method should return `self.__iter__`. This will give control and not break existing code.
3. `**` nailed to `dict`. I think this is the fundamental problem.
3.1. `dict` is a good choice for the DEFAULT `kwargs` container. But `dict` is too excess for `**`. One method that returns an iterator is enough.
3.2. `**` uses a `kwargs[k] for k in kwargs.keys()` like implementation. I have no control over this behavior.
3.3. I am forced to implement excessive functionality.
3.4. I must take the userspace name `keys()`.
3.5. I cannot implement `keys` and `__getitem__` independent unpacking inside the class.
4. Which I think we can do:
4.1. Make `*` and `**` operators with their own methods.
4.2. Implement `return self.__iter__()` as the default behavior of `*`.
4.3. Create a new implementation of the `**` operator expecting: `Iterator[Tuple[key, val]]`.
4.4. Implement `return ((k, self[k]) for k in self.keys())` as the specific behaviour of `dict`.
4.5. Create a `collections.abc` layout with an abstract two-star unpacking method.
4.6. Update PEP 448.
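To illustrate point 3 concretely (a minimal sketch of standard CPython behavior, not a proposed change): `**` already accepts any object that provides `keys()` and `__getitem__`, which is exactly the coupling being criticized:

```python
class Grades:
    """Minimal non-dict class that ** accepts via keys()/__getitem__."""

    def __init__(self, data):
        self._data = data

    def keys(self):
        return self._data.keys()

    def __getitem__(self, key):
        return self._data[key]


def collect(**kwargs):
    return kwargs


grades = Grades({"math": 90, "art": 85})
print(collect(**grades))  # {'math': 90, 'art': 85}
print({**grades})         # dict-display unpacking uses the same protocol
```

There is no dedicated dunder involved: unpacking is wired directly to the `keys()`/`__getitem__` pair, so the class cannot offer a different behavior for `**` than for plain item access.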
https://mail.python.org/archives/list/python-ideas@python.org/message/XTNNF2ZW52KCT4MJW5POJWZLIXXZZJ4G/
(My advice: use tabs and let each developer configure his IDE to have as big or as small indentations as desired.)
And we are supposed to fit the whole line in 80 characters?
5.1.2 Single-Line Comments:
if (condition) {
    /* Handle the condition. */
    ...
}
Just in case the code is not self-descriptive enough, I suggest even better comment:
if (condition) {
    /* This block is executed if condition == true. */
    ...
}
if (a == 2) {
    return TRUE;        /* special case */
} else {
    return isPrime(a);  /* works only for odd a */
}
Did you mean (and don’t tell me it’s less readable, even without comments)?
return a == 2 || isPrime(a);
int level; // indentation level
int size;  // size of table
Why use descriptive variable names, when we have comments! Consider this instead:
int indentationLevel;
int tableSize;
Later in that section: In absolutely no case should variables and functions be declared on the same line. Example:
long dbaddr, getDbaddr(); // WRONG!
Sure it's wrong, it doesn't even compile. I'm surprised that 'don't put spaces in variable names' is not mentioned as a good practice… Later in that section a code sample is shown with class fields missing the private modifier (default, package-private access). Package-private fields?
return (size ? size : defaultSize);
Maybe you haven't noticed, but from the context we can tell that both size and defaultSize are of boolean type. That's right, size and defaultSize can be either true or false (!) How counterintuitive is that! From such a document I would expect not only syntactical correctness, but also meaningful code and good practices! Moreover, the expression can be greatly simplified, step-by-step:
size ? size : defaultSize
size ? true : defaultSize
size || defaultSize
An empty for statement (one in which all the work is done in the initialization, condition, and update clauses) should have the following form:
for (initialization; condition; update);
'Empty for statement'? Why would you ever use an empty for statement?
Every time a case falls through (doesn’t include a break statement), add a comment where the break statement would normally be.
I understand the intentions, but the approach is wrong. Instead of documenting unexpected and error-prone code-fragments, just avoid them. Don’t depend on fall through, don’t use it at all.
One blank line should always be used in the following circumstances:
[...]
- Between the local variables in a method and its first statement
- Before a block [...] or single-line [...] comment
- Between logical sections inside a method to improve readability
Looks like the authors suggest using blank lines to separate ‘logical sections of a method‘. Well, I call these sections ‘methods‘. Don’t group statements inside methods in blocks, comment them and separate them from each other. Instead extract them into separate, well-named methods!
Placing a blank line between variable declarations and the first statement sounds like taken from a C language book.
- All binary operators except . should be separated from their operands by spaces. Blank spaces should never separate unary operators such as unary minus, increment (‘++‘), and decrement (‘--‘) from their operands. Example: [...]
while (d++ = s++) { n++; }
This doesn’t even compile in Java…
9 – Naming Conventions (only in PDF version):
char *cp;
A good name for a char pointer in Java is cp. Wait, WHAT? A char pointer in Java?
10.1 Providing Access to Instance and Class Variables:
Don’t make any instance or class variable public without good reason. Really, really good reason! Have I ever used a public field?
10.4 Variable Assignments:
if (c++ = d++) { // AVOID! (Java disallows) ... }
Great advice: please avoid using constructs that do not even compile in Java. This makes our lives so much easier!
if (booleanExpression) { return true; } else { return false; }
should instead be written as
return booleanExpression;
Holy cow, I AGREE!
Summary
It’s not that the official Code Conventions for the Java Programming Language are completely wrong. They are just outdated and obsolete. In the second decade of the XXI century we have better hardware, deeper understanding of code quality and more modern sources of wisdom. Code Conventions… were last published in 1999; they are heavily inspired by the C language, unaware of billions of lines of code yet to be written by millions of developers. Code conventions should emerge over time, just like design patterns, rather than be given explicitly. So please, don’t quote or follow advice from the official guide ever again.
Reference: Java Coding Conventions considered harmful from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog.
Containers).
Greetings,
This is very good, but even after six years of playing around with object-oriented programming, and my recent flurry of reading every internet article I can find on the subject, e.g. by Robert C. Martin, Martin Fowler etc., I am still struggling. So I have a question or two about the best way to unit-test-enable something.
I have been using constructor injection when unit testing: I create a subclass of the real repository / database access / persistency layer class, and pass it into an optional parameter when calling the “get instance” static method of the class under test. Above you say the database object class should have an interface and then both the real class and any mock class implement that interface. I am still not clear at all on why that is better than the mock being a subclass of the original. I am sure it IS better, but can you have another go at explaining why?
Also, you mention you could mock up an object using a mocking framework, or by doing it manually. I have thus far been doing this manually i.e. creating an internal table of LIPS or whatever I am testing and filling in the pertinent values needed for the unit test.
There is not a mocking framework built into ABAP as yet, is this correct? Unless there is one on the code exchange?
Even if there was, I would still have to add the values I want somehow? E.g. if my test is that the block of code under test should reject purchase order items flagged for deletion, my mock purchase order item should still somehow know that one of the items was deleted?
P.S. I am working on my blog about the best way to incrementally refactor procedural code. I have a monster application in my company, which our whole business hangs off, but it is 100% procedural and over its 15 years of evolution it has been incrementally added to by dozens of programmers, and it is literally impossible to understand or even make the simplest modification without hours of study, even by someone like me who has been there the whole time, let alone someone new.
This is what we are aiming for: to change such an application so that a new person CAN modify it easily without bringing the whole business crashing down around our ears!
Mere words cannot say how happy I am that there are now lots of blogs about OO design in ABAP floating around the SDN.
Cheersy Cheers
Paul
P.S. I liked your blogs so much I decided to “follow” you on the SDN, and I presumed I would then get an email each time you posted a new blog or something.
This is clearly not the case. Is “following” someone on SDN sort of just a tick of approval?
Dear Paul,
thank you very much. Of course it is also feasible to mock an existing object by subclassing it. The only issues which might occur are that subclassing final classes is not possible and that you would have to take care of any parent class’s constructor parameters and implementation.
Also, during development I usually cannot guarantee that subclass and base class stay in a strong functional relationship at all times: as interfaces usually allow you to replace the class completely later on, base classes only allow you to use one of their subclasses. This way, you are often forced to violate the Liskov Substitution Principle in case replacing the class, e.g. in refactorings, is needed. But from the functional point of view it would also work.
In fact I am currently dealing with this topic as we have written our own Mocking Framework and IoC Container for internal development purposes. Both will have to provide subclass-support in later releases.
Links can be found in this article.
A mock object can be easily defined by returning conditional results based on the caller’s inputs, for example
DATA: ls_lips1 TYPE lips.
DATA: ls_lips2 TYPE lips.
ls_lips1 = …
ls_lips2 = …
lo_mocker->/leos/if_mocker=>mock( ‘ZCL_LIPS_ACCESS’ )->method( ‘get_item’ )->with( 10 )->returns( ls_lips1 ).
lo_mocker->/leos/if_mocker=>mock( ‘ZCL_LIPS_ACCESS’ )->method( ‘get_item’ )->with( 20 )->returns( ls_lips2 ).
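For comparison only (my illustration, not part of the /LEOS/ mocker shown above): the same "conditional result per input" idea, expressed with Python's unittest.mock:

```python
# Hypothetical Python analogue of the ABAP mocker calls above:
# return a different canned item depending on the caller's argument.
from unittest.mock import Mock

ls_lips1 = {"vbeln": "80000001", "posnr": 10}
ls_lips2 = {"vbeln": "80000001", "posnr": 20}

lips_access = Mock()
# side_effect maps the caller's input to the configured result,
# like ->with( 10 )->returns( ls_lips1 ) in the mocker DSL above.
lips_access.get_item.side_effect = lambda posnr: {10: ls_lips1, 20: ls_lips2}[posnr]

assert lips_access.get_item(10) == ls_lips1
assert lips_access.get_item(20) == ls_lips2
```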
Delete operations may be simulated by taking into account also the numbers of previous calls to this method (this feature is not yet implemented):
DATA: ls_lips1 TYPE lips.
ls_lips1 = …
lo_mocker->/leos/if_mocker=>mock( ‘ZCL_LIPS_ACCESS’ )->method( ‘get_item’ )->with( 10 )->returns( ls_lips1 ).
clear: ls_lips1.
lo_mocker->/leos/if_mocker=>mock( ‘ZCL_LIPS_ACCESS’ )->method( ‘delete_item’ )->with( 10 )->returns( abap_true ).
lo_mocker->/leos/if_mocker=>mock( ‘ZCL_LIPS_ACCESS’ )->method( ‘get_item’ )->with( 10 )->returns_at_second_time( ls_lips1 ).
You may also consider throwing exceptions if needed:
lo_mocker->/leos/if_mocker=>mock( ‘ZCL_LIPS_ACCESS’ )->method( ‘delete_item’ )->with( 20 )->raises( ‘ZCX_LIPS_ACCESS’ ).
As Code Exchange does not allow a company like us to upload the project, it is not an option for us. Furthermore, the project is in our own namespace, which is not compatible with Codex.
So any delivery would have to be discussed individually with the one who is interested in the framework.
Regards,
Uwe
PS: I’m also following you and obviously I also get no mails. No idea what this feature is all about then…
[open] Pictures to pool
edited December 2015 in OpenSesame
Greetings !
I'm conducting an experiment which includes about 400 pictures divided into two groups.
The pictures in the first group are supposed to appear throughout the experiment randomly and without repetition (i.e. a pic that has been presented from this group will not be presented again), and the pictures from the 2nd group will be presented randomly, each of them shown twice throughout the experiment. It would be best if I could place two folders, each containing one group's pictures, into the pool, but since that's impossible (I cannot save the project while folders are in the pool) my questions are :
- Whether it's possible to program opensesame to approach a certain folder (inside or outside the pool) whenever it encounters a certain condition and each time pick a photo from there, without my having to drag 400 pics into the pool and give each a different name ?
- What instructions should I use to guide it to pick randomly without repetition from one group's folder, and randomly with 1 repetition per photo on the other group's folder ?
thanks in advance for supporting
Hi Marina,
As long as the files are in the same folder as the experiment, you don't have to drag them into the file pool. When calling exp.get_file(filename), you should be able to access the images (e.g. like so: my_canvas.image(exp.get_file('my_image.png'))).
To pick the right images in a correct way, I recommend using an inline_script and preparing lists with the file names at the beginning. That makes it easier to pick the right images later. Something along these lines should work:
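The code block itself appears to have been lost from the post; a minimal sketch of what it likely looked like (the file names and counts below are my assumptions, not from the thread):

```python
# Hypothetical reconstruction of the preparation script described above.
import random

# Group 1: each picture shown once; group 2: each picture shown twice.
group1 = ['g1_pic%03d.png' % i for i in range(1, 201)]
group2 = ['g2_pic%03d.png' % i for i in range(1, 101)]

trials = group1 + group2 * 2      # each group-2 picture appears twice
random.shuffle(trials)            # random order, no extra repetitions

# 'trials' now holds one file name per trial; loop over it to present images.
```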
So, the list trials will contain all the file names for your images. You just have to loop over the list and present every image.
Does this make sense?
Eduard
Hi Eduard,
Thank you so much for your response,
I will try and run it as you suggest and let you know if it is solved.
Thank you ;;)
Hi Eduard,
I've been trying to place my pictures in the same folder as the experiment,
but within the opensesame folder on my computer there are no folders which contain my experiment. Trying to trace the source of the pool folder leads me to C:\Users\User\appdata\Local\Temp
which is a location I cannot reach on my computer, and within it the program opens
a temporary folder that disappears as I close the program and does not save any files I add to it, so I cannot use any pictures I place in it.
Further, since I'm unable to reach the experiment's folders,
how would I be able to move it to another computer on which I'm supposed to conduct my experiment once it's ready ?
Sorry, some of this stuff is new to me so I'm a bit confused,
once again thank you for your support
Marina
Hi Marina,
In theory you should be able to save the experiment wherever you want. When you save it, just select the folder you want it to be saved in (Desktop, Documents, whatever). Once it is stored there, you can put the images in there. The default location to save the experiment might be somewhere in C:\Users\appdata\..., but once you save it to a more accessible location, you should be fine.
Can you try and tell us whether this solved it?
Eduard
Hi Eduard,
so - I placed all the picture files at the correct folder and run them randomly,
and after some adjustments it started running pretty much as it should.
the main problem is that after about 50-60 trials the experiment stops
and gives an 'error: Out of memory' message if I run it in the quick-run window,
and, strangely, it keeps running much longer if I run it in full-screen mode, but still stops after a couple hundred steps.
following Sebastian's recommendations in former posts I made sure I'm using the xpyriment back-end (although my experiment is time sensitive), and that I run the experiment in a separate process, but the problem still occurs.
My experiment consists of 2 target stimuli pictures (square & circle) and 2 frame pictures. On each trial one frame and one target stimulus appear simultaneously, for 600 trials. In addition, on 348 of the trials some other picture appears for a short while before the target stimulus + frame appears. I'm not using the pool and all my pictures are gathered in the same folder where the experiment is.
Before my block loop I placed this initial_script on the prepare phase :
import random
Negative_group = []
Neutral_group = []
Negative_folder = "D:\\GNG_SST_experiment\\All_Negative_ordered\\"
Neutral_folder = "D:\\GNG_SST_experiment\\All_Neutral_ordered\\"
for i in range (1, 73):
for i in range (73, 100):
for i in range (100, 213):
for i in range (213, 230):
random.shuffle(Negative_group)
random.shuffle(Neutral_group)
and after the trial sequence I placed a second inline_script also on prepare phase:
print (var.Picture)
if (var.Picture == "Negative"):
if (var.Picture == "Neutral"):
So I'll be grateful for any idea of the cause of failure.
I added a pic of the experiment's so far (the practice phase is not yet set).
Additionally, I'm not sure how I should make all the pictures be presented at the same size without cropping them (some of them are of different sizes and I'd like them all to be presented uniformly).
Should I use PIL?
Sorry if some questions are weird, I'm not a pro on this as you can probably tell :]
Thank you and a happy new year,
Marina
here's the error message:
The experiment did not finish normally for the following reason:
Unexpected error
Details
item-stack: experiment[run].Experimantal_loop[run].Experimemtal_Block[run].Block_loop[run].Trial_sequance_1[prepare].Picture[prepare]
exception message: Out of memory
time: Sun Dec 27 20:58:56 2015
exception type: error
Traceback (also in debug window)
Hi Marina,
Sorry for the late response.
Just from your description, I can't see what the problem could be. Everything seems to be fine. Would you mind uploading the experiment (e.g. to file dropper). Maybe then we can find the problem.
Thanks,
eduard
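On the uniform-size question above: one approach (my sketch, not from the thread) is to compute a "fit" size that preserves the aspect ratio, and only then hand the actual resampling to PIL:

```python
# Sketch: scale each image to fit a fixed target box without cropping,
# preserving aspect ratio (letterbox fit). The PIL call at the end is
# illustrative only; fit_size() is the core idea.
def fit_size(width, height, box_w, box_h):
    """Largest (w, h) with the same aspect ratio that fits in the box."""
    scale = min(box_w / float(width), box_h / float(height))
    return int(width * scale), int(height * scale)

# e.g. a 1600x900 photo into an 800x600 box keeps the 16:9 ratio:
# fit_size(1600, 900, 800, 600) -> (800, 450)

# With PIL (assumed installed) you would then do something like:
#   from PIL import Image
#   im = Image.open(name)
#   im = im.resize(fit_size(im.size[0], im.size[1], 800, 600))
```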
Getting a view/subviews frame before calling .present()
Is there a way to preflight the frame of a view before presenting it? Maybe it's obvious, but I can't see how to do it. If I could say something like get_frame(popover) or get_frame(panel) etc. before I did a call to view.present(whatever mode), that would be fantastic. Otherwise I am not sure how to code in a slightly device-independent way.
You can always do...
w, h = ui.get_screen_size() # caution: the values for w and h will change on portrait/landscape changes
And then use w and h to look up window dimension values in a dict that you create.
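A sketch of that dict idea (the fractions below are my guesses for illustration, not Pythonista's real metrics):

```python
# Map presentation mode -> fraction of the screen, then compute a size
# up front instead of asking the view only after present().
SIZE_FRACTIONS = {
    'full_screen': (1.0, 1.0),
    'sheet': (0.53, 0.75),    # guessed ratios, tune per device
    'popover': (0.35, 0.60),  # guessed ratios, tune per device
}

def planned_size(mode, screen_w, screen_h):
    fw, fh = SIZE_FRACTIONS[mode]
    return screen_w * fw, screen_h * fh

# On an iPad-sized 1024x768 screen:
w, h = planned_size('full_screen', 1024, 768)
assert (w, h) == (1024.0, 768.0)
```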
@ccc, sorry, I don't understand your answer. If I run
import ui

if __name__ == '__main__':
    v = ui.View()
    v.present('sheet')
    print v.width, v.height
On my iPad Air 2, I get 540,576
Same code on my iphone 6 is 320,504
I am sure a iPhone 5 would be different again.
v.present() is obviously doing something hardware specific, and will continue to do so in the future I guess. So it would be nice to know what it is going to do before it does it.
In the current beta, you will ALWAYS get (100, 100) on all screens. That is even smaller than an iWatch screen!! To be future-proof, you should set your own v.width and v.height to be some reasonable fraction of w and h respectively. Don't hard-code to sheet view sizes but instead calculate the view size that makes sense for your app given the screen size.
@ccc, sorry, I still don't get it. I guess maybe I didn't say it clearly. But for example the 'sheet' presentation mode or 'popover' mode is very nice, as it is calculated device-independently. Omz's code is calculating what he thinks the correct size is for each of his presentation views at runtime (or more likely from Apple's User Interface Guidelines). I am happy with his metrics, I would just like to be able to query the resulting size before calling the .present() method. I tried a few things like hiding the view and calling .present('sheet'), for example, so I could get the frame values for what he calculated the view to be. But it didn't look nice.
Oh, also have my apple watch... Got it in the first week out. Was lucky, got an import from Japan. Price was way up, but fun to have it so early in Thailand 😎
# In the current v1.6 beta on an iPad in landscape mode, the output is:
# ui.get_screen_size(): 1024.0, 768.0
# full_screen: 1024.0, 704.0
# sheet: 100.0, 100.0
# popover: 100.0, 100.0
# sidebar: 100.0, 100.0
import ui

fmt = '{:>20}: {:>6}, {:>6}'

def view_sizer(view_type='full_screen'):
    view = ui.View()
    view.present(view_type)
    print(fmt.format(view_type, view.width, view.height))
    view.close()
    view.wait_modal()

if __name__ == '__main__':
    w, h = ui.get_screen_size()
    print(fmt.format('ui.get_screen_size()', w, h))
    for view_type in 'full_screen sheet popover sidebar'.split():
        view_sizer(view_type)
@ccc, I can see what you're saying here. But do you think it's just a bug that no one has bothered to report? I think it is. I can see no reason that presenting a 'sheet' should be 100,100. Somewhere in the docs (1.5), for example, omz talks about the approximate size of the 'sheet' view relative to the device. If I am wrong I am, but I think the reporting of 100,100 is a bug in 1.6.
in the beta, the sheet size is no longer fixed based on screen size, as it was in 1.5.
this was documented in the release notes. it has the advantage that you can change the size of the sheet to suit your needs, though the disadvantage that the defaults were nice for some cases. I don't think it automatically slides up anymore when the keyboard is present, but I'm not sure.
(actually... sheet was only ever valid on iPad, I believe it simply used full screen on iPhone)
There's nothing like a batch of programming links to help push down that third sausage gravy biscuit.
- Mike Zintel's blog has some really awesome screenshots of .NET code running on the Xbox 360. I hope that this whole .NET/Xbox attempt comes to fruition.
- In case you didn't hear, Microsoft is bringing C# programming to the Mac with WPF/E, which stands for Windows Presentation Foundation Everywhere. On Tuesday, the company announced that it would be shipping the first version of WPF/E at the beginning of next year, and it would be shipping the device version in the latter half of 2007.
- Embedded.com has an overview of the B# language. B# is "a tiny, object-oriented, and multi-threaded programming language that is specially dedicated for small footprint embedded systems." It supports delegates, namespaces, abstract and concrete classes, and interfaces. On the embedded side of things, it supports boxing/unboxing conversions, multi-threading statements, field properties, device addressing registers, interrupt handlers, and deterministic memory defragmenter.
- At MIX06, the Internet Explorer team showed that IE 7 could indeed display standards-based designs. Using the CSSZenGarden site as a demo, the browser flawlessly displayed the website. Besides the demo, it was also announced that IE 7's layout is complete.
- Microsoft has released MSBee Beta 2. MSBee allows developers to write code in Visual Studio 2005 but still target the .NET 1.1 Framework.
- If you are looking for something similar to LINQ, check out Karmencita. It's a subset of the object query language and allows for querying structured data that resides in memory.
- If you are a developer, you know there are times when you get that huge feeling of relief because something you did actually works. Pingmag has tried to identify eight possible good feelings, at least in the case of web design. When you're writing code, what makes you let out a big sigh of relief?
chromium/src (commit 8ba448850b2c89830296ae096bd29d0d77b930a6): tools/media_engagement_preload/make_dafsa.py (blob fd73c68e8e633339fdb4976d91dec4187dfd3774)
#!/usr/bin/env python
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import array
import json
import sys
import os
import urlparse
SOURCE_ROOT = os.path.join(os.path.dirname(
os.path.abspath(__file__)), os.pardir, os.pardir)
# Manually inject dependency paths.
sys.path.insert(0, os.path.join(
SOURCE_ROOT, "third_party", "protobuf", "third_party", "six"))
sys.path.insert(0, os.path.join(
SOURCE_ROOT, "third_party", "protobuf", "python"))
import media_engagement_preload_pb2
"""
A Deterministic acyclic finite state automaton (DAFSA) is a compact
representation of an unordered word list (dictionary).
This python program converts a list of strings to a byte array in C++.
This python program fetches strings and return values from a gperf file
and generates a C++ file with a byte array representing graph that can be
used as a memory efficient replacement for the perfect hash table.
The input strings are assumed to consist of printable 7-bit ASCII characters
and the return values are assumed to be one digit integers.
In this program a DAFSA is a diamond shaped graph starting at a common
source node and ending at a common sink node. All internal nodes contain
a label and each word is represented by the labels in one path from
the source node to the sink node.
The following Python representation is used for nodes:
Source node: [ children ]
Internal node: (label, [ children ])
Sink node: None
The graph is first compressed by prefixes like a trie. In the next step
suffixes are compressed so that the graph gets diamond-shaped. Finally
one-to-one linked nodes are replaced by nodes with the labels joined.
The order of the operations is crucial since lookups will be performed
starting from the source with no backtracking. Thus a node must have at
most one child with a label starting by the same character. The output
is also arranged so that all jumps are to increasing addresses, thus forward
in memory.
The generated output has suffix free decoding so that the sign of leading
bits in a link (a reference to a child node) indicate if it has a size of one,
two or three bytes and if it is the last outgoing link from the actual node.
A node label is terminated by a byte with the leading bit set.
The generated byte array can described by the following BNF:
<byte> ::= < 8-bit value in range [0x00-0xFF] >
<char> ::= < printable 7-bit ASCII character, byte in range [0x20-0x7F] >
<end_char> ::= < char + 0x80, byte in range [0xA0-0xFF] >
<return value> ::= < value + 0x80, byte in range [0x80-0x8F] >
<offset1> ::= < byte in range [0x00-0x3F] >
<offset2> ::= < byte in range [0x40-0x5F] >
<offset3> ::= < byte in range [0x60-0x7F] >
<end_offset1> ::= < byte in range [0x80-0xBF] >
<end_offset2> ::= < byte in range [0xC0-0xDF] >
<end_offset3> ::= < byte in range [0xE0-0xFF] >
<prefix> ::= <char>
<label> ::= <end_char>
| <char> <label>
<end_label> ::= <return_value>
| <char> <end_label>
<offset> ::= <offset1>
| <offset2> <byte>
| <offset3> <byte> <byte>
<end_offset> ::= <end_offset1>
| <end_offset2> <byte>
| <end_offset3> <byte> <byte>
<offsets> ::= <end_offset>
| <offset> <offsets>
<source> ::= <offsets>
<node> ::= <label> <offsets>
| <prefix> <node>
| <end_label>
<dafsa> ::= <source>
| <dafsa> <node>
Decoding:
<char> -> printable 7-bit ASCII character
<end_char> & 0x7F -> printable 7-bit ASCII character
<return value> & 0x0F -> integer
<offset1 & 0x3F> -> integer
((<offset2> & 0x1F>) << 8) + <byte> -> integer
((<offset3> & 0x1F>) << 16) + (<byte> << 8) + <byte> -> integer
end_offset1, end_offset2 and end_offset3 are decoded the same as offset1,
offset2 and offset3 respectively.
The first offset in a list of offsets is the distance in bytes between the
offset itself and the first child node. Subsequent offsets are the distance
between previous child node and next child node. Thus each offset links a node
to a child node. The distance is always counted between start addresses, i.e.
first byte in decoded offset or first byte in child node.
Example 1:
%%
aa, 1
a, 2
%%
The input is first parsed to a list of words:
["aa1", "a2"]
A fully expanded graph is created from the words:
source = [node1, node4]
node1 = ("a", [node2])
node2 = ("a", [node3])
node3 = ("\x01", [sink])
node4 = ("a", [node5])
node5 = ("\x02", [sink])
sink = None
Compression results in the following graph:
source = [node1]
node1 = ("a", [node2, node3])
node2 = ("\x02", [sink])
node3 = ("a\x01", [sink])
sink = None
A C++ representation of the compressed graph is generated:
const unsigned char dafsa[7] = {
0x81, 0xE1, 0x02, 0x81, 0x82, 0x61, 0x81,
};
The bytes in the generated array have the following meaning:
0: 0x81 <end_offset1> child at position 0 + (0x81 & 0x3F) -> jump to 1
1: 0xE1 <end_char> label character (0xE1 & 0x7F) -> match "a"
2: 0x02 <offset1> child at position 2 + (0x02 & 0x3F) -> jump to 4
3: 0x81 <end_offset1> child at position 4 + (0x81 & 0x3F) -> jump to 5
4: 0x82 <return_value> 0x82 & 0x0F -> return 2
5: 0x61 <char> label character 0x61 -> match "a"
6: 0x81 <return_value> 0x81 & 0x0F -> return 1
Example 2:
%%
aa, 1
bbb, 2
baa, 1
%%
The input is first parsed to a list of words:
["aa1", "bbb2", "baa1"]
Compression results in the following graph:
source = [node1, node2]
node1 = ("b", [node2, node3])
node2 = ("aa\x01", [sink])
node3 = ("bb\x02", [sink])
sink = None
A C++ representation of the compressed graph is generated:
const unsigned char dafsa[11] = {
0x02, 0x83, 0xE2, 0x02, 0x83, 0x61, 0x61, 0x81, 0x62, 0x62, 0x82,
};
The bytes in the generated array have the following meaning:
0: 0x02 <offset1> child at position 0 + (0x02 & 0x3F) -> jump to 2
1: 0x83 <end_offset1> child at position 2 + (0x83 & 0x3F) -> jump to 5
2: 0xE2 <end_char> label character (0xE2 & 0x7F) -> match "b"
3: 0x02 <offset1> child at position 3 + (0x02 & 0x3F) -> jump to 5
4: 0x83 <end_offset1> child at position 5 + (0x83 & 0x3F) -> jump to 8
5: 0x61 <char> label character 0x61 -> match "a"
6: 0x61 <char> label character 0x61 -> match "a"
7: 0x81 <return_value> 0x81 & 0x0F -> return 1
8: 0x62 <char> label character 0x62 -> match "b"
9: 0x62 <char> label character 0x62 -> match "b"
10: 0x82 <return_value> 0x82 & 0x0F -> return 2
"""
HTTPS_ONLY = 0
HTTP_AND_HTTPS = 1
class InputError(Exception):
"""Exception raised for errors in the input file."""
def to_dafsa(words):
"""Generates a DAFSA from a word list and returns the source node.
Each word is split into characters so that each character is represented by
a unique node. It is assumed the word list is not empty.
"""
if not words:
raise InputError('The origin list must not be empty')
def ToNodes(word):
"""Split words into characters"""
if not 0x1F < ord(word[0]) < 0x80:
raise InputError('Origins must be printable 7-bit ASCII')
if len(word) == 1:
return chr(ord(word[0]) & 0x0F), [None]
return word[0], [ToNodes(word[1:])]
return [ToNodes(word) for word in words]
def to_words(node):
"""Generates a word list from all paths starting from an internal node."""
if not node:
return ['']
return [(node[0] + word) for child in node[1] for word in to_words(child)]
def reverse(dafsa):
"""Generates a new DAFSA that is reversed, so that the old sink node becomes
the new source node.
"""
sink = []
nodemap = {}
def dfs(node, parent):
"""Creates reverse nodes.
A new reverse node will be created for each old node. The new node will
get a reversed label and the parents of the old node as children.
"""
if not node:
sink.append(parent)
elif id(node) not in nodemap:
nodemap[id(node)] = (node[0][::-1], [parent])
for child in node[1]:
dfs(child, nodemap[id(node)])
else:
nodemap[id(node)][1].append(parent)
for node in dafsa:
dfs(node, None)
return sink
def join_labels(dafsa):
"""Generates a new DAFSA where internal nodes are merged if there is a one to
one connection.
"""
parentcount = { id(None): 2 }
nodemap = { id(None): None }
def count_parents(node):
"""Count incoming references"""
if id(node) in parentcount:
parentcount[id(node)] += 1
else:
parentcount[id(node)] = 1
for child in node[1]:
count_parents(child)
def join(node):
"""Create new nodes"""
if id(node) not in nodemap:
children = [join(child) for child in node[1]]
if len(children) == 1 and parentcount[id(node[1][0])] == 1:
child = children[0]
nodemap[id(node)] = (node[0] + child[0], child[1])
else:
nodemap[id(node)] = (node[0], children)
return nodemap[id(node)]
for node in dafsa:
count_parents(node)
return [join(node) for node in dafsa]
def join_suffixes(dafsa):
"""Generates a new DAFSA where nodes that represent the same word lists
towards the sink are merged.
"""
nodemap = { frozenset(('',)): None }
def join(node):
"""Returns a matching node. A new node is created if no matching node
exists. The graph is accessed in dfs order.
"""
suffixes = frozenset(to_words(node))
if suffixes not in nodemap:
nodemap[suffixes] = (node[0], [join(child) for child in node[1]])
return nodemap[suffixes]
return [join(node) for node in dafsa]
def top_sort(dafsa):
"""Generates list of nodes in topological sort order."""
incoming = {}
def count_incoming(node):
"""Counts incoming references."""
if node:
if id(node) not in incoming:
incoming[id(node)] = 1
for child in node[1]:
count_incoming(child)
else:
incoming[id(node)] += 1
for node in dafsa:
count_incoming(node)
for node in dafsa:
incoming[id(node)] -= 1
waiting = [node for node in dafsa if incoming[id(node)] == 0]
nodes = []
while waiting:
node = waiting.pop()
assert incoming[id(node)] == 0
nodes.append(node)
for child in node[1]:
if child:
incoming[id(child)] -= 1
if incoming[id(child)] == 0:
waiting.append(child)
return nodes
def encode_links(children, offsets, current):
"""Encodes a list of children as one, two or three byte offsets."""
if not children[0]:
# This is an <end_label> node and no links follow such nodes
assert len(children) == 1
return []
guess = 3 * len(children)
assert children
children = sorted(children, key = lambda x: -offsets[id(x)])
while True:
offset = current + guess
buf = []
for child in children:
last = len(buf)
distance = offset - offsets[id(child)]
assert distance > 0 and distance < (1 << 21)
if distance < (1 << 6):
# A 6-bit offset: "s0xxxxxx"
buf.append(distance)
elif distance < (1 << 13):
# A 13-bit offset: "s10xxxxxxxxxxxxx"
buf.append(0x40 | (distance >> 8))
buf.append(distance & 0xFF)
else:
# A 21-bit offset: "s11xxxxxxxxxxxxxxxxxxxxx"
buf.append(0x60 | (distance >> 16))
buf.append((distance >> 8) & 0xFF)
buf.append(distance & 0xFF)
# Distance in first link is relative to following record.
# Distance in other links are relative to previous link.
offset -= distance
if len(buf) == guess:
break
guess = len(buf)
# Set most significant bit to mark end of links in this node.
buf[last] |= (1 << 7)
buf.reverse()
return buf
def encode_prefix(label):
"""Encodes a node label as a list of bytes without a trailing high byte.
This method encodes a node if there is exactly one child and the
child follows immediately after so that no jump is needed. This label
will then be a prefix to the label in the child node.
"""
assert label
return [ord(c) for c in reversed(label)]
def encode_label(label):
"""Encodes a node label as a list of bytes with a trailing high byte >0x80.
"""
buf = encode_prefix(label)
# Set most significant bit to mark end of label in this node.
buf[0] |= (1 << 7)
return buf
def encode(dafsa):
"""Encodes a DAFSA to a list of bytes"""
output = []
offsets = {}
for node in reversed(top_sort(dafsa)):
if (len(node[1]) == 1 and node[1][0] and
(offsets[id(node[1][0])] == len(output))):
output.extend(encode_prefix(node[0]))
else:
output.extend(encode_links(node[1], offsets, len(output)))
output.extend(encode_label(node[0]))
offsets[id(node)] = len(output)
output.extend(encode_links(dafsa, offsets, len(output)))
output.reverse()
return output
def to_proto(data):
"""Generates protobuf from a list of encoded bytes."""
message = media_engagement_preload_pb2.PreloadedData()
message.dafsa = array.array('B', data).tostring()
return message.SerializeToString()
def words_to_proto(words):
"""Generates protobuf from a word list"""
dafsa = to_dafsa(words)
for fun in (reverse, join_suffixes, reverse, join_suffixes, join_labels):
dafsa = fun(dafsa)
return to_proto(encode(dafsa))
def parse_json(infile):
"""Parses the JSON input file and appends a 0 or 1 based on protocol."""
try:
netlocs = {}
for entry in json.loads(infile):
# Parse the origin and reject any with an invalid protocol.
parsed = urlparse.urlparse(entry)
if parsed.scheme != 'http' and parsed.scheme != 'https':
raise InputError('Invalid protocol: %s' % entry)
# Store the netloc in netlocs with a flag for either HTTP+HTTPS or HTTPS
# only. The HTTP+HTTPS value is numerically higher than HTTPS only so it
# will take priority.
netlocs[parsed.netloc] = max(
netlocs.get(parsed.netloc, HTTPS_ONLY),
HTTP_AND_HTTPS if parsed.scheme == 'http' else HTTPS_ONLY)
# Join the numerical values to the netlocs.
output = []
for location, value in netlocs.iteritems():
output.append(location + str(value))
return output
except ValueError:
raise InputError('Failed to parse JSON.')
def main():
if len(sys.argv) != 3:
print('usage: %s infile outfile' % sys.argv[0])
return 1
with open(sys.argv[1], 'r') as infile, open(sys.argv[2], 'wb') as outfile:
outfile.write(words_to_proto(parse_json(infile.read())))
return 0
if __name__ == '__main__':
sys.exit(main())
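The variable-length link encoding implemented above can be exercised with a small stand-alone decoder. This is an illustrative sketch only; `decode_offset` is not part of the original tool, but it mirrors the bit layouts documented in the encoder's comments ("s0xxxxxx" for 6-bit, "s10…" for 13-bit, "s11…" for 21-bit offsets, with the high bit marking the last link in a node):

```python
def decode_offset(buf, pos):
    """Decode one link offset produced by encode_links() above.

    Returns (distance, bytes_consumed, is_last), reversing the three
    encodings used by the encoder.
    """
    b = buf[pos]
    is_last = bool(b & 0x80)       # high bit set => last link in this node
    b &= 0x7F
    if not (b & 0x40):             # "s0xxxxxx": 6-bit offset
        return b, 1, is_last
    if not (b & 0x20):             # "s10...": 5 bits here + one following byte
        return ((b & 0x1F) << 8) | buf[pos + 1], 2, is_last
    # "s11...": 5 bits here + two following bytes (21-bit offset)
    return ((b & 0x1F) << 16) | (buf[pos + 1] << 8) | buf[pos + 2], 3, is_last
```

For instance, a distance of 300 encodes as the two bytes `[0x41, 0x2C]` and decodes back to 300.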
Lopy and Matchx usage
- misterlisty last edited by misterlisty
I have set up a MatchX gateway and activated it. Now I want to use the example, but where do I get the app_eui and app_key from? Do I have to use TTN?
from network import LoRa
import socket
import time
import binascii

# Initialize LoRa in LORAWAN mode.
lora = LoRa(mode=LoRa.LORAWAN)

# create the OTAA authentication parameters
dev_eui = binascii.unhexlify('10 00 00 00 00 00 00 01'.replace(' ', ''))

# join a network using OTAA (Over the Air Activation)
lora.join(activation=LoRa.OTAA, auth=(app_eui, app_key), timeout=0)

# wait until the module has joined the network
while not lora.has_joined():
    time.sleep(2.5)
    print('Not yet joined...')
@jmarcelino Any update on this? or suggestion?
@jmarcelino Thanks, we are doing field trials of a product using your Fopy devices and hoping for solution in short time. Let me know if you need anything else...
- jmarcelino last edited by
My apologies, I made a mistake and missed there are different start frequencies corresponding to the transmit and receive channels in that region. MatchX is correct: Uplink Channel 1 (915.2) corresponds to an RX1 in Downlink Channel 1 (923.2)
The data rate is also correct. DR2 in uplink corresponds to DR10 in downlink window.
Since the channel and data rate are correct I'm not sure why the LoPy doesn't accept the JoinResponse. I'll try to investigate on our end if there's anything that could cause this. Again sorry for the confusion.
Any advice on my current setup? I'm a bit confused about how to proceed further. MatchX is definitely using RX1. MatchX says the up/down frequencies should be different, but I think you are saying they should be the same. The LoRaWAN spec appears to specify that they should be different. Hope I'm on the right track.
@jmarcelino Not sure if i have it right but according to MatchX
"As stated in the instructions 1883 is not correct, the downlink is not the same as the uplink frequency, this is due and in accordance with the LoraWAN Australian specifications.
We advise to use the RX at the correct frequency to receive the OTAA join response.
"
@jmarcelino It's definitely set to RX1 in both the application settings and the node settings. I suspect they have a bug, and they are not replying to my support requests.
@jmarcelino Thanks, what will the frequency be for RX1? I will try your suggestion.
@misterlisty
OK glad ABP is working, for OTAA you need to get the downlinks working.
If you see the JoinResponse on frequency 923.3 MHz, that means it's still RX2. Make sure it's set to RX1 in both places; there's a configuration for the Application and another for the device. If it's correctly set to RX1, you'll see that the frequency of the JoinResponse matches the JoinRequest.
- misterlisty last edited by misterlisty
@jmarcelino It's set to RX1 as suggested. I now have ABP working :), OTAA is yet to work... it's definitely getting a lora_accept signal but the code doesn't acknowledge this.
You're removing channels twice unnecessarily, remove your second part:
> # remove all the non-default channels
> for i in range(3, 16):
>     lora.remove_channel(i)
Also you're still using RX2. There are two places for this configuration; I think you missed the one under Application Configuration -> Network Settings. Please set it to RX1 there.
OK, here is my status: I have updated the FiPy to the latest firmware, i.e. 1.1.12b.
Here is my OTAA code. It appears to work if you look at the logs of MatchX, but the code still says it has not joined.
from network import LoRa
import socket
import time
import binascii
import struct

lora = LoRa(mode=LoRa.LORAWAN)
print("Lora send::-3")
print("DevEUI[", binascii.hexlify(lora.mac()).upper().decode('utf-8'), "]")

# create the OTAA authentication parameters
dev_eui = binascii.unhexlify('70 B3 D5 49 97 E4 4C 6E'.replace(' ', ''))
print("Lora send::-1")

for i in range(0, 72):
    lora.remove_channel(i)

# set the 8 channels for MatchX configuration
for channel in range(0, 8):
    lora.add_channel(channel, frequency=915200000 + channel * 200000, dr_min=0, dr_max=3)

lora.join(activation=LoRa.OTAA, auth=(dev_eui, app_eui, app_key), timeout=0)
print("Lora send::0")

while not lora.has_joined():
    time.sleep(5)
    print('Not yet joined...')
print('Finally joined...')

# remove all the non-default channels
for i in range(3, 16):
    lora.remove_channel(i)

# create a LoRa socket
s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
print("Lora send::1")

i = 0
while True:
    print("Lora send::1")
    # set the LoRaWAN data rate
    s.setsockopt(socket.SOL_LORA, socket.SO_DR, 3)
    print("Lora send::2")
    # make the socket blocking
    # (waits for the data to be sent and for the 2 receive windows to expire)
    s.setblocking(True)
    print("Lora send::3")
    # send some data
    count = s.send(bytes([i % 256]))
    print('Sent %s bytes' % count)
    print("Lora send::4")
    # make the socket non-blocking
    # (because if there's no data received it will block forever...)
    s.setblocking(False)
    print("Lora send::5")
    # get any data received (if any...)
    data = s.recv(64)
    print("Lora send::", data)
    i += 1
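For what it's worth, the add_channel loop in the code above pins the uplink plan to 915.2 MHz plus 200 kHz per channel. That arithmetic can be sanity-checked on its own (the helper name is mine; the values come straight from the snippet, not from the LoRaWAN regional parameters):

```python
# Frequencies the snippet assigns to channels 0-7 via add_channel():
# 915.2 MHz + n * 200 kHz, i.e. the sub-band the MatchX gateway listens on.
def matchx_uplink_hz(channel):
    return 915200000 + channel * 200000

plan = [matchx_uplink_hz(ch) for ch in range(8)]
```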
@misterlisty
It has to go into flash programming mode, it's a very low level hardware feature.
Are you sure you are connecting GND and G23 (also called P2 on the pin map)?
Also when you use an USB adapter what exactly is your setup?
In any case I think it will be easier to use the Pysense.
@jmarcelino I will attempt the update through the Pysense board shortly, but it is not upgrading the firmware if I use a USB adapter and connect G23 & GND. It runs the normal code instead of going into upgrade mode. Any suggestion here?
@misterlisty
Yes you can use the PySense and you don't need the wire anymore.
Make sure you update the Pysense first; follow the steps here:
When you have the new firmware on the Pysense, load the Pycom Firmware Update tool and tick the box which says Pytrack/Pysense:
With this setup you don't need the jumper wire as the tool will use the Pysense to put the FiPy into programming mode.
@jmarcelino I have connected a wire from GND to G23 and the FiPy will not go into upgrade mode. It's connected to a Pysense board while I attempt this. It just runs my normal code. Can I upgrade while connected to the Pysense board? The instructions suggest yes.
- jmarcelino last edited by
Hi,
We have no record of your board, so I guess you never updated it?
We really recommend always updating all our boards first. The factory firmware tends to be very old - or set to the wrong region as yours was.
In any case, if you run the upgrade tool and choose Australia, it should be set to 915 MHz correctly and you can use the code I posted.
The firmware update instructions are at:
Thanks!
- misterlisty last edited by jmarcelino
@jmarcelino Here is my Mac <removed>
How do you know what setting my FiPy has? Do you send this during the firmware update?
Yes that’s set to EU868. We need to reset the device in Pycom’s database before you do a firmware upgrade to change the location. Please send me the MAC address of the WLAN via chat or email a request to support@pycom.io.
Thanks
import network
import binascii

binascii.hexlify(network.WLAN().mac(), ':').decode()
@jmarcelino The result is 868000000
It appears it isn't set to Australia. I assume I have to reapply the firmware?
Does WPF Work with C++?
My understanding is that Microsoft Visual Studio was rewritten to use WPF. I'm still not clear on why, but acknowledge my knowledge about WPF is very limited.
My question is if anyone knows how much support WPF has for C++, and if Visual Studio is still written in C++.
Personally, WPF primarily appears to be a .NET/VB/C# thing. Is anyone using it with C++?
You can use WPF with C++/CLI. It is a .NET API, however, so it requires the .NET Framework.
That being said, the designer support is non-existent with C++. This means that, for practical purposes, WPF doesn't really work with C++.
Typically, the user interface layer is written in C# (or VB.NET), then calls into C++ code, often exposed via P/Invoke or C++/CLI layers. By using C++/CLI, it's very easy to interoperate between C++ code and C#/VB.NET code.
WPF (Windows Presentation Foundation), introduced as part of .NET Framework 3.0, is a sub-framework of .NET used to build Windows client apps for the Windows operating system. WPF uses XAML as its front-end language and C# as its code-behind language. The current version of WPF is 4.5.
WPF is a .NET technology. Of course it can be used with C++, like any other part of .NET can, but it requires you to jump through some interop hoops, or possibly write it all in C++/CLI. (And you'll have to write a lot of boilerplate code yourself, as the designer doesn't work with C++/CLI.)
And Visual Studio isn't, and probably never was, "written in C++". With 2010, members of the VS team have stated on their blogs that VS is now primarily a managed application. Of course there's still a ton of C++ code in there, and that's not going away any time soon, but a lot of it is C#/VB today.
But that didn't happen overnight. Managed code has gradually been added to Visual Studio with every release. Visual Studio is written in many different languages.
If what you're actually asking is "can I write an addin for Visual Studio using C++", then the answer is "yes".
If you're asking "is it practical to write an application in C++, and still use WPF", the answer is probably "only if you write the WPF code in C#, and then have some interop code binding this together with your C++ app.
Noesis gui can run WPF UIs in c++. You will have to adapt the c# classes to c++ (using their reflection macros, etc). Some controls aren't supported, but it is quite elegant.
For example, WPF might generate :
MainWindow.xaml.cs
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Media;
using System.Windows.Shapes;
using System.Windows.Input;

namespace BlendTutorial
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            this.InitializeComponent();
        }

        private void AddButton_Click(object sender, RoutedEventArgs e) { }
        private void RemoveButton_Click(object sender, RoutedEventArgs e) { }
        private void ContainerBorder_MouseDown(object sender, MouseButtonEventArgs e) { }
        private void RadioButton_Checked(object sender, RoutedEventArgs e) { }
    }
}
Then, you would convert it to c++ :
namespace BlendTutorial
{
    class MainWindow final : public Window
    {
    public:
        MainWindow()
        {
            InitializeComponent();
        }

    private:
        void InitializeComponent()
        {
            Noesis::GUI::LoadComponent(this, "MainWindow.xaml");
        }

        bool ConnectEvent(BaseComponent* source, const char* event, const char* handler) override
        {
            NS_CONNECT_EVENT(Button, Click, AddButton_Click);
            NS_CONNECT_EVENT(Button, Click, RemoveButton_Click);
            NS_CONNECT_EVENT(Border, PreviewMouseLeftButtonDown, ContainerBorder_MouseDown);
            NS_CONNECT_ATTACHED_EVENT(ToggleButton, Checked, RadioButton_Checked);
            return false;
        }

        void AddButton_Click(BaseComponent*, const RoutedEventArgs&) { }
        void RemoveButton_Click(BaseComponent*, const RoutedEventArgs&) { }
        void ContainerBorder_MouseDown(BaseComponent*, const MouseButtonEventArgs&) { }
        void RadioButton_Checked(BaseComponent*, const RoutedEventArgs&) { }

        NS_IMPLEMENT_INLINE_REFLECTION(MainWindow, Window)
        {
            NsMeta<TypeId>("BlendTutorial.MainWindow");
        }
    };
}
More info here :
They have some pretty nifty stuff if you want to go deep with data models, bindings and MVVM. Or you can just hook up lambdas to control events.
It is a paid framework, though it's free for less than €100K yearly income.
- Are you asking about WPF + C++ in general, or about its use in Visual Studio specifically?
- There were actually two question there. One had to do with using WPF with C++. The other had to do with the language used to write Visual Studio.
- Visual Studio does indeed use WPF as part of its UI in the 2010 release. Large portions of Visual Studio are also still written in C++.
- Take a look at this link: msdn.microsoft.com/en-us/library/ms742522.aspx
- Thanks. This is pretty much what I was thinking. But that made me wonder why VS would be rewritten for WPF. I ask because I do quite a bit of C# these days and am wondering if I should choose WPF instead of MFC, which I've used in the past.
- @Jonathan: If you're using C#, I'd really look hard at WPF. It's far superior in many ways... I wrote a series on migrating to WPF from Windows Forms - while it's not MFC, the same concepts apply. It would give you a good idea of some of the benefits to WPF: reedcopsey.com/talks/from-windows-forms-to-wpf-with-mvvm
- My past experience has been C++/MFC for the desktop and C#/WebForms for the web. So I haven't spent too much time with WinForms. But since most of my recent development has been for the web, I'm becoming increasing comfortable with C#. I'll take a look at your link.
- @Jonathan: It'll make sense from a MFC point of view, and most of it applies to Silverlight, as well. Should be useful if you want to consider the switch to WPF or Silverlight, for Desktop or Web...
- Did anything changed now? I'd love to use this GUI in C++ in a native executable.
- Why? VB.NET is one of the two main .NET languages. Microsoft uses it pretty extensively. it's come quite a long way since, say, VB6
- That's interesting that the developers are saying VS 2010 is now primarily managed code. I wish I had a better picture of how much is managed and why. I mostly ask because I've done a lot of MFC in the past and am wondering if it makes sense to move to C#, which I use a lot lately, and WPF.
- @Jonathan: for UI work, I'd definitely prefer WPF (probably with C#/VB) over anything else Microsoft has produced. And for most purposes, C# is a very good language, so it might make sense to use it for more than just the UI.
- I was wondering why Visual Studio started loading and working so slow after 2008 version. Now I know the answer...
- @Lilian A. Moraru: I can't feel it. Also, still loading faster than any Java program ever written.
3.2 CGI and Response Headers
By now, you should be reasonably comfortable designing CGI programs that create simple virtual documents, like this one:
#!/usr/local/bin/perl
print "Content-type: text/html", "\n\n";
print "<HTML>", "\n";
print "<HEAD><TITLE>Simple Virtual HTML Document</TITLE></HEAD>", "\n";
print "<BODY>", "\n";
print "<H1>", "Virtual HTML", "</H1>", "<HR>", "\n";
print "Hey look, I just created a virtual (yep, virtual) HTML document!", "\n";
print "</BODY></HTML>", "\n";
exit (0);
Up to this point, we have taken the line that outputs "Content-type" for granted. But this is only one type of header that CGI programs can use. "Content-type" is an HTTP header that contains a MIME content type describing the format of the data that follows. Other headers can describe:
- The size of the data
- Another document that the server should return (that is, instead of returning a virtual document created by the script itself)
- HTTP status codes
This chapter will discuss how HTTP headers can be used to fine-tune your CGI documents. First, however, Table 3.1 provides a quick listing of all the HTTP headers you might find useful.
The following headers are "understood" only by Netscape-compatible browsers (i.e., Netscape Navigator and Microsoft Internet Explorer).
You can see a complete list of HTTP headers at.
HTTP is a very simple protocol. The way the server knows that you're done with your header information is that it looks for a blank line. Everything before the blank line is taken as header information; everything after the blank line is assumed to be data. In Perl, the blank line is generated by two newline characters (\n\n) that are output after the last line of the header. If you don't include the blank line after the header, the server will assume incorrectly that the entire information stream is an HTTP header, and will generate a server error.
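The header/blank-line/body framing is language-independent, not a Perl detail. As a rough illustration (a hypothetical helper, not from the book), the same response could be assembled in Python like this:

```python
def cgi_response(body, content_type="text/html", extra_headers=()):
    """Build a CGI response: header lines, one blank line, then the data.

    The blank line (two consecutive newlines) is what tells the server the
    header has ended; everything after it is treated as data.
    """
    headers = ["Content-type: %s" % content_type,
               "Content-length: %d" % len(body)]   # assumes an ASCII body
    headers.extend(extra_headers)                  # e.g. "Location: ..." or "Status: ..."
    return "\n".join(headers) + "\n\n" + body
```

Forgetting the blank line here would reproduce exactly the server error described above: the whole stream would be parsed as one malformed header.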
Back to: CGI Programming on the World Wide Web
© 2001, O'Reilly & Associates, Inc.
GPU Dask Arrays, first steps throwing Dask and CuPy together
The following code creates and manipulates 2 TB of randomly generated data.
import dask.array as da

rs = da.random.RandomState()
x = rs.normal(10, 1, size=(500000, 500000), chunks=(10000, 10000))
(x + 1)[::2, ::2].sum().compute(scheduler='threads')
On a single CPU, this computation takes two hours.
On an eight-GPU single-node system this computation takes nineteen seconds.
Combine Dask Array with CuPy
Actually this computation isn’t that impressive. It’s a simple workload, for which most of the time is spent creating and destroying random data. The computation and communication patterns are simple, reflecting the simplicity commonly found in data processing workloads.
What is impressive is that we were able to create a distributed parallel GPU array quickly by composing these three existing libraries:
CuPy provides a partial implementation of Numpy on the GPU.
Dask Array provides chunked algorithms on top of Numpy-like libraries like Numpy and CuPy.
This enables us to operate on more data than we could fit in memory by operating on that data in chunks.
The Dask distributed task scheduler runs those algorithms in parallel, easily coordinating work across many CPU cores or GPUs.
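The chunking idea can be illustrated without Dask at all. Here is a toy sketch (my own illustration, not Dask's implementation) of a reduction built from per-chunk partial results, so the full array never has to exist in memory at once:

```python
def chunked_sum(n_rows, chunk_rows, make_chunk):
    """Sum a virtual (n_rows x width) array without materializing it whole.

    make_chunk(start, rows) produces one chunk on demand; only one chunk
    is alive at a time, so peak memory is bounded by the chunk size.
    """
    total = 0.0
    for start in range(0, n_rows, chunk_rows):
        rows = min(chunk_rows, n_rows - start)
        chunk = make_chunk(start, rows)          # materialize just this chunk
        total += sum(sum(row) for row in chunk)  # partial result per chunk
    return total
```

Dask Array does the same thing, except that it builds a task graph of these per-chunk operations and hands it to a scheduler, which is what makes the CPU/GPU and serial/parallel switches below possible.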
These tools already exist. We had to connect them together with a small amount of glue code and minor modifications. By mashing these tools together we can quickly build and switch between different architectures to explore what is best for our application.
For this example we relied on the following changes upstream:
- cupy/cupy #1689: Support Numpy arrays as seeds in RandomState
- dask/dask #4041 Make da.RandomState accessible to other modules
- dask/distributed #2432: Add LocalCUDACluster
Comparison among single/multi CPU/GPU
We can now easily run some experiments on different architectures. This is easy because …
- We can switch between CPU and GPU by switching between Numpy and CuPy.
- We can switch between single/multi-CPU-core and single/multi-GPU by switching between Dask’s different task schedulers.
These libraries allow us to quickly judge the costs of this computation for the following hardware choices:
- Single-threaded CPU
- Multi-threaded CPU with 40 cores (80 H/T)
- Single-GPU
- Multi-GPU on a single machine with 8 GPUs
We present code for these four choices below, but first, we present a table of results.
Results
Setup
import cupy
import dask.array as da

# generate chunked dask arrays of many numpy random arrays
rs = da.random.RandomState()
x = rs.normal(10, 1, size=(500000, 500000), chunks=(10000, 10000))

print(x.nbytes / 1e9)  # 2 TB
# 2000.0
CPU timing
(x + 1)[::2, ::2].sum().compute(scheduler='single-threaded')
(x + 1)[::2, ::2].sum().compute(scheduler='threads')
Single GPU timing
We switch from CPU to GPU by changing our data source to generate CuPy arrays rather than NumPy arrays. Everything else should more or less work the same without special handling for CuPy.
(This actually isn’t true yet, many things in dask.array will break for non-NumPy arrays, but we’re working on it actively both within Dask, within NumPy, and within the GPU array libraries. Regardless, everything in this example works fine.)
# generate chunked dask arrays of many cupy random arrays
rs = da.random.RandomState(RandomState=cupy.random.RandomState)  # <-- we specify cupy here
x = rs.normal(10, 1, size=(500000, 500000), chunks=(10000, 10000))
(x + 1)[::2, ::2].sum().compute(scheduler='single-threaded')
Multi GPU timing
from dask.distributed import Client, LocalCUDACluster  # this is experimental

cluster = LocalCUDACluster()
client = Client(cluster)

(x + 1)[::2, ::2].sum().compute()
And again, here are the results:
First, this is my first time playing with a 40-core system. I was surprised to see that many cores. I was also pleased to see that Dask's normal threaded scheduler happily saturates many cores.
Although later on it did dive down to around 5000-6000%, and if you do the math you’ll see that we’re not getting a 40x speedup. My guess is that performance would improve if we were to play with some mixture of threads and processes, like having ten processes with eight threads each.
The jump from the biggest multi-core CPU to a single GPU is still an order of magnitude though. The jump to multi-GPU is another order of magnitude, and brings the computation down to 19s, which is short enough that I’m willing to wait for it to finish before walking away from my computer.
Actually, it’s quite fun to watch on the dashboard (especially after you’ve been waiting for three hours for the sequential solution to run):
Conclusion
This computation was simple, but the range in architecture just explored was extensive. We swapped out the underlying architecture from CPU to GPU (which had an entirely different codebase) and tried both multi-core CPU parallelism as well as multi-GPU many-core parallelism.
We did this in less than twenty lines of code, making this experiment something that an undergraduate student or other novice could perform at home. We’re approaching a point where experimenting with multi-GPU systems is approachable to non-experts (at least for array computing).
Here is a notebook for the experiment above
Room for improvement
We can work to expand the computation above in a variety of directions. There is a ton of work we still have to do to make this reliable.
Use more complex array computing workloads
The Dask Array algorithms were designed first around Numpy. We’ve only recently started making them more generic to other kinds of arrays (like GPU arrays, sparse arrays, and so on). As a result there are still many bugs when exploring these non-Numpy workloads.
For example, if you were to switch sum for mean in the computation above, you would get an error, because our mean computation contains an easy-to-fix error that assumes Numpy arrays exactly.
Use Pandas and cuDF instead of Numpy and CuPy
The cuDF library aims to reimplement the Pandas API on the GPU, much like how CuPy reimplements the NumPy API. Using Dask DataFrame with cuDF will require some work on both sides, but is quite doable.
I believe that there is plenty of low-hanging fruit here.
Improve and move LocalCUDACluster
The LocalCUDACluster class used above is an experimental Cluster type that creates as many workers locally as you have GPUs, and assigns each worker to prefer a different GPU. This makes it easy for people to load balance across GPUs on a single-node system without thinking too much about it. This appears to be a common pain point in the ecosystem today.
However, LocalCUDACluster probably shouldn't live in the dask/distributed repository (it seems too CUDA-specific), so it will probably move to some dask-cuda repository. Additionally, there are still many questions about how to handle concurrency on top of GPUs, balancing between CPU cores and GPU cores, and so on.
Multi-node computation
There’s no reason that we couldn’t accelerate computations like these further by using multiple multi-GPU nodes. This is doable today with manual setup, but we should also improve the existing deployment solutions dask-kubernetes, dask-yarn, and dask-jobqueue, to make this easier for non-experts who want to use a cluster of multi-GPU resources.
Expense
The machine I ran this on is expensive. Well, it’s nowhere close to as expensive to own and operate as a traditional cluster that you would need for these kinds of results, but it’s still well beyond the price point of a hobbyist or student.
It would be useful to run this on a more budget system to get a sense of the tradeoffs on more reasonably priced systems. I should probably also learn more about provisioning GPUs on the cloud.
Come help!
If the work above sounds interesting to you then come help! There is a lot of low-hanging and high impact work to do.
If you’re interested in being paid to focus more on these topics, then consider applying for a job. The NVIDIA corporation is hiring around the use of Dask with GPUs.
That’s a fairly generic posting. If you’re interested the posting doesn’t seem to fit then please apply anyway and we’ll tweak things.
hi sandro,
On 4/18/05, Sandro Böhme <s.boehme@inovex.de> wrote:
> Hello,
>
> sorry, but the "poor-man's object-repository-mapping" is already done by
> me ;-) with help of Oliver Kiessler and Christophe Lombart of the
> Graffito community (both not responsible for the poor part ;-) ).
> I submitted a proof of concept code as jira issue to the jcr-mapping
> subproject of Graffito but I would also like to work with the jackrabbit
> community. We just have to clearify where the code belongs to.
>
> Personally, I would like to head in a JDO and MDA-like direction to make
> the nodetype creation an easy task.
>
> It would be good to know how the registering of nodetypes will be
> accessible in jackrabbit in the future. By the api, by the
> custom_nodetypes.xml or both? Any hint is very much appreciated.
cheers
stefan
>
> Just a quick edited copy and past from the Graffito mailing list:
> It's just an initial version with the following three usecases for the
> JCR Mapping:
> 1. Registering custom nodetypes according to a Java Bean model at
> compile time.
> 2. Persist a Java Bean at runtime.
> 3. Loading a Java Bean from the repository at runtime.
>
> ==>1. For registering nodetypes I put the custom_nodetypes.xml in the
> nodetypes repository folder. Creating the xml file is realized with
> JAXB. The BeanConverter class maps the Java class structure to the
> nodetype structure and marshalls the custom_nodetypes.xml to the
> configured nodetype folder. I did not check the schema
> against the spec yet, because it will change anyway.
> At the moment I work on a xdoclet module to replace the jaxb part.
>
> ==>2. Persisting simply works like that:
> PersistenceManager pm = new PersistenceManager(jcrSession);
> String relPath = pm.insert(folder);
>
> ==>3. Loading can be implemented this way:
> Folder loadedFolder = (Folder) pm.getObject(relPath);
>
> All information for reading and writing a bean can be gained by the
> class itself. It does not need to implement an interface or something.
> The path acts like a unambiguous database id.
>
> ++ limitations ++
> o no complex property types can be saved (Folder.getDocument())
> at the moment
> o collections are not yet supported
> o deletion is not yet supported
> o only the basic JCR Types (String, boolean,...) and java.util.Date (not
> a JCR-basic type) are supported
> o the bean converter is not yet adapted to Graffito converter handling
> o mixin's (=Interfaces) are not yet supported
>
> ++ next steps ++
> o many more test cases need to be added and I'm sure the corresponding
> bugs need to be fixed ;-)
> o also delete the data created in the test cases
> o pm.update and pm.delete() need to be added
> o more atomic types (like Character,...) need to be added
> o support for complex types
> o support for collections
> o support for interfaces
> o make the namespace "graffito" configurable
> o support for JCR features like searching, versioning,...
> o creating an Ant target for registering Java classes as nodetypes
> o refactor some responsibilties and names of some classes
> o support queries
> o ...
>
> ++ configuration ++
> I don't think it is ready for check in because the configuration is not
> very clean at the moment, it has not enough test cases,...
>
> Please tell me your opinions about all this. Thank you very much.
>
> Regards,
>
> Sandro
>
>
> Ben Pickering wrote:
> > [fwding as I think I missed jackrabbit-dev with this]
> >
> > David,
> >
> >
> >>i think mapping of content to pojo's is not something that should
> >>be covered in the jcr-spec at this point. much like the
> >>ejb (or jdo) and jdbc specs are separate for good reasons i
> >>believe this should be the same for jcr.
> >
> >
> > I agree 100% with you. Perhaps I was misleading when I spoke about
> > 'standardising'. I meant perhaps a related 'best practice' more than
> > anything formal.
> >
> >
> >>i think it could get very interesting assuming hierarchies of
> >>child nodes and inheritance of nodetypes.
> >>does that make sense? this could be the start of a
> >>poor-man's object-repository-mapping.
> >
> >
> > The basics are pretty clear, yes. The idea of hierarchies of child
> > nodes is the interesting bit and where I came in, really, with the
> > thread about importing XML text to node hierarchies. I'm happy to be
> > a poor man if I don't have to write all that complex stuff Daniel
> > wants :) I'm sure Day has it in the labs anyway, heh?
> >
> >
> >>no, i think this is all very reasonable, and i think that if something
> >>like the above is what you are looking for, then this should be
> >>implemented reasonably fast. would that be something that
> >>you are interested in working on?
> >
> >
> > Well, I only checked out jackrabbit the other day, but I do intend to
> > take a look when I have the time. I guess it's all changed by now :)
> >
> > Glad I could finally be clear enough to explain what I was getting at.
> >
> > --
> > Cheers,
> > Ben
> >
> >
>
>
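The bean-to-property mapping Sandro describes (all information for reading and writing a bean gained from the class itself, no interface required) can be sketched in plain Java with the reflection/bean-introspection machinery. The sketch below is illustrative only — the class names are hypothetical and the map stands in for JCR node properties; the real Graffito converter writes to a JCR repository:

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: copies a bean's readable properties into a
// flat name->value map, the way a JCR mapper would copy them to node
// properties. Not the actual Graffito/Jackrabbit code.
public class BeanPropertySketch {

    public static Map<String, Object> toProperties(Object bean) {
        try {
            Map<String, Object> props = new HashMap<>();
            // Inspect the bean's own getters (stop before Object's)
            for (PropertyDescriptor pd
                    : Introspector.getBeanInfo(bean.getClass(), Object.class)
                                  .getPropertyDescriptors()) {
                if (pd.getReadMethod() != null) {
                    props.put(pd.getName(), pd.getReadMethod().invoke(bean));
                }
            }
            return props;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // A tiny example bean, analogous to the Folder used in the mail.
    public static class Folder {
        private String name = "docs";
        public String getName() { return name; }
        public void setName(String n) { name = n; }
    }

    public static void main(String[] args) {
        System.out.println(toProperties(new Folder())); // {name=docs}
    }
}
```

The reverse direction (loading a bean) would walk the same property descriptors and call the write methods instead.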
http://mail-archives.apache.org/mod_mbox/jackrabbit-dev/200504.mbox/%3C90a8d1c0050418064241677ff3@mail.gmail.com%3E
Geo-disaster recovery for Azure Event Hubs is now generally available! The following article gives an overview of how to enable regional disaster recovery capability for Azure Event Hubs.
Previously, the only way to protect against events that would qualify as a disaster or geo-disaster was to manually manage high availability in your clients and to keep independent datacenters in different regions in sync from within your client code. Any entity created in one datacenter had to be replicated to the other datacenter.
With this new feature, this is no longer necessary. Any entity created in one “primary” namespace is replicated to a “secondary” namespace.
Please note: this release does not include data disaster recovery. If you need the event data itself replicated, you still need to proceed as before. We will add data replication at a later point in time.
This feature is only available for Azure Event Hubs Standard namespaces.
Please see the full documentation of the feature, including code samples here:
At the release date we will have Geo DR in the following regions for new Event Hubs namespaces enabled: Central US, East US, East US 2, North Central US, North Europe, South Central US, Southeast Asia, West Europe, West US, France Central, France South
Please check back for updates if your region is not in the enabled regions.
https://blogs.msdn.microsoft.com/eventhubs/2017/12/18/azure-event-hubs-geo-disaster-recovery-is-now-generally-available/
Computer Science Archive: Questions from April 11, 2010
- Anonymous asked: There is a book stall where 5 different books are present on 5 different bookshelves: the 1st book on shelf A, the 2nd on B, the 3rd on C, the 4th on D, and the 5th on E. The MRPs of the books are Rs. 50.00, Rs. 60.00, Rs. 80.00, Rs. 100.00, and Rs. 120.00 respectively. Write a C program in Turbo C to calculate the following:
1) If I input the names of books and their quantities, then calculate and print the total purchase amount, and also print which bookshelf each book is stored on.
0 answers
- Anonymous asked: Hi, can anyone help me work out C++ code to convert binary to decimal? I'm new to C++ programming. 1 answer
- dorsty asked: Needs help on adding a meaningful GUI to replace the text menu and the command prompt input highlighted below. Can use any GUI containers.
package trygui;
import java.util.*;
import javax.swing.JOptionPane;
public class Main
{
public static void main(String[] args)
{
Scanner input = new Scanner(System.in);
int max = 10;
ClassDataArray arr, others; //reference to Array
others = new ClassDataArray(max);
others.insertOther("Jane", "Wilson", 453.4);
others.insertOther("Google", "Walmart", 2343.5);
others.insertOther("Amazon", "Ebay", 32000.0);
while(true)
{
System.out.println("\n Press 1: To print all the customers.");
System.out.println(" Press 2: To terminate a customer.");
System.out.println(" Press 3: To display the rest of the customers.");
System.out.println(" Press 0: To exit program\n");
System.out.print(" Please enter your choice >>");
int choose = input.nextInt();
switch(choose)
{
case 1:
others.displayOther();
break;
case 2:
String enter = JOptionPane.showInputDialog("Enter one of the above salary");
double dou = Double.parseDouble(enter);
boolean trueOrFal = others.deleteOther(dou);
System.out.println(" Deleting " +dou);
if(trueOrFal == true)
others.displayOther();
else
System.out.println("No such salary in our database");
break;
case 3:
others.displayOther();
break;
case 0:
System.out.printf(" Come back soon\n");
System.exit(1);
}
}
}
}
class Chray
{
private String stt;
private String lstt;
private double dr;
private static int count = 0;
public Chray()
{
count += 1;
}
public Chray(String x, String z, double y)
{
stt = x;
lstt = z;
dr = y;
}
public void est()
{
System.out.println(" The name is " + stt+" " +lstt +", salary is "+dr);
}
public double getAmt()
{
return(dr);
}
}
class ClassDataArray
{
private Chray b[];
private int nOther;
public ClassDataArray(int oMax)
{
b = new Chray[oMax];
nOther = 0;
}
public void insertOther(String last, String first, double sal) // put person into array
{
b[nOther] = new Chray(last, first, sal);
nOther++;
}
public void displayOther()
{
for(int i=0; i<nOther;i++)
b[i].est();
}
public boolean deleteOther(double searchAmt)
{
int j;
for(j=0; j<nOther; j++)
if( b[j].getAmt() == (searchAmt))
break;
if(j == nOther)
return false;
else
{
for(int k=j; k<nOther; k++)
b[k] = b[k+1];
nOther--;
return true;
}
}
1 answer
- Deadhead asked: Write a method int randomInt(int x, int y) that returns a random integer between x and y inclusive. Note that x and y can be positive or negative. 0 answers
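A minimal sketch of one way to write such a method, using java.util.Random:

```java
import java.util.Random;

// Returns a random integer between x and y inclusive. Works for
// negative bounds, and tolerates x > y by swapping the bounds.
// Assumes the range (hi - lo + 1) fits in an int.
public class RandomIntDemo {
    private static final Random RNG = new Random();

    public static int randomInt(int x, int y) {
        int lo = Math.min(x, y);
        int hi = Math.max(x, y);
        // nextInt(n) yields 0..n-1, so shift the result into [lo, hi]
        return lo + RNG.nextInt(hi - lo + 1);
    }

    public static void main(String[] args) {
        System.out.println(randomInt(-5, 5));
    }
}
```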
- Anonymous asked: Given the location and shape of several rectangular buildings in a city, compute the "silhouette" formed by these buildings when viewed from a distance. A building is represented by a triplet (L, H, R), where L and R denote the left and right x-coordinates of the building, and H represents the building's height. The silhouette is simply the union of the rectangles, represented as a list of pairs ordered from left to right. A pair (x, y) denotes a change in silhouette: at location x the height of the silhouette changes to y. The height of the silhouette is initially zero.
a) Design a divide-and-conquer algorithm to solve the silhouette problem. Provide both the pseudo-code and working code.
b) What is the complexity of your algorithm in (a)?
c) Provide a function T(n) to predict the time taken by your program on inputs of size n. How long would it take to compute the silhouette of 1,000,000 buildings?
0 answers
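One standard divide-and-conquer approach (the classic "skyline" algorithm): compute the silhouette of each half recursively, then merge the two silhouettes in a single linear sweep, tracking the current height contributed by each side. The merge is O(n), so T(n) = 2T(n/2) + O(n) = O(n log n); for n = 1,000,000 that is roughly 10^6 × 20 ≈ 2×10^7 merge steps. A sketch in Java (the question does not fix a language; Java is used for consistency with the rest of this archive's code):

```java
import java.util.ArrayList;
import java.util.List;

// Divide-and-conquer skyline sketch. A building is {L, H, R}; the
// silhouette is a list of {x, newHeight} pairs ordered left to right.
public class Skyline {
    public static List<int[]> silhouette(int[][] b) {
        return solve(b, 0, b.length - 1);
    }

    private static List<int[]> solve(int[][] b, int lo, int hi) {
        List<int[]> out = new ArrayList<>();
        if (lo == hi) { // one building (L,H,R): rises at L, drops at R
            out.add(new int[]{b[lo][0], b[lo][1]});
            out.add(new int[]{b[lo][2], 0});
            return out;
        }
        int mid = (lo + hi) / 2;
        return merge(solve(b, lo, mid), solve(b, mid + 1, hi));
    }

    // Linear-time merge of two silhouettes: sweep both point lists,
    // keeping the current height from each side; emit a point only
    // when the combined height actually changes.
    private static List<int[]> merge(List<int[]> a, List<int[]> b) {
        List<int[]> out = new ArrayList<>();
        int i = 0, j = 0, h1 = 0, h2 = 0, last = 0;
        while (i < a.size() || j < b.size()) {
            int x = Math.min(i < a.size() ? a.get(i)[0] : Integer.MAX_VALUE,
                             j < b.size() ? b.get(j)[0] : Integer.MAX_VALUE);
            // consume every point at this x from both lists
            while (i < a.size() && a.get(i)[0] == x) h1 = a.get(i++)[1];
            while (j < b.size() && b.get(j)[0] == x) h2 = b.get(j++)[1];
            int h = Math.max(h1, h2);
            if (h != last) { out.add(new int[]{x, h}); last = h; }
        }
        return out;
    }

    public static void main(String[] args) {
        for (int[] p : silhouette(new int[][]{{1, 3, 5}, {2, 4, 7}}))
            System.out.println("(" + p[0] + ", " + p[1] + ")");
        // expected: (1, 3) (2, 4) (7, 0)
    }
}
```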
- Anonymous asked: Hello, I had this program but I don't know how to do it. Please try to read all the instructions at the end, because they are a guideline for writing the program, and if possible RUN the program with the command prompt. Thank you in advance.
=========================================================================
This programming assignment explores several concepts discussed in class, notably the use of a for loop and the use of a while loop. It continues development of a relatively complex program by focusing on development of smaller portions of that program. It also requires you to make more design decisions than have previous assignments.
For this assignment and for all others, documentation (as discussed in class) is required and points will be deducted for inadequate or missing documentation. In particular, you should include (at a minimum), for each class and for each method in your program: your name (John Smith), the date (04/10/2010) you created the class (or method), and a brief description of what you intend the class (or method) to do. Unless told otherwise, you may use meaningful variable names of your choice, so long as they are valid variable names in Java.
The purpose of this program is to record census information for each member of as many families as desired.
This assignment continues the development of the software you began in your last assignment. In that assignment you created a Person class that was used by the Getpersoninfo class (which also contained the main() method for your program). You will recall that the Getpersoninfo class was essentially a stub that should have permitted you to focus your programming attention on the Person class. For this current programming assignment, you will re-use the Person class exactly as it was specified in your previous assignment, with no changes to Person required. In the event you were unable to complete the development of the Person class, or if you just want to make sure that your implementation of that class matched specifications, feel free to make use of our version of the Person class provided at the end of this document. (If you decide to use any or all of our implementation in the work you submit for this programming assignment, be certain to make the appropriate authorship attributions in your documentation.) Note: as with the last programming assignment, focus on doing good work on this program because you will be able to re-use it as the basis for your next programming assignment!
Again, you will create two classes in the same file. One of these classes will be the Person class as specified in your last assignment. The Familycensus class is the other class and it will contain the program's main() method. The Familycensus class will require the following static instance variables (you may choose different variable names), declared within the class body, but before the definition of any method:
static int householdsize = 0;
static int gender = 0;
static int age = 0;
static int married = 0;
static double totalincome = 0.0;
static int maxeducation = 0;
Most of your design work will be focused on the main() method, the function of which is described here. As necessary to convey direction to the user and to make sense of the output, you must use meaningful prompts and other print statements. The main() method will, in this order:
• instantiate an object of the Person class
• declare and assign an appropriate value for a loop control variable (sometimes called a "sentinel" value) to assure that a while loop is entered at least once
• until the user indicates a desire to stop execution of the while loop (through input to the askcontinue() method, further described below):
? determine from user input the count of family members (this number will include the user)
? use the count of family members to control execution of a for loop, the body of which will execute once for each family member; and, for each family member:
? use other Familycensus methods (further described below) to "get" required values from user input, and then use the dot operator and the appropriate method from the Person class to set the corresponding value in an object that is an instance of the Person class
? the Familycensus methods (and their corresponding Person methods) must be invoked in this order:
? getgender()
? getage()
? getmarried()
? getincome()
? getmaxeducation()
• As stated above: the for loop will execute within a while loop
? the for loop will stop execution when information for all members of a family has been entered
? the while loop will stop execution when information for all families has been entered
In addition to the main() method, the Familycensus class will also contain methods with the following signatures:
private static int gethouseholdsize()
private static int getgender()
private static int getage()
private static int getmarried()
private static double getincome()
private static int getmaxeducation()
private static int askcontinue()
The methods other than main() will share the same basic structure. Each will:
• instantiate a Scanner class object // don't forget the required import statement, which needs to be made in an appropriate location, ONCE for the entire program!
• thoughtfully prompt user to input an appropriate value
• use the value input by the user to set the value of the appropriate variable
• return that value
So, for example, the gethouseholdsize() method will:
• instantiate a Scanner class object
• prompt user to input number of persons in family, including the user, as an integer
• set the value of householdsize to the next integer input by the user
• return householdsize
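A sketch of the gethouseholdsize() steps just described. The assignment has each method create its own Scanner on System.in; here the Scanner is passed in as a parameter so the method can also be driven from a test string — treat that as an illustrative variation, not the required signature:

```java
import java.util.Scanner;

// Sketch of gethouseholdsize(): prompt, read an int, store it in the
// static variable, and return it.
public class HouseholdSizeSketch {
    static int householdsize = 0;

    public static int gethouseholdsize(Scanner in) {
        System.out.print("How many persons are in the family, including you? ");
        householdsize = in.nextInt();
        return householdsize;
    }

    public static void main(String[] args) {
        // Driving the method from a fixed string instead of System.in
        System.out.println(gethouseholdsize(new Scanner("4")));
    }
}
```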
The input options available to the user must remain as described in your previous assignment. In particular:
• options for the user's gender are integers: 1 for male, 2 for female, 0 for unreported
• options for the user's marital status are integers: 1 for single, 2 for married, 3 for divorced, 0 for unreported
• options for the user's educational level are integers: 0 for no HS diploma, 1 for HS diploma, 2 for attended college
The method askcontinue() has a different sort of functionality than these other methods and requires some additional thought. The purpose of the askcontinue() method is to get an integer value for a sentinel variable from the user, so that this sentinel value can be used to control the execution of the while loop found in the main() method. Here is pseudocode for the askcontinue() method:
• declare a local int variable to use as a sentinel value for a while loop
• instantiate a Scanner class object
• prompt the user to enter:
? 999 if he or she does not wish to enter information for another family,
? any other integer if he or she does wish to enter information for another family
• set the value of the local sentinel variable to the next integer the user inputs
• return the local sentinel value
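The pseudocode above can be sketched as follows. As in the earlier sketch, the Scanner is taken as a parameter (instead of being created inside the method, as the assignment specifies) purely so the loop can be exercised from a fixed input string:

```java
import java.util.Scanner;

// Sketch of askcontinue(): 999 means "stop", any other integer means
// "enter information for another family".
public class AskContinueSketch {
    public static final int STOP = 999;

    public static int askcontinue(Scanner in) {
        System.out.print("Enter 999 to stop, or any other integer to enter another family: ");
        int sentinel = in.nextInt();   // local sentinel value
        return sentinel;
    }

    public static void main(String[] args) {
        // Simulated session: one more family (7), then stop (999)
        Scanner demo = new Scanner("7 999");
        while (askcontinue(demo) != STOP) {
            System.out.println("...collect another family's data here...");
        }
    }
}
```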
/****************************************************************************************/
/* The following is a working implementation of the Person class, which you can use as part of your program */
/* if you provide appropriate authorship attribution in your documentation */
/****************************************************************************************/
/*************************************************/
/* class Person */
/* Primeaux, March 2010 */
/* Used to create a person object and to stores information */
/* about that person */
/*************************************************/
class Person{
// declare needed variables
int gender; // options: 0 = unreported, 1 = male, 2 = female
int age = 0;
int marriedstatus = 0; // 0=not-reported,1=single,2=married,
// 3=divorced
double familyincome = 0.0;
String maxeducation; // options: 0 = no HS diploma, 1 = high school,
// 2 = college
int personsinhousehold = 0;
// most of the following methods are very, very simple methods.
// Keeping them simple makes them easier for the beginning
// student to understand and create; this also illustrates
// a process of top-down, step-wise refinement that breaks down
// a complex problem into smaller parts that can each be more easily
// addressed.
/*************************************************/
/* recordgender()*/
/* Primeaux, March 2010 */
/* sets value for gender */
public void recordgender(int x){
this.gender = x;
} // end recordgender()
/*************************************************/
/* recordage() */
/* Primeaux, March 2010 */
/* sets value for age */
public void recordage(int x){
this.age = x;
} // end recordage()
/*************************************************/
/* recordmarriedstatus() */
/* Primeaux, March 2010 */
/* sets value for married status */
public void recordmarriedstatus(int x){
this.marriedstatus = x;
}// end recordmarriedstatus()
/*************************************************/
/* recordfamilyincome() */
/* Primeaux, March 2010 */
public void recordfamilyincome(double x){
this.familyincome = x;
}// end recordfamilyincome()
/*************************************************/
/* recordmaxeducation(int x) */
/* Primeaux, March 2010 */
/* uses switch statement to set */
/* maxeducation String */
public void recordmaxeducation(int x){
switch(x){
case 0: this.maxeducation = "no High School diploma";
break;
case 1: this.maxeducation = "High School graduate";
break;
case 2: this.maxeducation = "college";
break;
}
}// end recordmaxeducation()
/*************************************************/
/* recordpersonsinhousehold */
/* Primeaux, March 2010 */
/* sets value for personsinhousehold */
/* Note: uses conditional statment to check for an instance of "bad" data */
public void recordpersonsinhousehold(int x){
if(x < 1) x = 1; // the person answering is in the
// household
this.personsinhousehold = x;
} // end recordpersonsinhousehold()
/*************************************************/
/* personreport() */
/* Primeaux, March 2010 */
/* reports details related to a person */
/* Note: uses conditional statements to determine output */
public void personreport(){
System.out.printf("\n\n***********************\n");
System.out.println("This person's data includes:");
System.out.print("Gender: ");
if(this.gender == 1)
System.out.println("male");
else
if(this.gender == 2)
System.out.println("female");
else
System.out.println("unreported");
System.out.println("Age: " + this.age);
System.out.printf("Family income: $%10.2f\n", this.familyincome);
System.out.print("Highest level of education: ");
System.out.println(maxeducation);
// demonstrates use of AND and OR statements and a reasonable
// "default" condition
System.out.println("Total number of family members indicates ");
if((this.personsinhousehold > 2) &&
(this.personsinhousehold < 5))
System.out.println("average sized household");
else
// OK, so I wanted to show how to use OR. Because of the
// remainder of this program, one simpler way of
// achieving the following would be to start with the expression:
// if (this.personsinhousehold < 3) ... However,
if((this.personsinhousehold == 1) ||
(this.personsinhousehold == 2))
System.out.println("smaller than average sized household");
else
System.out.println("larger than average sized household");
} // end personreport()
} // end class Person
0 answers
- Anonymous asked: Write an applet that does the following: takes as input three numbers, computes two numbers (interest and amount), and prints these two numbers out. Have three text fields to input the three numbers (principal, rate, and the number of years), with five top-level variables: simpleInterest, amount, principal, numYears, rate. 0 answers
- Anonymous asked: Write Java program code that outputs the numbers 0-99, clearly labeling them "odd" and "even", using a for loop. 1 answer
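A short sketch of one way to do this; the labeling is factored into a small method so it can be checked separately:

```java
// Labels 0-99 as odd or even using a for loop.
public class OddEven {
    public static String label(int n) {
        return (n % 2 == 0) ? "even" : "odd";
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) {
            System.out.println(i + " is " + label(i));
        }
    }
}
```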
- Physics12 asked: My problem says to find the Maclaurin series of f and its radius of convergence. Graph f and its first few Taylor polynomials on the same screen.
f = sqrt(1 + x)
MAX POINTS!! 0 answers
- Anonymous asked: Click on this link (Rainfall statistics); the answer is already posted, I just need some additional information with this program.
Additionally, try to get it to ask for the month by name and report the highest and lowest months by name. This will use an array of month names parallel to the array of rainfall numbers. 2 answers
- Anonymous asked0 answers
- Anonymous asked2 answers
- Anonymous asked: I am stuck on this program. I am trying to write a program where the user can input numbers in an array and have my program use mergeSort to sort it. There is a problem with my first while loop. Am I forgetting something? Please help.
public class MergeSortA{
public static void mergeSort1(int [] inputArray,int [] tempArray, int left, int right) {
if (left < right) {
int center= (left + right) / 2;
mergeSort1(inputArray, tempArray, left, center);
mergeSort1(inputArray, tempArray, center +1, right );
merge(inputArray, tempArray, left, center +1, right);
}
}
public static void mergeSort1(int[] inputArray) {
int [] tempArray = new int[inputArray.length];
mergeSort1(inputArray, tempArray, 0, inputArray.length - 1);
}
private static void merge(int [] inputArray, int [] tempArray, int leftPos, int rightPos, int rightEnd ){
int leftEnd = rightPos - 1;
int tempPos = leftPos;
int numElements = rightEnd - leftPos + 1;
while ( leftPos <= leftEnd && rightPos <= rightEnd)
if(inputArray[leftPos].compareTo( inputArray[rightPos] ) <=0)
tempArray[tempPos++] = inputArray[leftPos++];
else
tempArray[tempPos] = inputArray[rightPos++];
while (leftPos <= leftEnd) // This will copy the rest of the left 1/2
tempArray[tempPos++] = inputArray[leftPos++];
while ( rightPos <= rightEnd) // This will copy the rest of the right 1/2
tempArray[tempPos++] = inputArray[rightPos++];
//copy tempArray back
for(int i=0; i < numElements; i++, rightEnd--)
inputArray[rightEnd] = tempArray[rightEnd];
}
}
0 answers
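Two likely culprits in the code above, independent of the while-loop layout: primitive int values have no compareTo method (that is for Integer objects and strings), and the else branch stores into tempArray[tempPos] without incrementing tempPos, so later elements overwrite each other. A corrected merge along the same lines might look like this (a sketch following the same structure, not the asker's exact assignment code):

```java
// Corrected merge step for an int[] merge sort: compares ints with
// <= directly and increments tempPos in both branches.
public class MergeSortFix {
    public static void mergeSort(int[] a) {
        mergeSort(a, new int[a.length], 0, a.length - 1);
    }

    private static void mergeSort(int[] a, int[] tmp, int left, int right) {
        if (left < right) {
            int center = (left + right) / 2;
            mergeSort(a, tmp, left, center);
            mergeSort(a, tmp, center + 1, right);
            merge(a, tmp, left, center + 1, right);
        }
    }

    private static void merge(int[] a, int[] tmp, int leftPos, int rightPos, int rightEnd) {
        int leftEnd = rightPos - 1;
        int tempPos = leftPos;
        int numElements = rightEnd - leftPos + 1;

        while (leftPos <= leftEnd && rightPos <= rightEnd)
            if (a[leftPos] <= a[rightPos])        // plain int comparison
                tmp[tempPos++] = a[leftPos++];
            else
                tmp[tempPos++] = a[rightPos++];   // tempPos++ was missing here

        while (leftPos <= leftEnd)                // rest of the left half
            tmp[tempPos++] = a[leftPos++];
        while (rightPos <= rightEnd)              // rest of the right half
            tmp[tempPos++] = a[rightPos++];

        for (int i = 0; i < numElements; i++, rightEnd--)
            a[rightEnd] = tmp[rightEnd];          // copy tmp back
    }

    public static void main(String[] args) {
        int[] a = {5, 1, 4, 2, 3};
        mergeSort(a);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 4, 5]
    }
}
```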
- Anonymous asked: Write a program that generates random passwords. A good password is one which is random and has a mixture of upper and lower case characters and digits.
Write a C program which can generate such passwords from 6 to 12 characters in length. The program should ask the user to enter the length of the password and then produce the generated password. Write a function to generate the password and a function to print it.
Hints:
1. Check a table of the ASCII codes. (Try man ascii or search the web.) You will see that '0' is the lowest of the alphanumeric characters and 'z' is the highest. There are some characters in this range that are not alphanumeric.
2. Use the srand() and rand() functions to produce random integer values, which should be between the ASCII values of '0' and 'z'. Therefore you have to eliminate those non-alphanumeric characters as part of the password. Use the functions in ctype.h to eliminate those characters that are not alphanumeric. Go to the Lesson 10 notes to find out which functions are suitable for you to use. Remember to include the stdlib.h, time.h, and ctype.h header files.
3. You can use a for loop to output the characters one at a time. You have to use %c as the format specifier to output them as characters, and they should be in the range of the ASCII values for alphanumeric characters.
Sample output:
Password Generating Program
Enter the length of the password: 12
Your new password is: Mr4uHp78YiL4
Part 2 – Testing
Testing the program should include the following tests:
- Normal operation: try a number of passwords between 6 and 12 characters long. Check to make sure the messages are correct and that a random password is generated.
- Make sure the user does not enter a number less than 6 or greater than 12. If that happens, ask the user to enter the length again until a value between 6 and 12 is entered.
Provide a copy of the source code and the output for all your testing.
0 answers
- Anonymous asked0 answers
- Anonymous asked: I need help writing a program that inputs a string and then splits the string only at the first space.
ex. string 1 = the big brown fox
string 1 = big brown fox
string 2 = the
then returns both strings to another function
so it would be: main calls a function to get the string; that function calls the function to split the string into the two parts and return to the second function, and the second function returns string2 into main; therefore each time you call the function, string1 gets smaller and smaller.
What I'm trying to do is get a string. The first time you call the function you get:
main string is: i need help
first call:
string1 = i
string2 = need help
second call:
string1 = need
string2 = help
third call:
string1 = help
string2 = null
fourth call:
don't allow
0 answers
- Anonymous asked: So the question is:
Write a program that uses function strcmp to compare two strings input by the user. The program should state whether the first string is less than, equal to, or greater than the second string.
That is the easy part. I have that part coded and it runs perfectly. However, my professor wants the program to allow for multiple inputs, for example allowing you to run a versus b and then run b versus c and so on until the user decides to quit.
My idea was to use a simple while statement. I asked the question "Would you like another comparison? press y to continue". I then used scanf to assign the user's input to variable c. The while statement was while ( c == 'y' ){. Unfortunately the code works perfectly without the while statement, but as soon as I put it in I get very weird results from the program.
Here is the code:
#include <stdio.h>
#include <string.h>
int main(void)
{
char str1[25]; /*initialize string 1*/
char str2[25]; /* initialize string 2*/
int c = 'y';
while(c == 'y'){
printf(" Enter the first string:\n ");
scanf("%s", str1); /* reads string 1*/
printf(" Enter the second string:\n ");
scanf("%s", str2); /* reads string 2*/
if(strcmp(str1, str2) >= 1) { /* compare strings: first string is greater than second */
printf("The first string is greater than the second string\n%s is greater than %s\n", str1, str2);
}
if(strcmp(str1, str2) <= -1){ /* compare strings: first string is less than second */
printf("The first string is less than the second string\n%s is less than %s\n", str1, str2);
}
if(strcmp(str1, str2) == 0){ /* compare strings: the strings are equal */
printf("The first string is equal to the second string\n%s is equal to %s\n", str1, str2);
}
printf( "Would you like another comparison? press y to continue:\n" );
scanf( "%d", c );
}
return 0;
}
The output looks like this.
Enter the firststring:
a
Enter the second string:
b
The first string is less than the second string
a is less than b
Would you like another comparison? press y to continue:
y
Enter the first string:
Enter the second string:
As you see, the while statement works in making the program repeat; however, when it repeats it does not give me the option to enter the first string, and the second string prompt is indented. This makes no sense to me and I am at a complete loss. The funny thing is it's probably something simple right in front of me. Any help would be greatly appreciated.
0 answers
- Physics12 asked: My problem says to find the Maclaurin series of f and its radius of convergence. Graph f and its first few Taylor polynomials on the same screen.
f = sqrt(1 + x)
MAX POINTS!! 0 answers
- Anonymous asked: Write a program that computes and displays the charges for a patient's hospital stay. First, the program should ask if the patient was admitted as an in-patient or an out-patient. If the patient was an in-patient, the following data should be entered:
- The number of days spent in the hospital
- The daily rate
- Charges for hospital services (lab tests, etc.)
If the patient was an out-patient, the following data should be entered:
- Charges for hospital services (lab tests, etc.)
- Hospital medication charges
The program should use two overloaded functions to calculate the total charges. One of the functions should accept arguments for the in-patient data, while the other function accepts arguments for the out-patient data. Both functions should return the total charges.
Input validation: do not accept negative numbers for any information.
0 answers
- Anonymous asked: Write a program that checks if a file exists or not. If the file does not exist, then the program prompts the user whether she wants to create it or not. If yes, the program creates the file and prints a confirmation message to the user. 1 answer
- Anonymous asked: I have a program with a binary tree and I need to be able to search it.
Search for the number in the tree. If the number is found, print the location of the number with respect to the root as (for example): The number XX is found at: root=>L=>L=>R=>L. If the number is not found, print the message "Number XX not found".
I have the tree structure set up but I do not know how to search it. 0 answers
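Assuming the tree is a binary search tree, the path can be recorded while walking down from the root: append "=>L" when going left and "=>R" when going right. A sketch (the Node/insert names are illustrative, not from the asker's code):

```java
// Searches a binary search tree while recording the path from the
// root in the form root=>L=>R=>...
public class TreePathSearch {
    static class Node {
        int value;
        Node left, right;
        Node(int v) { value = v; }
    }

    // Returns e.g. "root=>L=>R" if found, or null if absent.
    public static String find(Node root, int target) {
        StringBuilder path = new StringBuilder("root");
        Node cur = root;
        while (cur != null) {
            if (target == cur.value) return path.toString();
            if (target < cur.value) { path.append("=>L"); cur = cur.left; }
            else                    { path.append("=>R"); cur = cur.right; }
        }
        return null;
    }

    public static Node insert(Node root, int v) {
        if (root == null) return new Node(v);
        if (v < root.value) root.left = insert(root.left, v);
        else root.right = insert(root.right, v);
        return root;
    }

    public static void main(String[] args) {
        Node root = null;
        for (int v : new int[]{50, 30, 70, 40}) root = insert(root, v);
        String p = find(root, 40);
        System.out.println(p == null ? "Number 40 not found"
                                     : "The number 40 is found at: " + p);
    }
}
```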
- Anonymous asked: OK, two questions here actually. Say I have an enum type declared with its variables:
enum MyType{VARIABLE, ANOTHER, ANDONEMORE}
I'm reading in from a text file with the name VARIABLE in it; inFile will not allow me to have an operand of type MyType. So I am wondering if you can read in VARIABLE as a string, and then load it into a class with a public void function to set a variable of type MyType in the class equal to VARIABLE. If you try to just use the string value to load into the public function it gives a conversion error...
My other question is this, also reading in from a text file: say I have a name listed at the top of the text file. I want to read in this name, and then declare a class variable named the read-in name. Example text file:
myName
1 22 3 3... etc.
________________
string inName;
inFile >> inName;
At this point inName will be equal to "myName".
Now, what I want to do with it is to declare a class variable named "myName" (without quotes of course). Like this:
class myClass
______________
myClass myName;
^^^ That's what I'm wanting to accomplish, but can it be done? Thanks!!!!
0 answers
- Anonymous asked: Problem: Given the lengths of a number of boards, determine the number of triangles that could be cre… 0 answers
- Anonymous asked: The two swimming instructors, Jeff and Anna, have the current schedules shown below. An X denotes a one-hour time slot that is occupied with a lesson.
Jeff: Monday Tuesday Wednesday Thursday
11-12 X X
12-1 X X X
1-2 X X
2-3 X X X
Anna: Monday Tuesday Wednesday Thursday
11-12 X X X
12-1 X X
1-2 X X
2-3 X X X
Write a program with array(s) capable of storing the schedules. Create a main menu that allows the user to mark a time slot as busy or free for either instructor. Also, add an option to output the schedules to the screen. Next, add an option to output all time slots available for individual lessons (slots when at least one instructor is free). Finally, add an option to output all time slots available for group lessons (when both instructors are free).
The program should employ the following:
- as indicated in the problem description: 2 two-dimensional arrays, one for Jeff's schedule, and one for Anna's
- a main menu that provides options to the user
- a function that prints the schedule
- a function that prompts the user to select an instructor, day, and slot
- a function that prompts the user, then schedules or frees one of the instructor's slots. This function calls the second function described above.
This is on pages 443-444 of the textbook, Chapter 7, problem 17.
Textbook name: Problem Solving with C++ by Walter Savitch, 7th edition.
0 answers
- Anonymous asked: Write a program that has an array of at least 10 string objects that hold people's names and phone numbers. You may make up your own strings, or use the following:

"Becky Warren, 555-1223"
"Joe Looney, 555-0097"
"Geri Palmer, 555-8787"
"Lynn Presnell, 555-1212"
"Holly Gaddis, 555-8878"
"Sam Wiggins, 555-0998"
"Bob Kain, 555-8712"
"Tim Haynes, 555-7676"
"Warren Gaddis, 555-9037"
"Jean James, 555-4939"
"Ron Palmer, 555-2783"

The program should ask the user to enter a name or partial name to search for in the array. Any entries in the array that match the string entered should be displayed. For example, if the user enters "Palmer" the program should display the following names from the list:

Geri Palmer, 555-8787
Ron Palmer, 555-2783

Additional (and changed) requirements for this assignment:

(1) Load data from a text file.
(2) Perform case-insensitive search.
(3) Use either C-string functions or C++ string class methods.
- Anonymous asked: Write a program that can be used to gather statistical data about the number of movies college students see in a month. The program should perform the following steps:

A) Ask the user how many students were surveyed. An array of integers with this many elements should then be dynamically allocated.
B) Allow the user to enter the number of movies each student saw into the array.
C) Calculate and display the average, median, and mode of the values entered. (Use the functions you wrote in Problems 8 and 9 to calculate the median and mode.)

Input validation: do not accept negative numbers for input.

Note: you need to use two more functions: one to calculate and display the average of the values entered, and another to display the contents of the array that is dynamically allocated and then loaded.
- Anonymous asked: For the list shown below, illustrate each step of insertion sort:

83, 68, 55, 32, 46, 37, 67, 79, 96, 12
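For reference (not part of the archived page), the passes to illustrate can be generated programmatically; this sketch records the array after each insertion:

```cpp
#include <vector>
#include <cstddef>

// Sketch: insertion sort that records the array's state after every
// insertion, so each pass can be written out for the illustration.
std::vector<std::vector<int>> insertionSortPasses(std::vector<int> a) {
    std::vector<std::vector<int>> passes;
    for (std::size_t i = 1; i < a.size(); ++i) {
        int key = a[i];
        std::size_t j = i;
        // Shift larger elements right until key's slot is found.
        while (j > 0 && a[j - 1] > key) {
            a[j] = a[j - 1];
            --j;
        }
        a[j] = key;
        passes.push_back(a);  // snapshot after this pass
    }
    return passes;
}
```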
Source: http://www.chegg.com/homework-help/questions-and-answers/computer-science-archive-2010-april-11
Office Communicator Sign-in and Discovery
Topic Last Modified: 2009-04-02
Office Communicator must determine which server it should log on to by using the user’s URI (for example, jeremy@contoso.com) and any manual settings configured on the client. If manual settings were provided, the server to use is clear. However, if the URI was the only indicator provided, some discovery is required.
Communicator discovery varies based on configuration. After the client discovers the server to connect to, it tries to connect by using TCP or TLS over TCP. If TLS is used, the server provides a certificate to authenticate itself to the client. The client must validate the certificate before it continues. The client might negotiate compression (if using TLS over TCP), and then it initiates a SIP registration.
Next, the client sends a SIP REGISTER message to the server without any credentials. This prompts Office Communications Server to challenge for user credentials, and specifies to the Communicator client the authentication protocols that it accepts.
When it comes to providing credentials, Communicator has two options. Communicator can use the user’s current Windows credentials to log on, or it can prompt the user for credentials.
Authentication failures can occur during the first part of logon processing. This can occur when credentials are not already saved or when the desktop credentials do not match the account that Communicator is trying to use. This can also occur when the SIP URI, account name, or password is typed incorrectly or when credentials and the SIP URI do not match. An example of this is if Jeremy tries to log on with the URI sip:jeremy@contoso.com, but he uses the user account and password for CONTOSO\vadim instead of the account owner’s own credentials, CONTOSO\jeremy.
For organizations that plan to use automatic configuration, one of the requirements during server deployment is to create an internal DNS SRV record that maps one of the following records to the fully qualified domain name (FQDN) of the Enterprise pool or Standard Edition server that handles client sign-in requests:
- _sipinternaltls._tcp.<domain> (for internal TLS connections)
- _sipinternal._tcp.<domain> (for internal TCP connections, performed only if TCP is allowed)
When the client is set to use automatic configuration, it uses the SIP URI that is provided by the user to discover which server it should sign in to. Communicator does this by using DNS SRV records published for the domain part of the SIP URI.
For example, if the user enters a URI of sip:jeremy@contoso.com, Communicator uses contoso.com to discover a SIP server that uses DNS. Communicator looks for the following SRV records in its search for an appropriate server:
- _sipinternaltls._tcp.contoso.com
- _sipinternal._tcp.contoso.com
- _sip._tls.contoso.com
If these records do not exist, Communicator queries for host (A) records:
- sipinternal.contoso.com
- sipexternal.contoso.com
The first query looks for an internal server in the contoso.com domain that offers ports supporting TLS over TCP for clients. The second query seeks to discover an internal server in the contoso.com domain that offers TCP ports for clients. Finally, the third query looks for an Internet-reachable server for the contoso.com domain that offers ports supporting TLS over TCP for clients. Communicator never looks for an Internet-reachable server that supports TCP, because use of clear-text SIP on the Internet does not make sense from a security standpoint. In other words, Communicator is not aware whether the network that is being used is internal or external. Communicator queries for all DNS SRV records. However, it tries TLS over TCP connections first. TLS over TCP is forced through an Edge Server (no option to allow for unsecured TCP connections).
Finally, if all the DNS SRV records do not exist (not if they are not valid; only if they do not exist at all), the client falls back to sipinternal.<URI domain> and tries to resolve that host name. If the host name resolves to an IP address, Communicator tries to connect by using TLS over TCP, TCP, or both, depending on what the policy allows for. If this fails, it will try one last time with sipexternal.<URI domain>.
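The discovery order described above can be summarized in a short sketch (in Python; the function and parameter names are illustrative, not from the article):

```python
# Sketch: given the domain part of a SIP URI, build the ordered list of DNS
# names Communicator queries during automatic configuration, in the order
# described above (SRV records first, then fallback host records).
def discovery_candidates(sip_domain, allow_tcp=True):
    srv_records = ["_sipinternaltls._tcp." + sip_domain]
    if allow_tcp:
        # Skipped entirely when policy forbids clear-text TCP.
        srv_records.append("_sipinternal._tcp." + sip_domain)
    srv_records.append("_sip._tls." + sip_domain)

    # Host (A) records are tried only if none of the SRV records exist.
    fallback_hosts = ["sipinternal." + sip_domain, "sipexternal." + sip_domain]
    return srv_records, fallback_hosts
```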
Communicator policies can be put in place to prevent TCP from being used, and this prevents the second query from being issued. The EnableStrictDNSNaming policy can also be specified, which requires strict names for the computers discovered. In this case, Communicator is allowed to connect to servers only if the name is a match with the domain in the domain part of the user’s SIP URI or if the FQDN is sip.<URI domain>. If this policy is not enabled, any server name of the form <servername>.<URI domain> is allowed. As an example, for sip:jeremy@contoso.com, the host sip.contoso.com is always allowed (strict policy or not). Server77.contoso.com, sipfed.contoso.com, and ap.contoso.com are all also allowed if strict naming policy is not enabled. The following server names are never allowed because they do not tightly fit the domain that the user’s URI specified. Therefore, the client does not trust these servers as valid logon points: sip.eng.contoso.com, sip.contoso.net, sip.com, sip.contoso.com.cpandl.com, and so on.
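The naming rules above can be sketched as follows (a rough illustration of the policy as described, not product code; one reading of "strict" is that only the URI domain itself or sip.<URI domain> is accepted):

```python
# Sketch of the server-name validation described above.
def server_name_allowed(server_fqdn, uri_domain, strict=False):
    server_fqdn, uri_domain = server_fqdn.lower(), uri_domain.lower()
    if server_fqdn == "sip." + uri_domain:
        return True                       # always allowed, strict or not
    if strict:
        return server_fqdn == uri_domain  # strict: exact domain match only
    suffix = "." + uri_domain
    if not server_fqdn.endswith(suffix):
        return False                      # e.g. sip.contoso.net
    label = server_fqdn[: -len(suffix)]
    # <servername>.<URI domain> with a single extra label is allowed;
    # deeper names such as sip.eng.contoso.com are not.
    return bool(label) and "." not in label
```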
This tight validation between the host name and the URI is done specifically because the only configuration the client is provided with is the SIP URI. Because of this, the client must be very careful not to let DNS attacks trick it into connecting to a man-in-the-middle, who could thereby watch Communicator's traffic. By having a tight tie between the URI and the host names allowed for logon, Communicator has better certainty that the certificate the user is validating actually has authority for the domain to which the user is trying to log on.
After the host name is identified, Communicator also resolves the host name to an IP address. This usually occurs as the result of the DNS SRV request, but until the IP address is resolved, Communicator cannot connect. This can be a problem during logon also.
The latest version of Communicator enables the ability to manually specify both an internal and external server to log on against. Communicator always attempts to connect to the internal server if it is available, but it falls back to the external server. Previously, Communicator had only a single manual entry, which created problems for mobile workers. With the ability to specify an internal and external server, it is now easier for administrators to configure and enable laptop configurations that work across internal and external networks. This increased functionality is also important for companies where the domain in the user’s URI differs from their SIP enterprise server’s domain. Because the administrator can configure Communicator (on a laptop, for example) once, the user does not need to remember the internal or external servers and administrators do not have to publish DNS SRV records for all the domains they want to support for remote access users.
The Office Communicator client enables the user to automatically connect to the appropriate Office Communications Server without actually putting in the server name. Regardless of whether the client is inside the internal network or is working externally, this feature redirects the client and allows it to authenticate and connect to its own Office Communications Server (in the case of Standard Edition) or home pool (in the case of Enterprise Edition). This feature has a significant DNS dependency. For this to work successfully, the appropriate SRV records should be published both internally and externally.
When the Office Communicator client first starts and the user tries to connect, Office Communicator always tries to connect to the server or home pool in its same domain, or by using the same SIP URI as in the sign-in address. For example, if the sign-in name that is used is kim.akers@fabrikam.com, Office Communicator looks for the home pool or Office Communications Server in the same DNS namespace, which is fabrikam.com. This process is made easier by the usage of DNS SRV records, which ultimately points the client to the FQDN of the home pool or server in the correct domain. The process works the same whether the client is in an internal or external network.
The client starts querying SRV records and, by default, it always tries to use TLS for authentication. If TLS fails, then and only then will it fall back to Transmission Control Protocol (TCP).
- _sipinternaltls._tcp.fabrikam.com
- _sipinternal._tcp.fabrikam.com
Either of these first two DNS records should be published and available in the internal DNS namespace. So, if by now the client gets the host name back, it directly connects to the home pool or the Office Communications Server. Or else, it continues its query process, knowing that it is currently not in the internal network.
- _sip._tls.fabrikam.com
- _sip._tcp.fabrikam.com
If either of these queries succeeds, the client is redirected to the external edge of the Access Edge Server and subsequently to the internal home pool or the Office Communications Server. However, if it still fails, in a final attempt it tries to look up the host records directly, as in the following two examples. If this attempt to configure its settings automatically fails, Office Communicator will fail and require manual intervention.
- sip.fabrikam.com
- sipinternal.fabrikam.com
Source: http://technet.microsoft.com/en-us/library/dd637152(office.13).aspx
2012-05-21 23:25:19 8 Comments
I'm updating a struct of mine and I was wanting to add a std::string member to it. The original struct looks like this:
struct Value {
    uint64_t lastUpdated;
    union {
        uint64_t ui;
        int64_t i;
        float f;
        bool b;
    };
};
Just adding a std::string member to the union, of course, causes a compile error, because one would normally need to add the non-trivial constructors of the object. In the case of std::string (text from informit.com)
Since std::string defines all of the six special member functions, U will have an implicitly deleted default constructor, copy constructor, copy assignment operator, move constructor, move assignment operator and destructor. Effectively, this means that you can't create instances of U unless you define some, or all of the special member functions explicitly.
Then the website goes on to give the following sample code:
union U {
    int a;
    int b;
    string s;
    U();
    ~U();
};
However, I'm using an anonymous union within a struct. I asked ##C++ on freenode and they told me the correct way to do that was to put the constructor in the struct instead and gave me this example code:
#include <new>

struct Point {
    Point() {}
    Point(int x, int y) : x_(x), y_(y) {}
    int x_, y_;
};

struct Foo {
    Foo() { new (&p) Point(); }
    union {
        int z;
        double w;
        Point p;
    };
};

int main(void) { }
But from there I can't figure how to make the rest of the special functions that std::string needs defined, and moreover, I'm not entirely clear on how the ctor in that example is working.
Can I get someone to explain this to me a bit clearer?
@Luc Danton 2012-05-22 01:48:44
That new (&p) Point() example is a call to the Standard placement new operator (via a placement new expression), hence why you need to include <new>. That particular operator is special in that it does not allocate memory; it only returns what you passed to it (in this case, the &p parameter). The net result of the expression is that an object has been constructed.
If you combine this syntax with explicit destructor calls then you can achieve complete control over the lifetime of an object:
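The code that accompanied this sentence was lost in extraction; the following sketch (my reconstruction, not the answer's exact code) shows the idea of pairing placement new with an explicit destructor call:

```cpp
#include <new>
#include <string>
#include <cstddef>

// Sketch: placement new constructs a std::string inside raw storage, and an
// explicit destructor call ends its lifetime, giving full manual control.
std::size_t placement_demo() {
    alignas(std::string) unsigned char storage[sizeof(std::string)];
    std::string* s = new (&storage) std::string("lifetime");  // construct in-place
    std::size_t n = s->size();                                // use the object
    s->~basic_string();                                       // destroy explicitly
    return n;
}
```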
When and where you should construct and destroy the std::string member (let's call it s) in your Value class depends on your usage pattern for s. In this minimal example you never construct (and hence destruct) it in the special members:
The following is thus a valid use of Value:
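The class definition and usage example were lost in extraction; a sketch consistent with the surrounding discussion (details such as which member the constructor initializes are my guesses) would be:

```cpp
#include <cstdint>
#include <new>
#include <string>

// Reconstruction sketch: Value gains a std::string variant member, provides
// its own special members, and disables copy/move because we cannot know
// which union member is active.
struct Value {
    std::uint64_t lastUpdated;
    union {
        std::uint64_t ui;
        std::int64_t i;
        float f;
        bool b;
        std::string s;                 // the new non-trivial member
    };
    Value() : ui(0) {}                 // start with a trivial member active
    ~Value() {}                        // if s is active, destroy it manually first
    Value(const Value&) = delete;
    Value& operator=(const Value&) = delete;
};
```

Usage then vivifies s with placement new and destroys it explicitly before another member is made active.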
As you may have noticed, I disabled copying and moving Value. The reason for that is that we can't copy or move the appropriate active member of the union without knowing which one it is that is active, if any.
@Aconcagua 2016-06-08 11:32:37
"// Needed to get around a quirk of the language" - actually, this is wrong - you can call the destructor directly, if you do it right (need to resolve scope correctly):
p->std::string::~string();. More readable, though? Well, certainly looks more complicated, but uses a well-known data type, whereas the solution above is more compact (apart from the additional code line for the
using), but introduces a rarely known alias. Certainly a matter of personal taste (as concerning myself, I would vote for the well-known data type...).
@Ben Voigt 2012-05-22 02:07:51
There is no need for placement new here.
Variant members won't be initialized by the compiler-generated constructor, but there should be no trouble picking one and initializing it using the normal ctor-initializer-list. Members declared inside anonymous unions are actually members of the containing class, and can be initialized in the containing class's constructor.
This behavior is described in section 9.5 [class.union] and in section 12.6.2 [class.base.init].
So the code can be simply:
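The snippet that followed was lost in extraction; a sketch of what it presumably showed (the exact initializer arguments are my guess) is:

```cpp
// Sketch: the anonymous union's Point member is initialized directly in
// Foo's ctor-initializer list, so no placement new is needed for the
// initially active member.
struct Point {
    Point() {}
    Point(int x, int y) : x_(x), y_(y) {}
    int x_, y_;
};

struct Foo {
    Foo() : p(1, 2) {}   // initialize the variant member p directly
    union {
        int z;
        double w;
        Point p;
    };
};
```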
Of course, placement new should still be used when vivifying a variant member other than the one initialized in the constructor.
@dkim 2015-07-04 14:51:18
What happens if the Foo constructor is defined but does not choose one among those member variants (that is, Foo() { } instead of Foo() : p() { })? GCC 5.1 and Clang 3.6 turn out to compile the constructor without any warning or error: melpon.org/wandbox/permlink/gLkD49UOrrGhFUJc However, it is not clear what the standard says about that case.
@Ben Voigt 2015-12-22 15:38:22
@dkim: I'm pretty sure that leaves all the variant members in the same state, specifically storage obtained but initialization not performed. Section 3.8 (Object Lifetime) specifies the allowed operations on members in such a state.
Source: https://tutel.me/c/programming/questions/10693913/c11+anonymous+union+with+nontrivial+members
import random
import math
import matplotlib.pyplot as plt  # needed by the plotting cells below

num_rand_vals = 100
min_value = 1
max_value = 6
rand_vals = []
print("Roll: number")
for i in range(num_rand_vals):
    val = random.randint(min_value, max_value)
    rand_vals.append(val)
    if i < 10:
        print("%d: %d" % (i, val))
Roll: number
0: 4
1: 3
2: 2
3: 3
4: 3
5: 6
6: 4
7: 2
8: 1
9: 4
min_seen = min(rand_vals)
max_seen = max(rand_vals)
mean_seen = sum(rand_vals) / float(len(rand_vals))
print("When simulating %d random values between %d and %d" % (num_rand_vals, min_value, max_value))
print("The smallest I saw was %d" % min_seen)
print("The biggest I saw was %d" % max_seen)
print("And the mean value was %.02f" % mean_seen)
When simulating 100 random values between 1 and 6
The smallest I saw was 1
The biggest I saw was 6
And the mean value was 3.41
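For comparison (this cell is my addition, not part of the original notebook), the theoretical mean of a fair six-sided die is easy to compute:

```python
# Theoretical mean of a fair six-sided die: (1+2+3+4+5+6)/6 = 3.5,
# close to the simulated mean of 3.41 above.
expected_mean = sum(range(1, 7)) / 6.0
print("Theoretical mean of a fair die: %.2f" % expected_mean)
```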
## Plot the individual vals
plt.figure()
plt.xlabel("Roll Index")
plt.ylabel("Result")
plt.scatter(range(num_rand_vals), rand_vals)
plt.show()
## summarize how many times we saw each value
tally = [0 for x in range(max_value + 1)]
for i in rand_vals:
    tally[i] += 1
print("value #times_seen")
for t in range(len(tally)):
    print("%d: %d" % (t, tally[t]))
value #times_seen
0: 0
1: 14
2: 22
3: 15
4: 19
5: 18
6: 12
print "We expected to see the number 6 %.02f percent of the time = %d times" % (100./6, num_rand_vals/6) print "We actually saw 6 %d times" % (tally[6])
We expected to see the number 6 16.67 percent of the time = 16 times
We actually saw 6 12 times
## summarize the tally
mean_tally = sum(tally) / float(len(tally) - 1)  ## skip 0
min_tally = tally[1]
sumdiff = 0.0
for i in range(min_value, max_value + 1):
    diff = (tally[i] - mean_tally) ** 2
    sumdiff += diff
    if tally[i] < min_tally:
        min_tally = tally[i]
sumdiff /= (len(tally) - 1)
stdev_tally = math.sqrt(sumdiff)
print("the minimum tally was %d" % min_tally)
print("the maximum tally was %d" % max(tally))
print("the mean tally was %.02f +/- %.2f" % (mean_tally, stdev_tally))
the minimum tally was 12
the maximum tally was 22
the mean tally was 16.67 +/- 3.35
## plot the tally, a histogram
plt.figure()
plt.xlabel("Value")
plt.ylabel("Number of occurrences")
plt.bar(range(max_value + 1), tally)
plt.show()
## The hist function will do everything for us in one step
plt.figure()
plt.xlabel("Value")
plt.ylabel("Number of occurrences")
plt.hist(rand_vals, bins=range(max_value + 2))
plt.show()
## The hist function can also plot the probability of each number, i.e. the density function
plt.figure()
plt.xlabel("Value")
plt.ylabel("Percent of occurrences")
plt.hist(rand_vals, bins=range(max_value + 2), density=True)  # 'normed' was removed in newer matplotlib
plt.show()
Source: http://nbviewer.jupyter.org/url/schatzlab.cshl.edu/teaching/exercises/stats/1.Rolling%20a%20die.ipynb
In the late 1980s, more than 10 different character sets were developed to represent the alphabets of languages like Russian, Arabic, Hebrew, and Turkish. It became cumbersome to keep track of and manage the different character sets.
That’s why the Unicode standard was developed. It integrated all these character sets into a single encoding standard that boasts over 100,000 different characters from 146 languages. There happens to be a really great Smashing Magazine article explaining Unicode and other encoding methods.
UTF-8 is a variable-width character encoding that represents all the Unicode characters in one to four 8-bit bytes. UTF-8 is the dominant character encoding standard for the web, in use on over 90 percent of web pages.
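As a quick illustration of that variable width (shown in Python here only because byte counts are easy to inspect there; this snippet is not from the original article), different characters encode to different numbers of bytes:

```python
# UTF-8 uses 1 byte for ASCII, up to 4 bytes for characters like emoji.
for ch in ["A", "é", "€", "😀"]:
    print(ch, len(ch.encode("utf-8")))  # prints 1, 2, 3, 4 bytes respectively
```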
C# escape characters
A convenient way to include UTF-8 non-Latin characters in C# programs is by using character escape sequences. Escape sequences let us represent characters that have a specific meaning to the compiler, like quotation marks, which are used to denote text strings, as literal characters. For example, to store a string value in a variable, we use quotation marks to show that the text is string.
To store the sentence "Scripts over everything!" as a string in a variable, we write the following line of code:

string message = "Scripts over everything!";
When we run it, it works just fine and the message is printed.
However, if our text has quotation marks in it, we will see a number of errors if we write the code below:

String message = "Drew said "Scripts over everything" ";
To fix this we use escape characters.
For this case, we use the \" escape sequence, which lets us add quotation marks as part of the string. The code should look like this:
String message = "Drew said \"Scripts over everything\" ";
Similarly, to use non-Latin characters supported in UTF-8 we need to use the \u escape sequence followed by the hexadecimal notation for the specific character we want to use. We'll get deeper into this in the next section.
Using UTF-8 in code
To get started with the case study solution we'll begin by creating a new project. Open Visual Studio 2017 and select File > New > Project. You'll see a dialog box asking what type of project we want to create. We need to select Console App under .NET Core. Let's name the app “textio”.
Click OK. You should see the following code in the Program.cs file:
using System;

namespace textio
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}
We need to install the Twilio .NET package using the NuGet Package Manager GUI, the Package Manager command line interface (CLI), or the .NET CLI. Twilio has an awesome walkthrough on how to do this. In this tutorial, we'll use the NuGet Package Manager CLI. Open the console from the menu bar by selecting Tools > NuGet Package Manager > Package Manager Console. After it opens, enter the command Install-Package Twilio to install the package.
Add types to the namespace
We need to make the compiler aware that we're using certain types in our program. It also stops us from prepending namespaces when using their methods. For example, by including the line using System;, we can simply say Console.Write(message) instead of System.Console.Write(message). Add the following using directives:

using System.Diagnostics;
using System.Globalization;
using Twilio;
using Twilio.Rest.Api.V2010.Account;
using Twilio.Types;
Your Program.cs file should look like this now:
using System;
using System.Diagnostics;
using System.Globalization;
using Twilio;
using Twilio.Rest.Api.V2010.Account;
using Twilio.Types;

namespace textio
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}
Add Twilio credentials
Before we continue we need to create a Twilio account. Head over to their website and sign up for a free account. Once the sign up process is completed, we need to add our Twilio Account SID and Auth Token to use the Twilio Programmable SMS service in our code. You can get these in your Dashboard once you’ve completed the sign-up process.
It’s important to remember that when working with production code we don’t enter secret keys or tokens as we are going to do here. Another post on the Twilio blog brilliantly shows how to handle developer secrets in production code.
We'll also need a Twilio phone number. You can get a free trial number from your Twilio Dashboard under Programmable SMS. Be sure to send yourself a trial message with the phone number you used to register with Twilio. You can only send text messages to registered phone numbers with a trial account. Of course, the phone number you use to register with Twilio must be able to receive SMS messages.
Now that we have our Account SID, Auth Token, and phone number, we delete the Console.WriteLine("Hello World!"); statement from Program.cs and replace it with the code below:
// Find your Account Sid and Auth Token at and add them below
const string accountSid = "<Your Twilio Account SID>";
const string authToken = "<Your Twilio Auth Token>";
TwilioClient.Init(accountSid, authToken);

// Add the number you want to send a text to here
string rec = "<Your recipient's phone number>";

// Add your Twilio phone number here
string phone = "<Your Twilio Phone Number>";
Replace the placeholders in the code above with the information from your Twilio dashboard. Use a registered phone number as the recipient’s phone number so you can send messages to it from your trial account.
Send the first text message
At the end of the Main method in Program.cs, right after string phone = "<Your Twilio Phone Number>";, paste the code below:
// The message you want to send comes here
Console.WriteLine("Enter your message:"); // Prompt
string text = Console.In.ReadLine();
text = DecodeEncodedNonAsciiCharacters(text);
var to = new PhoneNumber(rec);

// Putting everything together and sending the text
var message = MessageResource.Create(
    to,
    from: new PhoneNumber(phone),
    body: text);

Console.WriteLine("\nYou just sent a text with Twilio *Mic Drop* ");
Console.WriteLine(message.Sid);
Console.Write("\nSuccess! Press any key to exit...");
Console.ReadKey(true);
You'll notice that this code includes a reference to a new method, DecodeEncodedNonAsciiCharacters. We are going to use the Windows command line to input the SMS messages we will send. By default the Windows command line does not support UTF-8 characters or escape sequences. This new method will take the text input from the command line, including the text for the escape sequences, and convert it to escape sequences in C#.
private static string DecodeEncodedNonAsciiCharacters(string value)
{
    return System.Text.RegularExpressions.Regex.Replace(
        value,
        @"\\u(?<Value>[a-zA-Z0-9]{4})",
        m => {
            return ((char)int.Parse(m.Groups["Value"].Value, NumberStyles.HexNumber)).ToString();
        });
}
The code blocks above strap everything together: MessageResource.Create takes your number, the recipient, and the message, then sends the message using Twilio. The last few lines output a message to the command line containing the message ID (SID) returned by Twilio to confirm your message was sent.
Send a text message
Test the code with text. Run the application by pressing F5, enter the message “Hello! Is it memes you’re looking for?” at the command line prompt and press Enter. You should receive a text message like this:
Send a message with UTF-8 characters
We can send a message in Japanese using escape sequences. "Hello" in Japanese is こんにちは, which can be represented by the escape sequence \u3053\u3093\u306B\u3061\u306F.
Pretty cool! I used the r12a Unicode converter app to convert the Japanese text to its escape sequence.
Run the app with F5 and enter “\u3053\u3093\u306B\u3061\u306F” when prompted for your message.
We should get lovely greeting in Japanese as show below:
Lets go a step further and add some emoji ✨, I used the r12a app again to get some emoji escape sequences. Run the app, enter "\u3053\u3093\u306B\u3061\u306F \u2728" and press Enter when prompted for a message.
We get this:
I’m a huge movie buff. Replace this message with the following code and see what you get!
`"\uD83D\uDC00 \uD83C\uDF72 \uD83D\uDC68\u200D\uD83C\uDF73 \uD83D\uDC45";`
Can you guess what movie this is?
Summary
That’s it! We’re sending texts in any language supported by UTF-8. Today, we went through UTF-8, what it is and why it's important. We found out about escape sequences and how they help us manipulate strings. We particularly learned to use these escape sequences to add UTF-8 characters in our texts and sent some dope texts in Japanese. Hopefully this makes it easier for your products to reach a larger audience.
Additional resources
You can find the complete project code on GitHub.
Feel free to reach out to me with questions on Twitter @malgamves, Instagram @malgamves or GitHub.
Source: https://www.twilio.com/blog/text-emojis-non-latin-characters-c-sharp-dot-net-sms
How to display full Basler USB camera view within customized window size
Hi all, I am new to VS and OpenCV, but I think I have tried a lot to find an answer to my question and ended up with nothing. I am trying to connect a Basler USB camera to a computer and use Visual Studio and OpenCV to display the camera view. My ultimate goal is to create a program that displays the view and, when I click capture, captures the image and automatically does image processing on it. Right now I am stuck at displaying the full field of view of the camera. The default only gives me a small portion of the top-left part of the full 2748×3840 pixel view. But if I set the window to 2748×3840, it is too big. Does someone know how to make the window smaller but still display the full field of view of the camera?
Thank you very much! My code:
#include <iostream>
#include "opencv2\opencv.hpp"
#include <stdint.h>

using namespace cv;
using namespace std;

int main(int, char**)
{
    VideoCapture cap(0);
    if (!cap.isOpened()) {
        return -1;
    }

    cap.set(CAP_PROP_FRAME_WIDTH, 3840);
    cap.set(CAP_PROP_FRAME_HEIGHT, 2748);

    while (1) {
        Mat frame;
        cap >> frame;
        imshow("Webcam", frame);
        if (waitKey(30) >= 0)
            break;
    }
    return 0;
}
Source: http://answers.opencv.org/question/202852/how-to-display-full-basler-usb-camera-view-within-customized-window-size/
Hi there, I am writing a program that needs a function to iterate through an array, pick out the index that has a capital 'M', and return that index, but only for the first occurrence of a capital 'M'. If no capital 'M' is found, it should return -1. Here is what I have at the moment (I could be way out in left field with this, as I am new to functions).
#include <stdio.h>
int findM (char string[], int numVals){
int i = 0;
int indexM;
for (i = 0; i < numVals; ++i){
if (string[i] == 'M'){
indexM = string[i];
break;
}
else {
indexM = -1;
}
}
return indexM;
}
int main(void) {
char userString [15] = "M as in Mancy";
printf("%d",findM(userString, 15));
return 0;
}
You get this because you are storing the ASCII value of 'M' in the indexM variable, but you should instead hold the index of that character, right? So do it like below in your findM function:
for (i = 0; i < numVals; ++i){
    if (string[i] == 'M'){
        indexM = i;
        break;
    }
    else {
        indexM = -1;
    }
}
return indexM;
}
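For what it's worth, the C standard library already has a first-occurrence search: strchr returns a pointer to the first matching character (or NULL), and pointer subtraction recovers the index. A sketch of an alternative (the function name is made up, and the signature is simplified relative to the question's version):

```c
#include <string.h>

/* Returns the index of the first 'M' in s, or -1 if there is none. */
int findM_strchr(const char *s) {
    const char *p = strchr(s, 'M');
    return p ? (int)(p - s) : -1;
}
```

This avoids the uninitialized-variable edge case in the original (when numVals is 0, indexM is never assigned).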
|
https://codedump.io/share/VwMq6e4lX76L/1/basic-functions-in-c
|
CC-MAIN-2017-04
|
refinedweb
| 184
| 61.13
|
by Tomislav Smrečki
Android Instant Apps are a cool new way to consume native apps without prior installation. Only parts of the app are downloaded and launched, giving the users a native look and feel in a couple of seconds.
How do they work?
First of all, don’t confuse them with Progressive Web Apps where a launcher icon opens a web app via the Chrome browser. An Instant app will actually be installed on your phone, but without the need to search for it on the Play Store.
Web URLs will trigger the Google Play Store on your phone and fetch only the part of the app that is associated with the requested URL. The rest of the app is not downloaded. This way users can quickly enjoy the native experience of your Android application.
What’s the background?
Well, you need to divide your Android project into a couple of modules. One of them is a base module with the essential code which is used in all other modules (API connection, database, shared preferences etc.). The other, feature modules, contain specific functionalities and activities which can be accessed via associated URLs.
Let's say you have a web app with a list of products and a single product page. For example, you can link your domain's /products URL to launch the ProductsListActivity and /products/10 to launch the ProductActivity.
To make them accessible as instant app activities, they need to be packed into individual feature modules and they need to have associated App Links defined in their module manifests. We will call them Product and Product list modules.
Now, when a user tries to open /products/10, both the Product and Base modules will start to download and the ProductActivity will be launched.
What are app links and how are they defined?
You’ve probably heard of deep links. They are defined in the app manifest, and they will be registered to the OS. When a user tries to open such a link, the OS will ask the user to choose between opening the link in a web browser or in your app. However, this is not enough for Instant apps, you need to go one step further — App Links. You need to include the autoVerify=”true” property.
<activity android:name=".ProductActivity">
    <intent-filter android:autoVerify="true" android:order="100">
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE" />

        <data android:scheme="http" android:host="example.com" />
        <data android:scheme="https" android:host="example.com" />
    </intent-filter>
</activity>
Your app will verify whether the links you specified are really associated with your domain. For this, you need to include the assetlinks.json file in the .well-known folder of your domain root: https://yourdomain.com/.well-known/assetlinks.json.
Also, notice the android:order=”100″ property. This is actually a priority in this case. If you have a product list and a product single that correspond to the same path (/products and /products/10), the product single activity will be launched if there’s an id after the /products path. If not, then the product list activity is launched.
It is very important to define this. If there are two activities that correspond to the same path, the Play Store won’t know which part of the app should be fetched.
Associate your app with your domain
The assetlinks.json will need to contain your SHA256 keystore hashes. The relation field is set to the default value below, and the target object needs to be filled with app specific data and your SHA256 hash of the keystore.
[{ "relation": ["delegate_permission/common.handle_all_urls"], "target": { "namespace": "android_app", "package_name": "com.example.app", "sha256_cert_fingerprints":["00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00"] } }]
When autoVerify=true does its magic, all associated App Links will directly launch your app. If you don’t have the app installed, the instant app will be downloaded instead.
Here’s an example of a demo app we did recently. When clicked on the associated link, a screen like this opens and offers to use the instant app instead. Note how quickly the app is opened, and on Oreo it’s even faster.
How to Define Android Instant Modules?
For an instant app, your project will consist of at least three different modules. You need to use Android Studio 3.0 for this. If you’re creating your app from scratch, there’s an option to enable the Instant app support for your project.
All of the following modules will be initialised automatically. If you’re modifying an older app, you’ll need to break the old app module into a single base module and a couple of feature modules. Also, you’ll need to create an App and an Instant app module, which you will use to build both regular and instant app APKs.
App Module
First, you have to create an app module which defines the dependencies for all other modules (base + feature modules). In the build.gradle file of this module, you will need to define the following:
apply plugin: 'com.android.application'
...

dependencies {
    implementation project(':product')
    implementation project(':productlist')
    implementation project(':base')
}
Base Module
In this module, you will define the following dependency statements. Also, make sure that the ‘com.android.feature’ plugin is applied here.
apply plugin: 'com.android.feature'

android {
    baseFeature true
    ...
}

dependencies {
    api 'com.android.support:appcompat-v7:26.0.1'
    api 'com.android.support.constraint:constraint-layout:1.0.2'
    implementation 'com.google.firebase:firebase-appindexing:11.0.4'
    application project(':app')
    feature project(':product')
    feature project(':productlist')
}
Note that here, the compile statements become API statements for the regular dependencies we used before. The application project and feature projects are defined separately.
Feature Module
This module will have the following setting, also with the com.android.feature plugin applied.
apply plugin: 'com.android.feature'
...

dependencies {
    implementation project(':base')
    ...
}
You need to state which module is your base module and include it with the implementation project statement. Next, you can include the dependencies which are required only for this specific module. For example, if you’re using an animation library which is not used in any of the other modules.
Instant App Module
Finally, now there’s a com.android.instantapp plugin to be included in the build.gradle file for the instantapp module.
apply plugin: 'com.android.instantapp'

dependencies {
    implementation project(':product')
    implementation project(':productlist')
    implementation project(':base')
}
In this module, we will define which modules will be built as instant apps. The result of the instantapp module build is a zip file with the instant app APKs which you can upload separately to Google Play Store in the Android Instant Apps release manager. These APKs are handled similarly as the regular ones, they have their own rollout history and versioning.
That’s it! It’s fairly simple to start developing Android Instant Apps. But, there’s always a but!
What were the Android Instant Apps’ challenges?
First of all, the Instant Apps are not enabled by default for now. If you want to try it, you need to check your phone settings under Google account and enable the Instant Apps setting.
Next, we found that it’s extremely important to specify app links data in the following format:
<intent-filter android:autoVerify="true">
    ...
    <data android:scheme="http" android:host="example.com" />
    <data android:scheme="https" android:host="example.com" />
</intent-filter>
Both http and https schemes need to be defined as shown in this code snippet. Any other way would cause a link verification failure and the app wouldn’t be linked properly.
Also, there is a recommendation to include the following code snippet into one of the activities in your app manifest. This annotates which activity should be launched in case the Instant app is launched from the Settings or a system launcher.
<meta-data
    android:name="default-url"
    android:value="https://example.com" />
The official documentation states that the Google Search would offer Instant app annotation by default (small thunder icon), but we had problems with it. For our demo app, this was not the case. Google Search results didn’t annotate our demo links as Instant apps and the links led to the web page. Only if we tried to open the associated link from another app, like Gmail, the whole instant app process was triggered and the instant app was launched. Have you encountered any similar problems?
Conclusion
When first announced two years ago, I was very enthusiastic about Android Instant Apps. They respond to the problem of users having to search for the apps on the Store and wait till they’re downloaded to start using them. Web apps are much more accessible in that regard and the ease of discovery is much better.
Instant apps come really close to filling this gap between web and native mobile apps. They already act very well and I think that they will become more popular with time. The main problems we encountered was a rather small community and the lack of proper documentation, but the situation on that matter is also getting better.
We would love to hear from you if you’ve tried using them or had any challenges implementing them!
Originally published at.
|
https://www.freecodecamp.org/news/android-instant-apps-101-what-they-are-and-how-they-work-8b039165ed24/
|
CC-MAIN-2021-31
|
refinedweb
| 1,503
| 57.27
|
@wownetortNikita Starichenko
6+ years full-stack developer
Hi everyone! There is a lot of information about different JS best practices. About various life hacks and features in this language. I want to tell you about equally useful, but less popular tips for working with this JavaScript.
1. Variables declared with “var” should be declared before they are used
Variables declared with var have the special property that, regardless of where they're declared in a function, they "float" to the top of the function and are available for use even before they're declared. That makes scoping confusing, especially for new coders.

To keep confusion to a minimum, var declarations should happen before they are used for the first time.
Bad example:
var x = 1;

function fun() {
    alert(x); // Noncompliant: x is declared later in the same scope
    if (something) {
        var x = 42; // Declaration in function scope (not block scope!) shadows the global variable
    }
}

fun(); // Unexpectedly alerts "undefined" instead of "1"

Good example:

var x = 1;

function fun() {
    print(x);
    if (something) {
        x = 42;
    }
}

fun(); // Prints "1"
2. Variables should be declared with “let” or “const”
ECMAScript 2015 introduced the let and const keywords for block-scope variable declaration. Using const creates a read-only (constant) variable.

The distinction between the variable types created by var and by let is significant, and a switch to let will help alleviate many of the variable scope issues which have caused confusion in the past.

Because these new keywords create more precise variable types, they are preferred in environments that support ECMAScript 2015. However, some refactoring may be required by the switch from var to let, and you should be aware that they raise SyntaxErrors in pre-ECMAScript 2015 environments.

This rule raises an issue when var is used instead of const or let.
Bad example:
var color = "blue"; var size = 4;
Good example:
const color = "blue"; let size = 4;
3. The global “this” object should not be used
When the keyword this is used outside of an object, it refers to the global this object, which is the same thing as the window object in a standard web page. Such uses could be confusing to maintainers. Instead, simply drop the this, or replace it with window; it will have the same effect and be more readable.
Bad example:
this.foo = 1;          // Noncompliant
console.log(this.foo); // Noncompliant

function MyObj() {
    this.foo = 1;      // Compliant
}

MyObj.func1 = function() {
    if (this.foo == 1) { // Compliant
        // ...
    }
}

Good example:

foo = 1;
console.log(foo);

function MyObj() {
    this.foo = 1;
}

MyObj.func1 = function() {
    if (this.foo == 1) {
        // ...
    }
}
4. Variables and functions should not be declared in the global scope
This rule should not be activated when modules are used.
Bad example:
var myVar = 42;        // Noncompliant
function myFunc() { }  // Noncompliant

Good example:

window.myVar = 42;
window.myFunc = function() { };

or

let myVar = 42;
let myFunc = function() { }

or

// IIFE
(function() {
    var myVar = 42;
    function myFunc() { }
})();
5. “undefined” should not be assigned
undefined is the value you get for variables and properties which have not yet been created. Use the same value to reset an existing variable and you lose the ability to distinguish between a variable that exists but has no value and a variable that does not yet exist. Instead, null should be used, allowing you to tell the difference between a property that has been reset and one that was never created.
Bad example:
var myObject = {};
// ...
myObject.fname = undefined; // Noncompliant
// ...
if (myObject.lname === undefined) {
    // property not yet created
}
if (myObject.fname === undefined) {
    // no real way of knowing the true state of myObject.fname
}

Good example:

var myObject = {};
// ...
myObject.fname = null;
// ...
if (myObject.lname === undefined) {
    // property not yet created
}
if (myObject.fname === null) {
    // property was created, then reset
}
6. “NaN” should not be used in comparisons
NaN is not equal to anything, even itself. Testing for equality or inequality against NaN will yield predictable results, but probably not the ones you want.

Instead, the best way to see whether a variable is equal to NaN is to use Number.isNaN(), available since ES2015, or (perhaps counter-intuitively) to compare it to itself. Since NaN !== NaN, when a !== a you know it must equal NaN.
Bad example:
var a = NaN;

if (a === NaN) { // Noncompliant; always false
    console.log("a is not a number"); // this is dead code
}
if (a !== NaN) { // Noncompliant; always true
    console.log("a is not NaN"); // this statement is not necessarily true
}

Good example:

if (Number.isNaN(a)) {
    console.log("a is not a number");
}
if (!Number.isNaN(a)) {
    console.log("a is not NaN");
}
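One subtlety worth adding: the legacy global isNaN() coerces its argument to a number first, while Number.isNaN() does not, so the two disagree on non-numeric values. A quick sketch (any ES2015 runtime):

```javascript
const a = NaN;

// Self-comparison: NaN is the only value that is not equal to itself.
console.log(a !== a); // true

// Number.isNaN is true only for the actual NaN value...
console.log(Number.isNaN(NaN));    // true
console.log(Number.isNaN("oops")); // false

// ...while the global isNaN coerces first: Number("oops") is NaN.
console.log(isNaN("oops")); // true
```

So Number.isNaN answers "is this value NaN?", whereas the global isNaN answers "does this value coerce to NaN?".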
7. Jump statements should not occur in “finally” blocks
Using return, break, throw, and continue from a finally block overwrites similar statements from the suspended try and catch blocks.

This rule raises an issue when a jump statement (break, continue, return, or throw) would force control flow to leave a finally block.
Bad example:
function foo() {
    try {
        return 1; // We expect 1 to be returned
    } catch (err) {
        return 2; // Or 2 in cases of error
    } finally {
        return 3; // Noncompliant: 3 is returned before 1 or 2, which we did not expect
    }
}

Good example:

function foo() {
    try {
        return 1; // We expect 1 to be returned
    } catch (err) {
        return 2; // Or 2 in cases of error
    }
}
8. Promise rejections should not be caught by ‘try’ block
An exception (including reject) thrown by a promise will not be caught by a nesting try block, due to the asynchronous nature of execution. Instead, use catch method of Promise or wrap it inside await expression.
This rule reports try-catch statements containing nothing else but call(s) to a function returning a Promise (thus it’s less likely that catch is intended to catch something else than Promise rejection).
Bad example:
function runPromise() {
    return Promise.reject("rejection reason");
}

function foo() {
    try { // Noncompliant: the catch clause of the 'try' will not be executed for the code inside the promise
        runPromise();
    } catch (e) {
        console.log("Failed to run promise", e);
    }
}

Good example:

function foo() {
    runPromise().catch(e => console.log("Failed to run promise", e));
}

// or

async function foo() {
    try {
        await runPromise();
    } catch (e) {
        console.log("Failed to run promise", e);
    }
}
9. “await” should not be used redundantly
An async function always wraps the return value in a Promise. Using return await is therefore redundant.
Bad example:
async function foo() {
    // ...
}

async function bar() {
    // ...
    return await foo(); // Noncompliant
}

Good example:

async function foo() {
    // ...
}

async function bar() {
    // ...
    return foo();
}
10. “void” should not be used
The void operator evaluates its argument and unconditionally returns undefined. It can be useful in pre-ECMAScript 5 environments, where undefined could be reassigned, but generally its use makes code harder to understand.
Bad example:
void (function() { ... }());
Good example:
(function() { ... }());
11. Shorthand promises should be used
When a Promise needs to only "resolve" or "reject", it's more efficient and readable to use the methods specially created for such use cases: Promise.resolve(value) and Promise.reject(error).
Bad example:
let fulfilledPromise = new Promise(resolve => resolve(42));
let rejectedPromise = new Promise(function(resolve, reject) {
    reject('fail');
});

Good example:

let fulfilledPromise = Promise.resolve(42);
let rejectedPromise = Promise.reject('fail');
12. “future reserved words” should not be used as identifiers
The following words may be used as keywords in future evolutions of the language, so using them as identifiers should be avoided to allow an easier adoption of those potential future versions:
await
class
const
enum
export
extends
implements
import
interface
let
package
private
protected
public
static
super
yield
Use of these words as identifiers would produce an error in JavaScript strict mode code.
P.S. Thanks for reading! More tips coming soon!
Special thanks to SonarQube and their rules –
More tips: Top 25 C# Programming Tips
|
https://coinerblog.com/top-12-lesser-known-tips-for-javascript-best-practices-8t26335n/
|
CC-MAIN-2021-10
|
refinedweb
| 1,293
| 57.27
|
Hi guys ^^ can you help me? I keep getting this "error C2447: missing function header (old-style formal list?)". I just wanted to make two converters and somehow I ended up with this. Please help me ^^
# include <iostream> # include <cstdlib> using namespace std; int choice; void menu(); int main() { double micrograms = 0, kilograms = 0, cups = 0, ounce = 0, metricton= 0, ton = 0, uston=0, rupees = 0, dollar = 0, euros = 0; while ( choice>=0&&choice<=9) { menu(); cin >> choice; switch (choice) { case 1: cout << "\nEnter the number of Cups: "; cin >> cups; cout << "\n" << cups << " cups is " << cups * 0.0625 << " gallons" << endl; break; case 2: cout << "\nEnter the number of kilos: "; cin >> kilograms; cout << "\n" << kilograms << " kg is " << kilograms *2.20462 << " pounds." << endl; break; case 3: cout << "\nEnter the amount of micrograms: "; cin >>micrograms ; cout << "\n" << micrograms << " micrograms is " << micrograms * 0.001 << " milligrams. " << endl; break; case 4: cout << "\nEnter the amount of ounce: "; cin >> ounce; cout << "\n" << ounce << " ounce is " << ounce * 28.3495<< " grams." << endl; break; case 5: cout << "\nEnter the amount of metric ton: "; cin >> metricton; cout << "\n" << metricton << " metric ton is " << metricton * 0.984207 << " Imperial Ton." << endl; break; case 6: cout << "\nEnter the amount of ton: "; cin >> ton; cout << "\n" << ton << " ton is " << ton * 32000 << " ounce." << endl; break; case 7: cout << "\nEnter the amount of U.S. ton: "; cin >> uston; cout << "\n" << uston << " U.S. ton is " << uston * 0.892857 << " Imperial ton." << endl; case 8: choice = 9; break; default: cout << "\nPlease enter a valid choice.\n " << endl; choice = 0; }system("pause");system("cls"); } cout << "End of program " << endl; system("pause"); return 0; } void menu() { cout << " Conversion in Weight: \n" << endl; cout << "1. Cups --> Gallons: " << endl; cout << "2. Kilograms --> Pounds: " << endl; cout << "3. Microgram --> Milligram: " << endl; cout << "4. Ounce --> Gram: " << endl; cout << "5. Metric Ton --> Imperial Ton: " << endl; cout << "6. Ton --> Ounce: " << endl; cout << "7. 
US Ton --> Imperial Ton: " << endl<< endl; cout << "To quit : type 8." << endl << endl; } const int MAX_CURRENCY = 3; const string currency_name[MAX_CURRENCY] = { " Pakistani Rupees", " Euro", " Dollar", }; const double exchange_rate[MAX_CURRENCY][MAX_CURRENCY] = { { 1, 0.0087, 0.0095 }, { 115.2, 1, 1.1 }, { 0.091, 104.73, 1} }; { int currency1 = 0, value = 0, currency2 = 0; double rate = 0; cout << "Currency Converter *Market values accurate as of 01/08/2014*\n" << endl; while (true) { cout << "Available Currencies:" << endl; cout << "---------------------" << endl; for (int i=0; i<MAX_CURRENCY; i++) cout << i+1 << ". " << currency_name[i] << endl; cout << "Currencies are chosen by entering their corresponding index value.\n\n"; cout << "Please choose a currency: "; cin >> currency1; currency1--; cout << "You have selected " << currency_name[currency1] << endl; cout << "Please enter a value in " << currency_name[currency1] << endl; cin >> value; cout << "You have chosen " << value << currency_name[currency1] << endl; cout << "Please choose the currency you wish to convert to: "<< endl; cin >> currency2; currency2--; cout << "You have chosen " << currency_name[currency2] << endl; rate = value * exchange_rate[currency1][currency2]; cout << value << " " << currency_name[currency1] << " = " << rate << currency_name[currency2] << endl<< endl; }; }
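For reference, the C2447 error comes from the second half of the paste: after the exchange_rate table there is a bare { ... } block at file scope, and a block outside a function needs a function header. A minimal sketch of the shape the compiler expects (table values copied from the paste; the wrapper function name is made up):

```cpp
#include <cassert>
#include <cmath>

const int MAX_CURRENCY = 3;

// Exchange-rate table copied from the paste above.
const double exchange_rate[MAX_CURRENCY][MAX_CURRENCY] = {
    { 1,     0.0087, 0.0095 },
    { 115.2, 1,      1.1    },
    { 0.091, 104.73, 1      }
};

// Giving the stray block a header like this resolves C2447.
double convert(int from, int to, double value) {
    return value * exchange_rate[from][to];
}
```

The currency-menu loop from the paste would then live inside the caller of convert (for example, inside main), rather than floating at file scope.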
|
https://www.daniweb.com/programming/software-development/threads/503191/hi-first-year-student-here-cramming-for-finals
|
CC-MAIN-2022-27
|
refinedweb
| 475
| 63.7
|
Justin Carlson (12,755 Points)
Method not allowed when trying to add builder route?
I have been trying to figure out what I'm doing wrong here... I even deleted my workspace and started fresh with a new template for the course and still get this error after naming the bear.
"Method Not Allowed
The method is not allowed for the requested URL."
Here is what I have for code. I am sure it's something simple that I have just missed, but I cannot find it:
import json
from flask import (Flask, render_template, redirect,
                   url_for, request, make_response)
from options import DEFAULTS

app = Flask(__name__)

def get_saved_data():
    try:
        data = json.loads(request.cookies.get('character'))
    except TypeError:
        data = {}
    return data

@app.route('/')
def index():
    return render_template('index.html', saves=get_saved_data())

@app.route('/builder')
def builder():
    return render_template(
        'builder.html',
        saves=get_saved_data(),
        options=DEFAULTS
    )

@app.route('/save', methods=['POST'])
def save():
    response = make_response(redirect(url_for('builder')))
    data = get_saved_data()
    data.update(dict(request.form.items()))
    response.set_cookie('character', data)
    return response

app.run(debug=True, host='0.0.0.0', port=8000)
1 Answer
Kenneth Love (Treehouse Guest Teacher)
I have a feeling that you have a form in your HTML sending data, via POST, to either /builder or /.
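If that is the case, the fix is either to point the form's action at the /save endpoint (which already accepts POST), or to allow POST on whatever route the form actually targets. A minimal standalone sketch of the latter (not the full app above):

```python
from flask import Flask, request

app = Flask(__name__)

# By default a Flask route only answers GET; a POST to it returns
# 405 "Method Not Allowed". Listing both methods avoids that.
@app.route('/builder', methods=['GET', 'POST'])
def builder():
    if request.method == 'POST':
        return 'saved'
    return 'builder page'
```

Flask's test client confirms that both verbs now succeed on the route.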
|
https://teamtreehouse.com/community/method-not-allowed-when-trying-to-add-builder-route
|
CC-MAIN-2021-25
|
refinedweb
| 205
| 50.02
|
Search the Community
Showing results for tags 'gsap'.
Animate a line going from point A to B
Abarth posted a topic in GSAP

I've recently got lucky with an answer from PointC in another thread, and now, having spent a few hours researching without luck on another matter, I guess I'll try my luck again. The problem: I have 30 points, each with an X and Y value (DIVs with a Top and Left setting), placed around a map of 1536x1080px. A user can select any of these points, and when that happens, a line should be drawn (animated in) from one point to the selected one. This seems really straightforward, and to some extent I have managed to do that with canvas drawing (p5.js), but it turns out extremely unsharp and seems way too complicated for what it should do. I've even tried just creating a DIV and animating its width, thereby faking a line. I'm assuming maybe the GSAP SVG animation tools can solve this, but I had no luck finding it; it's all about pre-made SVG files from Illustrator, which are then masked in animations. I can't do that, seeing as I have too many points and never know how many new ones will be created. Any good solutions to this? It seems too simple to be this complicated :S / Chris

I'm running into a situation and I'm not able to solve it by myself. I have section 2 with a scroll-to-top when entering the section, and it is breaking the anchor menu: for example, if you are in section 0 and click on the menu to go to section 3, the scroll will stop at section 2. I have been searching and tried some of the solutions, but none of them worked. How could I avoid the section with the autoscroll hijacking the menu? Thanks!!
Implementing Giaco's Page Slider (GSAP) in React
Kingsley88 posted a topic in GSAPHello everyone at GSAP. Thanks for the great ongoing job. For the past day or two, I've been trying to implement GIaco's full page slider on my react website. This is the link to the original on codepen - What I am trying to do is achieve the exact same thing but with react. So far I have not been able to get anything to work especially because some of the line of code in the codepen example above don't make much sense to me when trying to implement them in react. This is what I have so far - import React, { Component } from 'react'; import { TweenMax, TimelineMax, ScrollToPlugin, CSSPlugin, Expo } from 'gsap/all'; import ReactPageScroller from "react-page-scroller"; import '../../styles/components/home.scss'; import Nav from '../nav/Nav'; const plugins = [ CSSPlugin ]; class Home extends Component { state = { slides: [], animating: true } constructor(props) { super(props); this.Go = this.Go.bind(this) } componentWillMount() { const slide = this.state.slides; const indx = slide.length - 1; const Anim = this.state.animating; for(var i = slide.length; i--;) { slide[i].anim = TweenMax.to(slide[i], 0.7, { yPercent: -100, paused: true }); } document.addEventListener("wheel", this.Go); } Go(e){ var SD=isNaN(e)?e.wheelDelta||-e.detail:e; if(SD>0 && indx>0 ){ if(!Anim){Anim=slide[indx].anim.play(); indx--;} }else if(SD<0 && indx<box.length-1){ if(!Anim||!Anim.isActive()){indx++; Anim=box[indx].anim.reverse();} }; if(isNaN(e))e.preventDefault(); }; render() { return ( <div className="home"> <Nav /> <div className="slide" ref={(slide) => { this.state.slides.push(slide) }}>1</div> <div className="slide" ref={(slide) => { this.state.slides.push(slide) }}>2</div> <div className="slide" ref={(slide) => { this.state.slides.push(slide) }}>3</div> </div> ); } } export default Home; Please, if anyone could spare the time in this busy festive period to help me out, I would be ecstatic. Thanks all and Merry Christmas.
|
https://staging.greensock.com/search/?tags=gsap&updated_after=any&sortby=relevancy&page=5&_nodeSelectName=cms_records19_node&_noJs=1
|
CC-MAIN-2022-27
|
refinedweb
| 649
| 63.59
|
union of proxy groups More...
#include <vtkSMProxyGroupDomain.h>
union of proxy groups
The proxy group domain consists of all proxies in a list of groups. This domain is commonly used together with vtkSMProxyProperty. Valid XML elements are:
* <Group name=""> where name is the groupname used by the proxy manager to refer to a group of proxies.
See Also: vtkSMDomain, vtkSMProxyProperty
Definition at line 39 of file vtkSMProxyGroupDomain.h.
Reimplemented from vtkSMSessionObject.
Add a group to the domain. The domain is the union of all groups.
Returns true if the value of the property is in the domain. The property has to be a vtkSMProxyProperty or a subclass. All proxies pointed to by the property have to be in the domain.
Implements vtkSMDomain.
Returns true if the proxy is in the domain.
Returns the number of groups.
Returns the group with the given id. Does not perform a bounds check.
Returns the total number of proxies in the domain.
Given a name, returns a proxy.
Returns the name (in the group) of a proxy.
Returns the name (in the group) of a proxy.
Set the appropriate ivars from the xml element. Should be overwritten by subclass if adding ivars.
Reimplemented from vtkSMDomain.
Definition at line 83 of file vtkSMProxyGroupDomain.h.
|
http://www.paraview.org/ParaView3/Doc/Nightly/html/classvtkSMProxyGroupDomain.html
|
crawl-003
|
refinedweb
| 207
| 70.6
|
I am very, very new to Java and I would like to know how I can compare two integers. I know == gets the job done, but what about equals? Can this compare two integers? (When I say integers, I mean int, not Integer.)
My code is:
import java.lang.*;
import java.util.Scanner;
//i read 2 integers the first_int and second_int
//Code above
if(first_int.equals(second_int)){
//do smth
}
//Other Code
int is a primitive. You can use the wrapper
Integer like
Integer first_int = 1; Integer second_int = 1; if(first_int.equals(second_int)){ // <-- Integer is a wrapper.
or you can compare by value (since it is a primitive type) like
int first_int = 1; int second_int = 1; if(first_int == second_int){ // <-- int is a primitive.
JLS-4.1. The Kinds of Types and Values says (in).
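A related pitfall the answer doesn't spell out: once you switch to the Integer wrapper, == compares references, and it only appears to work for small values because Integer.valueOf caches the range -128 to 127 (per the JLS; a JVM may cache more). A sketch (class and helper name are made up for illustration):

```java
public class IntegerCompareDemo {
    // == on wrapper types compares references, not values.
    static boolean sameRef(Integer x, Integer y) { return x == y; }

    public static void main(String[] args) {
        Integer a = 127, b = 127;   // inside the cached range (-128..127)
        Integer c = 128, d = 128;   // outside it (on a default JVM)

        System.out.println(sameRef(a, b));  // true  (same cached object)
        System.out.println(sameRef(c, d));  // false (two distinct objects)
        System.out.println(c.equals(d));    // true  (value comparison)
    }
}
```

So for int, == is fine; for Integer, always use equals() (or Integer.compare) and never rely on ==.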
|
https://codedump.io/share/FGrKlL0R3JJW/1/java-compare-2-integers-with-equals-or-
|
CC-MAIN-2016-44
|
refinedweb
| 133
| 78.14
|
I am having great trouble trying to make this program run with a double-click from Windows. When I double-click on it, nothing happens, but when I check the Windows process list I can see that javaw is running at about 8,000K; still, nothing pops up on the screen. I have tried a test program, a simple Hello World created by JDeveloper, and it works, but this program does not. Any ideas?
My manifest, which was created by JDeveloper, looks as follows:
Manifest-Version: 1.0
Main-Class: RemoveAddress
Created-By: Oracle JDeveloper 3.2
Name: TxtFilter.class
Java-Bean: True
Name: ConsoleWindow.class
Java-Bean: True
Name: RemoveAddress.class
Java-Bean: True
Name: ConsoleWindow$1.class
Name: RemoveAddress$2.class
Name: RemoveAddress$4.class
Name: connections.properties
Name: RemoveAddress$1.class
Name: RemoveAddress$3.class
Name: Storage.class
Java-Bean: True
Name: RemoveUtility.class
This is a GUI Application and the classes I am using are:
RemoveUtility
RemoveAddress (Main Class)
TxtFilter
ConsoleWindow
Storage
These classes have these imports in them:
import javax.swing.filechooser.FileFilter;
import java.io.*;
import java.util.*;
import javax.swing.*;
import oracle.jdeveloper.layout.*;
import java.awt.event.*;
import java.awt.*;
|
http://forums.devshed.com/programming-42/java-jar-35212.html
|
CC-MAIN-2017-13
|
refinedweb
| 206
| 53.98
|
From: Martin Weiser (weiser_at_[hidden])
Date: 2002-07-01 04:43:22
Thanks for the clarification. I'll give the xgemm-specialization a second
try as soon as I find the time.
Yours,
Martin
On Samstag, 29. Juni 2002 15:04, Joerg Walter wrote:
> Yep. But I shouldn't only state it, but also show. So I wrote a little
> sample:
>
> ----------
> #include <iostream>
> #include <boost/numeric/ublas/vector.h>
> #include <boost/numeric/ublas/matrix.h>
> #include <boost/numeric/ublas/io.h>
>
> namespace boost { namespace numerics {
>
> // vector assignment_operation scalar
> template<>
> struct vector_assign_scalar<class scalar_multiplies_assign<double,
> double> > {
> void operator () (vector<double> &v, const double &t) {
> std::cout << "here we are" << std::endl;
> for (unsigned i = 0; i < v.size (); ++ i)
> v (i) *= t;
> }
> };
>
> } }
>
> int main () {
> numerics::vector<double> v (1);
> double d = 0;
> v.clear ();
> v *= d;
> return 0;
> }
>
> ----------
>
> which should show, how to achieve this (works for me under GCC 3.1).
> And yes, I realize, that vector_assign<> and matrix_assign<> don't have
> the optimal interface: they should have been free functions in fact,
> but this is the way, we got the thing working on MSVC 6.0 ;-(.
>
> > It's just that I considered specializing matrix_assign<> for dgemm
> > and, reading the code, thought the necessary expression type
> > information to be lost somewhere. Probably I'm just to dumb.
> > Preferring a design that allows such specializations, I just wanted
> > to draw your attention to this point.
--
Source: https://lists.boost.org/Archives/boost/2002/07/31214.php
This is an article for everyone who does not want to spend hours messing around with the AVIFile functions just to read or change a simple AVI video. I have wrapped the most important AVIFile functions into three easy-to-use C# classes that can handle the following tasks:
These features cover the common use cases like creating a video from a couple of images and a wave sound, extracting the sound track from a video, cutting out short clips, or grabbing a single picture from a movie.
This article has got two sections:
The Explore tab lets you explore an AVI file, and offers simple tasks that don't need editable streams:
At the top of the form, you can choose the AVI file you want to explore. text.avi from the test data folder is pre-selected. On the left side, you can display header information about the video and the wave sound stream (if available). Also, you can see image frames from the video in a PictureBox:
The images are read by a VideoStream object. GetFrameOpen prepares the stream for decompressing frames, GetFrameClose releases the resources used to decompress the frame, and GetBitmap decompresses a frame and converts it to a System.Drawing.Bitmap.
AviManager aviManager = new AviManager(txtAviFileName.Text, true);
VideoStream aviStream = aviManager.GetVideoStream();
aviStream.GetFrameOpen();
picFrame.Image = aviStream.GetBitmap(Convert.ToInt32(numPosition.Value));
aviStream.GetFrameClose();
aviManager.Close();
In the middle of the demo form, you can work with whole streams:
Decompress removes the compression from a video stream. It creates a new file and video stream, decompresses each frame from the old stream, and writes it into the new stream. The result is a large new .avi file with the same video but no compression.
Compress changes the compression of the video, or applies compression to an uncompressed video. It does the same as Uncompress, but compresses the new stream. These two functions use the same method, CopyFile:
private void CopyFile(String newName, bool compress){
    //open compressed file
    AviManager aviManager = new AviManager(txtAviFileName.Text, true);
    VideoStream aviStream = aviManager.GetVideoStream();
    //create un-/re-compressed file
    AviManager newManager = aviStream.DecompressToNewFile(newName, compress);
    //close compressed file
    aviManager.Close();
    //save and close un-/re-compressed file
    newManager.Close();
}
Whenever an instance of VideoStream creates a compressed stream, it asks you for a codec and settings:
Extract Bitmaps splits the video into many separate bitmap files:
VideoStream stream = aviManager.GetVideoStream();
stream.GetFrameOpen();
String path = @"..\..\testdata\";
for(int n=0; n<stream.CountFrames; n++){
    stream.ExportBitmap(n, path + n.ToString() + ".bmp");
}
stream.GetFrameClose();
Of course, you can save the images in any format. ExportBitmap is just a shortcut for these three lines:

Bitmap bmp = stream.GetBitmap(position);
bmp.Save(FileName);
bmp.Dispose();
Extract Video copies the whole video stream into a new AVI file. You can use it to get rid of all the other streams like MIDI, text and Wave sound.
The lower box deals with Wave sound. Extract Sound copies the whole sound stream into a Wave file. This is not a big task, it requires only four lines of code:
AviManager aviManager = new AviManager(txtAviFileName.Text, true);
AudioStream audioStream = aviManager.GetWaveStream();
audioStream.ExportStream(@"..\..\testdata\sound.wav");
aviManager.Close();
Extract a few Seconds lets you copy video and sound between second X and second Y. First, a CopyForm dialog lets you enter X and Y, then these parts are cut out of the video and sound streams:

AviManager aviManager = new AviManager(txtAviFileName.Text, true);
VideoStream aviStream = aviManager.GetVideoStream();
CopyForm dialog = new CopyForm(0, aviStream.CountFrames / aviStream.FrameRate);
if (dialog.ShowDialog() == DialogResult.OK) {
    int startSecond = dialog.Start;
    int stopSecond = dialog.Stop;
    AviManager newFile = aviManager.CopyTo(
        "..\\..\\testdata\\video.avi",
        startSecond, stopSecond);
    newFile.Close();
}
aviManager.Close();
Add Sound lets you choose a .wav file, and adds it to the video. You can use this feature to add a sound track to a silent video, for example, re-add the sound to a video extracted with Extract Video. Adding sound is a simple task of three lines:
String fileName = GetFileName("Sounds (*.wav)|*.wav");
if(fileName != null){
    AviManager aviManager = new AviManager(txtAviFileName.Text, true);
    aviManager.AddAudioStream(fileName);
    aviManager.Close();
}
The last set of functions is about creating new video streams. Enter a list of image files in the box and animate them:
Create uncompressed builds a new video from the images, and saves it without any compression. Create and Compress does the same, except that it displays the compression settings dialog and compresses the images. Both methods create a new file, and pass a sample bitmap to AddVideoStream. The sample bitmap is used to set the format of the new stream. Then, all the images from the list are added to the video.
//load the first image
Bitmap bitmap = (Bitmap)Image.FromFile(txtFileNames.Lines[0]);
//create a new AVI file
AviManager aviManager = new AviManager(@"..\..\testdata\new.avi", false);
//add a new video stream and one frame to the new file
VideoStream aviStream = aviManager.AddVideoStream(true, 2, bitmap);
int count = 0;
for(int n=1; n<txtFileNames.Lines.Length; n++){
    if(txtFileNames.Lines[n].Trim().Length > 0){
        bitmap = (Bitmap)Bitmap.FromFile(txtFileNames.Lines[n]);
        aviStream.AddFrame(bitmap);
        bitmap.Dispose();
        count++;
    }
}
aviManager.Close();
Add Frames appends the images to the existing video stream. To an uncompressed video stream, we could append frames by simply opening the stream and adding frames as usual. But a compressed stream cannot be re-compressed. AVIStreamWrite, which is used by AddFrame, would not return any error, but it would add the new frames uncompressed and produce nothing but strangely colored pixel storms. To add frames to a compressed stream, the existing frames must be decompressed and added to a new compressed stream. Then the additional frames can be added to that stream:

//open file
Bitmap bmp = (Bitmap)Image.FromFile(txtFileNames.Lines[0]);
AviManager aviManager = new AviManager(txtAviFileName.Text, true);
VideoStream aviStream = aviManager.GetVideoStream();
//streams cannot be edited - copy to a new file
AviManager newManager = aviStream.DecompressToNewFile(@"..\..\testdata\temp.avi", true);
aviStream = newManager.GetOpenStream(0);
//add images
Bitmap bitmap;
for(int n=0; n<txtFileNames.Lines.Length; n++){
    if(txtFileNames.Lines[n].Trim().Length > 0){
        bitmap = (Bitmap)Bitmap.FromFile(txtFileNames.Lines[n]);
        aviStream.AddFrame(bitmap);
        bitmap.Dispose();
    }
}
aviManager.Close();   //close old file
newManager.Close();   //save and close new file
//delete old file, replace with new file
System.IO.File.Delete(txtAviFileName.Text);
System.IO.File.Move(@"..\..\testdata\temp.avi", txtAviFileName.Text);
Now that you know how to use the AVIFile wrapper classes, let's have a look at the background.
The Edit tab demonstrates tasks for editable AVI streams, like pasting frames at any position in the stream, or changing the frame rate:
When you have chosen a file to edit, an editable stream is created from the video stream, and the editor buttons become enabled. A normal video stream is locked; for inserting and deleting frames, you need an editable stream:
AviManager file = new AviManager(fileName, true);
VideoStream stream = file.GetVideoStream();
EditableVideoStream editableStream = new EditableVideoStream(stream);
file.Close();
On the left side, you can copy or cut frame sequences, and paste them at another position in the same stream:
Copying frames from one stream, and pasting them into another or the same stream, is only two lines of code:
//copy frames
IntPtr copiedData = editableStream.Copy(start, length);
//insert frames
editableStream.Paste(copiedData, 0, position, length);
There is no other method for deleting frames than just cut and forget them:
//cut and paste frames
IntPtr copiedData = editableStream.Cut(start, length);
editableStream.Paste(copiedData, 0, position, length);
//delete frames == cut without paste
IntPtr deletedData = editableStream.Cut(start, length);
In the middle of the dialog, you can insert frames from image files anywhere in the stream, and change the frame rate to make the video play back slower or faster:
We can paste only streams, not bitmaps, so the bitmaps from the list are written into a temporary AVI file and then pasted as a stream:
//create temporary video file
String tempFileName = System.IO.Path.GetTempFileName() + ".avi";
AviManager tempFile = new AviManager(tempFileName, false);
//write the new frames into the temporary video stream
Bitmap bitmap = (Bitmap)Image.FromFile(txtNewFrameFileName.Lines[0].Trim());
tempFile.AddVideoStream(false, 1, bitmap);
VideoStream stream = tempFile.GetVideoStream();
for (int n=1; n<txtNewFrameFileName.Lines.Length; n++) {
    if (txtNewFrameFileName.Lines[n].Trim().Length > 0) {
        stream.AddFrame((Bitmap)Image.FromFile(txtNewFrameFileName.Lines[n]));
    }
}
//paste the video into the editable stream
editableStream.Paste(stream, 0, (int)numPastePositionBitmap.Value, stream.CountFrames);
Do you find your video too slow, or too fast? Tell the player application to play more/less frames per second:
Avi.AVISTREAMINFO info = editableStream.StreamInfo;
info.dwRate = (int)(numFrameRate.Value * 10000);
info.dwScale = 10000;
editableStream.SetInfo(info);
The last box is not for editing, it is only a preview player. You should preview your editable stream before saving it to an AVI file.
A preview player is easy to implement: you only need a PictureBox and the video stream you want to play. A label displaying the current frame index can be helpful, too. A start button, a stop button, and there you are:

private void btnPlay_Click(object sender, EventArgs e) {
    player = new AviPlayer(editableStream, pictureboxPreview, labelFrameIndex);
    player.Stopped += new System.EventHandler(player_Stopped);
    player.Start();
    SetPreviewButtonsState();
}

private void player_Stopped(object sender, EventArgs e) {
    btnPlay.Invoke(new SimpleDelegate(SetPreviewButtonsState));
}

private void SetPreviewButtonsState() {
    btnPlay.Enabled = !player.IsRunning;
    btnStop.Enabled = player.IsRunning;
}

private void btnStop_Click(object sender, EventArgs e) {
    player.Stop();
}
AviManager manages the streams in an AVI file. The constructor takes the name of the file and opens it. Close closes all opened streams and the file itself. You can add new streams with AddVideoStream and AddAudioStream. New video streams are empty; Wave streams can only be created from Wave files. After you have created an empty video stream, use the methods of VideoStream to fill it. But what actually happens when you add a stream?

There are two methods for creating a new video stream: create it from a sample bitmap, or create it from explicit format information. Both methods do the same: they pass their parameters on to VideoStream and add the new stream to the internal list of opened streams, so that all streams can be closed before closing the file:

public VideoStream AddVideoStream(
    bool isCompressed,   //display the compression dialog, create a compressed stream
    int frameRate,       //frames per second
    int frameSize,       //size of one frame in bytes
    int width, int height,
    PixelFormat format   //format of the bitmaps
){
    VideoStream stream = new VideoStream(
        aviFile, isCompressed, frameRate,
        frameSize, width, height, format);
    streams.Add(stream);
    return stream;
}

public VideoStream AddVideoStream(
    bool isCompressed,   //display the compression dialog, create a compressed stream
    int frameRate,       //frames per second
    Bitmap firstFrame    //get the format from this image and add it to the new stream
){
    VideoStream stream = new VideoStream(
        aviFile, isCompressed, frameRate, firstFrame);
    streams.Add(stream);
    return stream;
}
Then, VideoStream uses the format data to create a new stream. It calls AVIFileCreateStream and, if writeCompressed says so, AVIMakeCompressedStream:

public VideoStream(
    int aviFile,          //pointer to the file object
    bool writeCompressed, //true: create compressed stream
    int frameRate,        //frames per second
    ...
){
    //store format information
    //...
    //create the stream
    CreateStream();
}

private void CreateStream(){
    //fill stream information
    Avi.AVISTREAMINFO strhdr = new Avi.AVISTREAMINFO();
    strhdr.fccType = Avi.mmioStringToFOURCC("vids", 0);
    strhdr.fccHandler = Avi.mmioStringToFOURCC("CVID", 0);
    strhdr.dwScale = 1;
    strhdr.dwRate = frameRate;
    strhdr.dwSuggestedBufferSize = frameSize;
    strhdr.dwQuality = -1; //default
    strhdr.rcFrame.bottom = (uint)height;
    strhdr.rcFrame.right = (uint)width;
    strhdr.szName = new UInt16[64];
    //create the stream
    int result = Avi.AVIFileCreateStream(aviFile, out aviStream, ref strhdr);
    if(writeCompressed){
        //create a compressed stream from the uncompressed stream
        CreateCompressedStream();
    }else{
        //apply the format to the uncompressed stream
        SetFormat(aviStream);
    }
}

private void CreateCompressedStream(){
    Avi.AVICOMPRESSOPTIONS_CLASS options = new Avi.AVICOMPRESSOPTIONS_CLASS();
    options.fccType = (uint)Avi.streamtypeVIDEO;
    options.lpParms = IntPtr.Zero;
    options.lpFormat = IntPtr.Zero;
    //display the compression options dialog
    Avi.AVISaveOptions(
        IntPtr.Zero,
        Avi.ICMF_CHOOSE_KEYFRAME | Avi.ICMF_CHOOSE_DATARATE,
        1, ref aviStream, ref options);
    //get a compressed stream
    Avi.AVICOMPRESSOPTIONS structOptions = options.ToStruct();
    int result = Avi.AVIMakeCompressedStream(
        out compressedStream, aviStream, ref structOptions, 0);
    //format the compressed stream
    SetFormat(compressedStream);
}
AVICOMPRESSOPTIONS_CLASS is the AVICOMPRESSOPTIONS structure as a class. Using classes instead of structures is the easiest way to deal with pointers to pointers. If you don't know what I'm talking about, you probably have never used AVISaveOptions or AVISaveV in .NET. Take a look at the original declaration:

BOOL AVISaveOptions(
    HWND hwnd,
    UINT uiFlags,
    int nStreams,
    PAVISTREAM * ppavi,
    LPAVICOMPRESSOPTIONS * plpOptions
);
LPAVICOMPRESSOPTIONS is a pointer to a pointer to an AVICOMPRESSOPTIONS structure. In C#, structures are passed by value. If you pass a structure by ref, a pointer to the structure is passed. Instances of classes are always passed to methods as pointers, so a class parameter passed by ref means a pointer to a pointer to the object. The C# declarations of AVISaveOptions and AVICOMPRESSOPTIONS are:

[DllImport("avifil32.dll")]
public static extern bool AVISaveOptions(
    IntPtr hwnd,
    UInt32 uiFlags,
    Int32 nStreams,
    ref IntPtr ppavi,
    ref AVICOMPRESSOPTIONS_CLASS plpOptions
);

[StructLayout(LayoutKind.Sequential, Pack=1)]
public struct AVICOMPRESSOPTIONS { /* ... */ }

[StructLayout(LayoutKind.Sequential, Pack=1)]
public class AVICOMPRESSOPTIONS_CLASS {
    /* ... */
    public AVICOMPRESSOPTIONS ToStruct(){
        AVICOMPRESSOPTIONS returnVar = new AVICOMPRESSOPTIONS();
        returnVar.fccType = this.fccType;
        returnVar.fccHandler = this.fccHandler;
        returnVar.dwKeyFrameEvery = this.dwKeyFrameEvery;
        returnVar.dwQuality = this.dwQuality;
        returnVar.dwBytesPerSecond = this.dwBytesPerSecond;
        returnVar.dwFlags = this.dwFlags;
        returnVar.lpFormat = this.lpFormat;
        returnVar.cbFormat = this.cbFormat;
        returnVar.lpParms = this.lpParms;
        returnVar.cbParms = this.cbParms;
        returnVar.dwInterleaveEvery = this.dwInterleaveEvery;
        return returnVar;
    }
}
With this workaround, we are able to call AVISaveOptions and (later on) AVISaveV in C#. Now, the new stream can be filled with image frames using AddFrame:

public void AddFrame(Bitmap bmp){
    bmp.RotateFlip(RotateFlipType.RotateNoneFlipY);
    //lock the memory block
    BitmapData bmpDat = bmp.LockBits(
        new Rectangle(0, 0, bmp.Width, bmp.Height),
        ImageLockMode.ReadOnly, bmp.PixelFormat);
    //add the bitmap to the (un-)compressed stream
    int result = Avi.AVIStreamWrite(
        writeCompressed ? compressedStream : aviStream,
        countFrames, 1,
        bmpDat.Scan0,
        (Int32)(bmpDat.Stride * bmpDat.Height),
        0, 0, 0);
    //unlock the memory block
    bmp.UnlockBits(bmpDat);
    //count the frames, so that we don't have to
    //call AVIStreamLength for every new frame
    countFrames++;
}
Now, we are able to fill an empty stream with images. But what can we do to add frames to an existing stream? Well, first, we have to open the stream with the third constructor.
public VideoStream(int aviFile, IntPtr aviStream){
    this.aviFile = aviFile;
    this.aviStream = aviStream;
    //read the stream's format
    Avi.BITMAPINFOHEADER bih = new Avi.BITMAPINFOHEADER();
    int size = Marshal.SizeOf(bih);
    Avi.AVIStreamReadFormat(aviStream, 0, ref bih, ref size);
    Avi.AVISTREAMINFO streamInfo = GetStreamInfo(aviStream);
    //store the important format values
    this.frameRate = streamInfo.dwRate / streamInfo.dwScale;
    this.width = (int)streamInfo.rcFrame.right;
    this.height = (int)streamInfo.rcFrame.bottom;
    this.frameSize = bih.biSizeImage;
    this.countBitsPerPixel = bih.biBitCount;
    //get the count of frames that are already there
    int firstFrame = Avi.AVIStreamStart(aviStream.ToInt32());
    countFrames = firstFrame + Avi.AVIStreamLength(aviStream.ToInt32());
}
If you are sure the video stream is not compressed, you can call AddFrame now. Otherwise, you have to decompress the existing frames and recompress them into a new stream:

public AviManager DecompressToNewFile(String fileName, bool recompress){
    //create a new AVI file
    AviManager newFile = new AviManager(fileName, false);
    //create a video stream in the new file
    this.GetFrameOpen();
    Bitmap frame = GetBitmap(0);
    VideoStream newStream = newFile.AddVideoStream(recompress, frameRate, frame);
    //decompress each frame and add it to the new stream
    for(int n=1; n<countFrames; n++){
        frame = GetBitmap(n);
        newStream.AddFrame(frame);
    }
    this.GetFrameClose();
    return newFile;
}
DecompressToNewFile creates a writeable copy of the stream in a new file. You can add frames to this new stream, close the new AviManager to save it, and then add the sound stream from the old file to complete the copy. Adding frames to a video is not easy, but this way it works.
Sometimes, you might have a video file with sound, but you only need the silent video, or only the sound. It is not necessary to copy each frame; you can open the stream as usual and export it with AVISaveV. This works with all kinds of streams, only the compression options are different:

public override void ExportStream(String fileName){
    Avi.AVICOMPRESSOPTIONS_CLASS opts = new Avi.AVICOMPRESSOPTIONS_CLASS();
    //for video streams
    opts.fccType = (UInt32)Avi.mmioStringToFOURCC("vids", 0);
    opts.fccHandler = (UInt32)Avi.mmioStringToFOURCC("CVID", 0);
    //for audio streams
    //opts.fccType = (UInt32)Avi.mmioStringToFOURCC("auds", 0);
    //opts.fccHandler = (UInt32)Avi.mmioStringToFOURCC("CAUD", 0);
    //export the stream
    Avi.AVISaveV(fileName, 0, 0, 1, ref aviStream, ref opts);
}
Now, we are able to build a video from bitmaps, and extract sound from it. And how does the sound get into the file? We could use AVISaveV again, to combine the video and audio streams in a new file, but we don't have to. The easiest way to add a new audio stream is to open the Wave file as an AVI file with only one stream, and then copy that stream:
public void AddAudioStream(String waveFileName){
    //open the wave file
    AviManager audioManager = new AviManager(waveFileName, true);
    //get the wave sound as an audio stream...
    AudioStream newStream = audioManager.GetWaveStream();
    //...and add it to the file
    AddAudioStream(newStream);
    audioManager.Close();
}

public void AddAudioStream(AudioStream newStream){
    Avi.AVISTREAMINFO streamInfo = new Avi.AVISTREAMINFO();
    Avi.PCMWAVEFORMAT streamFormat = new Avi.PCMWAVEFORMAT();
    int streamLength = 0;
    //read header info, format and length, and get a pointer to the wave data
    IntPtr waveData = newStream.GetStreamData(
        ref streamInfo, ref streamFormat, ref streamLength);
    //create the new stream
    IntPtr aviStream;
    Avi.AVIFileCreateStream(aviFile, out aviStream, ref streamInfo);
    //apply the format to the new stream
    Avi.AVIStreamSetFormat(
        aviStream, 0, ref streamFormat, Marshal.SizeOf(streamFormat));
    //copy the raw wave data into the new stream
    Avi.AVIStreamWrite(
        aviStream, 0, streamLength, waveData,
        streamLength, Avi.AVIIF_KEYFRAME, 0, 0);
    Avi.AVIStreamRelease(aviStream);
}
I have added this method, because many people asked me how this could be done. To copy a part of the video stream from second X to second Y, the indices of the first and last frames have to be calculated from the frame rate and second. For the Wave stream, we must calculate the byte offsets from samples per second, bits per sample, and the requested seconds. The rest is only copy and paste:
public AviManager CopyTo(String newFileName, int startAtSecond, int stopAtSecond) {
    AviManager newFile = new AviManager(newFileName, false);
    try {
        //copy video stream
        VideoStream videoStream = GetVideoStream();
        int startFrameIndex = videoStream.FrameRate * startAtSecond;
        int stopFrameIndex = videoStream.FrameRate * stopAtSecond;
        videoStream.GetFrameOpen();
        Bitmap bmp = videoStream.GetBitmap(startFrameIndex);
        VideoStream newStream = newFile.AddVideoStream(false, videoStream.FrameRate, bmp);
        for (int n = startFrameIndex + 1; n <= stopFrameIndex; n++) {
            bmp = videoStream.GetBitmap(n);
            newStream.AddFrame(bmp);
        }
        videoStream.GetFrameClose();
        //copy audio stream
        AudioStream waveStream = GetWaveStream();
        Avi.AVISTREAMINFO streamInfo = new Avi.AVISTREAMINFO();
        Avi.PCMWAVEFORMAT streamFormat = new Avi.PCMWAVEFORMAT();
        int streamLength = 0;
        IntPtr ptrRawData = waveStream.GetStreamData(
            ref streamInfo, ref streamFormat, ref streamLength);
        int startByteIndex = waveStream.CountSamplesPerSecond *
            startAtSecond * waveStream.CountBitsPerSample / 8;
        int stopByteIndex = waveStream.CountSamplesPerSecond *
            stopAtSecond * waveStream.CountBitsPerSample / 8;
        ptrRawData = new IntPtr(ptrRawData.ToInt32() + startByteIndex);
        byte[] rawData = new byte[stopByteIndex - startByteIndex];
        Marshal.Copy(ptrRawData, rawData, 0, rawData.Length);
        streamInfo.dwLength = rawData.Length;
        streamInfo.dwStart = 0;
        IntPtr unmanagedRawData = Marshal.AllocHGlobal(rawData.Length);
        Marshal.Copy(rawData, 0, unmanagedRawData, rawData.Length);
        newFile.AddAudioStream(unmanagedRawData, streamInfo, streamFormat, rawData.Length);
    } catch (Exception ex) {
        newFile.Close();
        throw ex;
    }
    return newFile;
}
If you are still interested in AVI videos, download the wrapper library and the demo application. Finally, I dare to say: have fun with AVIFile!
Adding frames to an existing stream does not work with all video codecs and/or bitmaps. You might get a StackOverflowException or broken frames. If you find out why this happens, please let me know.
History:
- AviManager.CopyTo.
- EditableVideoStream and AviPlayer, and a few memory leaks fixed.
- VideoStream.GetBitmap.
- VideoStream.GetFrameOpen, new property VideoStream.FirstFrame. Thanks a lot to Michael Covington!
Source: http://www.codeproject.com/KB/audio-video/avifilewrapper.aspx
More C++ Idioms/nullptr
From Wikibooks, the open-content textbooks collection
nullptr
Intent
To distinguish between an integer 0 and a null pointer.
Also Known As
Motivation
For many years C++ has had the embarrassment of not having a keyword to designate a null pointer. The upcoming C++ standard, C++0x, promises to eliminate this embarrassment. C++ can't use the NULL macro of C, because the strong type checking of C++ makes it almost useless in expressions like the ones below.
#define NULL ((void *)0)
std::string * str = NULL;      // Can't automatically cast void * to std::string *
void (C::*pmf) () = &C::func;
if (pmf == NULL) {}            // Can't automatically cast void * to a pointer to member function
So C++ uses the literal integer 0 to designate the so-called null pointer. This works in the overwhelming majority of cases, but can sometimes be confusing in the presence of overloaded functions. For example, the func(int) overload below takes precedence because the type of the literal 0 is int.
void func(int);
void func(double *);
int main()
{
    func (static_cast <double *>(0)); // calls func(double *) as expected
    func (0); // calls func(int), but func(double *) may be desired,
              // because 0 is also a null pointer
}
More confusion arises when the NULL macro is used in such calls, because NULL is typically defined as the integer 0 and therefore selects func(int) as well.
Solution and Sample Code
The nullptr idiom solves some of the above problems as a library-only emulation of a null pointer keyword. A recent draft proposal (N2431) by Herb Sutter and Bjarne Stroustrup recommends that a new keyword, nullptr, be added to C++. The nullptr idiom is the closest match possible today using existing C++ features. The following nullptr implementation is a variant of the library-based approach suggested by Scott Meyers in his book More Effective C++.
#include <typeinfo>

const                            // It is a const object...
class nullptr_t
{
public:
    template<class T>
    operator T*() const          // convertible to any type of null non-member pointer...
    { return 0; }

    template<class C, class T>
    operator T C::*() const      // or any type of null member pointer...
    { return 0; }

private:
    void operator&() const;      // Can't take address of nullptr

} nullptr = {};

// due to bug #33990
const int n = 0;
if (nullptr == n) {}             // Should not compile; but only Comeau shows an error.
//int p = 0;
//if (nullptr == p) {}           // not ok
//g (nullptr);                   // Can't deduce T
int expr = 0;
char* ch3 = expr ? nullptr : nullptr; // ch.
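To see how this emulation behaves in overload resolution, here is a small self-contained sketch. The class is renamed my_nullptr here, because on C++11 and later compilers nullptr is a reserved keyword and the class above would not compile under that name; the func overloads and last_called variable are hypothetical names introduced only for this illustration.

```cpp
#include <cassert>
#include <string>

// Meyers-style library emulation of a null-pointer keyword.
// Renamed my_nullptr so this sketch also compiles where nullptr is a keyword.
const class my_nullptr_t {
public:
    template<class T>
    operator T*() const { return 0; }       // null pointer of any non-member type
    template<class C, class T>
    operator T C::*() const { return 0; }   // null pointer-to-member of any type
private:
    void operator&() const;                 // taking the address is forbidden
} my_nullptr = {};

// Hypothetical overload pair, mirroring the func(int)/func(double *) example.
std::string last_called;
void func(int)      { last_called = "func(int)"; }
void func(double *) { last_called = "func(double*)"; }
```

Calling func(0) still selects func(int), while func(my_nullptr) can only use the templated pointer conversion and therefore selects func(double *).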
Consequences
There are some disadvantages to this technique; they are discussed in the N2431 proposal draft. In summary, the disadvantages are:
- A header must be included to use the nullptr idiom, which makes it obvious that the language does not have a first-class keyword for a null pointer.
- Compilers can’t produce meaningful error messages when nullptr is implemented as a library.
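For comparison, once a compiler implements the proposed keyword (it was adopted for C++11 as nullptr, with the distinct type std::nullptr_t in <cstddef>), neither the extra header nor the error-message problem applies. A minimal sketch, reusing the same hypothetical func overloads from the motivation section:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// The same hypothetical overload pair as in the motivation section.
std::string last_called;
void func(int)      { last_called = "func(int)"; }
void func(double *) { last_called = "func(double*)"; }

// nullptr has its own type, std::nullptr_t, and converts implicitly
// to any pointer or pointer-to-member type.
double *as_double_ptr(std::nullptr_t np) { return np; }
```

Here func(nullptr) unambiguously selects func(double *), and the converted pointer compares equal to 0.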
Source: http://en.wikibooks.org/wiki/More_C%2B%2B_Idioms/nullptr
therefore be available in a starter application, and be deployable without any fuss or configuration overhead.
This blog article demonstrates how to rapidly create the HRService, an ADF BC driven WebService on top of the EmployeesView ViewObject that exposes the EMPLOYEES table in the HR schema through the Employee EntityObject. However, its true purpose is to show how to create the deployment profile and deploy and test this service, either on the integrated WLS or on a standalone WebLogic Server, in the easiest way possible. Note that this easiest way is not the suggested way of working for real production environments.
This article assumes – in case you want to follow along for yourself – that you have JDeveloper 11g PS2, Oracle RDBMS 10gR2 or later with the HR sample schema and the SOA Suite 11g Run Time environment.
Create the ADF BC Project
First create a new JDeveloper application, either a Generic Application or one based on a specific Application Template, such as SOA or BPM application. Then create the ADF Model project that will contain the ADF Business Components that are to be exposed as Web Service.
Specify the name of project, for example Model. Press Next.
Specify the name of the default package, in this case org.hrm and press Finish.
The Project is created and added to the Application. Click on the New icon or the File | New menu option to start creating the Business Component definitions. Select the option Business Components from Tables under Business Tier | ADF Business Components.
Click OK.
Our business components are defined on top of database objects. Those can only be accessed through a database connection. We first need to create and configure that database connection before moving on with the definition of the business objects. Click on the green plus icon to create a new connection.
The Create Database Connection editor is presented. Specify the connection details for connecting to the HR schema in the database instance you want to work against.
Test the connection by pressing the Test Connection button and wait for the Success! message to appear. When it does, press OK. (when it does not, fiddle with your configuration details until it does).
Press OK to confirm the use of this HR database connection.
In the Entity Objects page that appears next, query the database objects, select the EMPLOYEES table, move it to the selection on the right side and specify the Entity Name as Employee.
Press Next.
Select the EmployeeView ViewObject as Updateable View Object and change its name to EmployeesView.
Press Next. Do not select any Read Only View Objects. Press Next.
Specify the Name of the Application Module to create as HRService.
Press Next.
The summary of all we have configured in the wizard is presented.
Press Finish to complete the wizard and confirm all the objects for creation.
Configure the Service Interface
Adding a Service Interface to the Application Module is a simple, declarative task. We indicate that the AM should have a Service Interface, what its name should be and then we specify which custom methods (none in our case) and which ViewObjects (one in our case) are to be included in the interface. For each ViewObject we can also specify which of the basic CRUD operations should be supported.
Double click the HRService in the Application Navigator. In the editor, select the tab Service Interface.
Click on the green plus icon to add a new Service Interface to the definition of the Application Module.
Specify the name as HRService (that name will eventually make it into the Endpoint URL for the WebService) and the target namespace as (the default) /org/hrm/common/.
Press Next.
Do not select any custom methods in the second step (we do not have any) and move on to step 3.
Select the EmployeesView1 ViewObject. Check the boxes for the Update and GetByKey operation – as these are the only two required for the business case discussed in the BPM article that this service is created for – and change the names of the methods to something readable. Note: these method names will become the operation names in the WSDL for the Web Service.
Press Next. The Summary appears.
Press Finish to create the Service Interface with the single Service View Instance.
We will now do two things to make it easier later on to deploy the ADF BC application with minimal configuration overhead. First, open the Configurations tab in the HRService Application Module editor. Select the configuration HRService as the Default Configuration for this application module:
Then select the HRService Configuration and press the pencil icon to edit the details for this configuration.
Change the Connection Type for this Configuration from JDBC DataSource to JDBC URL. This will allow deployment that includes the database connection details without us having to configure a JDBC Data Source on the target Application Server. Note: this is NOT a good practice for production environments – we do this only to achieve effortless deployment for test purposes.
Preparing for Deployment on a stand alone Application Server
The ADF BC Service Interface is deployed using a special Deployment Profile, of type Business Components Service Interface. We will create that deployment profile next.
Click on the New icon – or use the File | New menu option – to bring up the New Gallery. Select General | Deployment Profiles and select Business Components Service Interface:
Click on the OK button.
Specify HRServices as the name for the Deployment Profile. Press OK.
The Project Properties editor is brought up. The new deployment profile is presented, with its two constituents: Common and Middle Tier.
Press OK to close the editor.
Next, a special Application level Deployment Profile is created. Open the Application dropdown menu from the little icon shown in the upper right hand corner of the application navigator, behind the name of the application.
Select the option Application Properties.
Select the Deployment node in the navigator and click on the New button to create a new Deployment Profile for the application.
Select EAR File (default) as Archive Type. Type HRServices as the name of the deployment profile.
Press the OK button.
In the EAR Deployment Profile editor, select the node Application Assembly. Check the checkbox for Middle Tier in the HRServices deployment profile (of type Business Components Service Interface) that we have previously created.
Press the OK button.
Back in the Deployment page for the Application Properties, make sure that the checkbox Auto Generate and Synchronize weblogic-jdbc.xml Descriptors During Deployment is unchecked.
Press OK.
In order to have a nicer looking URL for the WebService endpoint as well as a better looking name for the application itself, we will open the Project Properties editor for the Model project, and open the Java EE Application node.
Set the Java EE Web Application Name and the Java EE Web Context Root (that will be part of the URL) to HRDataServices. Press OK to save these changes.
Prepare the stand alone application server
There is one important setting that you need to apply to the WLS domain that you will be deploying the ADF BC Web Service to. This setting is required to prevent the "Null Password given" error message that otherwise results when we access the Web Service. Simply put, this setting instructs the WLS domain to accept passwords in deployed applications.
Open the setDomainEnv.cmd or setDomainEnv.sh file – depending on your operating system – in the directory [WLS_HOME]\user_projects\domains\[your target domain]\bin and add the following line after the first occurrence of set JAVA_PROPERTIES.
set JAVA_PROPERTIES=%JAVA_PROPERTIES% -Djps.app.credential.overwrite.allowed=true
Deploying to a stand alone application server
We are now ready to deploy the application to a stand alone WebLogic Server domain. Note that I am assuming that you already have configured a connection to the target WLS domain. We will use that connection for the upcoming deployment.
Open the context menu for the Application – the same dropdown we used for creating the deployment profile for the application. Open the submenu Deploy and Select the HRServices deployment profile:
The Deployment Wizard appears.
In the first step, elect to deploy to Application Server:
Press Next.
In the second step, select the connection to the target Application Server, in my case the SOA_Suite11g_Local application server connection.
Press Next.
Select the radio button Deploy to selected instances in the domain and select the target server(s) to which you want to deploy the ADF BC Service application.
Make sure that the radio button Deploy as standalone Application is selected.
Press Next.
The Summary appears.
After a final review, press Finish to start the real deployment.
Testing the deployed Web Service application
After the deployment is complete, we can access the Web Service in its ‘runtime environment’. WebLogic Server provides easy support for testing web services, available at the endpoint URL for the Web Service, which is constructed from the host and port of the target server, the Web Context Root (HRDataServices) and the name of the Service Interface (HRService). Open that URL in your browser, and the following test interface is presented:
Invoke the operation getEmployeeByKey for employeeId 102, results in:
When you open the FMW Enterprise Manager console and inspect the HRServices application deployment, you will find something akin to the next screenshot:
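Beyond the browser test page, the operation can also be exercised from a simple script. The Python sketch below is illustrative only: the endpoint host and port, the SOAPAction value, and the request namespace are assumptions on my part (take the real values from the WSDL generated for the deployed HRService), so treat it as a starting point rather than a finished client.

```python
# Minimal SOAP client sketch for the HRService endpoint.
# ASSUMPTIONS: the host/port, SOAPAction and namespace below are illustrative;
# take the real values from the deployed service's WSDL.
import urllib.request
import xml.etree.ElementTree as ET

ENDPOINT = "http://localhost:7001/HRDataServices/HRService"  # assumed URL
NS = "/org/hrm/common/"  # target namespace chosen in the wizard

def build_get_employee_request(employee_id: int) -> bytes:
    """Build a SOAP 1.1 envelope for the getEmployeeByKey operation."""
    envelope = (
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        "<soap:Body>"
        f'<getEmployeeByKey xmlns="{NS}">'
        f"<employeeId>{employee_id}</employeeId>"
        "</getEmployeeByKey>"
        "</soap:Body>"
        "</soap:Envelope>"
    )
    return envelope.encode("utf-8")

def call_service(payload: bytes) -> bytes:
    """POST the envelope; requires the service to be deployed and running."""
    req = urllib.request.Request(
        ENDPOINT, data=payload,
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "getEmployeeByKey"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

if __name__ == "__main__":
    body = build_get_employee_request(102)
    ET.fromstring(body)  # sanity check: envelope is well-formed XML
    print(body.decode())
```

Running it prints the request envelope; uncommenting a call to `call_service` would send it to the running server, mirroring the getEmployeeByKey test performed in the browser above.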
Run ADF BC WebService locally in the integrated WebLogic Server
Even easier than deploying on a stand alone application server instance is the use of the Integrated WebLogic Server in JDeveloper. If you just want to test the functionality of the ADF BC Service Interface, you can simply run the HRServiceImpl.java class – this auto-deploys the application on the integrated WLS instance. This class was generated when we configured the Service Interface for the Application Module.
The Integrated WLS server is started – when not already running – and the ADF BC application is deployed on it. The Application’s Context Root and the Name of the Service Interface, together with localhost:7101, make up the endpoint url for the Web Service:
You can open that URL in your browser, which will result in the same Test WebService UI as the stand alone WLS gave us previously:
Resources
Download the JDeveloper 11gPS2 application with the HRService Application Module described in this article: SalaryRaiseBPM.zip .
Andrejus Baranovskis:
ADF Developer’s Guide documentation:
Lynn Munsinger for the gem on “-Djps.app.credential.overwrite.allowed”
Great post. One question: can you create ADF BC web service for a complex data types (i.e. master – detail VO/EO relationships)?
Good post, but you didn’t explain how I can call this web service from another ADF application?
Thanks
Conversational Exchange (in 10 days!)
Conversational Exchange (in 10 days!)
By J. Peter Bruzzese
Copyright 2014, First Edition
Conversational Exchange (in 10 days!)
Copyright 2014. Printed in the United States of America. First Printing: February 2014.
Trademarks: All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized.
Orders: This book is not for sale. However, if you would like copies at print cost plus shipping, please contact me.
Cast and Crew of Conversational Exchange (in 10 days!):
Author: J. Peter Bruzzese
Lead Technical Editor: Theresa Miller
Lead Expert Reviewer: Janet Vargas
Expert Reviewers: Phoummala Schmitt; Lasse Pettersson, MVP; Steve Ahlgrim; Paul Robichaux, MVP
Audience Reviewers: Jennette Bruzzese; Shanna Giarrano
Cartoonist: Simon Goodway
Dedication
To my fellow Exchange bloggers, evangelizers, and MVPs (past and present) in the hopes that many more can be educated about that which we have come to know and love: Exchange
Foreword
The country comedian Minnie Pearl used to say, "It costs a lot of money to look this cheap." Email is similar -- it takes a tremendous amount of complexity to make it seem so simple. Explaining that complexity would take a book longer than this one. I've worked on email technology for a third of a century, and I'm still learning. In fact, as J. Peter Bruzzese makes clear, it would take a much bigger book than this to explain all the complexities of Microsoft Exchange alone. But J. Peter has taken on a rather harder task: explaining the most important things about Microsoft Exchange, as simply as possible, to someone who actually has a life beyond email. He has done this with clarity and humor. If you want to learn about Exchange, but you're not particularly looking forward to it, this is the book for you. It gives enough information for you to convince most people that you're an Exchange expert, and enough real knowledge to be able to talk intelligently to the serious gurus. It's also a great foundation for growing into an Exchange guru yourself over time. Microsoft Exchange is an incredibly important piece of software. It's not the only email system in the world, but it dominates the business market. Given the importance of email in today's business world, a whole ecosystem of support and service has grown up around Exchange, with thousands of people making a fine living as Exchange experts. Whether you're hoping to become one, or just to talk with the one you married, this is the place to start.
Nathaniel S. Borenstein
Chief Scientist, Mimecast
Table of Contents
Author Biography
Acknowledgements
Introduction from the Author
Chapter 1: An Overview of Microsoft's Exchange
Chapter 2: Exchange Server Roles
Chapter 3: Database Management
Chapter 4: Recipient Management
Chapter 5: Regulatory Compliance
Chapter 6: High Availability and Site Resiliency
Chapter 7: Unified Messaging
Chapter 8: Exchange Virtualization
Chapter 9: Exchange Security
Chapter 10: Office 365 (Exchange Online)
Appendix: Basic Exchange Prerequisite Knowledge
Vendor Sponsor: Mimecast's Unified Email Management
Index
Author Biography
J. Peter Bruzzese has been working in the world of corporate networking since the early 90s as a teenager. He started in document processing departments of Manhattan-based legal and corporate banking firms like Goldman Sachs and Solomon Smith Barney. This was a time of great change in the computing and corporate world as the Internet was just about to become a household name. J. Peter returned to school for networking on a leap of faith (with support from his wife) and he would eventually become one of the most certified IT professionals of his time. He passed certification exams for Microsoft networking, as well as exams through CompTIA, Novell, CIW and other vendors. In the late 90s he established two different networking schools (NetEssentials Training and LAN-Slide Technologies) before being asked to write his first book for Coriolis on Active Directory Design, which he co-wrote with Wayne Dipchan. He would go on to write and contribute to over a dozen titles sold internationally and translated into just as many languages. More recently J. Peter Bruzzese formed an online company with Tim Duggan called ClipTraining. ClipTraining was formed in 2006 to provide training content through an online learning system that controls access and provides reporting for both individuals and corporate users. The goal is to assist in providing training and support for Windows, Office and other skills through short, task-based videos. J. Peter, as a noted expert on video creation (screencasts) and Microsoft technology, has also worked with Pluralsight (formerly TrainSignal) in recent years to provide administrative training for Exchange 2010 and 2013. As a result of his community effort J. Peter has been awarded the prestigious MVP Award for Microsoft Exchange several years in a row. He is also a Microsoft Certified Trainer (MCT). In addition to books, J. Peter writes for a variety of different in-print and online tech periodicals.
He does product review work for TechGenix (MSExchange.org). He has written the Enterprise Windows column for InfoWorld for 5+ years. And he speaks at technical conferences like TechMentor, MEC 2012/2014, FETC, Connections, TechEd and the Microsoft WPC.
Acknowledgements
As always, my wife Jennette deserves the greatest amount of appreciation. She has supported me from the beginning in this ever changing career. There are truly too many to acknowledge and thank if I really start thinking about it. For this book I'm going to stay focused on acknowledging those who connect with Exchange directly so as to narrow my focus a bit. First off, I'd like to thank the Exchange Team (past and present). Your work is appreciated on multiple levels. I've always been proud to say I work with Exchange because it is such a stable and feature-rich product that continues to evolve in positive ways. Others at Microsoft I'd like to thank include my MVP Lead Melissa Travers, Bharat Suneja, Ian Hameroff, Scott Schnoll, and Jeff Mealiffe. I'd also like to thank Marissa Salazar, Navin Chand, Brian Shiers, Jake Zborowski, Jon Orton, and last but not least by any means, David Espinoza. I'd also like to thank those folks at Waggener Edstrom who keep me informed as a journalist. You are much appreciated: Leigh Rosenwald, Kara Berman, Krista Valiante and others I've worked with over the years. Next, I'd like to thank my fellow MVPs (including those who have moved on to work for Microsoft directly), as well as Exchange Rangers, who I've learned a great deal from. Your articles and insight have greatly improved my knowledge of Exchange. More specifically I'd like to mention a few MVPs who I have bonded with and appreciate on a personal level: Tony Redmond, Paul Robichaux, Clint Boessen, Jaap Wesselius, Michel de Rooij, Jason Sherry, Jeff Guillet, Jim McBee, Lee Benjamin and Paul Cunningham. I'd like to also mention Henrik Walther (one of the finest Exchange tech writers I've read and fellow MSExchange.org contributor).
I'd like to thank InfoWorld and Galen Gruman, Eric Knorr and Ted Samson, who I have thoroughly enjoyed working with for the past few years with my Enterprise Windows column. I'd like to thank the folks at TechGenix (aka MSExchange.org): Sean Buttigieg, Michael Vella and Barbara Matysik-Magro from TechGenix (MSExchange.org), Jay Gundotra from ENow, Ray Downes, Peter Melerud, Bhargav Shukla (MVP), and Jason Dover from KEMP Technologies, and Peter Bauer, Julian Martin, Steve McKenzie, Janet Vargas and Ani Hagopian from Mimecast. And a special thanks goes to Ed Liberman, David Davis and Scott Skinger. I'd also like to mention my huge array of friends at Pluralsight including Aaron Skonnard, Chad Utley, Fritz Onion, Gosia Niklinski, Gary Eimerman, Joanna Beer, Lisa Szpunar, Sandy Moran and many, many others. Thanks to all of you for working with me over the years. I saved special acknowledgments for the end here. I'd like to thank all of the folks who worked with me on this book. Theresa Miller, you've proven to be a wonderful assistant to me over the time it took to create. And I very much appreciate all my expert reviewers including Janet Vargas, Phoummala Schmitt, Steve Ahlgrim, Lasse Pettersson (MVP) and Paul Robichaux (MVP). Thank you all again for your comments and suggestions to help make this book better. I call the geek in the pictures J. because while my friends all know me as Peter, when I write and speak I use J. Peter. So apparently the J. represents my geek side. He's what I see when I look in the mirror (although I realize I look nothing like him). An alter ego so to speak. J. P. B.
Greetings! I'm J.
Introduction from the Author
Excerpt from a real conversation with my mother:
Mom: So, what is it you do?
J. Peter: I assist in the planning and deployment of a messaging solution provided by Microsoft called Exchange (and Exchange Online through Office 365) where I'm responsible for designing enterprise grade messaging environments, at times on a global scale, for organizations that require aspects like regulatory compliance, high availability and Unified Messaging be taken into consideration, along with other unified communication options.
Mom: Ummm what?
J. Peter: I do stuff mom.
Mom: Oh that's nice.
What exactly is Conversational Exchange all about? It's really an Exchange primer. It's meant to be of help to all those IT admins who are non-Exchange admins looking to either grasp the concepts of Exchange or getting ready to jump into a more professional role with Exchange. It's also meant to assist IT decision makers who may not have time to be hands-on with Exchange but need to grasp all the concepts surrounding it. It's also meant to assist all of the sales, marketing, and PR folks that work on products that help add value to Exchange (and Exchange Online) to grasp the concept of what Exchange is all about and understand the features and terminology that go along with working in this field. And for those of you who are already Exchange experts it may help answer the question for your family and friends what is it
you do? so that you don't have to respond "stuff" any longer. I try to make the explanations as easy to swallow as I can. It won't always be easy to grasp every point but don't stress about that. Learn the concepts and keep moving forward. Like a puzzle, it will start to come together over time. It may be wise to take some time to research some of the terms and concepts. If you read this book and feel ready to go to the next level, becoming an Administrator with Exchange, you might consider setting up a lab to work with it. Perhaps consider picking up a book (there are several that are really good, like Paul Robichaux's Microsoft Exchange Server 2013 Inside Out: Connectivity, Clients and UM, or one I worked on called Mastering Exchange 2013). If you like videos, I've created a ton of video training that you can watch at Pluralsight.com. If you want to dive wholeheartedly into the world of Exchange, here are a few resources to get you started:
The Microsoft Exchange Team Blog
TechNet: Exchange Server for IT Pros
Exchange MVP Tony Redmond's Blog: Thoughts of an Idle Mind (Note: Tony also has a blog with WindowsITPro called Exchange Unwashed)
I could list out a ton of other blogs I read and various Twitter accounts I follow. Easiest way to learn it all is just to follow along and I'll point off toward all of them over time. I also have my own personal Exchange blog called ExclusivelyExchange.com that I add information to from time to time.
Chapter 1: An Overview of Microsoft's Exchange
Just breathe. This isn't going to hurt. You may be new to Exchange or even servers in general, but one thing is certain: You are not new to email. Email has become the underlying foundation for a new civilization based on global communication. According to recent estimates 2.3+ billion people use email. About 150 billion emails are sent per day. And most of that email traffic is coming from the corporate world, according to the Radicati Group. Most people understand that they can open a browser and go to their favorite browser-based email solution (Gmail, Yahoo, etc...) or open up an email application, like Microsoft Outlook, and
send an email to a colleague in the cubicle next to them perhaps, or on the other side of the globe, and it works. So long as the email address is accurate, they hit Send and it just works. Unbelievable really. Especially since the capability to send any form of email is less than 50 years old. It's evolved a great deal over the years.
Three Interesting Facts about Email:
The @ Symbol: Ray Tomlinson set up an email system in 1971 using ARPANET. Note: If you don't know what ARPANET is you need to read my book Conversational Geek (in 7 days!) Chapter 3: The Internet. Mr. Tomlinson used the @ symbol to distinguish the user from the machine they were working on.
MIME (Multi-Purpose Internet Mail Extensions): Proposed as a standard by Nathaniel Borenstein and Ned Freed in 1991, MIME extended the original capabilities of Internet email so that we could include more than just plain ASCII text. "MIME is the official Internet standard that defines the way that multimedia objects are labelled, compounded, and encoded for transport over the Internet" as explained by Dr. Borenstein. Thanks to MIME we can send all sorts of things in email like images, video, documents and so forth. The email client has the ability to discern how to read the email thanks to MIME headers. Dr. Borenstein can also be credited with sending out the first real MIME email message with an attachment, on March 11th, 1992.
Protocol Alphabet Soup: An Internet standard that email servers use for sending and receiving email is called SMTP (simple mail transfer protocol). SMTP uses port 25. Client applications, however, use SMTP only to send email. To connect to a mailbox and retrieve email, client applications use either POP3 (Post Office Protocol, using port 110) or IMAP (Internet Message Access Protocol, using port 143). Later on in this chapter we will also discuss MAPI and Exchange Server.
You can Wikipedia the word email if you are truly interested in this history but to stay on point here with Exchange we'll keep moving forward.
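The MIME and protocol facts above are easy to see in action. Here is a short, self-contained Python sketch (the addresses are placeholders I made up) that builds a MIME message with an attachment and prints the standard transport ports mentioned:

```python
# Build a MIME message with an attachment, as described above.
# The From/To addresses are placeholders, not real mailboxes.
from email.message import EmailMessage
import smtplib, poplib, imaplib

msg = EmailMessage()
msg["From"] = "ray@example.com"
msg["To"] = "peter@example.com"
msg["Subject"] = "MIME in action"
msg.set_content("Plain ASCII body, as in the original Internet email.")

# Attaching a non-text part is exactly what MIME made possible.
msg.add_attachment(b"\x89PNG fake image bytes",
                   maintype="image", subtype="png",
                   filename="cartoon.png")

print(msg["MIME-Version"])     # 1.0
print(msg.get_content_type())  # multipart/mixed once an attachment exists

# The transport ports from the "Protocol Alphabet Soup" fact:
print(smtplib.SMTP_PORT, poplib.POP3_PORT, imaplib.IMAP4_PORT)  # 25 110 143
```

The MIME-Version and Content-Type headers printed here are the very headers a receiving email client inspects to decode the message.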
The question though is how does it work? Now toward the back of the book in the Appendix you'll see we have an item that reviews some of the underlying technology for networks and the Internet that will help explain some of the how from a deeper technical perspective. But it's obvious that in addition to the technology that allows email to travel from one place to another there must be some way that email is managed and organized, secured from start to finish (hopefully) and waiting for an end-user to pick it up. Email works, in theory, like the sending of physical mail. If you have an email address it means you have an email mailbox where mail is delivered. When someone sends you a physical letter it must travel by various means (planes, trains, boats, trucks... you get the picture) but the reason it reaches your literal mailbox is not just because there are roads and such to allow transport. There are also centers for organized distribution of that mail. Post offices that help organize that mail. When you go to the post office and mail a letter you may be putting it in the Local box (for people within your town) or the Out of Town box (for people anywhere else in the world). Take this same illustration and place it in a large office that has a mailroom. You can send mail to others within your company and it is handled one way, or send it to persons elsewhere and it gets routed out. In order for email to do the same thing for a company there must be email servers that act as the post office or mail room for your organization. When you send an email to a person within your organization that email is sent from your computer to a special server that will handle its transport from that point forward. Now maybe the email is going to a recipient with a mailbox on that server. Maybe it's going to a recipient within your organization that has a mailbox on another server. Maybe it's going to be sent out to another email server in another company altogether where the receiving mailbox is located.
The point is when you hit Send that email begins a journey that either completes when it reaches the mailbox of the recipient or, should something go wrong, gets bounced back (like a "return to sender") with an NDR attached (non-delivery report).
So remember that mail room in your company? It handles the sending and receiving of mail at your site for your organization, but other sites have their own mail room (or multiple mail rooms depending on the company's size). In the email world that mail room is replaced by a special server. Now there are a variety of different types of email servers in the world, but the one that has grown to become the market leader – the mother of all email servers – is Microsoft Exchange Server. Microsoft Exchange Server is a messaging platform that provides email, scheduling and tools for collaboration. It's installed as a server-side application. In other words, you have a computer that is designed to be powerful enough to handle the workload that email will put upon it, and this is your email server. You install a Server OS on it (like Server 2012 R2) and quite honestly it looks and acts on screen like your normal Windows OS (if running Server 2012 it looks like Windows 8 for the most part). And you install Exchange Server on top of your Server OS in much the same way you install Office or Outlook on top of your Windows OS. Am I oversimplifying the whole thing? Absolutely! So you Exchange experts reading this part and spiking out your blood pressure need to calm down just a bit. Folks reading this don't need to know how complex the process truly is. They don't need to know all the hoops we have to jump through making sure the AD schema is prepared and all the prerequisites are met before we do the install. We're not trying to make them Exchange admins (not yet anyway). I can hear some of you now. But you're leaving out virtualization... you're leaving out the cloud... Office 365!... you're... Breathe. I'm not leaving anything out. We'll get there. Ok, so Exchange Server is a server-side messaging application that handles incoming and outgoing email for your organization. One of the things you can do with it is create mailboxes for everyone in your organization.
How does it know who everyone is and how do you access these mailboxes? Well, to install Exchange Server you have to have an identity management system that works as a directory service. In the Microsoft world
we call this Active Directory. In order for you to log into your company network you get a username and password that is stored with Active Directory (along with other pertinent information like your email address, mobile number, etc...). Exchange uses Active Directory in many ways but one of the key ways is to be able to create mailboxes for your people that connect to the AD network accounts. So when they log into the network and open Outlook up, if they have a mailbox on the Exchange Server they'll be able to send and receive email. We mentioned earlier that you have POP/IMAP protocols that mail clients use to receive email and SMTP for mail clients to send email. However, Microsoft Outlook, when used within a company, uses MAPI to communicate with Microsoft Exchange. MAPI (Messaging Application Programming Interface) allows client applications to become messaging-aware and uses RPC (remote procedure calls) as its transport mechanism. In 2007 MAPI was also being called the Outlook-Exchange Transport Protocol (which was still just MAPI riding on RPC). With Exchange 2013 a change was made and direct MAPI connections are no longer supported. Instead RPC over HTTPS (or Outlook Anywhere) connections are supported for both internal and external client connectivity. Here is a great reference from Exchange MVP Tony Redmond entitled "Exchange 2013 focuses on RPC-over-HTTPS". Now before we go any further let's just take a step back and look at the history behind the product we know today as Exchange Server.
Important Note to Reader: We haven't gotten too deep technically yet. But this next part is a bit overwhelming the first time through. Don't let it confuse you and don't give up. Many of the features I rattle off here really fast are covered in greater detail in later chapters in the book.
The History of Exchange
A quick look at the history of Exchange will take you back about 20 years ago (1993) with the planned migration internally at Microsoft from a legacy XENIX-based system to a very early, beta version of Exchange. It wasn't until early 1996 that Exchange Server 4.0 was released to the public, a public already relying heavily on IBM/Lotus, which dominated the messaging space. As they say, you've come a long way baby!
Note: Some may wonder why 4.0 was the first version. Exchange MVP Lasse Pettersson explains that prior to Exchange 4.0 Microsoft had a product called Microsoft Mail, released in 1991, and the last version was 3.5, so that explains the 4.0 version number.
Exchange Server 4.0 was X.400 protocol based with support for X.500 directory services. Remember, this was before Active Directory was released (in 2000) so they still needed a directory service, and the work they did on Exchange eventually helped with the creation of Active Directory (an LDAP based directory service that succeeds X.500). With Exchange 4.0/5.0 and 5.5 there was only one mailbox database. Starting with 4.0 the Exchange Team developed single instance storage (SIS), which would provide for an efficient way to reduce disk space by not keeping more than one copy of a message. So if someone sent the same message to multiple people the message body and attachments would only be stored once. Obviously this would keep the database size down (which was smart because disk space wasn't cheap in those days).
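The single instance storage idea described above is essentially content-addressed deduplication. The toy Python sketch below (purely conceptual, nothing like the real Exchange store) makes the behavior concrete: each unique message body is stored once, and many mailboxes simply reference it.

```python
# Toy illustration of single instance storage (SIS): each unique message
# body is stored once; mailboxes hold references (hashes), not copies.
import hashlib

class SingleInstanceStore:
    def __init__(self):
        self._bodies = {}    # content hash -> message body (one copy each)
        self.mailboxes = {}  # user -> list of content hashes

    def deliver(self, recipients, body: bytes) -> None:
        key = hashlib.sha256(body).hexdigest()
        self._bodies.setdefault(key, body)  # stored at most once
        for user in recipients:
            self.mailboxes.setdefault(user, []).append(key)

    def unique_bodies(self) -> int:
        return len(self._bodies)

store = SingleInstanceStore()
attachment_heavy = b"quarterly report" * 1000
store.deliver(["alice", "bob", "carol"], attachment_heavy)

# Three mailboxes reference the message, but only one copy is stored.
print(len(store.mailboxes), store.unique_bodies())  # 3 1
```

Sending the same large message to three recipients costs one stored copy, which is exactly the disk-space saving the text describes.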
The database management solution for Exchange is called the Extensible Storage Engine or ESE (aka JET Blue, with JET standing for Joint Engine Technology); it is a transaction-based database engine. It's been likened to a distant cousin of the database used in early versions of Access (JET Red) but it's not an Access database nor is it a SQL database. It's been specifically optimized over the years to store hierarchical email data (i.e. folders, messages, attachments) and to survive crashes so that upon recovery the data loss is minimal. Ok, so Exchange 4.0 was out the door. With no time to spare they went right to work on version 4.1, which turned into version 4.5 and then ultimately became the 5.0 release (shipped in early 1997). There were some great new aspects to 5.0 like the implementation of SMTP and LDAP v2, as well as the first version of the web-based email client (Exchange Web Access, or EWA), which we know today as Outlook Web App. There was a 16 GB limit on the database size. On Exchange 5.0 CDs (if anyone has one) there was a file called EXGL32.DLL and if you renamed it to .avi it was a video Easter egg that would credit the Exchange 5.0 team while having fun in the process. Exchange 5.5, shipped at the end of 1997, continued to work off a separate directory service. It was sold in two editions: Standard (which maintained the 16 GB limit of the database size) and Enterprise (which allowed up to 16 TB). There were some additional feature differences between the two editions including transport connectors and clustering options. Incidentally, it was right around this time that I became proficient with Exchange and passed my certification for it in January of 1999, allowing me to become MCSE+I certified. The +I didn't impress anyone in 1999 and it impresses even fewer today. A term you may be familiar with is email storm, which occurs
when there is such a huge load of email traffic generated that the email servers go down (like a DDoS attack). This can occur when there is a spike in Reply All emails and distribution lists. One such email storm that is well known at Microsoft occurred on October 14, 1997 with Exchange 5.5. It involved a distribution list called Bedlam DL3, which had about 13,000 email addresses in it. One Microsoft employee asked to be removed from the list (to all) and others responded with "Me too!" and supposedly 15 million emails were sent in the process, causing a crash. Read all about it here on the Exchange Team Blog: 626.aspx
Exchange 2000 (v6.0) was released in November 2000 and this was the first version that dropped a separate directory service and now relied upon Active Directory. One of the features I liked with 2000 was an Instant Messenger feature but this didn't remain in the product for long (it was moved over to Office Live Communications Server and yanked from Exchange 2003). One pain about this release was the migration from Exchange 5.5 where you had this Active Directory Connector (ADC), which was anything but fun (although it did work). Exchange 2000 had us focused on the ability to create multiple storage groups where we could put multiple databases. We'll discuss the evolution of the Exchange database and architecture in Chapter 3.
The Exchange System Manager in Exchange 2000
Exchange 2003 (v6.5) was released in September of 2003 and included features like RPC/HTTPS (now known as Outlook Anywhere), cached mode and ActiveSync (a key piece for mobile client connectivity to Exchange). Spam was becoming a nightmare for email admins and Exchange 2003 added some basic filtering features like connection filtering, recipient filtering and Sender ID filtering. To combat spam Exchange admins often had to look at 3rd party options such as Sybari Antigen for Exchange. Today you may know this product as Microsoft's ForeFront Protection for Exchange Server. To see a list of build numbers and release dates for Exchange Server from 4.0 to 2007 SP3, go here:
From Exchange 2007 forward – the start of something new
The Exchange Team was working on some awesome new features with the next version of Exchange (v7.0). The v7.0 had a lot of proof of concept code that didn't get a release however. Instead they took things to the next level and went in a new direction with v8.0 (released as Exchange 2007).
Exchange 2007 is actually v8.0 but because the Office team was releasing their version 12 wave (and to sync with them) they call it E12 now. That may sound confusing but the first version was 4.0 so why should it matter to us that they jumped from 8 to 12? Exchange 2007 was shipped in December 2006. This version of Exchange had a variety of new features that have carried through to the Exchange 2013 version (although some have morphed a bit into better features). Some of the new features included:

Server Roles: Exchange 2007 introduced 5 server roles. There were 4 internal roles: Mailbox, Hub Transport, Client Access and Unified Messaging. And 1 external, perimeter-based role called the Edge Transport. We'll discuss these further in Chapter 2.

Continuous Replication: This feature allowed a copy of the active database to be copied and then transaction logs to be shipped to provide different levels of availability. There were 2 initial CR types and a third added with SP1 including: Local Continuous Replication (LCR), Cluster Continuous Replication (CCR, which had clustering features for automatic failover support), and Standby Continuous Replication (SCR). These will be discussed further in Chapter 6. Note: there was also a legacy clustering option called Single Copy Clusters (SCC).

Unified Messaging: A way to have voicemail go into your Inbox by connecting to your existing PBX/IP-PBX system. Also a set of auto attendant features built right into Exchange. We'll discuss this further in Chapter 7.

Exchange Management Shell (EMS): A new command-line/scripting language based on PowerShell was introduced and in some cases you could only perform things through the EMS. The Exchange Management Console (EMC) provided a GUI based administration method as well. Fun Fact: PowerShell was originally code named Monad prior to official release.
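To give a flavor of the EMS, here are a few hedged examples of everyday administration tasks. These are real cmdlets, but the user names, database name and quota value are illustrative only, and they work only inside an Exchange Management Shell session connected to a live server:

```powershell
# List every mailbox in the organization
Get-Mailbox -ResultSize Unlimited

# Create a mailbox-enabled user (names and database are examples)
New-Mailbox -Name "Jane Doe" -UserPrincipalName jdoe@contoso.com `
    -Database "Mailbox Database 01" `
    -Password (Read-Host "Password" -AsSecureString)

# Cap the mailbox at 2 GB
Set-Mailbox jdoe@contoso.com -ProhibitSendQuota 2GB
```

Treat this as a sketch of the shell's style rather than a script to paste in blindly; in some releases tasks like these can only be done from the EMS, which is why it matters.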
The Exchange Management Console

The Exchange Management Shell (EMS)

With Exchange 2007 a line was drawn in the sand that required x64 hardware running 64-bit versions of Windows Server. This would require companies to purchase new hardware if they didn't have an x64 server to use. Another interesting change was made with Exchange 2007 upgrades and that was the inability to perform an in-place upgrade from a legacy version of Exchange (2003 and lower) directly to 2007. This remains the case with Exchange 2010 and 2013. You have to install the new Exchange server into an existing environment, coexist for a period of time, and transition over to the new version when ready. Although some administrators might balk at the inability to do an in-place upgrade, environment coexistence is often a
welcomed change for administrators because it simplifies migrations and gives us an excuse to update our systems to ensure performance for the newer implementation of Exchange. The term migration is typically used when referring to a move from one messaging system or a legacy Exchange system to a more modern version of Exchange that cannot coexist with the other system. For example, if you are trying to move from Lotus Notes to Exchange that is considered a migration. However, coexistence with a transition is an upgrade that takes you from a legacy version of Exchange to a modern version so long as the two can coexist. For example, Exchange 2007/2010 to Exchange 2013 would involve a period of coexistence (where you have both server types in your organization at the same time), you move mailboxes from legacy to modern at the needed pace (smaller organizations might do it in a weekend, others may take months), and then when all mailboxes have been transitioned over you decommission your legacy Exchange servers and have, so to speak, upgraded. But again, there is no in-place upgrade.

The 2 editions of Exchange still existed with 2007. In the Standard edition you could have 5 databases in up to 5 storage groups. The Enterprise edition supported up to 50 databases and up to 50 storage groups.

Exchange 2010

Exchange 2010 (v14) was released to manufacturing (RTM'd) in May of 2009 and released officially in November of 2009. Here were some of the major features that we still have with Exchange 2013:

Database Availability Groups (DAG): Building off the continuous replication options with 2007, the 3 CR options were boiled down into 1 solution. Note: We use the DAG in Exchange 2013 as well, although it's constantly being improved upon by the Exchange Team.

Personal Archive: Hoping to help eliminate the proliferation of PSTs, and as a result of storage becoming cheaper, a personal archive feature was added so that admins could keep the Inbox
on higher performance disk and an archive on lower performing storage (if necessary).

Storage Groups are dropped, as is Single Instance Storage (SIS), which improves performance greatly but creates the possibility of storage bloat. However, by this point storage is very inexpensive and you can design for better efficiency to avoid the lack of SIS from having that great an impact. For a list of Exchange Server 2010 (back to 4.0) server build numbers and release dates:

Exchange 2013

Released to manufacturing in December of 2012, Exchange 2013 (v15) is the latest version of Exchange Server. Exchange 2013 has a handful of important what's-new features including the following:

Exchange Admin Center: The Exchange Management Console and the Web-based Exchange Control Panel (the latter released with Exchange 2010) have been replaced by a single Web-based UI. Ordinarily I don't like Web-based consoles for administration; they always feel clunky and unfriendly. Plus, it has that Metro look, which leaves me cold. But to be honest I've come to really appreciate the Exchange Admin Center because of its ease of use and the fact that I can access it easily from a browser.

Exchange architecture revisions: Exchange 2007 and 2010 are broken into five server roles, mainly to address performance issues like CPU performance, which would suffer if Exchange were running as one monolithic application. But Microsoft has made progress on the performance side, so Exchange 2013 has just two roles:
the Client Access server role and the Mailbox server role. The Mailbox server role includes all the typical server components (including unified messaging), and the Client Access server role handles all the authentication, redirection, and proxy services.

A new managed store: The store service has been completely rewritten in managed code (C#).

Public folder mailboxes: In previous versions of Exchange you had to have a public folder database for public folders, but now you can create public folder mailboxes, which means they use regular mailbox databases. In turn, this means they can be made part of a database availability group for disaster recovery.

DLP (data loss prevention): DLP is new in Exchange 2013's transport rules; it warns or prevents users when they may be violating policies meant to prevent disclosure of sensitive data, like a credit card number or Social Security number, in an email. The built-in DLP policies are based on regulatory standards.

Outlook Web App enhancements: One awesome feature is support for offline access, which lets users write messages in their browser when offline, and have the messages delivered when they connect to the Internet. But it's still a first attempt at this.
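The DLP feature is driven by policies built from templates. As a hedged sketch from the EMS (the policy name here is made up, and the template name should be checked against what actually ships on your server):

```powershell
# See which built-in DLP policy templates ship with Exchange 2013
Get-DlpPolicyTemplate | Format-Table Name

# Create a policy from a built-in template; Audit mode logs matches
# without blocking the message
New-DlpPolicy -Name "Financial Data Policy" -Template "U.S. Financial Data" -Mode Audit
```

Starting in audit mode is the usual design choice: you see what the policy would catch before you let it warn or block senders.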
26 The Exchange Admin Center in 2013 For Exchange 2013 releases (including cumulative update releases) go here: Hosted Exchange, Exchange Online and Office 365 Terms you will hear in modern Exchange deployments include on-premise (where you install your Exchange Server in your own environment), hosted Exchange (where a service provider manages your Exchange server), cloud-based (where you go with an Office 365/Exchange Online solution) and Hybrid (where you combine both). Note the options shown in the free, online tool called the Deployment Assistant: US/exdeploy2013/ 20
27 The Exchange Deployment Assistant Hosted Exchange comes in many forms. In some cases the provider will place your organization s on the same server as other companies and this is called a multi-tenant scenario. You will not be aware of the other companies using the same server however but you share the power of that server and typically are provided minimalist tools to access and manage your end-users. In other cases you might have a virtual or dedicated server with a full Exchange deployment that you can have complete control over. Exchange Online (a part of Office 365) offers Exchange as a cloud service with Microsoft as the provider. You essentially choose from a variety of subscription options (with different features and prices) and manage your Exchange through the O365 admin tools and the Exchange Admin Center (EAC). Exchange Online has been designed to make it easier for companies who are not ready to jump all in with a cloud-based solution to tie their on-premise Exchange with their cloudbased Exchange online and form a Hybrid solution. It sounds easier in theory than it is in real life so when an Exchange admin tells you they are working on a hybrid configuration it s ok to raise your eyebrows and say Impressive how s that going? 21
28 The Big Takeaways This chapter has way too much to digest in one sitting. But it s meant to be that way. You were hit with a great deal of information all at once. The roller coaster of development over 20 years of this massive solution with so many features that one chapter couldn t present it all. But here we are finished with Chapter/Day 1. The big takeaway? Exchange is ever evolving. Knowing a bit about that process will help you learn about the here and now. We re going to use this same approach with other subjects like Server Roles and High Availability. Walk you through the history so that you can learn the process from its inception and build on it from that point. The goal here was to introduce you to a ton of new vocabulary all at once. In the chapters ahead we will take pieces of this discussion and break it down into more digestible bites. So don t lose steam now push on to Chapter 2! 22
29 Chapter 2: Exchange Server Roles If you have a small environment you can install Exchange Server on one Windows Server and it will handle all the work. But as your organization grows it may be wiser to install additional Exchange Servers to handle more mailboxes. And wouldn t it be nice if you could install Exchange Server in pieces so that you can really optimize performance on the servers and distribute server tasks? Well that line of thinking is what led to the ability to install Exchange as roles, not just one monolithic solution. In this chapter the evolution of these roles will be explained so you can know what is available with each flavor of Exchange you may be dealing with. Let s walk through in stages starting with Exchange
30 Exchange 2003 Roles I can just hear some of the Exchange vets now saying ahem pre-exchange 2007 didn t have roles. Not in the modern sense but The Exchange Team Blog said Exchange 2003 provided primitive server roles called BackEnd server and FrontEnd server. But let s not fight over semantics. The fact is with Exchange 2003 we were able to configure the front-end Exchange server to handle client requests and then proxy them back to the appropriate back-end Exchange server where the mailboxes resided. Client Connectivity to Mailboxes with Exchange 2003 Note: Prior to Exchange 2003 the savvy administrator found ways to create their own roles and ensure system performance. For example, a mailbox server without databases and the use of DNS could be turned into what we might call an Edge or Hub Transport server today. With front-end servers the internal clients connected to their mailbox using Outlook and MAPI but they connected directly to the mailbox servers (the back-end servers). The external clients used the front-end as more of a proxy that could handle RPC 24
31 over HTTPS (or Outlook Anywhere), Outlook Web Access (OWA), POP/IMAP or ActiveSync connections. What does it mean to proxy? When an end-user goes to access their mailbox the front-end server contacts Active Directory (specifically a global catalog server in the domain) and locates which back-end server contains the user s mailbox. That action of moving requests from the front to the back is what is meant by the word proxy. Now that is the general functionality of the front-end server but the exact functionality varies depending on the protocol used and the action being performed. No point in getting that deep into it, but good for you to know because the word proxy is going to come up again in this discussion. In addition to front-end/back-end for client access and mailboxes there was another concern with Exchange 2003 and that was the actual transport of messages. Within an Exchange organization (depending on its size and number of locations) you would have servers organized by routing groups. These groups would have connectors between them. Bridgehead servers would handle message transfer from one routing group to another (or to an external messaging system. Ok, so now let s take things to the next level. Exchange 2007 Server Roles Like I said at the outset, if you have a small or medium sized company with a few hundred mailboxes you can install all your Exchange required roles for 2007 on a single physical server. But with the larger enterprises ranging into thousands or tens of thousands of mailboxes, multiple office locations there was a need to provide a more flexible deployment approach. In addition, scaling up (which is necessary when your organization grows) was not easy to do with hardware that was not as powerful as what we have today. So scalability and flexibility, although with performance were all drivers for server roles to be built in to the deployment options. 25
The result was a breakdown of server roles into 5 roles. 3 of these roles (the Mailbox, Client Access and Hub Transport) are required and the other two (Edge Transport and Unified Messaging) are optional. In smaller environments you can install all 3 required roles on one box. Or you can install them on separate servers. In addition to flexible deployment options this also allows you to improve hardware utilization because the binaries installed are only what you have chosen. In other words, you don't install the whole huge solution on one server, just the bits you need. And only the services for those options will run. This makes the servers easier to configure, secure, maintain and size for hardware.

Trivia: During the beta for Exchange 2007 there was a 6th role planned. It's true! It was the Public Folder role (which was rolled into the Mailbox role). And the Hub Transport role was originally called the Bridgehead role because its function was similar to the bridgehead server functionality with routing groups in Exchange 2003.

A Closer Look at Server Roles

Let's start with the 3 required roles for an Exchange 2007 installation and then address the 4th internal role and close it out with a discussion of the Edge role that resides in the perimeter. The Mailbox server role, as its name implies, hosts the mailbox databases as well as any public folder databases, while also providing MAPI access to Outlook clients. The Mailbox Role is ordinarily installed with other roles on a single server, such as the Hub Transport Role, Client Access Server Role and the Unified Messaging Server Role, as you can see in the graphic. The Client Access server role is similar to the front-end servers in Exchange 2000 or 2003. The Client Access server supports the use of Outlook Web Access for access using a web browser as well as Exchange ActiveSync for mobile devices. POP and IMAP are also implemented here, while Outlook Anywhere
support allows Outlook clients to connect from outside the corporate network without the use of a VPN.

Server Role Installation Option for Exchange 2007

The Hub Transport Server Role will route messages within an Exchange organization and is similar to the Bridgehead server found in Exchange 2000/2003. It can also be configured to route external email in lieu of the optional Edge Transport Server Role. The Hub Transport Server is reliant on the presence of Active Directory to have a logical infrastructure in place for the routing of internal messages.

The Unified Messaging server role provides voice over IP capabilities to an Exchange Server in order to integrate email, voicemail and incoming faxes as part of providing a universal Inbox. Outlook Voice Access (OVA) also opens the door to multiple access interfaces such as the phone. Given the potential complexity of telephony infrastructure such as IP-PBX and VoIP gateways, a telephony expert is suggested for the installation and configuration of the Unified Messaging server role.
Server Roles in Action in Exchange 2007 and 2010

The optional Edge Transport server role is meant to be the last hop for mail going out of your organization and the first hop for mail coming in. It acts like a smart host and sits on the perimeter and is not part of Active Directory. In addition, an Exchange Server configured as an Edge Transport server role cannot also be configured as any other role. Its main task is to sit on the perimeter of the network to provide security in the form of anti-spam/malware filtering agents and the implementation of organizational transport rules for an organization.

Terminology Note: A smart host is an email message transfer agent (or MTA) that allows an SMTP server to route email to an intermediate mail server instead of directly to a recipient's server.

Terminology Note: The perimeter network (or DMZ, demilitarized zone) is not required for your network or Exchange. Some like to have multiple firewalls with servers in-between that handle anti-spam, anti-virus and other protective pieces to ensure greater security. The Edge Transport role sits in that perimeter; however, if you don't want to use one but still want the anti-spam/malware capabilities, you can enable these agents on the Hub Transport server.
Exchange 2010 Server Roles

For the most part the server roles in Exchange 2010 remain exactly the same as 2007. There were some improvements made however. Let's focus our attention primarily on the Client Access (CAS) role. Remember with 2003 we said the front-end was primarily a proxy back to the mailbox servers? Well, that meant the mailbox servers were still doing a lot of the work. With the CAS role in Exchange 2007 the CAS really helped to offload a lot of the load from the mailbox server, although internal clients still connected directly to the Mailbox role. With Exchange 2010 that changes thanks to a new service called the RPC Client Access service (MSExchangeRPC), making the CAS a true middle-tier solution. In Exchange 2010 the CAS role handles both external and internal connections to the Mailbox role (with the exception of Public Folder connections). This takes a ton of pressure off the Mailbox server and allows it to handle more concurrent connections. The CAS role handles the following connections and services:

Outlook Web App: Allows you to access email through a Web browser (including IE, Firefox, Safari and Chrome)

Exchange ActiveSync: Allows you to synchronize your data between your mobile device or smart phone and Exchange. Note: There are varying levels of ActiveSync support in devices and one key security element is remote wipe, which is not available for all devices

Outlook Anywhere: Allows you to connect to your Exchange mailbox externally using Outlook (RPC over HTTP) without going through a VPN connection

POP/IMAP: Mail clients other than Outlook that connect with POP or IMAP are supported through the CAS role
Availability Service: Shows free/busy data to Outlook 2007/2010 users

Autodiscover Service: Helps Outlook clients and some mobile phones to automatically receive profile settings and locate Exchange services

We'll discuss high availability in another chapter later on, but it's good to note that with the CAS role being so important you want to ensure you don't lose it. If your CAS server goes down, email goes down. So you want to have more than one (just in case) and you can tie them together as a CAS array within a single site. Your CAS array should be load-balanced with either Microsoft software load-balancing (aka NLB) or a 3rd party appliance-based load balancer.

Server Role Installation Option for Exchange 2010
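As a hedged illustration of the CAS array concept, the EMS commands look roughly like this; the array name, FQDN, Active Directory site and database name are all placeholders:

```powershell
# Create a CAS array for an Active Directory site (Exchange 2010)
New-ClientAccessArray -Name "CASArray-SiteA" -Fqdn "outlook.contoso.com" -Site "Site-A"

# Point a mailbox database's Outlook clients at the array's FQDN
# instead of at an individual CAS server
Set-MailboxDatabase "Mailbox Database 01" -RpcClientAccessServer "outlook.contoso.com"
```

The FQDN is what your load balancer (NLB or a 3rd party appliance) answers for, so Outlook profiles keep working when any single CAS behind it fails.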
There were other important changes in Exchange 2010, especially with regard to the changes in high availability for the Mailbox server role, but we'll save those for later on. As you can see in the screenshot, the role install hasn't changed all that much between 2007 and 2010 with the exception of the missing active/passive mailbox roles. With Exchange 2010 SP1 however, the Exchange Team started encouraging Typical installations of Exchange, which included the three primary roles (MB, CAS and HT) on all internal servers rather than breaking up the roles.

Exchange 2013 Server Roles

Ok, get ready, we're going to blow your mind now (and explain the cartoon at the start of this chapter). One thing you should know about Exchange at this point is that it is ever evolving (remember the key takeaway from Chapter 1). And as hardware evolves and performance improves it was decided there wasn't a need for 5 server roles any longer. So, with Exchange 2013 things went from 4 internal roles to 2. And the Edge didn't get an update in 2013 at all. So what we have is the Mailbox role that handles the primary part of the Hub services, the Mailbox database and the UM role as well. And then we have the Client Access role that handles authentication, redirection and proxy services with support for all the typical access protocols: HTTP, POP, IMAP and SMTP.

In addition, these changes bring with them a change to client connectivity. RPC is no longer supported as a direct access protocol. All Outlook connectivity will be done through RPC over HTTPS (aka Outlook Anywhere). Immediately the upside to this is fewer namespaces needed for the connectivity. But this change, combined with other adjustments in how clients connect, will hopefully eliminate client issues (like the need to restart Outlook at times). Keep in mind that only Outlook 2007 and later is supported with Exchange 2013.
The kicker here is that these two roles closely resemble their 2003 counterparts in many ways. The CAS in 2013 (like the front-end in 2003) proxies/redirects connections back to the Mailbox server. The Mailbox server has the mailbox databases (and all the mailboxes logically), which also hold the public folders.

Server Role Installation Option for Exchange 2013

Trivia: Some refer to the CAS in Exchange 2013 as the Client Access Front-End (or "café") server role. Personally I think it would have been better to give the role the new name to help folks grasp its new (ahem, old) purpose.

What happened to the Unified Messaging server role? Well, it isn't gone, it's now installed with the Mailbox role, so it's completely wrapped into that role now. What about the Hub Transport server role? Don't we NEED transport? Yep, and so those features have been split up between the CAS and Mailbox (with the Mailbox getting the majority in the split). And the Edge again? It wasn't updated in Exchange 2013. In other words you can still use the 2010 SP3 version if you like, or
go with another option for your anti-spam solution (perhaps an enterprise grade option in the cloud like Mimecast), but the Edge was not a major focus in the development of Exchange 2013.

Server Roles in Action in Exchange 2013

There is talk of an update with SP1 but one MVP (who shall remain nameless) used the word "lame" to describe it. Maybe the Edge wasn't deployed enough to be worth development time. Perhaps Microsoft has other plans in mind. They recently dropped the majority of their Forefront product line, including Threat Management Gateway (TMG), which was oftentimes paired up with the Edge to provide anti-virus/malware protection and greater security.

Alternative to Edge Transport Role with Exchange 2013
The Exchange 2013 Transport Pipeline and Mailflow

The Microsoft Exchange Team says the mail flow process occurs through the transport pipeline, which is made up of three services. These services aid in transport on our Client Access and Mailbox servers (which may exist on the same server). The Client Access server has the Front End Transport service while the Mailbox server has the Hub Transport service and the Mailbox Transport service (which is itself made up of two services). The Front End Transport service on the Client Access server handles the flow of mail from the Mailbox server (specifically the Hub Transport service on the Mailbox server side) to the outside world. The Hub Transport service (or just Transport service) handles routing from the Front End Transport service to the Mailbox Transport service as well as between other servers within the organization internally. The Mailbox Transport service handles mail transport between the Hub Transport service and the mailbox database.

Going back to the Front End Transport service, it's basically a stateless proxy for inbound and outbound traffic with no traffic being queued as a result of that service; however, as mentioned, it can be used to filter traffic. That filtering can be based upon connections, domains, senders and recipients. It does not inspect message content, however. Inspection of the content itself can be done by the Transport service as it handles SMTP mail flow from the Front End Transport service to the Mailbox Transport service and into the database.

Note: One important point regarding the Front End Transport service is that mail inbound and outbound to the Internet through an Edge will bypass this service. The Edge communicates directly with the Transport service on the Mailbox server.

The two services that make up the Mailbox Transport service include the Mailbox Transport Submission service and the Mailbox Transport Delivery service.
The Delivery side accepts messages from the Transport service and delivers them using RPC to the mailbox database. And the Submission service
receives, through RPC, messages from the local mailbox database and passes them to the Transport service.

The Exchange 2013 Transport Pipeline

I promised when I started writing this book that I wouldn't overinvolve the reader with too much deep technical content. And even though it's going to look like I'm breaking that promise, believe me this is just the tip of the discussion on the transport pipeline and how all of this works. I just felt it was essential for you to see where the Hub Transport role really went.

Deeper Dive: If you feel like seeing the full graphic of mail flow and the transport pipeline you can go here and scroll down:
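The pipeline described above can also be poked at from the EMS. These are real Exchange 2013 cmdlets, though the server name below is a placeholder:

```powershell
# The Transport (Hub) service queues mail; the stateless Front End
# Transport service does not
Get-Queue -Server MBX01 | Format-Table Identity, DeliveryType, Status, MessageCount

# Receive connectors are tagged with the transport role they belong to
# (FrontendTransport on the CAS side, HubTransport on the Mailbox side)
Get-ReceiveConnector | Format-Table Name, TransportRole, Bindings
```

Seeing queues only on the Mailbox server side is a quick confirmation that the Front End Transport service really is just a proxy.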
The Big Takeaways

Ok, keep breathing, you're doing fine. The sum up from this chapter: In 2000/2003 we had primitive roles in that we could configure a front-end server for client access, a back-end server for mailboxes and a bridgehead to assist with transport. With 2007 and 2010 that changed into 5 official roles. 4 were internal (Mailbox, Client Access, Hub Transport and Unified Messaging) and one was external (perimeter/DMZ) called the Edge Transport. In 2007 the CAS received more responsibility for client connectivity and lightened the load on the Mailbox. In 2010 it received all internal/external client connectivity load and that really lightened the load on the Mailbox server. In 2013 we go back to the style of roles from the days of 2003. We have a Client Access (front-end, or CAFÉ) and a Mailbox role. The Edge is ignored (so far) and transport services are broken up between the two remaining roles. UM is now automatically installed with the MB role.

There it is: you know everything you need to know to have a conversation about Exchange Server Roles. Next up, a breakdown of the database, one of my favorite subjects. So don't lose steam now, push on to Chapter 3!
Chapter 3: Database Management

At its core Exchange is a database management server. It has to keep track of all the mailboxes you configure for people in your organization. It puts email in those mailboxes and retains all that information in an organized way. The Exchange Server is all about sending and receiving email, and storing it for end-users to read. In addition to email, calendaring items, contacts, tasks and so forth all need to be housed somewhere, and in the end it all goes into the Exchange database (and that database needs an engine to help manage it). It's important for you to visually realize that every email that goes to your Mailbox server must go into a database, and this creates challenges because of the huge variety of messages Exchange
handles. From the tiny one-line emails to the monster multi-MB emails with video attachments. The I/O (input/output) profile of a Mailbox server is not predictable; it's very random. Read/write activity that occurs between memory and disk is substantial. At times there may be waves of messages, other times may be idle. So it's essential to understand how the databases work for us to understand how Exchange, at the core, works. Dr. Carl Sagan said "You have to know the past to understand the present." We've seen that in previous chapters and this one is going to be a review of the past as well to help build your understanding up to modern times. Ok, so let's get started!

Legacy Exchange Databases

You might recall we talked about the Extensible Storage Engine (ESE) in Chapter 1. This engine has morphed over the years and been improved upon. One benefit of ESE was Atomic, Consistent, Isolated, Durable (ACID) transactions to make for more reliable and recoverable data management. With early flavors of Exchange (4.0, 5.0 and 5.5) there were three databases:

Dir.edb: Held the directory information for all mailboxes

Pub.edb: The Public Folder database

Priv.edb: The mailboxes (all of them) were in this single file database

The maximum database file size was 16 GB so that was a bit of a limitation on how much email your Exchange Server could handle. And then with Exchange 2000 (and later 2003) things changed a bit. With Active Directory now handling the directory
45 information (ie. usernames and passwords for mailbox users) we didn t need a dir.edb database anymore. Another change was the ability to create multiple databases (called mailbox stores) and store them in containers called storage groups. At that time we also saw the introduction of.stm files, which were paired up with database (.edb) files to provide content in native MIME format. If you recall what MIME does from Chapter 1 the.stm files were simply making things faster for content conversion requirements. Storage Groups in Exchange 2003 Note: Storage groups may sound like a good idea because you can put multiple databases in them, which is great compared to only having one. But what was odd is that the transaction logs for the databases were merged together in that one storage group. We haven t discussed transaction logs just yet but you ll see soon enough why that might lead to frustration. 39
Back in the day, the max size of 16 GB left our users with mailbox sizes as small as a few MB in size and heavily reliant on .pst files (another can of worms discussed in chapter 5). Depending on the number of users in your organization you could end up with many mailbox servers to administer and maintain. More databases per server was a welcomed change. We could use fewer servers while still maintaining smaller database sizes, which was important for achieving respectable database backup and recovery times.

Modern Exchange Databases (2007/10/13)

Exchange 2007 actually yanked the .stm file from Exchange and took us back to the single .edb file for storing content. The storage group maximum number of databases was increased to 50 but overall it was clearer with 2007 that the focus was shifting to the database and away from the storage group. The direction was to have one database per storage group, so that eliminated the need for storage groups.

Storage Groups in Exchange 2007
And as a result, with Exchange 2010 we no longer had to worry about storage groups. And the ESE database engine received enhancements that improved I/O by 70% (meaning, Exchange 2010 can read/write emails to disk 70% faster than 2007 using the same engine). These improvements included increasing the page size from 8KB to 32KB, storing header data in a single database (DB) table, and compressing attachments. In turn, because of these optimizations, you actually have more options for using cheaper disks for your Exchange server.

Note: With older versions of Exchange, due to the high I/O requirements, many organizations used a dedicated SAN to ensure excellent performance for their Exchange environments. With the I/O changes in Exchange 2007 and higher we are able to consider options such as Direct Attached Storage (DAS) and SATA disk, which helps the corporate pocketbook.

What do we mean by cheaper disks? Well, think about all the work your Mailbox server has to perform. People in your company are constantly tapping it for their email. To get the best performance out of it in the past you would want to make sure you had the most expensive, highest performing disks. These were usually SAN or RAID arrays. Don't stress too much about the words. Type SAN or RAID array in an image search in Google and you'll see what we mean. They were high-performance, but they were budget busters. With the enhancements in Exchange 2010 and lower I/O this allows for lower cost SATA disks or JBOD storage (Just a Bunch of Disks), which has had slow adoption but is a great option. And another fact is that with Exchange 2010 you can mount up to 100 databases.

Note: One important point to keep in mind is that Microsoft removed Single Instance Storage with Exchange 2010. The idea behind SIS is when a message is sent to a bunch of people (perhaps with a large file included) the original message is stored once.
SIS was replaced by database compression technology and new tools to help administrators purge mailboxes and reduce the overall size of the database. But dropping SIS supposedly helped improve performance a great deal.
With Exchange 2013, additional enhancements have been made to improve the performance of the ESE database. Although with the RTM version of Exchange 2013 the number of mounted databases was back to 50 (as with Exchange 2007), Cumulative Update 2 (CU2) added support for a maximum of 100 mounted mailbox databases per server (for those who have the Enterprise Edition license).

Note: What's a Cumulative Update? Well, you may know these as hotfixes or rollup updates. Starting with Exchange 2013 the Microsoft Exchange Team adheres to a scheduled delivery model (every quarter, give or take) where a CU is released for the product. And at times these updates (and feature improvements) are rolled into a Service Pack (SP).

Now for years some folks have wanted a change to SQL as the underlying database for Exchange, but in the words of Exchange Team expert Ross Smith IV, "SQL squeals like a pig where ESE is easy." My guess is that he means ESE still outperforms SQL with regard to the type of transactions Exchange requires.

Now, in legacy versions of Exchange the information store was a single process (store.exe). With Exchange 2013 the information store has been completely rewritten in C# and renamed the Managed Store. This new store has two processes, Microsoft.Exchange.Store.Service.exe and Microsoft.Exchange.Store.Worker.exe. With each mounted database another Microsoft.Exchange.Store.Worker.exe process is started. So each mount request creates a new worker process, which exits when the database is successfully dismounted. This means that the process of one database does not necessarily impact another process/database when, for example, it hangs. So if you have 40 mounted databases there will be 41 processes working to support them. You can see the worker processes if you take a look at Task Manager on your Exchange server and have multiple databases mounted.
Note in the figure below that Store.Service.exe is present and used to help manage the Store.Worker.exe processes.
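If you prefer the shell to Task Manager, you can list the same processes with PowerShell on the Mailbox server. A simple illustrative check (not an official health test):

```powershell
# List the Managed Store service process and one worker process per mounted database
Get-Process -Name "Microsoft.Exchange.Store*" |
    Select-Object Name, Id, StartTime |
    Sort-Object Name

# Counting the workers tells you how many databases are mounted on this server
(Get-Process -Name "Microsoft.Exchange.Store.Worker").Count
```

This only works on a server where the Mailbox role is installed and at least one database is mounted.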
Task Manager and the Store.Worker and Store.Service Processes

This helps isolate single-database issues without impacting other active databases running on the same server. Database failover and physical disk handling have been improved, reducing IOPS utilization by 50% or more and now supporting disk capacities up to 8 TB. ESE has also been enhanced with deeper checkpoint depth for both active and passive database copies.

How the Database Really Works

Try to visualize the flow of email into your server. It's sent from a person's email client. The 0s and 1s are flying around the Internet until they arrive at your Mailbox server.

Entering the Mailbox Server
So what happens next? The email enters the Mailbox server, goes through memory, and is written first to transaction logs. Transaction logs don't do anything by themselves. They are 1 MB (1024 KB) in size, a reduction from the 5 MB logs we had in legacy versions of Exchange (pre-Exchange 2007). Depending on how busy the Exchange Mailbox server is, the data is also written to the database (.edb file), which is one monolithic file. Now the message has been written to two locations: the database, where the user will check in using his email client and retrieve his mail, and the transaction logs, where the email is broken up, depending on its size, into 1 MB chunks.

Note: New email is not the only reason for transaction logs to be created. For every transaction that occurs on the server (new email, deleted email, a change to an email message, a modified attachment...), that information is written to a transaction log.

Transaction logs are created in a log stream; in other words, they follow a sequential manner, kind of like a factory line where the current log is E00.log and, when it fills up, it gets renamed and moved over. Although the log currently being written to is named E00.log, renamed logs carry sequential hexadecimal generation numbers in their names. The current log is not committed to the database and does not have its name changed until after it is filled to the full 1 MB capacity. Then it is closed out. There is a checkpoint file to keep track of which transaction logs have been committed to the database so that none are missed. If the database ever corrupts, and the transaction logs are safe, you can restore a backed-up database and, using the logs, bring the restored database current by applying the changes recorded in the logs since the last backup.
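You can peek at this machinery yourself with ESEUTIL, the database utility that ships with Exchange. A hedged sketch (the paths and database names below are hypothetical; run this against a dismounted database or a copy in a lab):

```powershell
# Show the database header, including whether it was shut down cleanly
# ("Clean Shutdown" vs. "Dirty Shutdown") and log/backup information
eseutil /mh "D:\DB01\DB01.edb"

# Verify the integrity of the log stream for the E00 log prefix
eseutil /ml "L:\DB01\E00"

# Dump the checkpoint file to see which log generation has been committed
eseutil /mk "L:\DB01\E00.chk"
```

The /mh output is the first thing most admins check after a crash: a Dirty Shutdown state means not all logs were committed, and recovery (or a restore plus log replay) is needed before the database will mount.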
The Database Process at Work

Reserve logs (.jrs) also exist (10 of them), just in case the disk runs out of space and you need a little headroom (although a few MB of extra transaction log space won't buy you much). If the drive holding your database runs out of space, the database dismounts, the store processes stop, and your Exchange server is dead in the water until you clear up space.

The Database, Transaction Logs, Reserve Logs and Checkpoint

We mentioned earlier that the problem with storage groups was the transaction logs, and that is because the transaction logs are intertwined within a storage group. If you have 3 databases in the same storage group, the transaction logs use a single log stream for all three at once. This could hurt performance and disaster recovery, which is why the recommendation was to use one database per storage group, and why we no longer use storage groups in Exchange.
In the past, it was best practice to move your database and transaction logs off the drive that holds the system files (basically, where you installed Exchange, let's say the C:\ drive) and then separate the database from the transaction logs by putting them on separate volumes backed by different physical disks. To go one step further, you can place your databases on a striped volume (if redundancy is provided some other way) or a striped volume with parity (a RAID 5 setup) for fault tolerance, and mirror your transaction logs, to achieve the best-practice level of storage for your mailbox servers. This is also called database/log isolation. It is still considered best practice for stand-alone (non-highly-available) deployments. However, if you are using high availability (covered in Chapter 6), isolation of database and logs is not required.

It's important to note what happens to your database and transaction logs over time. The database continues to grow. Microsoft wants you to keep it under 200 GB, but the maximum size is 2 TB. The transaction logs grow too. At 1 MB each you can have thousands and thousands of them. But when you perform a full or incremental backup of your Exchange store databases, the logs are truncated (removed). Note that logs typically purge daily after successful database backups. If a backup fails, the logs will not clear, so it is important to over-allocate any disk your logs reside on. If you run out of disk space your database will shut down, so supersize your drives to ensure database uptime. Fortunately, SATA disks are an option for new versions of Exchange, which makes disks cheap enough to accommodate your sizing needs.
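The points above (where the .edb and log files live, how big the database has grown, and when it was last backed up) can all be checked, and the paths moved, from the Exchange Management Shell. An illustrative sketch; "DB01" and the drive letters are hypothetical:

```powershell
# Report size, free space inside the file, and last backup time for every database
Get-MailboxDatabase -Status |
    Format-Table Name, DatabaseSize, AvailableNewMailboxSpace, LastFullBackup -AutoSize

# Move the database and its logs to separate dedicated volumes
# (the database is dismounted for the duration of the move)
Move-DatabasePath -Identity "DB01" -EdbFilePath "D:\DB01\DB01.edb" -LogFolderPath "L:\DB01"
```

Because Move-DatabasePath takes the database offline while files are copied, schedule it outside business hours on a stand-alone server.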
Concepts of Database Management

One thing that makes Office 365 (or Exchange Online) very interesting is that you don't have a way to manage your databases. You cannot access the properties of your databases and configure settings and options, because this is all handled by Microsoft. For some administrators that is a bit stifling because they want to get their hands under the hood and alter settings.

What kind of settings can we alter? Well, one thing we've mentioned in this chapter is mounted databases. You can mount or dismount a database. When mounted, it is online and accessible by users to get their mail. When dismounted, it's offline and inaccessible, and maintenance tasks can be performed on the database.

Configuration of Databases in Exchange 2013

If we go into the Properties of a database in Exchange 2013 we see four tabs: general, maintenance, limits and client settings. The general tab gives information about the database location, backups and so forth. You can see the last backup (full or incremental) and other important information.
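Mounting and dismounting can be done from the EAC, but the shell equivalents are one-liners (the database name here is hypothetical):

```powershell
# Take the database offline for maintenance (users lose access while dismounted)
Dismount-Database -Identity "DB01" -Confirm:$false

# Bring it back online
Mount-Database -Identity "DB01"
```

Remember that every dismount stops the matching Store.Worker process, and every mount starts a fresh one.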
The General tab of Mailbox Database Properties

The maintenance tab allows you to configure standard journaling (discussed in Chapter 5: Regulatory Compliance), set the schedule for database maintenance, and configure a few other options, including enabling circular logging.

The Maintenance tab of Mailbox Database Properties
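Everything on these property tabs has a shell counterpart. For example, the circular logging checkbox on the maintenance tab and the storage limits on the limits tab map to parameters of a single cmdlet. A sketch with a hypothetical database name and example values:

```powershell
# Enable circular logging -- only advisable when something else (e.g. replication)
# protects you, since old transaction logs are discarded
Set-MailboxDatabase -Identity "DB01" -CircularLoggingEnabled $true

# Storage limits and retention windows (days given as day.hh:mm:ss timespans)
Set-MailboxDatabase -Identity "DB01" `
    -IssueWarningQuota 1.9GB `
    -ProhibitSendQuota 2GB `
    -ProhibitSendReceiveQuota 2.3GB `
    -DeletedItemRetention 14.00:00:00 `
    -MailboxRetention 30.00:00:00
```

The retention parameters here correspond to the deleted item and deleted mailbox settings discussed below.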
Circular logging allows transaction logs to be overwritten or purged. This is fine if you don't need to worry about the transaction logs being retained (for example, if you use replication as a backup solution), but otherwise you don't want to enable circular logging, because it eliminates your logs, which are helpful if you have to restore a backup of your database after corruption or disk failure.

The Limits tab of Mailbox Database Properties

The limits tab allows you to specify the amount of storage the mailboxes in the database are allowed to have. You can see in the figure above that there are defaults, but these can be adjusted. There are three stages: issue a warning, prohibit send, and prohibit send and receive. When the final limit is reached, the server stops receiving email for the mailbox and the end-user cannot send. You can also see that there are deleted item and deleted mailbox settings. When a user deletes something from their Inbox it goes into the Deleted Items folder. If they delete it from that folder it isn't gone yet. They have 14 days (by default, unless altered here) to recover that deleted item before it is purged. So, deleted item recovery is a great option for mailbox item-level
recovery. The same is true of deleted mailbox retention. If an employee quits or is fired and the administrator is told to delete their mailbox, that mailbox is still available for restoration for 30 days (by default). If the person returns within that timeframe you can restore the mailbox without resorting to a backup.

The final tab, client settings, has the option to configure the offline address book. An offline address book is very helpful if an end-user needs to send emails but cannot connect to the Exchange server directly (maybe they are on a plane) and cannot access the live Global Address List (GAL). The offline address list still lets them find addresses. Note: the entire GAL may not be in that list; that's why it is configurable. The admin can determine what that address book contains.

The Big Takeaways

The Exchange Mailbox server is all about the databases. Mailboxes live in those databases, managed thanks to the Extensible Storage Engine (ESE). All mail coming in and out is handled by the Mailbox server, and mail items, calendar items, contacts and tasks are all stored in the database. Database architecture has evolved and been enhanced over the course of Exchange's lifetime. The current iteration is designed to allow for cheaper storage solutions (like JBOD), no longer requiring the expensive SAN solutions we needed in times past (although many admins still prefer higher-performance disks). Email comes into the server through memory, into transaction logs and into the database .edb file. A checkpoint file keeps track of which logs have been committed and which haven't (to ensure recovery if there is a crash). Those logs are truncated upon full/incremental backup (or purged automatically if you enable circular logging on the database). The database has properties you can configure, including limits and deleted item/mailbox retention times. And there you have it. Time to move on to Chapter 4!
Chapter 4: Recipient Management

There are so many different types of recipients for an Exchange server. That may sound odd, because you may be thinking the only recipient type is a user who is assigned a mailbox. But there are many recipient types, some of which have mailboxes and some that do not. But let's not jump too far ahead of ourselves. This chapter focuses primarily on the most obvious recipient type, the user mailbox. Then we will review other recipient types and conclude with a discussion of public folders (now called modern public folders). Ok, so let's get started!
User Mailbox Management

User mailboxes are the most common recipient type. Each mailbox is associated with an Active Directory account. The account in Active Directory may already exist, and you create a mailbox that connects to that account. Or perhaps you have to create both at the same time. Which method you choose may depend on whether the user was created prior to your installation of Exchange. Or it could be that you don't have permissions to access or create accounts within Active Directory, so you cannot create the end-users and have to wait for the AD admins to perform that task. Either way, these two must co-exist (the AD account and the mailbox) to have a user mailbox in Exchange.

When you create the mailbox for an end-user you can determine which database is going to hold that mailbox. If you have more than one database, you can put the mailbox in the one that is best for that end-user. Sometimes you need to balance out your databases, and you can move mailboxes from one database to another when you need to.

Creating these accounts is super easy. No matter which version of Exchange you are using, it's not rocket science to create new user mailboxes (or delete them if you so desire). And the real beauty of the user mailbox is in how easily end-users connect to their mailbox once it's complete. Because they are logged into their computer on the network with their AD credentials, when they open up Outlook (let's assume all modern stuff here, Exchange 2013 and Outlook 2013, so I don't have to explain pre-Autodiscover) the application can automagically find the Exchange server thanks to a service called Autodiscover. Credentials are passed, the mailbox is automatically located and the end-user is connected up. Seriously, this is an awesome feeling the first time you install an Exchange server, create a mailbox and see that user open Outlook and connect. It makes me feel good with every new installation.
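For completeness, here is roughly what those two creation paths look like in the Exchange Management Shell, enabling a mailbox for an existing AD account versus creating both at once. All names, UPNs and database names below are made up:

```powershell
# Mailbox-enable an existing Active Directory user, choosing the target database
Enable-Mailbox -Identity "contoso\jsmith" -Database "DB01"

# Or create the AD account and the mailbox in one step
$pass = ConvertTo-SecureString "P@ssw0rd!" -AsPlainText -Force
New-Mailbox -Name "Jane Smith" -UserPrincipalName "jane.smith@contoso.com" `
    -Password $pass -Database "DB01"
```

If you omit -Database, Exchange 2013 picks a database for you via automatic mailbox distribution.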
And even better is when you can send and receive email internally as well as externally (which requires a bit more
configuration to ensure mail can flow externally, but let's not get too far ahead of ourselves).

User Mailboxes in Exchange 2013

Ok, so you've created this user mailbox. They can connect to it through Outlook and send email (internally only, to start). Now what? What configuration is available to you? There are so many things you can configure with user mailboxes. You can configure them one at a time through the Exchange Admin Center (or select multiple mailboxes and configure in bulk), or you can use the Exchange Management Shell (the PowerShell command-line method, if you recall from Chapter 1). You might configure the following:

Individual mailbox quotas (that take precedence over the ones set for the mailbox database)

Deleted item retention settings (that also take precedence over the ones set for the mailbox database)
Message size restrictions (send and receive maximum message sizes)

Mailbox features (enable/disable) like Unified Messaging, mobile device support, Outlook Web App (OWA), etc., as well as policy choices

Send As, Send on Behalf Of, or Full Access mailbox delegation settings

Note: Let's just dive deeper on that last one. Imagine you have a VP of Sales whose assistant needs to be able to send mail out, but it has to look as if that mail came from the VP of Sales. If you give the assistant Send As permissions they will be able to do that through their Outlook. However, if you want the assistant to be able to send email but make it clear that it is coming from the delegate, you give them Send on Behalf Of permissions, and in the From: line of the email it will be clear that
the email was sent by the assistant, on behalf of the VP of Sales. Obviously, if you want a delegate to have full permission to open and use the VP of Sales' mailbox as if it were their own, you can give the assistant Full Access permissions.

Other Recipient Types

We're not going to deep-dive into all of these, so if they don't all make perfect sense don't worry about it right now. Exchange admins may never even use some of these recipient types. Like this first one: linked mailboxes are only used in specific deployment scenarios, so if you aren't in an environment with Exchange deployed in a resource forest, you may never need one.

Linked Mailbox: Accessed by users in a separate, trusted forest (often used when Exchange is deployed in a resource forest).

Groups: There are several different types of groups you can create with Exchange; the two most common are distribution groups and dynamic distribution groups.

o Distribution groups have a static membership (that you configure manually), and when you send an email to that group's address, it goes to all the members in the group.

o Dynamic distribution groups are based on criteria through filters, so that persons are added or removed from the group based on their attributes. The membership of the group is derived at the time any given message is sent. So imagine a group where the criterion is location (Orlando), and a person named Sue is transferred to the Orlando branch office. The moment her location attribute in AD is changed to Orlando, she is automatically added to that dynamic distribution
group. Should her location change, she will be removed automatically as well.

Resource Mailboxes: These are great for keeping track of equipment and scheduling the use of company equipment as well as meeting locations. This mailbox type is created solely for scheduling, not for sending email. There are two types:

o Equipment mailboxes can be used for scheduling and keeping track of projectors, laptops, even company cars or whatever needs to be managed.

o Room mailboxes can be used for scheduling meeting locations (like the conference room, the auditorium, the lab, the training room and so forth).

Mail Contacts: These are contacts that have a mail-enabled object in Active Directory, but the email address is external (meaning there is no on-premise mailbox; the mailbox is handled by someone else such as Gmail or Yahoo), so you can find them in the Global Address List (GAL), but they are contacts, not user mailboxes. You might use these for outside contractors who work for your company, so that users can locate their email addresses easily in the GAL.

Mail Users: These are users who have accounts in Active Directory, so they can log into your network and onto an AD domain, but they don't have an on-premise mailbox. They have external mailboxes. These might be for temp workers you don't want to give mailboxes to but who need to be easy to email, or other workers who need a login to the domain but not a mailbox.

Shared Mailboxes: These are usually set up for collaboration purposes, like a shared sales or support address, so that multiple people can be
given Full Access, Send As or Send on Behalf Of access, and those persons can then monitor and/or send from the common account.

Note: In addition to shared mailboxes there are two other collaboration mailbox types: site mailboxes and public folder mailboxes. Site mailboxes are new to Exchange 2013 and require both Exchange 2013 and SharePoint 2013 to allow for a new form of collaboration through the Outlook 2013 client. They aren't widely used yet, however.

Modern Public Folder Mailboxes

In every version of Exchange up to Exchange 2013, when you wanted to use public folders for collaboration within your organization you had to create a separate database for them. Now, I might be jumping ahead here because I'm assuming you know what public folders are, but you might not, depending on your experience and the work environments you've been in, so let me back up.

A public folder structure is usually started with a clear purpose in mind, so that end-users can collaborate or share information easily with others in their organization. Perhaps it is designed by location to start (as you can see from the figure to the right). Typically an admin will create the top-level folders and assign permissions to others to manage them going forward. Oftentimes public folders sprawl out of control and become the dumping grounds for all sorts of material that goes out of date. Preventing users from creating top-level public folders and requiring them to request public folders through a Service Desk can keep this feature from sprawling and turning into a dumping ground.
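Stepping back for a moment, each of the recipient types and delegation settings described above can be created or granted from the shell. A hedged sketch; every identity below is hypothetical:

```powershell
# A static distribution group and a dynamic one keyed on the City attribute
New-DistributionGroup -Name "Sales Team"
New-DynamicDistributionGroup -Name "Orlando Staff" `
    -RecipientFilter "(RecipientType -eq 'UserMailbox') -and (City -eq 'Orlando')"

# Resource and shared mailboxes are just switches on New-Mailbox
New-Mailbox -Name "Conference Room A" -Room
New-Mailbox -Name "Sales Inbox" -Shared

# The three delegation levels on a user mailbox
Add-MailboxPermission -Identity "VP Sales" -User "contoso\assistant" -AccessRights FullAccess
Add-ADPermission -Identity "VP Sales" -User "contoso\assistant" -ExtendedRights "Send As"
Set-Mailbox -Identity "VP Sales" -GrantSendOnBehalfTo "assistant"
```

Note that Send As is an Active Directory permission (hence Add-ADPermission), while Full Access is a mailbox permission; the distinction trips up many new admins.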
Public Folder Permissions

In legacy versions of Exchange, one of the benefits of the way public folders were designed is that you could provide availability and faster access to them by creating replicas. So imagine a large organization with a branch office in London and one in Trinidad. Let's say you had a top-tier folder for each location. You could have an Exchange server in each location with a public folder database on those Exchange servers. Perhaps the original public folder for London was created on the Exchange server in London. You could create a replica of that public folder and place it on the server in Trinidad. This would accomplish two things. There would now be a duplicate of that folder, and if folks in Trinidad wanted to access it they could get to it faster because it is now local, instead of going over the wire to the London server for that folder and its contents.

Starting with Exchange 2013 it doesn't work like that anymore. Now you create a public folder mailbox and select a mailbox database to host it. Then you add public folders into that public folder mailbox. So the separate database for public folders is gone, as is the separate replication architecture for replicas.
Public Folder Mailbox in Exchange 2013

Public Folders in Exchange 2013

As with legacy public folders, you can see from the figure above that you can have subfolders nested within your top-tier folders, and you can mail-enable a public folder so that folks can email items to it directly. You can also configure folder permissions through the EAC as well as through the Outlook client.
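Setting up modern public folders follows the same pattern: create a public folder mailbox first, then the folders, and optionally mail-enable them. All names here are hypothetical:

```powershell
# The first public folder mailbox created becomes the primary hierarchy mailbox
New-Mailbox -PublicFolder -Name "PF-Mailbox-01"

# Create a top-level folder and a nested subfolder
New-PublicFolder -Name "London"
New-PublicFolder -Name "Projects" -Path "\London"

# Allow users to email content straight into the folder
Enable-MailPublicFolder -Identity "\London\Projects"
```

Because the public folder mailbox lives in an ordinary mailbox database, it inherits whatever protection that database has, which leads into the availability point below.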
When designing your public folder top-tier layout you will want to put folders in mailboxes that are close to the people who will access them often. With modern connectivity speeds this may not be a problem for folks accessing a public folder located in another location (but that will depend on the connection speeds of the remote location). On the plus side, because the public folders are now in a normal mailbox database they come under the availability features (aka the DAG) that we will discuss in Chapter 6. Now, if you have an existing public folder structure and you need to migrate over to this new structure, it's going to take some research and work on your part.

The Big Takeaways

There are a lot of different recipient types, but the most commonly used is the user mailbox. There are a lot of configuration options for user mailboxes, including individual quotas, deleted item retention, features enabled/disabled and more. Additional recipient types include linked, shared, and resource mailboxes as well as contacts, groups and more. Each has its own function within your Exchange environment (they've thought of pretty much everything). Collaboration mailboxes like shared mailboxes and site mailboxes (when configured to work with SharePoint 2013) help collaboration within your organization. Public folder mailboxes (modern public folders) are a legacy collaboration solution with a modern twist. Now we create a public folder mailbox in a mailbox database (as opposed to a public folder database) and add our public folders (and/or subfolders) to that PF mailbox. Now it's time to get into some legalese with you folks in Chapter 5 and Regulatory Compliance.
Chapter 5: Regulatory Compliance

What do you think of if I say Enron, WorldCom or Tyco? These were highly publicized corporate and accounting scandals. They exposed a lack of proper supervision and a lack of strong regulation against the fraudulent practices taking place in major corporations. As a result, billions were lost and public confidence was shaken. With Enron, one of the world's largest accounting firms, Arthur Andersen, collapsed due to evidence that was brought up from company email. To combat this unfortunate trend toward fraud, the result has been a slew of new laws that improve government auditing of businesses. These problems, combined with issues like employee harassment cases (sexual or bullying), have led to an increase in regulations, many of which apply to email communications within an organization, email being a primary form of auditable
communication.

Note: It's impossible to track and audit every conversation (some are discreetly held in the shadows), but email is another story. That's where the Exchange admin comes in.

What is Regulatory Compliance?

Basically, there are a variety of regulations that currently exist, including:

Sarbanes-Oxley Act of 2002 (SOX)
Securities and Exchange Commission Rule 17a-4 (SEC Rule 17a-4)
Gramm-Leach-Bliley Act (Financial Modernization Act)
Financial Institution Privacy Protection Act of 2003
Health Insurance Portability and Accountability Act of 1996 (HIPAA)
Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act of 2001 (aka the Patriot Act)
European Union Data Protection Directive (EUDPD)

Some of these you may be aware of. These are just the highlights; there are many more in effect, and different countries have their own versions of these laws. How do these laws affect you? How do you determine what you need to do for your organization? Well, every company is different. If you are in the healthcare, government, or financial fields, then there are regulations that apply to you that may not apply to other organizations. For example, you may be obligated to retain communication data longer than another company. It's essential that you have a legal advisor (or team of advisors) to make sure you are clear on what you are obligated to do.
Now, while some organizations have a legal person or team to assist (or even have compliance officer positions), smaller businesses need some help in this regard. To assist, there is a site called Business.USA that provides smaller businesses with resources.

Let's answer a quick question: whose side are you on in a legal case against members of your organization? Answer: the company's side. Not the individual members, even if you know them and like them. Remember, if a person went outside the bounds of the law and, as an example, sexually harassed someone in the company, you need to be able to protect the company. The company must be able to supply the communications to either exonerate the person of wrongdoing, or provide transparency to show that it doesn't condone the behavior. If it can't provide such communications it can be held liable for not being compliant with the regulations. So if you are responsible for the Exchange environment you need to know the technologies involved and know when they truly meet enterprise-grade regulatory compliance, and when they don't.

There is one line from Sarbanes-Oxley worth keeping in mind, not to scare the reader but to show the seriousness of complying with investigations. It appears in Section 802(a) of SOX, 18 U.S.C. Ahem <cough><cough> yikes! Now just so you know, I have not heard of any of my fellow Exchange admins going to jail, but it's good to see the seriousness of this part of an Exchange admin's job with regard to complying with company policy and legal requirements.
What it Boils Down To

So what is regulatory compliance? Following the laws for your business. And in the end what does that mean? Primarily, it means being able to discover all relevant communications (email, IM, etc.) for the period of time stipulated by your corporate policy, which should be based on the legal requirements. It also means being able to prove through auditing that no tampering has taken place (i.e., to prove whether an email was read by an individual, to prove that someone else didn't send an email that is inappropriate or illegal, and so on).

Focusing on the first part, let's say your organization is responsible for discovering relevant messages for 7 years. This is not a backup (although in times past that's all we had) and it's not availability (which is just the current state of your existing operation); it is retention of relevant data (and by relevant we mean no spam/junk). It's an archive of your email data. In the past we may have relied on backups of the data, but those have not proven to be as reliable as an official archive, nor are backups easy to search and discover information in, so they prove time-consuming in the event of litigation against your organization. So, if you were asked to provide all emails from Joe User to Jane User from 3 years ago between the months of March and July, would you be able to do it? Without an enterprise-grade archive solution, one that captures all relevant email and makes it tamper-proof, and one that provides impressive discovery tools, that might be a ginormous task.

Note: Some companies that do not have strict legal requirements to retain data have begun to establish corporate policies that openly state that they retain nothing. 15-day retention and that's it. While this may be legal in their case, it may not be wise. In the case of litigation your accuser will have their proof, but you won't. Why let the opposition be the only one with cards at the table? Legally speaking, that is.
Exchange Regulatory Compliance Features

Every flavor of Exchange in the past 10 years has improved upon the regulatory compliance features available to administrators. Here is a list of the current line-up of built-in tools (some of which are enterprise grade and others that are not quite there yet):

Personal Archive

You may be thinking "Awesome, Exchange has a built-in archive solution, that's going to save me money." Well, not so fast. Note the word personal at the front. To explain, see if the following scenario sounds familiar: You're working away and you get a message that your mailbox quota has been reached. That's funny, you think, because I just deleted stuff last week. But all that stuff is sitting in your Deleted Items, so it is still part of your mailbox size. And your quota is a stingy 1 GB (let's say), so you need to clean house a bit (Inbox, Sent Items, Deleted Items, etc.). Easy enough, so what do you do? You've been taught how to create a .pst file for your mail. This file is on your desktop system; you can create it through Outlook and move tons of email over to it, yet still access it from Outlook easily when you need it. A great solution! That resolves the quota problem, right?

Indeed it does resolve the quota problem. However, it creates a new problem for admins. Discovery becomes harder, if not impossible, when end-users have the ability to pull email to their desktops (which may not be backed up). The solution? The personal archive feature in Exchange (introduced with Exchange 2010). This is like a second mailbox in the sense that you can place the archive in the same database as a person's existing mailbox, or you can put it in a completely different database (on a completely different system or drive).
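Enabling an archive is a one-liner per user; placing it in a different (cheaper) database is just one more parameter. The identities and database names below are hypothetical:

```powershell
# Add a personal archive to an existing mailbox, hosted in a separate database
Enable-Mailbox -Identity "jsmith" -Archive -ArchiveDatabase "DB-Archive-01"

# Optionally raise the archive quota and its warning threshold
Set-Mailbox -Identity "jsmith" -ArchiveQuota 100GB -ArchiveWarningQuota 90GB
```

Putting DB-Archive-01 on inexpensive disks is exactly the cost play described in the next paragraph.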
The value here for admins who are crazy about performance and mailbox quotas (due to the expense of putting mailboxes on high-performance, expensive disks) is that you can put the archive on cheap disks and you can give them larger quota sizes (or unlimited if you like) so that your end users now have a place to put old email without moving
them to .pst files. Archived messages can be searched just like messages in your primary mailbox (both are indexed for search). The benefit for the users is that now they can see the archive even when connecting through Outlook Web App (OWA), something they couldn't do with a .pst file.

Note: You cannot access your personal archive through ActiveSync (mobile device) connections.

So, can the personal archive feature assist with being compliant with regulations? It can. It helps ensure email is discoverable and not floating around on desktops. However, this feature is not quite enterprise grade. Part of the reason is that end-users still have the ability to delete their own email. Let's see how you can prevent that with our next feature.

In-Place Hold (also Legal/Litigation Hold)

One of the biggest nightmares for compliance with Exchange 2013 is that out of the box users have the ability to delete their own mail permanently. Unless you place mailboxes on legal hold an end-user can delete incriminating evidence before it is backed up. Legal Hold (aka Litigation Hold) is a feature in Exchange that allows an administrator to stop users from deleting email. They don't know their mailbox is on hold, so they still delete items, but the items are still discoverable.

Note: The Recoverable Items folder is used with In-Place Hold when items are deleted. They are removed from the user's view but because they are in Recoverable Items they can be discovered through an In-Place ediscovery search.

The update to Legal Hold in Exchange 2013 is called In-Place Hold. This allows you to focus not on the entire mailbox but on specific items and timeframes. You can specify what you want to hold by using keywords, senders and/or recipients, start/end dates, and message types (email/calendar items, etc.).
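Both flavors of hold are configured through the Exchange Management Shell. The mailbox name, search name, query and dates below are made-up illustrations:

```powershell
# Litigation Hold: preserve everything in the mailbox, indefinitely
Set-Mailbox -Identity "joe.user" -LitigationHoldEnabled $true

# In-Place Hold: preserve only items matching specific criteria,
# created as a mailbox search with the hold flag set
New-MailboxSearch -Name "Contoso-v-Joe" `
    -SourceMailboxes "joe.user" `
    -SearchQuery '"project falcon"' `
    -StartDate "01/01/2013" -EndDate "06/30/2013" `
    -InPlaceHoldEnabled $true
```

The same New-MailboxSearch cmdlet (without the hold flag) underpins the ediscovery searches covered next.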
This all sounds cool, doesn't it? But there are two issues with it. First off, you don't typically put a mailbox on hold unless requested to by HR, Legal, whomever. And by that point the person (if they have half a brain) has already deleted stuff. (The odd thing about deleting inappropriate or incriminating email is that you can only delete one end of it. The other person still has it!!! And if they are in the same company it's still in Exchange!) Now, you could be very proactive and just put all mailboxes on legal hold, but that would just bloat your database really fast. Possible, but not your best course of action.

My honest feeling on this is that your best course of action, to ensure a solid enterprise-grade archive that is untouchable by end-users but helpful should they need to restore accidentally deleted email, is to go with a third-party, cloud-based solution. My personal choice in this regard is Mimecast. All email coming in/out/internal gets archived. End-users know it, so that is already a deterrent from them using your email system to send stupid/incriminating email. It's all held in the cloud and easily discoverable. End-users can restore emails they delete but cannot delete emails from the archive. Perfect solution!

In-Place ediscovery

At times your organization may be required to discover email content within Exchange (or Exchange Online). Reasons may be due to corporate policy, regulatory compliance or a good ol' fashioned lawsuit. Someone says inappropriate emails have been sent and you're asked to provide Legal with all correspondence between all persons involved. An overwhelming task without tools for that level of discovery (aka ediscovery).
Now with Exchange 2010 we had a feature called Multi-Mailbox Search, which wasn't super awesome. I didn't like it personally but it was free and built-in. The next level in Exchange 2013 is called In-Place ediscovery, which is a much more polished version. Using In-Place ediscovery you can estimate your search results, preview the results, copy the results to a Discovery mailbox or even over to a .pst file for transport elsewhere. Again, you can only discover what you have, right? So the discovery tool is only part of this puzzle. It's good to see it improving. And with Exchange you can give folks in authority (like HR) their own access to perform ediscovery so that you don't have to stress about it as the Exchange admin.

Messaging Records Management (MRM)

You can tell users they need to clean up their mailboxes but oftentimes they misunderstand. They may ignore you completely or delete stuff only to have it sitting in the Deleted Items folder (which is still a part of the mailbox and takes space). It was clear to Exchange developers that an automatic way to help out the end users was needed. Email lifecycle management is essential. Messaging Records Management was introduced with Exchange 2007 (MRM 1.0) and it involved Managed Folders. This evolved into MRM 2.0 in Exchange 2010 and now 2013, and it's a combination of Retention Policies and Retention Tags to assist with retaining messages that are important to retain and removing messages that are non-essential. The value of a solution like MRM is that you can reclaim wasted storage space and help end-users be more productive by removing clutter. You also help in retaining the important messages for a necessary period of time. So, to help you start thinking about when you might use this, imagine again the scenario above where you have end-users deleting items but leaving them in their Deleted Items folder. What if there was a policy that said all email in the Deleted Items folder would be deleted after 90 days?
That would resolve the bloat there. What if you had another policy that said all mail past
the 180 day mark would be moved to the personal archive (assuming you are using that feature)? Can you see how this feature might help, because it works automatically instead of leaving it up to your users?

So there are retention tags with retention settings that can apply to the whole mailbox or to specific default folders (like Inbox or Deleted Items) or to specific messages (depending on the tag type). These tags are added into retention policies, which are then applied to end users. Note: you can only have one policy applied per user. The Managed Folder Assistant (a service running on the Mailbox role) applies the policy to the end-user's mailbox. There are three different types of retention tags:

Default Policy Tag: Applies retention settings to untagged mailbox items
Retention Policy Tag: Applies to default folders like Inbox, Deleted Items, etc.
Personal Tags: Allow users to apply retention settings to folders they create or to individual items

Retention Tags in Exchange
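The two policies imagined above (purge Deleted Items after 90 days, move older email to the personal archive at 180 days) map directly onto retention tags bundled into a retention policy. Tag, policy and mailbox names here are hypothetical:

```powershell
# Retention Policy Tag: permanently remove anything sitting in
# Deleted Items for 90 days
New-RetentionPolicyTag -Name "DeletedItems-90" -Type DeletedItems `
    -RetentionEnabled $true -AgeLimitForRetention 90 `
    -RetentionAction PermanentlyDelete

# Default Policy Tag: move untagged items to the personal archive
# after 180 days (archive moves are done via default or personal tags)
New-RetentionPolicyTag -Name "Archive-180" -Type All `
    -RetentionEnabled $true -AgeLimitForRetention 180 `
    -RetentionAction MoveToArchive

# Bundle the tags into one policy and apply it to a mailbox
New-RetentionPolicy -Name "Default-MRM" `
    -RetentionPolicyTagLinks "DeletedItems-90","Archive-180"
Set-Mailbox -Identity "jane.user" -RetentionPolicy "Default-MRM"
```

The Managed Folder Assistant then enforces the policy on its next pass over the mailbox.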
Journaling

Journaling is an interesting feature in that it allows you to set up a separate mailbox (that you call your journal mailbox) and you allow email to be recorded into that mailbox for easy review or to use as a form of archive. Microsoft makes it clear that journaling may not satisfy your legal requirements for regulatory compliance, so don't think of it in the same category as an archive; however, it may help. One way it might help is if you perhaps have a company or legal policy that requires a review of employee/client communications. With journaling you can easily review the communication by searching through the journal mailbox, which captures the email traffic you have specified (depending on your configuration settings). There are two flavors of journaling: Standard journaling, which is performed at the database level and has the journaling agent focus on that entire database (and you choose if you want incoming/outgoing or both journaled). Premium journaling allows the agent to focus at a granular level on individual mailboxes or distribution groups (and it requires an Enterprise CAL, or client access license).

Note: Office 365 doesn't allow access to the databases so you cannot perform standard journaling; however, some of the plans you can choose will allow you to perform premium journaling. Make sure you know the features allowed in the plan you choose.

Transport Rules

Transport rules are one of my favorite Exchange features ever since they showed up in Exchange 2007. They're similar to junk email rules that you might set up for your Inbox, the difference being that Inbox rules work when a message is delivered whereas transport rules are handled in transit.
Basically there is a transport rules agent that processes the rules created, so messages are analyzed as they move within your organization against the criteria you establish. Now in legacy Exchange (2007 and 2010) transport was handled by the Hub Transport server role (and, if used, the Edge Transport server role in the perimeter). In Exchange 2013 the rules and the agent are handled by the Mailbox server role. The way you create a transport rule is to determine the condition, the action and any exceptions. So here is the basic anatomy of a rule:

Anatomy of Transport Rules

It's NOT that complicated, and that is one of the things I love most about transport rules. Now as for the practical uses for these rules, they include the following: You may need to apply disclaimers to messages that leave your organization, and rather than leave it up to end-users to configure this in their signature you can configure a Disclaimer transport rule.
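A disclaimer rule like the one just described can be created with a single Exchange Management Shell command. The rule name and disclaimer text are placeholders:

```powershell
# Append an HTML disclaimer to every message leaving the organization.
# If the disclaimer cannot be inserted, wrap the original message instead.
New-TransportRule -Name "Outbound Disclaimer" `
    -FromScope InOrganization -SentToScope NotInOrganization `
    -ApplyHtmlDisclaimerText "<p>This email and any attachments are confidential.</p>" `
    -ApplyHtmlDisclaimerLocation Append `
    -ApplyHtmlDisclaimerFallbackAction Wrap
```

The condition here is the From/SentTo scope pair, the action is the disclaimer, and exceptions (none shown) would be added with the rule's ExceptIf parameters.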
You can prevent inappropriate content from entering or exiting your organization based upon specific key words or file types. You can filter information that you have listed as being confidential, or redirect inbound/outbound messages for inspection and approval before being sent. You can track or archive messages that are sent from specific individuals or teams of individuals to ensure compliance is met.

Data Loss Prevention

Data Loss Prevention (DLP) is a new feature in Exchange 2013 and what it does is piggyback off of transport rules to help protect your organization from giving out sensitive information. Sometimes people are so comfortable with email, and trust so implicitly that email is a safe means of communication, that they put their personal sensitive information in it: banking information, SS#'s, credit card information, etc. DLP policies allow us to configure special kinds of transport rules for Exchange to look for this sensitive data being sent. Policies can be outright enforced, or they can be enabled as tips so that users are warned by a Policy Tip that they are about to violate a policy (although this requires users to use Outlook 2013). There are different ways to implement DLP policies (like creating your own custom policy from scratch) but the easiest way is to pick a policy from one of the templates. You'll note in the following figure that there are templates based on US laws and if you could scroll you would see that there is a mixture of other laws included from other countries. Note also that the definitions for each template make it clear that the use of the policy doesn't ensure compliance with any
regulation. The names are there to help you make mental connections, not to imply compliance.

DLP in Exchange 2013

Mailbox/Administrator Audit Logging

Auditing is very important in compliance. You have to be able to show that the mailbox owner is the one who sent the email in question and not someone with access permission. Even administrators have to be auditable in that they can prove who they gave permissions to. Everything can be logged. Note everything CAN be logged, but not everything is logged by default. You have to enable mailbox audit logging and, oddly enough, this has to be done through PowerShell and the Exchange Management Shell. With administrator audit logging it's interesting that everything is logged as cmdlets in PowerShell. So if the administrator creates a new mailbox you will not see the Exchange Admin Center
method he used but the New-Mailbox cmdlet that ran in the background as a result. Everything done in the web-based console is actually performed through PowerShell commands behind the scenes, so all commands are recorded with the exception of the Get- and Search- cmdlets, because these do not make changes.

Auditing Options in Exchange 2013

Information Rights Management

IRM is one of my least favorite features in this list. Information Rights Management (IRM) is a solution to assist with information/data leakage by establishing rights-protected content. Traditional methods that assist with this include encryption, company policies and such, but IRM works with a feature called Active Directory Rights Management Services (RMS) so that you can restrict recipients with regard to what they can do with an email they receive. For example, do you need to prevent an email from being forwarded or printed? Do you want to prevent an email from being cut and pasted using the Snipping Tool? Do you want to
give an email or attachment an expiration date? That's what IRM is for. Now you might be thinking, "but J. Peter, these are awesome capabilities that IRM enables, how could you NOT like it?!" Well, look, here is the problem. First off, it's not the easiest thing to set up and configure. Second, IRM is a deterrent, not an absolute prevention feature. For starters, it only works for in-house recipients. So once that email leaves your organization you lose that control. And even though you cannot use the Snipping Tool or Print Screen to capture emails, you can use a third-party tool. Or you can take a picture of your screen with those nifty camera phones the kids are all the rage about these days. Of course we ol' timers could just take a pencil and paper out and jot down the content. So it's a deterrent to a degree, but I'm not convinced the effort involved justifies the rewards obtained (aka the ends do not justify the means in this case).

The Big Takeaways

The world scene is ever changing. With scandals mounting, the decision to fight back with stricter regulations and harsher penalties seemed to be the only resort for lawmakers. To be in compliance requires a great deal of effort, and with email being such a key player in corporate communications it falls to the Exchange admins to institute proper levels of archiving and ensure ediscovery and so forth. Exchange is increasing its compliance management features with each release (and in Office 365) but currently there are aspects to these features that are not enterprise-grade, or they simply require too much effort to ensure compliance, whereas a third-party solution that handles the archive side with robust ediscovery tools may be worth the price tag. Listen to your legal team, do your homework on third-party options, and make wise choices. Oh, and stay out of jail!
Chapter 6: High Availability and Site Resiliency

High Availability with Exchange. No subject has been as much fun to discuss over the past few Exchange editions (starting with Exchange 2007). I feel this is one of the most valuable features in Exchange, and the Exchange developers only continue to make it better and better. Obviously at this level, the primer level, we won't be going into the nitty gritty of it all. But by the end you'll understand that there are ways to ensure availability of your Exchange environment (and I don't mean your Mailbox servers only).
To truly grasp the present, we have to take a step into the past once again.

High Availability and Site Resiliency

In the old days we talked about ways to make servers more fault tolerant. In other words, they can lose a hard drive but still keep working. To accomplish this required redundancy: multiple power supplies, multiple drives configured in mirrored sets or RAID 5 arrays. More expensive hardware and software solutions were developed. Servers were clustered to either balance the load between them, improve availability or provide for higher performance by combining processing ability. Increasing the availability of your servers and services was possible, but it was going to cost you lots!!!

Exchange, as a mission-critical application, became a heavy focus with regard to availability promises and Service Level Agreements (SLAs). SLAs might be made between a buyer and seller, between a company and a set of contractors, even between the company and its own employees (individuals or teams). It's a pledge of sorts that they will maintain a certain level of quality work, often demonstrated by servers being up (uptime) and available to serve. Obviously uptime is worthless if the services aren't available, so just having a server continuing to be up does us no good. This is where the concept of business continuity replaces the discussion of fault tolerance from times past. Hardware, software, application, network and, ultimately, end-user access to all of this. You can have everything running, but if the user can't access it from where they are, it might as well not be running at all.

Note: SLAs often indicate the amount of uptime in percentages (like 99.9%) with some kind of restitution provided should this not be met.
It's not only important to have redundancy and availability of a disk (with databases), but of the whole server, of multiple servers, of connections internally, of connections to the Internet. You don't want a single point of failure to keep people from accessing their email.
And so you do your best to eliminate those single points of failure (and yes, that can get expensive). You may even have a secondary datacenter set up to allow for site resiliency. Or you might use a branch office as your secondary location. Two terms you may hear are Recovery Time Objective (RTO) and Recovery Point Objective (RPO). These are usually part of an SLA and they stipulate the RTO (acceptable time without service being available) and RPO (how much data, past and present, must be restorable within the RTO). In the past these were negotiable terms, and companies knew that the faster the recovery and the more data that needed to be restorable, the greater the expense. But in modern times you often hear people say the RTO is "immediately" and the RPO is "up to the nano-second." They want no downtime and they want all their data. Now that is a challenge, and Exchange has been designed well to try and handle that challenge.

Mailbox Server Availability and Resiliency

The secret to mailbox availability and resiliency lies right in the very database architecture we discussed in Chapter 3. Do you remember we talked about email coming into a server and going first into transaction logs and then into the .edb database (aka the active copy of the database)? Well, if they can go into one database, why couldn't those little 1 MB files be shipped over to another server with a duplicate of the database and be played (or re-played) into that database to provide for a secondary copy (aka a passive copy)? And so that is what they did!

Continuous Replication

Starting with Exchange 2007, a feature called continuous replication was introduced where the database is initially copied and the log files are shipped over and replayed constantly to keep the passive copy of the database up-to-date.
Now you might be thinking, there has to be more to it than that, right? Well, that's the underlying concept. Obviously there were some kinks to work out in the beginning.

CR and Exchange 2007

With Exchange 2007 there were several continuous replication offerings (only one of which might truly be considered high availability, because it used clustering services and was automatic). Let's review the 2007 offerings:

Local Continuous Replication (LCR). LCR is a single-server solution that uses asynchronous log shipping and replay from one set of disks to another. The solution is one that requires a manual switch to move from the primary copy of the data to the secondary. It's called the "poor man's cluster" by some, although there is no clustering technology used. The problem with this solution is that it only provided disk redundancy and resiliency. So if the server died, the mailbox server was off-line.
Cluster Continuous Replication (CCR). CCR is a clustered solution that only allows for 2 nodes in the cluster, where one is the active node and the other is the passive node for automatic failover. This solution allows for two different systems and two different sets of storage, offering a greater level of availability because single points of failure are eliminated. Asynchronous log shipping and replay is used in this solution to keep the database up to date between the active and passive copy of the data. This was the only true high availability offering because clustering services were used and so the process for failover was automatic. It offered both server and disk redundancy, but more was needed for site resilience.
Note: CCR uses a feature called the transport dumpster to help ensure the passive copy is brought up to date with email that may not have been shipped over. The transport dumpster holds on to email for a brief period of time. It's been replaced by Safety Net in Exchange 2013.

Single Copy Clusters (SCC). SCC is NOT a continuous replication solution. It was available with Exchange 2007 RTM though, so I thought I would mention it here. It offered a solution similar to what we had in Exchange 2003: multiple systems with a single storage group that is shared between the nodes of the cluster. Again, we have one active server with one or more passive servers waiting for a failover. This solution did provide for server redundancy, and the disks were usually on some kind of an array with RAID providing the fault tolerance for the disks.

Standby Continuous Replication (SCR). SCR was introduced in Exchange 2007 SP1. It has the same overall features as LCR in that there are no clustering parts being used, and the goal is to allow for availability from one site to another. It's a continuous
replication solution with a manual failover if something goes wrong. OK, so this is what we had with Exchange 2007; now let's move forward to see what they did with Exchange 2010.

Database Availability Groups (DAG)

So, you recall with Exchange 2010 that storage groups are gone and the focus is on the database, right? Well, something else that is gone is all of those high availability (and semi-high availability) offerings. What?! Yep, everything you just learned — LCR, CCR, SCR (even SCC) — is not in Exchange 2010. All gone. But continuous replication has been retained and put to work in a new solution called the Database Availability Group (DAG). And it is awesome, seriously. So, DAGs use continuous replication. There is an active copy of the database (that's the one everyone still connects to for their mail). And you can have a passive database copy too. In fact, up
to 16 servers can be part of a DAG. The design options are outside this book's scope, but suffice it to say 16 servers means you shouldn't be losing your availability all that easily if you design and deploy this properly. In addition to continuous replication, a DAG uses clustering components like heartbeats and a witness server (with a witness directory) to connect members of a DAG and maintain quorum. OK I know, I know, I just used too many unfamiliar (unexplained) terms and concepts and you felt nauseous. Let's go through this slowly.

A heartbeat is a simple method that servers in a cluster use to check in with one another to ensure they are still alive (so to speak). It's essentially a way for the servers to say "I'm OK!" back and forth between them. Does the lack of a heartbeat indicate the server is down? Well, it could mean that. If you have two servers, one has the active database and the other has the passive, and the passive no longer gets the heartbeat from the active, it thinks "I must step up here and become the active!" But what if the network cable just got cut (or something crazy like that)? If the passive becomes the active while the active is still up and running, this is bad. Like "crossing the streams" bad (look it up). The terms we use in the clustering world are "split-brain syndrome" and "world chaos." Neither sounds good, right? So how do we prevent this? Well, two servers aren't going to cut it. You need at least a third server to be the witness. And with a DAG it doesn't have to be another Exchange server; it can be any server acting as a witness server with a witness share. This file share witness resource is really only needed when there is an even number of DAG members, and so you need the witness as a referee to maintain quorum.

Techie Note: As you add servers to the DAG and go from odd to even (2, 4, 6, etc.) the quorum model is changed
automatically from a Node Majority to a Node and File Share Majority model.

Now quorum is a term that is used in many situations and it applies to a voting process. It's a consensus of voters. So in our scenario above with 2 servers, if there is a third file share witness to provide quorum and the line is cut between the active and passive, the passive will check with the witness to see if it should take over. If the witness can still communicate with the active, the passive stands down. Now expand this concept and consider that we might be dealing with multiple sites and multiple passive copies of a database. A lot of communication has to continue working for this to succeed.

You might be thinking, do I really need all those passive copies? Well, if the active fails, the passive will automatically go into action and your end users will never know there was a hiccup. If two go down, a third would be nice. If the site goes down, it sure would be good to have a secondary location (branch office or datacenter of some sort) with another passive or two. You can see how this could get expensive too. Each server is hardware (even if virtualizing) and software and licensing and administration time and maintenance and so forth. Another value to the passive copies is the ability to apply a lag on the log replay during the configuration. This will make it so that if a virus or corruption hits your active database you have time to revert back to a lagged copy that wasn't affected.

To create a DAG you simply create the Database Availability Group and give it a name. You then add the members to the group (these would be Mailbox servers, and they can only be members of one DAG at a time). And then you can consider which of your active databases you would like to have passive copies of on the other members of the DAG.
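Those three steps (create the DAG, add members, add database copies) look like this in the Exchange Management Shell. The server, database and witness names are hypothetical:

```powershell
# 1. Create the DAG with a file share witness on a non-Exchange server
New-DatabaseAvailabilityGroup -Name "DAG1" `
    -WitnessServer "FS1" -WitnessDirectory "C:\DAG1"

# 2. Add Mailbox servers as members (up to 16 total)
Add-DatabaseAvailabilityGroupServer -Identity "DAG1" -MailboxServer "MBX1"
Add-DatabaseAvailabilityGroupServer -Identity "DAG1" -MailboxServer "MBX2"
Add-DatabaseAvailabilityGroupServer -Identity "DAG1" -MailboxServer "MBX3"

# 3. Create passive copies of DB1; the second copy lags log replay by 2 hours
Add-MailboxDatabaseCopy -Identity "DB1" -MailboxServer "MBX2" -ActivationPreference 2
Add-MailboxDatabaseCopy -Identity "DB1" -MailboxServer "MBX3" `
    -ActivationPreference 3 -ReplayLagTime 02:00:00
```

The ReplayLagTime on the third copy is what produces the lagged copy described above.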
Lagged copies or database copies can also exist in remote datacenters to provide a high availability solution to your end users. Consider the DAG configuration below: There are 3 servers and they are all in the same site in this case. Each has an active database (just one, for simplicity). We've created a DAG and put all three servers in the DAG as members. We decided to make 2 passive copies of each database, and one of the passives is a 2 hour lagged copy (just as an example). If System 1 goes down, System 2 will be ready to make the Passive DB1 into the active until you repair System 1. Theresa Miller says think of it like RAID, but instead of disks you are using databases.

DAC Mode

One thing we haven't talked about is failing over between sites. Is this possible? Absolutely. And if everything is OK with regard to the connections to the secondary site, it is no different than if the server holding the passive copy was local. However, what happens if you have two sites (a primary datacenter and a secondary datacenter)? Imagine having 2 Exchange Mailbox
servers in each site with a witness in the primary site. What happens if the primary site has a blackout? Well, it's 3 to 2 and the 3 are down from the loss of power, right? So at that point, even though we have 2 fully functional servers in the secondary datacenter, they will not mount their passive copies and immediately provide a failover. They have no way of communicating with the witness to provide quorum. So are you dead in the water? Well, in this case, if the blackout were to continue for a period of time and it was decided you need your email up and running (perhaps you have workers in some locations with power, and they simply need to access their email), you can perform a manual switchover (not a failover). To perform the switchover you have to manually shrink the cluster to regain quorum. So this is where DAG provides site resiliency, not as a high availability process but more as a disaster recovery process. It's not automatic, but it doesn't take long to do the switchover.

Now, this is great news. You can manually step in and get email back up and running with your passive copies. But what happens if the blackout ends and your primary site comes back online? What if the Internet connection is down and there is no way for those servers to know the others are serving as the active now? That could be a problem. They wake up and think "hmm... guess the other servers are down," not realizing they were the ones that went down and the others are active. That's again bad. Split brain at the database level. To resolve this problem, a feature called Datacenter Activation Coordination mode (DAC mode) was created. It's a property setting that you configure (through the Exchange Management Shell) when you create your DAG (or when you extend it to another datacenter or AD site). If you lose power and have to manually switch over to your secondary site, and the power is restored to the primary datacenter while the WAN link is down,
DAC mode prevents the servers in the primary datacenter from mounting their databases even though they think they have quorum.

Exchange 2013 DAG

While there were huge changes in high availability offerings between Exchange 2007 and 2010, the changes between 2010 and 2013 are more subtle. We still use DAGs with 2013 and everything you just learned still applies. But they have continued to improve the solution. One improvement is the ability to place your witness server in a third location. This would allow for a site failover in the event one site loses power, because the other site would still be able to communicate with the witness (in the third site). It's still not a perfect solution because there are many ways the quorum could be disturbed by WAN connection outages and so forth. In addition, there have been some improvements with regard to automated maintenance for lagged copies (which is connected to another new feature in 2013 called the Safety Net, which replaces the transport dumpster feature). Some minor improvements like automatic database reseeding after a storage failure and a few other tweaks have also been added. To learn more about Exchange Server 2013 Database Availability Groups (with lots of graphics), check out this great article by Exchange MVP Paul Cunningham on his blog, ExchangeServerPro.com.
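The DAC mode setting described above is a single property change on the DAG, made through the Exchange Management Shell as noted. The DAG name is hypothetical:

```powershell
# Turn on Datacenter Activation Coordination for an existing DAG
Set-DatabaseAvailabilityGroup -Identity "DAG1" -DatacenterActivationMode DagOnly

# Verify the setting took
Get-DatabaseAvailabilityGroup -Identity "DAG1" |
    Format-List Name,DatacenterActivationMode
```

With DagOnly set, a recovering primary datacenter will not mount databases until it can coordinate activation, avoiding the split-brain scenario described above.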
Non-Mailbox Server Availability and Resiliency

I know it feels like the only important server in the world is the Mailbox server, but that's not true. Obviously you want to make sure your databases are accessible by Exchange, so you put a lot of effort into that plan. Perhaps you have a backup/recovery solution you use, and a DAG for availability, and so forth. But all of that comes to a screeching halt if you don't have other, non-mailbox server considerations in order. First off, with legacy Exchange (2007 and 2010) the other server roles had to be considered. Now the Hub Transport was easy: to provide redundancy and availability all you had to do was add another HT server into your environment. The Edge Transport was/is similar in that you add a second one, copy over the settings (manually or clone it) and you're moving forward. The Client Access role needed redundant servers set up (remember you can put server roles together, so you didn't necessarily need lots of new servers, just servers with multiple-role deployments of Exchange). With CAS servers you needed to provide the load balancing portion. You would do this, in most cases, with a hardware load balancer, and you would put your CAS servers in an array.

With the CAS role in Exchange 2013, remember the CAS handles ALL client connectivity (although it proxies or redirects back to the Mailbox server, so it doesn't do any data rendering like its legacy-version older sisters). Because CAS no longer keeps any state or session data it doesn't need a Layer 7 load balancer; it can work with Layer 4. However, application-aware load balancers (aka smart load balancers or Application Delivery Controllers) are the norm and are better in my opinion. Sure you can try DNS round robin in a lab (with no real load balancing) or you can play with Windows Network Load
Balancing (NLB), but for production environments you'll want a hardware or virtual load balancer.

Note: We don't discuss OSI model layers, so when I use the terms Layer 4 and Layer 7 that may be a bit new. You can do some quick research on this, but ultimately Layer 7 is smarter and higher up the chain (the application layer) as opposed to Layer 4 (the transport layer).

Here you can see an example of two load balancers (these are KEMP LoadMasters) so that we have redundancy of even our load balancers for higher availability. We have 2 CAS servers and 3 Mailbox servers configured as members of a DAG.

Aside from your Exchange servers, you have to make sure you have redundant domain controllers, DNS servers, routers, switches, WAN connections, etc. And you may look to alternative, third-party solutions to provide some form of continuity. In the event your Exchange environment goes down (either on-premises or cloud-based with Office 365), it would be nice to have an alternative way to keep working (aka continuity).
The Big Takeaways

There is so much to learn from this chapter. It may require two or three reads to get it all straight, possibly some additional research. But ultimately here is what you want to remember: Exchange uses continuous replication (starting with Exchange 2007) to ship transaction logs over to a passive copy of the database (or databases, with DAG) in order to provide a way to failover or switchover to another server if necessary. That server could be in the same site or it could be in another site, but the goal is to provide little or no downtime. The solution has evolved over the years and in its current form it's called Database Availability Groups (DAG). You can have up to 16 members in a DAG. The configuration options and the design and deployment aspects of DAGs are limitless. It's worth researching DAG design options to see how these can truly help your organization obtain high availability and site resiliency. In the end, however, DAG will keep your Exchange environment available for that point in time. But that won't help you if you need to reach back 5 years for discovery purposes (so don't forget the need for an archive) or if you need to restore something from a few months back that is no longer in deleted item recovery. In those cases an easily accessible archive or a backup solution will be needed.
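To make the DAG discussion concrete, here is a hedged Exchange Management Shell sketch of building a small two-member DAG; every name here (DAG, servers, witness, database) is invented for illustration:

```powershell
# Create a DAG with a file share witness on a non-Exchange server.
New-DatabaseAvailabilityGroup -Name "DAG1" -WitnessServer "FS01" -WitnessDirectory "C:\DAG1"

# Add two Mailbox servers as members (up to 16 are allowed).
Add-DatabaseAvailabilityGroupServer -Identity "DAG1" -MailboxServer "MBX01"
Add-DatabaseAvailabilityGroupServer -Identity "DAG1" -MailboxServer "MBX02"

# Seed a passive copy of an existing database onto the second member,
# which starts the continuous replication of transaction logs.
Add-MailboxDatabaseCopy -Identity "DB01" -MailboxServer "MBX02"
```

From there, Get-MailboxDatabaseCopyStatus is the usual way to confirm the copy is healthy and replaying logs.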
Chapter 7: Unified Messaging

Next to Exchange high availability, in terms of topics I love to talk about, is Unified Messaging. People have so many misconceptions about the solution. Some are outright afraid of playing with it! I'm here to tell you it's not crazy hard to learn and it's not going to require a telephony overhaul (that's called LYNC!, one of the most difficult solutions I've ever played with, so don't expect me to pen a Conversational Lync any time soon; I'll farm that out. Maybe a Baby Talk Lync if I were to write it. But I digress.) Unified Messaging (UM) includes services designed to provide a universal Inbox of email, voicemail and (if configured) incoming faxes. If implemented within your environment you'll
have a variety of great new features added to your end-users' world, services that they may already have in some form through another vendor but will now come through Exchange directly.

Unified Messaging 101

So, within your organization you probably have a PBX (or modern IP-PBX) that handles your incoming calls. Think of the PBX like a phone server. You may never touch it (or even see it), but if your company has 10, 100, or 1000 people working in it, you realize that there are telephone experts (in the field of telephony) who make that all happen for your company. Now, you probably have a voicemail part to your environment as well. So you have a company number, you get an extension, and if someone calls and goes to your extension and you don't pick up, they leave you a voicemail. Simple enough, right? Ok, so again, the UM services provided in Exchange (starting with Exchange 2007) aren't looking for you to throw out your entire telephony infrastructure but rather for you to break off one little piece of it: the voicemail piece. With UM, if configured properly, you can have voicemails left for users and these will be placed in their Inbox as an MP3 (or some other audio format) and can even be transcribed in the email itself! How cool is that?!

Outlook 2013 with a Voicemail and Voicemail Preview
Note: With Exchange 2007 there was an incoming fax portion provided, but with 2010/2013 you can configure this if you have a partner fax server solution with a URI provided by the fax solution provider. So, basically, Exchange 2007 could also receive fax calls, but 2010/2013 require you to use a third-party fax service to which it will send the incoming fax calls. With Exchange 2007 there was a specific role called the Unified Messaging role. And this carried forward with Exchange 2010. You could install it with other server roles (not the Edge, but all the others). However, with Exchange 2013 going down to 2 roles, as you recall from Chapter 2, the UM services have been placed on the Mailbox role and they are installed automatically when you install the Mailbox role. Not to worry, if you don't use the UM services they won't interfere with the performance of your Mailbox server in any way, so you don't have to worry about turning these services off.

Unified Messaging Features

There are a lot of cool features that come with the UM role. Let's take a look at a group:

Outlook Voice Access (OVA): Users can call their Inbox and access their voicemail, email, calendar, and contacts (all read to them with text-to-speech) and they can update things (like their schedule) using voice.

Voice Mail Preview: Uses speech-to-text to take a voicemail and put a text preview of it in your Inbox. It's not perfect, but it uses a best-guess method for words it doesn't know.

Incoming Fax: Can be configured after you establish a relationship with a fax vendor; faxes will then be sent to your Inbox as a .tif file.

Call Answering Rules: Like Outlook rules for phone calls, these rules allow end-users to determine how they want calls to be handled.
Play on Phone: Allows users to play their voicemails on a phone, rather than through the computer's speakers (for a bit more privacy).

Auto Attendant: Lets Exchange answer the phone for a department or an entire company, with either default or company-specific prompts to help people navigate to the right person in your company, or be provided information you want to automate. You can have the user respond with voice or DTMF. DTMF stands for dual tone multi-frequency and yes, you can forget that immediately. Just remember, DTMF means using your keypad. If the auto attendant cannot understand you, or if you simply prefer to use the keypad, it's good to have that configured to use DTMF (or have an alternate DTMF auto attendant ready).

Language Packs: Allow you to configure alternative languages for your UM services. Depending on the language (and if there is an available language pack) you can have auto attendants configured in the language of the caller, with voicemail preview transcription provided as well for some languages.

Message Waiting Indicator: Always good to know you have a message.

Missed Call/Voice Mail Notification Texts: Again, good to have notification capabilities.

Making UM Work

First of all, I never recommend the IT admins or Exchange admins, who already have enough to do and learn, dive into the world of telephony. That is a respectable field in and of itself and experts already exist for it, so rely on them. Your expert or team of experts should be heavily involved in your configuration plans for UM integration.
However, it doesn't hurt to dabble a bit in another field, right? So let's do a little dabbling and you'll see it won't kill you. To start with there is the Public Switched Telephone Network (PSTN). Think of the PSTN like the Internet of phones; it's what lets you pick up a phone in Alabama and place a call to Zimbabwe just by dialing a number. Some phone company provides the lines coming into your company, including connecting them to the PSTN. Now you have to know that if you have 1000 people in your company they don't literally run 1000 phone lines into it. Instead they provide what are called trunk lines. Depending on the discussed needs of the company, a ratio is decided upon to ensure there are enough lines available when people pick up their phones to work. Obviously a call center will need more lines through the trunk lines than a normal business. The lines come in and are configured to work through PBXs or IP-PBXs. These are devices that ultimately allow you to have a phone extension and be able to call the cubicle next to yours or outside to a pizza place for lunch. Again, we're not looking to break this; we LIKE pizza. Now, if you have a legacy PBX in your environment you have to first check to see if it will work with UM. If not, you will need to replace it (and you might as well go 21st century with the IP-PBX). If your legacy PBX will work with Exchange, you'll need to purchase a VoIP gateway, which pretty much translates the legacy PBX communication method into an IP-based method so that traffic can go over your network wires to your server. If you are already using an IP-PBX, then you should be able to get it to work with the UM services if the type of IP-PBX is supported. So, imagine this better through the simplified drawing here with an AudioCodes VoIP gateway.
The Super Cool UM Lab

Ok, so I don't want you to freak out and think you have to go out and buy all that stuff to get UM up and running in a lab. In fact, when I work with it, demo it, etc., I use a simple lab setup using an AudioCodes MP-114 VoIP gateway (and it's awesome): 2 cheap phones plugged into the FXS ports, two lines out for FXO if I want to test that, and a line for network connectivity. Now, the actual configuration of the gateway takes some learning (again, the telephony team is a must, but this is still dabbling). Step-by-step configuration is provided to help you get UM up
and running though, so you don't have to over-stress about the complexity. Oh, I should mention: FXS (Foreign Exchange Station or Subscriber) is an interface that drives a telephone, delivers battery, etc. FXO (Foreign Exchange Office) connects to phone lines. PBXs have both FXO and FXS interfaces, and the telephony speak can stop here.

Configuring the UM Services

Remember, Exchange databases and such have been around for 20 years. Veteran Exchange admins will tell war stories about the ol' days in the trenches with it. But UM was just released in 2007 and many admins have still never played with it. So, it's one of those subjects that offers even footing: your own modern battle to boast about overcoming, perhaps. UM does have its complexities too, so don't let anyone fool you into thinking this is a solution you simply enable and it works. You will have to learn a bit about the configuration side of it, starting with dial plans. Dial plans, IP gateways, policies and auto attendants are all part of the fun of configuration that is Unified Messaging. Remember, this is a primer, so we'll pitch this underhand and easy. Let's start with the dial plan. A dial plan is a way to tell Exchange that there is a set of extensions that belong together and are, let's say, 4 digits. You can configure only one dial plan per user. So if the New York office has extensions already set up for their PBX of 4 digits starting with 2000, you might have one extension be 2001, and then 2002, 2003, etc. So your New York office will have a dial plan that specifies a 4-digit extension length. A user who is in New York and is UM-enabled would have the New York dial plan configured for her user account; let's say she's assigned one of those extensions. Now, each dial plan has to know where the real PBX or IP-PBX is at, and so there is also a UM IP gateway configured, which can use the IP address or FQDN of the VoIP gateway or the IP-PBX for communications.
Ok, so let's take a step back. Once you have a good understanding of your existing telephony infrastructure and have your new VoIP gateway ready (if necessary, unless using an IP-PBX), you go to your Exchange Admin Center and need to configure UM (which is already installed in Exchange 2013). So, what do you do next?

First, create a UM dial plan to match the extensions set up in your PBX or IP-PBX.

Second, create and configure your UM IP gateway(s) so Exchange knows where the PBXes are on the network.

Third, configure the UM mailbox policy so that user mailboxes get the right UM settings.

Fourth (optionally), create and configure the UM auto attendant(s) so that Exchange can answer the phone for your company or department.

Once complete, you UM-enable end-users, associate a policy with their account and configure their PIN settings (unless set to be done automatically).
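Those steps can be sketched in the Exchange Management Shell. This is a hedged illustration only; every name, address, and extension here is invented, and your dial plan settings must match your own PBX:

```powershell
# 1. Dial plan matching the PBX's 4-digit extensions.
New-UMDialPlan -Name "NewYork" -NumberOfDigitsInExtension 4 -CountryOrRegionCode 1 -URIType TelExtn

# 2. UM IP gateway pointing at the VoIP gateway or IP-PBX on the network.
New-UMIPGateway -Name "NY-Gateway" -Address "10.0.0.50" -UMDialPlan "NewYork"

# 3. A default UM mailbox policy is created along with the dial plan; confirm it.
Get-UMMailboxPolicy | Format-List Name,UMDialPlan

# 4. Optional auto attendant answering on pilot extension 2000.
New-UMAutoAttendant -Name "NY-AA" -UMDialPlan "NewYork" -PilotIdentifierList 2000 -SpeechEnabled $true

# Finally, UM-enable a user with a policy and an extension.
Enable-UMMailbox -Identity "jsmith" -UMMailboxPolicy "NewYork Default Policy" -Extensions 2004 -PinExpired $true
```

The -PinExpired switch forces the user to set a new PIN the first time they call Outlook Voice Access.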
One thing to keep in mind is that the UM mailbox policy is the way you can enable/disable certain user features with UM. For example, you can turn Outlook Voice Access on/off, or Play on Phone, etc. It's here that you can configure the inbound fax server URI, and a host of other configuration pieces.

UM Mailbox Policy in Exchange 2013

I'll be honest, it's a bit tough to show you all the different screens and settings in all of this. It's too much to show you everything here, but if you are really interested in how it is all configured I recommend you check out my video training courses through Pluralsight. I demonstrate how to configure just about everything in Exchange, including Unified Messaging.
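Those per-user feature toggles live on the policy object, so a small Exchange Management Shell sketch shows the idea (the policy name and fax URI are hypothetical, and the exact URI format comes from your fax provider):

```powershell
# Toggle UM features for everyone assigned this policy:
# turn off Outlook Voice Access, keep Play on Phone, and set a partner fax server URI.
Set-UMMailboxPolicy -Identity "NewYork Default Policy" -AllowSubscriberAccess $false -AllowPlayOnPhone $true -FaxServerURI "sip:fax.example.com:5060;transport=tcp"

# Review what the policy now allows.
Get-UMMailboxPolicy -Identity "NewYork Default Policy" | Format-List Allow*
```

Because the settings sit on the policy rather than on each mailbox, a change here flows to every UM-enabled user associated with that policy.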
The Big Takeaways

Unified Messaging provides a universal Inbox for your end-users so that now they can receive voicemail right in their Inbox (and, if configured, incoming fax). You can dabble in a lab environment with this, but if you plan on deploying it for real, make sure you have telephony experts to assist with configuration of your PBX/IP-PBX connection to your Exchange environment. The process for configuring UM on the server side is to create a dial plan and then a UM IP gateway. Configure the UM mailbox policy, perhaps establish auto attendants, enable end-users and start testing. And remember, UM also allows for features like Outlook Voice Access (OVA), so you can check your email on your phone and alter your schedule if need be. Late for a meeting? Call OVA and let it be known. UM is worth testing and implementing. At least, in my opinion.
Chapter 8: Exchange Virtualization

Virtualization may be a new concept for you, so we'll start this off by explaining what virtualization is and how it can benefit an Exchange (or any server) environment. There is a good deal of debate about virtualizing Exchange (parts of it or all of it), but some of that is simply the inability of people to change and accept technology (yes, even if they work with technology).
"In my day, we didn't have no virtual-zation mumbo jumbo! If you wanted to have multiple OSes on one computer you used dual booting!!! Sure, you could only run one OS at a time! But we liked it, we LOVED it!" (a la SNL's Dana Carvey).

Virtualization 101

Virtualization is a big buzzword these days. It's more than a marketing term, though. In short, think about your computer and all that hardware running, and then think about your operating system and how you have to install that on top of your hardware (on the bare metal, so to speak). What are you really doing? You install the OS program on the hard drive, right? Well, when your computer boots up and looks at your hard drive it finds out where the OS is located and begins booting up your system. That OS is installed on the bare metal. But you know what? That makes life so much harder when you want to install an OS, because you have to worry about the type of hardware: do you have all the drivers, what if the server outgrows the hardware, can you move it? And that is where virtualization comes in. Virtualized systems have a hypervisor. The hypervisor is a thin layer that sits between the hardware and the OS itself. So now you can install things much more easily because it's not directly on the bare metal. Now, initially we had what are called Type 2 hypervisors. These you installed right on your desktop and they allowed you to install another OS on top of your existing OS. So I remember running Windows Vista with VMware's Workstation (a Type 2 hypervisor) running Server 2003 R2. I could perform testing and do demonstrations with the virtualized server I had. The server itself thought it was running right on bare metal. But in reality it was running on top of an OS with a hypervisor in between. The negative here is that this made for such a huge performance hit. The better approach was to go with a Type 1 hypervisor.
A Type 1 hypervisor (like VMware's ESXi or Microsoft's Hyper-V) sits right on top of the bare metal. And the virtual machines (or VMs) sit on top of that. Each virtual machine (VM) requires processor power and memory, but most modern systems are so powerful that they are being underutilized these days. So it's pretty awesome that you can utilize them more fully by running different server types (ones that have solutions that cannot be installed on the same server, so you need multiple servers anyway). Now, the hypervisor itself is not the cool part anymore. The cool part is the management solutions that are used on the back end to keep track of all your VMs and also assist with backing up the data and being prepared to help move VMs when necessary, or have them be redundant and fail over to another system if necessary (if one server crashes). These are features that modern virtualization management solutions provide.

Benefits to Virtualizing Exchange

We have to be honest here: Exchange runs best on the bare metal. Exchange MVP Clint Boessen summed it up best on his blog when he said Exchange performs best when it can interact with the physical components of a server directly. If you disagree with this statement that's usually a symptom exhibited right after a VMware conference - hopefully it will go away. So when we speak of benefits we are primarily speaking of benefits to your environment as a whole. For example, virtualizing Exchange servers brings server reduction (which is a reduction in power needed to run those servers, and a reduction in heat generated and cooling necessary). It also brings server consolidation, which is great for space savings and helps reduce underused servers in your environment. Most admins would agree that server management is easier too when your servers are virtualized. These are all good things. The negative is that there are complexities and costs that may be hidden at first.
And admins need to have the skills to administer it properly (and if they don't, at this time they may fit the grumpy legacy admin at the outset of this chapter).
Exchange Virtualization

Exchange admins have been virtualizing Exchange from the moment the technology existed. At first they learned that it wasn't always best for their Mailbox servers (some still believe that). But the truth is, Microsoft didn't support it. We still did it, but Microsoft didn't begin supporting virtualization of Exchange until August of 2008. Now, you may be wondering which vendors are supported by Microsoft for virtualization of Exchange. Well, obviously Microsoft's own Hyper-V (that makes sense). And yes, VMware, Citrix, Red Hat and a host of others who are part of the SVVP (Server Virtualization Validation Program). Which solution is the best one for virtualizing Exchange? I'll be honest, it doesn't matter. If it is supported, it's supported. I'm sure if you ran tests (and they have) and peered into the nanoseconds of performance response and such, VMware would eke out a win over Hyper-V. But that's not really the deciding factor here. Which virtualization solution are you most comfortable working with? Which one do you already use? Keep using it, so long as it is supported. One thing to keep in mind is that although Microsoft supports virtualization of Exchange, there are also lots of rules with regard to doing it. For example, Exchange 5.5 and 2000 are not supported at all in production in hardware virtualization environments. Exchange 2003 is supported (barely) and there are some key conditions you have to meet. And then things start to loosen up from Exchange 2007 forward. There is a support document covering the legacy Exchange virtualization conditions, and there is a special Exchange 2013 virtualization page as well.
Best Practices for Exchange Virtualization

I liked a quote from one of our reviewers, Phoummala Schmitt, in an article she wrote on the Petri site. Over the years speaking and writing about this subject you pick up a handful of best practices to help folks avoid making mistakes. Here are a few:

Don't Oversubscribe Memory: Some hypervisors have the ability to let you oversubscribe or dynamically adjust the amount of memory available to the guest VMs. While this may work for some workloads, it doesn't work for Exchange, because Exchange uses memory on an ongoing basis, so it won't release it, and that causes problems. So dynamic memory allocation shouldn't be used with Exchange.

Avoid Virtualization Sprawl: Sometimes admins try to squish too many VMs onto one server. With Exchange you are encouraged to provide the appropriate processor, RAM, storage and network connectivity that you would a physical server (a little extra wouldn't hurt). You don't want your Exchange server to be hurting for resources. A rule of thumb for virtual hosts is that they consume CPU overhead of 5-10% (but there is no absolute number here). Exchange does support a 2:1 virtual processor to logical processor ratio (but even in this case it's better if you go 1:1).

Ensure Multiple Network Connections: VMs are sometimes configured on a server with all of them using the same network
connection. That could be improved upon with a simple quad-port network card so you have port density for your virtualized servers.

Use Pass-Through iSCSI: Although you have the option of going with .vhd files, you will experience better performance (especially with your Mailbox servers) if you go with pass-through storage. (Network-attached storage, or NAS, is not supported.) Also, dynamically expanding disks or disks that use some kind of differencing or delta mechanism are not supported (disk size must be fixed). Oh, and snapshots are not supported (they aren't application-aware yet).

Note: You can use the Exchange Server Role Requirements Calculator to size your storage properly. It's an Excel spreadsheet provided by the Exchange Team and it's an amazing tool. There is a 2013 version (and there are legacy versions too).

Live Migration and vMotion technologies (along with others of a similar nature) are supported for Exchange relocation for a planned migration of your VMs; however, failover activity at the hypervisor level just results in a cold boot when the VM is activated at the target. Hyper-V's Quick Migration is not supported.

Note: Microsoft really wants us to use DAG for our server failover and switchover needs with Exchange. Keep that in mind.

If you want to test your Exchange environment (virtualized or not) you can use tools like Jetstress and LoadGen to mimic real-world stress on your environment to ensure you've designed it well. Jetstress tests the performance of the disk subsystem and LoadGen will simulate client connectivity (and perceived traffic loads).
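The fixed-disk and static-memory guidance above can be sketched with Hyper-V's PowerShell module; this is an illustrative sketch only, and the paths, names, sizes and core counts are invented, not sizing advice:

```powershell
# Fixed-size VHDX for the Mailbox server; dynamically expanding and
# differencing disks are not supported for Exchange.
New-VHD -Path "D:\VMs\MBX01.vhdx" -SizeBytes 500GB -Fixed

# Create the VM and pin its memory: no dynamic memory for Exchange workloads.
New-VM -Name "MBX01" -MemoryStartupBytes 32GB -VHDPath "D:\VMs\MBX01.vhdx" -Generation 2
Set-VM -Name "MBX01" -StaticMemory -ProcessorCount 8
```

The -StaticMemory switch is the Hyper-V expression of the "don't oversubscribe memory" rule; the equivalent idea on ESXi is a full memory reservation for the guest.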
Which Roles to Virtualize

Depending on which flavor of Exchange you are running, either 2007/2010 or 2013, this discussion changes a bit. The role that causes the most heated debate is the Mailbox role. So, a little story. I was out speaking at the TEC conference in Vegas. At some point I was speaking about virtualizing the Mailbox role and a man in the back of the room yelled out "you can't virtualize the Mailbox role!", which was technically not accurate, but he meant you shouldn't virtualize it. I thought, how rude, who does this guy think he is? Well, he was an HP expert, and did I mention he was built like one of those Ultimate Fighter guys? I told him to pipe down or we'd take this outside. (Ahem... not quite. I respectfully disagreed with him in a very peaceful manner. He was quite scary.) Why the division? Two experts, two opinions. Well, his experience was driving his comments, and at that time he knew that production mailboxes would do better on physical servers, which in some cases was true, but not all. Remember, support for virtualization of Exchange was only 1 year old and we only had Exchange 2007, so the difference of opinion was warranted. The critics felt/feel that the Mailbox role is so CPU and I/O intensive that virtualizing this role in a production environment is a mistake and performance will suffer. As time has progressed and Exchange 2010 and now 2013 have arrived, we see virtualization of server roles becoming more common, including the Mailbox role. Now, if you aren't convinced about that, you might try virtualizing a member of your DAG for a passive copy, or for a secondary datacenter. So you can keep your production Mailbox server on physical hardware and go with virtualization for your DAG. For Exchange 2007/2010 the Client Access and Hub Transport roles are supported for virtualization. What about the Client Access 2013 server? That is supported too. Just provide the proper resources for it to run properly.
As for the Edge Transport, you CAN virtualize it, but you would only do that if you planned on putting other servers in your perimeter on the
same box (otherwise, why bother?). Some worry about an escape attack, if the hacker goes through the VM, and there have been a few security exploits that may allow that to happen. A virtual machine (VM) escape is an exploit where the attacker can run code to break through the virtualized server and interact with the hypervisor. This would be a very scary situation because the hacker could access other VMs and data. As for the UM role being supported for virtualization, it wasn't with Exchange 2007. Starting with Exchange 2010 SP1 it was supported, but with odd requirements (tons of RAM and only a stand-alone install on the system, no multi-role install). With Exchange 2013 it is fully supported and baked into the Mailbox role. I think that the decision to virtualize Exchange is more complicated than whether or not it works and is supported. For example, the design of the VMware infrastructure is a factor too. I have been in situations where a large enough Exchange farm would warrant its own VMware farm. This is based upon the organization's design principles for VMware. In that case it is important to determine if the extra layer of complexity and support required for VMware is worth the benefits of putting Exchange on VMware. So when making these kinds of design decisions, as to whether to use bare metal hardware, VMware (or Hyper-V), or both, be sure to consider this factor too.

Should You Virtualize Exchange?

There's no single right or wrong answer to that. Take stock of your environment and needs and consider ways virtualization can assist. Perhaps start small and only virtualize servers that aren't critical and see how they perform. Personally, I don't think I've installed Exchange on bare metal in over 5 years. Maybe longer.
The Big Takeaways

First up, never get into a heated debate with an Exchange geek who happens to also look like he could be an ultimate fighter. Next, it's time to embrace the future, because virtualization is a technology that is here to stay. It's one of the pillars of public and private cloud technologies, and so we need to perhaps evolve a bit and compromise with regard to Exchange being installed in a virtualized environment. The biggest key is to ensure you provide Exchange with the same resources (CPU/RAM/etc.) that you would give the server if it were installed on bare metal. Know what is and isn't supported based on your flavor of Exchange. Know the best practices for Exchange and virtualization: i.e., use pass-through iSCSI or fixed .vhds, no memory overcommit or dynamic memory, and so on. Yes, it's true, running Exchange on bare metal is best. But can we gain other benefits by virtualizing Exchange while making the performance hit seem negligible from the end-user perspective? Absolutely. With the right planning and a good foundation for your infrastructure and storage, virtualizing Exchange can be a great option. I currently maintain a completely virtualized Exchange environment for a global company, across 2 datacenters using a stretched DAG. We have no physical Exchange servers in our environment that supports users in over 50 countries.
Chapter 9: Exchange Security

I'll be honest, this one is going to be hard. Hard for me to explain and hard for you to understand. You see, security for Exchange data involves a lot of different pieces. One minute we'll be talking about certificates for your server, another we'll be discussing anti-spam/anti-virus or, if we want to jump to end-users, we can discuss encryption. It's a real hodgepodge and none of it is super easy to grasp. That's why I let it hit the TOC so late in the book. Not to worry, by this point you are no longer newbies. Hey, if you made it through Chapters 1 through 8, you deserve to be here.
Anti-Spam and Anti-Malware Features

Microsoft knows how much junk is sent into Exchange every day. They could leave the junk mail protection to third-party companies (and for a while they did), but they realized some folks weren't doing anything to protect themselves, so they had to bake in some form of protection. Anti-spam features have been evolving for a while in Exchange, but the new anti-malware feature just arrived with Exchange 2013. Even though you have these features built in, many will look for more help, either with an Edge server in the perimeter (partnered up with an anti-virus solution) or something in the cloud. Microsoft recommends their own Exchange Online Protection tool. I personally like Mimecast's protection because it comes with a bevy of additional features. You'll need to do your homework and decide for yourself. Both the anti-spam and anti-malware features are super easy to explain. Once spam or malware (viruses, etc.) becomes identified in the world at large, your anti-spam/anti-malware tools are updated to recognize it (along with a billion other pieces of junk); they stand at the watch and divert (aka quarantine), delete or reject stuff that matches. For example, Exchange has an anti-spam feature called Content Filtering. This was a feature formerly known in Exchange 2003 as the Intelligent Message Filter, and it can examine messages based on keywords, message size and so forth. It then gives the message a spam confidence level (SCL) from 0 to 9. The number is a gauge to indicate if a message appears likely to be spam (9) or not so likely (0), and everything in between. Based on the SCL number you can take actions (Delete, which won't even notify the sender; Reject, where the sender is told; Quarantine, where it will be sent to an address for analysis). The higher the SCL, the stronger the reaction. There are a variety of anti-spam features in Exchange. All of these features try to do one thing: protect your organization from harmful junk.
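To show how the SCL thresholds drive the Delete/Reject/Quarantine actions, here is a hedged Exchange Management Shell sketch; the threshold values and quarantine address are arbitrary examples, not recommendations:

```powershell
# Delete at SCL 9, reject back to the sender at SCL 8,
# and quarantine to a review mailbox at SCL 7.
Set-ContentFilterConfig -SCLDeleteEnabled $true -SCLDeleteThreshold 9 `
    -SCLRejectEnabled $true -SCLRejectThreshold 8 `
    -SCLQuarantineEnabled $true -SCLQuarantineThreshold 7 `
    -QuarantineMailbox "spamquarantine@example.com"

# Review the current content filter thresholds.
Get-ContentFilterConfig | Format-List SCL*
```

Because each action has its own threshold, a message's single SCL score falls through the ladder and triggers only the strongest action whose threshold it meets.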
Sometimes it's just spam, sometimes more, but it's good to have help. Sometimes it isn't enough and you
need to look at doubling up on your efforts to keep spam/malware out. Speaking of malware, Exchange 2013 also has a built-in malware filter. It's not super robust just yet (a version 1.0 product, although the Office 365 flavor is more of a 1.5), but it will detect malware and delete it and/or send alert text if you configure it to do so.

Role-Based Access Control (RBAC)

With Exchange 2010/2013 we evolved permissions in Exchange away from access control lists (ACLs) and moved toward roles. The concept is simple in theory and the underlying permissions themselves are based upon something solid: PowerShell. Ultimately the way permissions are determined is through underlying cmdlets and parameters attached to the roles that are assigned to role groups. So the default roles have existing cmdlets attached that can be altered to make for enhanced roles or lesser roles, depending on whether you add or remove cmdlets and/or parameters. There are currently 12 built-in Role Groups. If you want to give someone permissions on a broad scale, just assign them into one of the default groups. For example, to allow a person to perform discovery searches and place a person's mailbox on legal hold, you add them to the Discovery Management Role Group. If you want to give them control over the entire organization you add them to the Organization Management Role Group. So, there are 12 of these different groups you can utilize, including the following:

Organization Management
View-Only Organization Management
Recipient Management
UM Management
Discovery Management
Records Management
Server Management
Help Desk
Hygiene Management
Compliance Management
Public Folder Management
Delegated Setup

Each Role Group has Roles assigned to it to break it down further. There are 67 different Roles. For example, the Discovery Management Role Group mentioned a moment ago has 2 Roles assigned: the Mailbox Search role and the Legal Hold role. Those roles have entries that are based upon PowerShell. There are cmdlets and parameters assigned to each role to allow a person who is assigned to a Role or Role Group the ability to use the EAC to perform the tasks that utilize PowerShell behind the scenes through cmdlets and parameters, seeing as how all Exchange management eventually ends up with a PowerShell command being run.

By default, the Exchange Administrator is made a member of the Organization Management role group, and that has nearly all the roles assigned. In smaller organizations you might have one or two IT administrators handling the Exchange environment, so they might both be in the Organization Management role group and will be capable of performing all tasks. However, if your organization is mid-to-large in size you might begin delegating others to the various role groups so that they can take some of the load off your plate. In some cases, though, the built-in role groups may not suffice. You may need to create specific role groups using roles of your choosing, and that is also possible.

You'll find RBAC to be quite simple if you stay within the default role groups and roles and administrate the process through the EAC; however, it's obvious that this can become much more involved when you start looking into more granular control through the EMS. Now here is where it becomes interesting. While it is relatively easy to find the Role Groups and definitions for each, and it is even easy to locate the Roles with explanations for these as well,
it becomes a bit of a challenge to locate the entries that go along with those roles. Now if you are reading this and thinking, why would one need to get that involved in the process? I'll tell you. While the default Role Groups and Roles are great (and certainly more extensive than anything we've had in legacy Exchange permission options), they are designed to be flexible and allow for the ultimate in granular permission settings. You can create your own Roles and Role Groups (based off of those Roles). Oftentimes the way this is done is by using an existing Role as your parent and having a child Role strip out whatever permissions you need to change. The one caveat here is that you cannot have a child role with more permissions than the parent. So if you don't know what permissions you are starting with (i.e., the cmdlets and parameters themselves) and only have some foggy explanation about the Role and what it does, you may have difficulty creating the new roles. You need that information, and it has to be easier to get at than it currently is, which is through various PowerShell commands seeking out the management role entries of the roles. There are several tools that can help, but I like the free CodePlex tool RBAC Manager.

There is so much more that can be said about RBAC, but not without going much deeper. The best chapter to read on the subject, in my opinion, is Chapter 12 of the Sybex book Mastering Exchange Server 2013. Really well done. Aside from that you can jump on Pluralsight and watch some of my video lessons about it, which are equally riveting!

Certificates

Talk about a discussion that could be its own chapter. Certificates are part of real-world Exchange, not just lab-world
Exchange. The book Mastering Exchange Server 2013 said, "We think that most people, they don't understand what certificates really are or how they work. Certificates and PKIs are stark naked voodoo mainly because they've traditionally been complicated to deploy and play with." I thought that was a funny line, but ultimately the author was trying to say most folks place the concept of certificates in the same category as the dark arts. We'll steer clear of the difficult parts (no talk of X.509 cert standards or anything weird). Let's just try and explain what a certificate does.

So your clients are connecting to your servers, and they may be using protocols like HTTP, SMTP, POP and IMAP to do it. It's important to secure communications from server to server and from client to server. Secure Sockets Layer (SSL) is used for securing communications (one of the methods). By default, client communications use SSL for encryption for Outlook Web App, ActiveSync and Outlook Anywhere. And SSL requires a digital certificate. A certificate is like a verification card, a way to authenticate that the holder is truly who they claim to be. One Exchange MVP, Lasse Pettersson, likes to compare a certificate to a passport and a Certificate Authority (CA) to a global passport agency that is trusted by everyone. There are three different types that can be used with Exchange, including:

Self-signed: These are automatically created and used by Exchange the moment you install it. They allow Exchange to work out of the box and they are fine for lab environments, but they aren't meant for production Exchange. Imagine a person approaching you and saying, "I'm trustworthy, you can use my services. Here is my card that I wrote and signed myself." That may not impress you as much as if it was signed by someone you knew and respected. So, the self-signed certs are temporary until you obtain the appropriate SSL cert.
Windows PKI-generated: You can set up your own Windows Server with Certificate Services and obtain a PKI cert through your own organization. So, if you are comfortable running your own in-house certificate
authority (CA), this is a viable option. However, many find that the low cost of option 3 and the ease of deployment make it the better choice.

Trusted third-party certs: These are purchased from a trusted certificate authority (CA) for reasonable prices. These certs, when provided by a known CA, are automatically trusted by client computers and mobile devices. If your organization is allowing external access to Outlook Web App, ActiveSync to mobile devices or Outlook Anywhere, third-party certificates are the best option.

Note: With CAS and Mailbox servers residing on separate servers you only have to worry about changing the self-signed certs on the CAS servers, because the Mailbox server doesn't accept direct connections from clients. However, with multi-role servers you have to change the certificate.

Now, one of the most important tasks with a certificate is getting all your names registered. Some of these are known by clients (like mail.companyname.com, or something for each of your offered services: OWA, OA, ActiveSync) and others may be used behind the scenes, like the server FQDN or Autodiscover services (like autodiscover.companyname.com). You may have legacy Exchange servers in your environment and may need a legacy.companyname.com name registered. Because we are looking at registering so many names you want to obtain a Subject Alternative Name (SAN), aka Unified Communications (UC), certificate. These SAN/UC certificates allow you to pay for one certificate (rather than multiple certs) and add all your server names and external URLs to it.

Another option with certificates is for you to purchase a wildcard certificate, something like *.yourcompanydomain.com, so that it covers all the subdomain naming you can come up with. Although wildcard certs work, many are not comfortable with the security implications of having a certificate that can be used open-ended. So SAN/UC certs are preferred because they are created
specifically for the names you provide and are thus considered more secure.

Once you have all your planning done you are going to follow the steps provided by Microsoft to generate a certificate request specifically for Exchange 2013 and then use that request to obtain your cert through your provider. I use GoDaddy; others use DigiCert or some other provider. The choice is yours really. Once you get the certificate you will import it on your server(s) and assign it to services. See, not as scary as they make it out to be, right? No more difficult to grasp than high availability or Unified Messaging, right?

Transport Layer Security (TLS) flows internally in a secured environment. TLS is used with internal communications and it's the latest version of the SSL protocol. At times you also have partners that you connect Exchange to using send connectors. You can configure mutual TLS authentication to provide session-based encryption and authentication. With mutual TLS each server validates the other server's certificate, as opposed to TLS where no authentication is performed or sometimes one side authenticates. When sending to another organization that doesn't have TLS the message will not be encrypted. TLS will only work if both the sender and receiver have it enabled on their mail systems.

Client-Side Protection Concepts

Security is more than server-side; it's client-side too. Some of the best security you can provide for your organization may be the result, not of technology, but of training.
Some additional client-side options include things like dual-factor authentication. This would be something that protects user access to the domain, which would enhance security. The use of an encryption solution may be worth considering. S/MIME is a client-side technology that provides signed or encrypted messages; however, because of the nature of S/MIME messages they may not fall within your company's policy lines, because they cannot be scanned, cannot have disclaimers applied, cannot be inspected, etc., so that may not work for your organization. We discussed the use of IRM earlier on when discussing Regulatory Compliance. It's not a perfect technology but it does offer a few deterrents that add to your security for clients.

One interesting set of training is from KnowBe4. They partnered up with Kevin Mitnick, notorious social engineer, and created training that should scare end-users and train them properly not to click links to bank accounts and things of that nature. They also have mock tests that allow administrators to see if a person has benefited from the training or not.

The Big Takeaways

This chapter is a cornucopia of different security considerations: anti-spam/anti-malware options, permissions, certificates and so forth. And we just scratched the surface. Obviously there is more you can do to secure Exchange. Hardening Exchange (as they call it) may include firewall and port usage hardening. Certainly making sure all your servers are up to date and patched is key. If you are using Hyper-V for your virtualization you want to consider using Server Core for your parent. So there are lots of additional security options to consider. But hopefully you have a better idea now of what is involved to protect your mission-critical solution.
Chapter 10: Office 365 (Exchange Online)

What is Office 365? Well, it's a confusing name for a great solution. Its predecessor had an even worse name: Business Productivity Online Suite (or BPOS for short). The reason Office 365 is confusing is because many folks think it is referring to the next flavor of Office, and to a degree they are correct (I'll explain that). But the primary offering is actually Microsoft's hosted versions of Exchange (Exchange Online), SharePoint and Lync. Let's break down what Office 365 is all about.
Clearing Up the O365 Confusion

As mentioned, Office 365 is partially all about the hosted services you can obtain by choosing a package that fits your needs. At the same time it's also about subscription Office (if you pick a plan that includes the Office suite). There are three service family plans: Small Business (up to 25 users), Midsize (up to 300 users) and Enterprise (over 250 users). Even if you have a small business of 10 people you can choose an Enterprise plan if it has the features you need/want. Every plan you choose has a base of services (they all include the Office 365 Platform, all include Exchange Online, all include SharePoint Online, almost all include Lync Online, all include Office Web Apps) and then they vary with add-on services like Project Online, Yammer Online or the Office applications subscriptions. Logically, the plan you choose will have a price tag attached, and this will often drive the decision on which plan is best for you. You want to be careful that the plan you choose includes the features you want. For example, if you get a small business plan you may not have some of the regulatory compliance features you would like to have (like premium journaling). You can always upgrade your plan if you need to, but it would be better to know up-front what your plan supports. These plans are not just based on number of seats; they have enabled/disabled features to consider, and some include Office while others do not.

So, you might be thinking, "Ok, so if I go with Office 365 I get Exchange Online, right? Exchange 2013?" The answer is yes and no. Initially, when they upgraded the BPOS platform to Exchange 2013, you would have gotten that flavor of Exchange. But one of the coolest things about Office 365 and Exchange Online is that they are always making improvements to it. And you don't have to wait for a cumulative update or a service pack to see the improvement or new feature. It's online first!
So you don't get Exchange 2013, you get the latest flavor of Exchange available: Exchange 365. Note: Eventually many of the online features will be provided to the on-prem edition through updates and service packs.
So the Office 365 flavor can be more capable than the on-prem version in some cases. Case in point is the anti-spam features. When Exchange 2013 RTM'd you could only administer anti-spam through the Exchange Management Shell. No EAC option. But with Exchange Online (through Office 365) I can see now that some of the anti-spam features are in the GUI. So I get the latest interface in the cloud version of Exchange. The same is true of your Office applications. You can still buy Office 2013 and install it directly on a desktop for a user. But if you buy the subscription (with your Office 365 plan) your users' Office products will update to the latest features and such immediately. Again, Office 365 gets all the enhancements first, and in some cases may be the only platform to get enhancements. There is no guarantee that a feature will come down the pipe to your on-prem version.

Hosted or Cloud-Based Exchange

I may have clarified what Office 365 is, but not what hosted Exchange is, or Exchange Online specifically. Hosted Exchange isn't a new concept. Providers years back said, "Hey, we can set up Exchange for you and give your company email accounts with their domain name (just point the MX records to us) and you can have Exchange without the stress!" It's a great idea really, and one that smaller businesses (and mid-level too) have appreciated. But those earlier, multi-tenant deployments came with a lot of limitations. As an Exchange admin I couldn't get under the hood and make any real changes. Modern hosted Exchange providers have been evolving so that they provide higher-end services at a reasonable price in order to try and compete with Microsoft's Office 365. Another vendor trying to compete with it is Google Apps, which offers hosted email and services as well, but I think the pendulum has swung back in Microsoft's favor on that. Google Apps is yesterday, Office 365 is today.
In addition to hosted Exchange you can go with a dedicated virtual server that has a full version of Exchange on it. So in that case you have more control over the Exchange environment but don't have to worry about the hardware it is running on. Depending on the organization you work with (healthcare, finance, government), hosted email or Exchange may not be an option for you. You may need on-premise Exchange. But, if that isn't a concern and you are looking to go toward a hosted or cloud-based solution, you need to do your homework and choose one that works best for your needs.

Hybrid On-Premise/Office 365

What some companies are doing is mixing the two options together. Because Microsoft built Exchange and offers O365, they have made it easier for the two to work together. Some call it the best of both worlds. The organization can keep mailboxes in-house that are of a more sensitive nature while allowing Office 365 to handle non-critical mailboxes (like temporary workers', perhaps). Or they can use the archive features of Office 365 combined with on-premise mailboxes. With the hybrid model users can find each other across platforms through a common global address list (GAL) and can share calendar information (aka free/busy data). Exchange admins can use the same Exchange Admin Center tool to administrate both, which makes it convenient as well.

A Tour of Office 365

I personally love working with Office 365. When I log into the Office 365 admin center I'm greeted with an overview of services (as you can see from the figure). I can see immediately if there are any issues with my services and see if there are health issues. I can easily add new users to my portal, pull up reports and more. It's very easy. But that isn't my favorite part. My favorite part is that even though this is a hosted, cloud-based deployment of Exchange for my organization, I can still select the Admin link in
the top right corner and choose the Exchange administration option. This brings up an (almost) fully functional Exchange Admin Center. I say almost because you cannot configure server hardware (databases and such). It's really awesome to be able to control and configure Exchange Online using the same tools I'm used to using on-premise. Oftentimes with hosted solutions it doesn't work that way. You get some kind of proprietary tool set (web-based) that gives you very limited options. But with Office 365 you get a very robust administration experience. As close to on-premise as you can hope for with a hosted solution, in my opinion.

Office 365 Admin Center

Note: Office 365 can be managed through the EAC, but it can also be managed through a remote PowerShell session.
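That remote PowerShell option looks something like the following from a workstation with Windows PowerShell. Consider it a hedged sketch of the connection steps as they stood in this era of Office 365; verify the current connection method against Microsoft's documentation before relying on it.

```powershell
# Prompt for your Office 365 admin credentials
$cred = Get-Credential

# Open a remote session to Exchange Online
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri "https://outlook.office365.com/powershell-liveid/" `
    -Credential $cred -Authentication Basic -AllowRedirection

# Bring the Exchange Online cmdlets into the local shell
Import-PSSession $session

# ...work with cmdlets such as Get-Mailbox, then close the session
Remove-PSSession $session
```

Once the session is imported, the same cmdlets you know from on-premise Exchange run against your Exchange Online tenant.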
The Big Takeaways

Office 365 is Microsoft's hosted suite of communication and collaboration solutions, including Exchange Online, SharePoint Online, Lync Online and several other options depending on the plan you choose. There are a variety of plans to choose from with different features and price tags attached. You need to make sure the plan you choose is best for your needs. Some plans come with a subscription to Office so that users can install the latest version of Office applications.

One of the values to Office 365 is that all of the solutions (the server-side ones and end-user ones) are kept up to date and are the latest iterations of those solutions available. So even if you have an on-premise Exchange 2013 server with CU3 or SP1 installed, the online O365 version of Exchange (aka Exchange Online) will still be more current. Same with your Office apps.

With Office 365 you can perform hybrid configurations of Exchange, meaning you can have a portion of your Exchange environment be on-premise and another portion be in the cloud, with O365.

There are alternative hosting options too: alternative providers with different types of cloud-based Exchange or non-Exchange offerings. You may find, with some research, that these better fit your needs. Perhaps a price point you prefer based on the services offered. Perhaps solutions that add greater value for a competitive price.
Parlez-vous Exchange?

Do you speak Exchange? Yes, yes you do speak Exchange. If you have read through the past 10 chapters then you can sincerely say you have a good grasp of conversational Exchange, including its online younger brother, Office 365. Through this book we have addressed the primary terminology and concepts behind Exchange (past and present), but there is so much more to learn. There are books that are 1000 pages on Exchange and even that isn't enough. We haven't even discussed configuration (step-by-step) or design/deployment, migration strategies, monitoring options, PowerShell (a monster subject in and of itself), but that's ok. This book was meant to establish a base level of communication. Conversational Exchange, not Fluency or Native Exchange just yet.

Perhaps as you read the chapters you did some research on the subject matter and that helped you round out the concepts. Or perhaps you have watched some of the videos available (I have a ton of them with Pluralsight on Exchange 2010 and 2013). Visual learners will no doubt appreciate seeing things done with Exchange and taking their knowledge to the next level. For more insight you can sign up for videos at Pluralsight.com. You can visit my blog, ExclusivelyExchange.com, or follow me on

Or maybe you are a hands-on kind of learner. You need to do it, install it yourself, immerse yourself in it, fail a few times with something and plough through so that you own it. And if you are that person you know there is a great deal ahead of you. But it's possible one day you may be an Exchange admin, perhaps even Microsoft Certified (the 2013 tests are quite difficult). And hopefully you'll look back and remember where you got your start. Right here with Conversational Exchange. And should I need a job at that time (kidding)
We did our best to make this book easy to read. Nevertheless, I know it wasn't easy per se. I had Exchange MVPs and Exchange gurus reading it, but I also had newbies, friends and family (including my own wife, who suffers enough listening to me talk about it day-to-day). They all contributed to the effort to make this book readable, and I'd like to thank them for their help. And I'd like to congratulate you for making it through.

J. Peter Bruzzese
Appendix: Basic Exchange Prerequisite Knowledge

Folks, the last thing I wanted in the very first chapter of this book was to confuse you or scare you off. So I saved some of this information for the end of the book. It's important that you grasp a few basic underlying networking concepts before you dive right into the world of Exchange. If you peruse this information and feel you know it already, skip it! If you read the book and you don't want to learn more, skip it! If you don't think this information is part of what you need to learn Exchange, skip it! But if you don't understand the basics of networking, or TCP/IP, or Active Directory, you might just want to keep reading.
How a Network Works

If you understand the way a small network works you will also understand, to some degree, how large networks operate. You may have a basic grasp of networking from your home network, where you know you pay for a connection to allow your home systems and devices to connect to the Internet. However, you can set up a home router that allows your devices to communicate with each other while not being completely exposed to the entire world. Your home network might have a WiFi-enabled router with some systems plugged into it directly (or perhaps you have a Sonos bridge or a Hue bridge connected directly in) and then you have devices connecting through your in-house WiFi. Let's dive just a bit deeper into the physical side of a network.

The Physical Pieces of a Network

Most home networks are designed not just to connect computers to each other or printers; rather, they are designed to link to the Internet connection coming into the home. The incoming connection might be a DSL line or cable modem or satellite, depending on your local providers. Hopefully no one reading this book is still dealing with dial-up. Now the Internet providers usually set up the connection to one computer in your home. Their little box has an Ethernet connection that uses a cable to connect to your computer's network port. This cable is called a Category 5 Ethernet cable. Why Category 5? Well, as you might expect, there were earlier categories, 1 through 4, which are not used anymore. The future categories are, you guessed it, 6 and 7; these are newer to the Ethernet cable scene. Cat 6 is used for Gigabit Ethernet and is backward compatible with 5. Some of the terms you might see with Ethernet cables include 10BaseT, 100BaseT and 1000BaseT. These indicate the amount of data the cable can transmit per second, either 10 Megabits (not
bytes but bits), 100 Megabits or 1000 Megabits (often referred to as Gigabit speed).

Why is it called Ethernet? Ethernet was developed in the early 1970s at Xerox PARC by Robert Metcalfe and others. The reason it was called Ether-net was based on the concept of luminiferous ether, which was once thought to carry electromagnetic waves through space. At that time, many networking systems were proprietary (that is, unique to a given environment) and the idea was to indicate that Ethernet wasn't just for one type of system but for all systems.

The cables you might use in a home network are easy to distinguish from your phone cables, but they do have some things in common. For example, the connectors look similar. If you look at the end of a phone wire you see a little head with a clip. If you look closer you'll see that there are copper-looking pins inside. That is an RJ-11 connector. Now if you look at the end of an Ethernet cable you'll see that it is slightly bigger and has more pins, eight to be exact. That is called an RJ-45 connector. Most wired networks are going to use the Category 5 Ethernet cables with RJ-45 connectors on the end. One end plugs into the back of your computer, either into the motherboard itself or into a network card. The other end plugs directly into the cable modem or FiOS box that is provided by your Internet Service Provider (ISP). Your provider (or ISP) may be the same company that provides your cable television and/or home phone. However, to increase the use of that Internet connection
toward other computers within your home you will need a special device called a router.

A router is like a post office. Communication between the Internet, your home network and computers within your home network is handled through packets. These are like pieces of mail that travel from one home to the next. If you want to mail something officially to your neighbor you would take your mail to the post office and put it in the Local box. If you want to mail it to another state or country you would put it in the Out-of-Town box. The post office would handle it from that point. Your router will send packets from one computer to another computer, and from the Internet to your computers. One thing to note is that when your router is simply connecting computers in your home it is actually acting as a switch, not a true router. That little point isn't meant to confuse you but to help you when you decide to purchase a router at some point, because you may want a router that also has a 4-port switch.

What about wireless? Well, most routers have the ability to be wireless access points. This allows your systems with built-in wireless connectivity (most modern laptops, iPads, other tablet systems, and desktops with wireless cards installed or a USB wireless connection) to access the router, each other and the Internet. Whether wired or wireless, how do these devices actually talk to each other? How does the router know to which computer to send information?

MAC Addresses and TCP/IP

All devices that are on a network, or on the Internet, have a built-in MAC address (Media Access Control). These are typically assigned by the manufacturer of the device and are assigned using hexadecimal numbers, for example: A-3E-D5-5E.

Note: You can easily find a PC computer's MAC address by opening a Command Prompt (click the Start orb and type cmd)
and then within the command prompt typing in either getmac or ipconfig /all.

We now have a clear way of seeing that every device is unique, and that is great because it helps prevent confusion. Nonetheless, the numbers are not that easy to work with, and there is no order to them amongst devices. Let's say in your house you have a couple of different computers and some Wi-Fi enabled devices, like a tablet PC or mobile device or e-reader (Kindle); you wouldn't want to write down and remember the MAC address of each device to communicate, would you? You might be thinking, doesn't my computer also have a name? Can we use these names instead of the MAC numbers? You can use the name on your local area network (LAN), which is the network in your home. (Note: A LAN is different from a wide area network, or WAN, which you don't have to worry about for your home.) Then again, the names don't really help your network keep track of all the devices and MAC addresses for easy communication. Instead, all the devices on your home network and on the Internet use a TCP/IP address. The computers locate each other using TCP/IP addresses, and only then does a computer acquire the true MAC address of another computer to link for communication.

TCP/IP stands for Transmission Control Protocol/Internet Protocol, and it is actually not just two protocols but a whole suite of them. Now, you may be wondering what we mean by protocols. Well, protocols are sometimes used to mean a language or a set of standards. Having standards causes the different manufacturers to follow a protocol when developing things that will work together on a network. Think of trying to follow a recipe when everyone has a different size teaspoon and cup measurement. It would never work. The standard measurement allows everyone to cook the same meal; in much the same way, TCP/IP has standards or protocols.
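The relationship just described, where computers find each other by TCP/IP address and only then resolve the MAC address, can be observed directly from a Windows Command Prompt or PowerShell window. A small sketch using the built-in tools mentioned above:

```powershell
getmac          # the MAC addresses of this computer's network adapters
ipconfig /all   # full TCP/IP details: IP address, MAC, default gateway, DNS
arp -a          # the IP-address-to-MAC-address mappings this computer has learned
```

The arp -a output is essentially the cache a computer builds as it resolves the IP addresses of its neighbors on the local network down to their MAC addresses.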
Going back to our post office analogy, in much the same way you might follow certain standards when boxing packages you send to someone, TCP/IP has a standard set for packets that go out on the wire, framed or boxed properly. You also need to make
sure that the packages are addressed properly. The same is true with TCP/IP. To send a simple document from one computer to another, even on the same network, the document needs to be broken up into packages and then sent over the wire to the other computer. The two computers might both be plugged into, or connected wirelessly to, the router. Yet, like a post office needs an address to locate the recipient, the router uses the TCP/IP address to locate or route its packages to their recipient. Note the following graphic. It's meant to illustrate how TCP/IP uses IP addressing to deliver in much the same way we have an address system that works for real mail.

What have we evaluated so far? A local area network (LAN) uses Ethernet cabling with RJ-45 connectors to connect computers to routers (or connects them to the wireless router wirelessly), and the router helps to make sure packets of data get from one system to another. TCP/IP is the set of standards (or protocols) that makes it possible for this communication to take place. TCP/IP addressing also makes it much easier to bridge the gap between the MAC address and the computer or device. What is great about TCP/IP is that even though it is all numbers, it is a lot easier to work with and organize than MAC addresses.
How the Internet Works

You probably have some kind of Internet Service Provider like Comcast, AT&T, Brighthouse, CenturyLink or one of the main ones available. Thus, you have a connection coming into your home. That connection might allow you to plug in one computer, or you might connect it to a router. Internally you may have a local network connected off that router with IP addresses that you have chosen. However, the Internet uses IP addresses that are given out specifically for use on the global network. The router has an internal IP address, which is also called the default gateway once you configure your in-house computers to access the Internet. The router also has an external IP address, which connects it to the ISP's network. The router basically transfers data back and forth between the ISP's network and your internal home network.

Now when you open up your Internet browser (maybe you like Internet Explorer, maybe Firefox, maybe Google Chrome, maybe Safari or some other option) you type in the URL of the website you are looking to access. URL stands for Uniform Resource Locator, which is a fancy way of saying website address. The URL is made up of the protocol you want to use followed by a colon and two slashes, like http://. Then you add the path to be able to locate the web site. That web site is being hosted on a server, or group of servers, and to access it you need to know the IP address of the server or server group. But how can you know the IP addresses of every web site in the world? Well, we don't have to know every IP address; we just type in the name of whoever we are looking to reach. That path helps us to find the site through the use of Domain Name Service (DNS) servers.

Domain Name Services (DNS)

It's simple. You need to call a plumber but don't remember the number? Look it up in the phone directory. The directory is in
alphabetical order so you can find who or what you need by subject. DNS services are servers on the Internet that help us find the IP addresses of web sites, send emails to mail servers, and so on. These servers are organized by domain, just like an alphabetical phone directory. The root for the whole DNS system is a period (.), which is odd because we never type that in. If we did type a period (.), it would be at the end of the URL; because it is assumed, we leave it out. Instead we end our URLs with .com, .gov, .net, .org and so on. For countries, there may be ones like .uk, .cn and so on. That is why not all URLs we type in are .com, but can include other ending points. The servers are registered under the domain that is chosen by the person who is setting up their domain name. For example, you might go to a popular domain name registration site like GoDaddy.com, and there is usually a domain-finding dialog box for you to check whether the domain name you want to register is already taken. Keep in mind you don't ever have to register a domain name. However, if you want to have your own web page for personal or business use, you will need to register the domain name first. When a domain is registered, the Domain Name Server (DNS) keeps a record of the location of the web server for that domain, and of other servers, if you have them, like email
servers and so on. When you type in something like www.microsoft.com, your computer has no idea where the web servers are for Microsoft. Instead it very quickly checks in with a DNS server. That DNS server looks to see which category is needed. It sees .com and says okay, let's check to see if Microsoft is registered. It finds Microsoft and finds the DNS servers that are configured to tell you where the www services are hosted. An IP address is provided back to the computer to tell it how to find the servers for the web site. Now your browser knows the IP address on the Internet to connect to that www service, and it enables viewing of the web page. Remember, it already knows to use the HTTP protocol because that is at the beginning of the URL. It also knows to use port 80 to make this connection to the channel on the web server hosting the pages. And after that expeditious process, voila: the page appears before you. When you want to access a more secure site you type in https://, which requests a secure page through Secure Sockets Layer. Without going into great detail, SSL sites are more secure for your banking, purchasing and other secure transaction needs. SSL connections are usually shown in your browser with a lock graphic of some sort to let you know it is safe to proceed. By default, SSL uses port 443 rather than port 80 for this connection. DNS provides this hierarchical, organized set of registered IP addresses and domain names for everyone on the Internet, and it is all behind the scenes. This is also true for other services like email. If you type in an email address, the DNS servers are able to locate the correct server for the email to be sent to. The DNS server uses the same pattern we explained above, except instead of the IP address hosting www services it provides the IP address of the server hosting the email services. It does this because the DNS server has MX records configured. So when you type www, the DNS server responds
with the IP address of the web server. If you type in an email address, the DNS server provides the IP addresses configured as MX records for your organization. So you can see that DNS is essential to email and Exchange; Exchange Server requires DNS.

The Network around You

What an amazing thing if you work for a company that has a network with cables, routers, switches and more. Do you even realize what a tremendous learning experience is right in front of you? Too often, though, people go to work, sit down, log in, work, and log out at the end of the day, never wondering what makes it all happen. If you want to start learning more, take a look BEHIND your computer. See the cables? Where do they go? Do you have a false floor that allows cables to be hidden? Or are they all just out in the open? How many servers does your company have? Why not ask your IT admin or network team about that. You probably log in and connect to servers for file-saving purposes, right? And your computer connects to a printer, to email services and more. Are your email services on-premise (located somewhere in the building) or are they hosted or cloud-based?

Active Directory

Active Directory (AD) is an identity management system and directory service. When you log into your work domain you need a username and password. The identity management system confirms that you are who you say you are and provides you with the ability to log in and access resources on the network (files, printers, etc.). At the same time your name is in Active Directory, which can be used to provide your email address, phone numbers, position in the company and a host of other important details that are searchable as a result of the directory service. There are many different directory services, but Active Directory is the one that Microsoft has created.
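The URL-to-IP journey described above can be sketched in a few lines of Python. The first part uses the standard library to split a URL into the pieces the browser needs; the second part is a toy, dictionary-based stand-in for DNS (not a real resolver) that shows the difference between the A record a browser asks for and the MX records an email server asks for. The domain and addresses are made-up examples.

```python
from urllib.parse import urlparse

# 1. Break the typed URL into its parts (standard library, real behavior).
parts = urlparse("https://www.example.com/products?lang=en")
print(parts.scheme)    # the protocol before the colon-and-two-slashes
print(parts.hostname)  # the name DNS must turn into an IP address
port = parts.port or (443 if parts.scheme == "https" else 80)
print(port)            # SSL defaults to port 443; plain HTTP to port 80

# 2. Toy DNS zone -- an illustration of record types, not a real resolver.
zone = {
    "example.com": {
        "A":  ["203.0.113.10"],            # where the www service lives
        "MX": [(10, "mail1.example.com"),  # (priority, mail host) pairs
               (20, "mail2.example.com")],
    }
}

def resolve_web(domain):
    """A browser asks for the A record: where is the web server?"""
    return zone[domain]["A"][0]

def resolve_mail(domain):
    """An email server asks for MX records, lowest priority number first."""
    return [host for _, host in sorted(zone[domain]["MX"])]

print(resolve_web("example.com"))   # the web server's address
print(resolve_mail("example.com"))  # the mail hosts, in preference order
```

The same domain yields different answers depending on what is asked, which is exactly the www-versus-MX distinction the DNS server makes behind the scenes.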
Servers Make the World Go Round

In a network you have client desktops, laptops and devices, and you have servers that provide services. Servers sound scary when the reality is that they are literally there to serve you, so don't be too worried about them. It's good to have an idea of what different types of servers there are and what they do. I'm going to list out a few (not all), and you'll note that I have a Microsoft slant here, not necessarily because I'm partial but because these are the ones I work with.

Active Directory: Provides a way for workstations (desktop systems) to log in. It maintains usernames/passwords for the people in your organization and provides your system a security access token when you log in. It also maintains directory information about persons (if you input that into the system, like address/phone/etc.) and offers a variety of tools for management of your network.

DNS/DHCP: These are services provided within a network and could be included with other server services (like your AD server). As you recall, DNS provides name services (although in this case we mean internally, on your network) and DHCP provides IP address leases for your client systems.

File (services): These servers are designed to allow client connectivity so that persons can save their files to the network server. This is good for two reasons: it's easier to back up the one file server rather than 100 clients, and it's easier for collaboration when documents are on a
network share.

Print (services): Allows you to connect one or more printers up to a server and allow users to print through it. All the processing work is done on the server; the documents go into a print queue and are printed in the order received unless you tweak priority settings.

SQL: Provides database services, which are necessary when working with other server types like SharePoint.

Exchange: An Exchange Server is used to provide email services. Users get a mailbox that allows them to send/receive email, keep track of their calendar and contacts, and even receive voicemail (if configured properly).

SharePoint: SharePoint allows for easier collaboration within an organization. You access it through your browser, and you can have web pages that have document libraries (with workflows, versioning and so forth), personal web pages, lists, and much more.

Lync: Provides a software-based communications server system that offers IM (instant messaging), presence, VoIP (voice over IP) and conferencing capabilities (audio/voice and web conferencing).

Hyper-V: Virtualization services (discussed in a moment).

IIS (Web Server): Internet Information Services allows that server to host web pages and web-based content.
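The "printed in the order received" behavior described for the print server above is a simple first-in, first-out queue, sketched here with made-up document names:

```python
# A print queue is first-in, first-out: jobs come out in arrival order.
from collections import deque

queue = deque()
for job in ["budget.xlsx", "memo.docx", "slides.pptx"]:
    queue.append(job)  # users submit jobs to the print server

printed = []
while queue:
    printed.append(queue.popleft())  # the server prints the oldest job first

print(printed)  # same order the jobs arrived in
```

Tweaking priority settings, as the description mentions, would amount to sorting or reordering this queue before the server takes the next job.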
There are so many other server types for monitoring and managing (System Center tools), etc., and other options beyond Microsoft too.

The Big Takeaways

Ultimately, the big takeaway here is that there is a lot more to learn about the underlying network infrastructure than we let on at the beginning of the book. Truth is, to truly get into the world of Exchange you have to get a solid grasp of networking, DNS and Active Directory, server installation and configuration, and so on. So rather than scare you with all that to start with, we jumped into Exchange history.
Vendor Sponsor: Mimecast's Unified Email Management

Most information you read about when it comes to a third-party solution is written by the third party. They tell you: we're awesome! And here is a document that proves it! <cough><cough> written by us <said in a whisper>. Even if it is true, it certainly does cause an eyebrow to rise, and the cynical side of us comes out. That's why I told my friends at Mimecast I wanted them to let me write this up in my way. I want you to see their solution through my eyes. I won't be able to give you every last bell and whistle, but I will certainly be able to tell you how it will add value to either your on-prem or Office 365 Exchange.
Mimecast was founded in 2003 by Peter Bauer and Neil Murray. These were regular people, IT admins, MCSEs, that saw a problem and went to work fixing it. The problem they saw was that email was becoming more and more complex to handle. They went to work on a solution that was in the cloud and provided email management. Email security management can mean so many things, so what is it REALLY that Mimecast provides? Well, for starters, anti-spam and anti-malware. Keep the junk from ever reaching your on-premise Exchange or Office 365 servers. Mimecast's solution sits between your organization and the Internet and provides complete protection from spam, viruses, malware, phishing and data leaks.

Archive

In addition, Mimecast provides an enterprise-grade archive solution with a powerful, high-performance ediscovery piece. This reduces your on-premise storage costs because the archive ensures you have an accessible copy of that data at all times. Let me explain this a bit further, because I don't think everyone understands the value of this solution. If you recall the chapter on regulatory compliance, we talked about having a personal archive, which is great for eliminating PST files but not great for enterprise archiving and regulatory compliance protection. Why? Because end users can delete whatever they want. And for that to stop you have to enable a form of legal hold (litigation hold, or In-Place Hold in Exchange 2013). This creates more storage bloat but does stop end users from deleting things permanently. With the Mimecast solution you have email archived before it even reaches your on-prem/O365 servers. Users can delete whatever they want; who cares? You have an archive. Now the cool thing is that this is an accessible archive, not backup tapes that sit in a vault. End users are given tools that integrate with Outlook so that they can peruse their archive and find emails they
may have deleted accidentally and restore them (no IT intervention required, just a little training). BUT if they want to delete an email that may be incriminating: nope, not possible. I like to call this preventative litigation. Think about it. If you know, as an end user, that everything you send and receive is being archived, is non-deletable, and is easily located with ediscovery, how stupid would you have to be to send something inappropriate? Hence, preventative litigation.

Continuity

I remember at 5 years old being in the movie theatre for the first Superman with Christopher Reeve. Do you remember the part where Lois Lane falls out of the helicopter and Superman catches her, saying, "Don't worry ma'am, I've got you." And she says, "You've got me?! Who's got you!!!???" Classic line. Good question, though. So, you have all these different types of Service Level Agreements out there (we talked about this in the book). SLAs promise many things, and one of them is availability of your email services. But what happens if/when email service goes down? It happens. It happens with on-premise Exchange, and it happens with hosted solutions and even Office 365. Sure, the SLA typically offers some kind of restitution, but what if you don't want restitution; you want availability of service?
the wait.
Getting started with the new Angular 2 components is entirely different from how it was with Kendo UI for jQuery. Just like all of you, I have to learn these strange new concepts and how to use Kendo UI in a brave new world of modules, directives and the like. I recently sat down for an afternoon with the beta components to see what it was like to get up and running with Kendo UI and Angular 2. This was my experience.
Choosing a Starting Point
One of the more difficult things about Angular 2 is just getting started. Gone are the days when we could just drop script tags in our page and be done. Angular 2 has many dependencies and needs a build step to bring together all of its own JavaScript, along with yours, into something that is cross-browser compatible. Fortunately, there are a lot of great tools and starter kits out there. Unfortunately, they all use different module loaders, which means that how you get started with Kendo UI will vary depending on which one you use.
SystemJS vs. Webpack
In the JavaScript bundler/module loader world, there are currently two primary contenders: Webpack, the industry darling that has been widely adopted by React developers; and SystemJS—a universal module loader that tries to be really good at just loading any type of JavaScript module, be it CommonJS, RequireJS or ES6.
Depending upon which starter kit you choose for Angular 2, you will be using either SystemJS or Webpack. The trouble is that you may not realize which one is being used straight away if you aren’t terribly familiar with either of these module loaders. That’s a problem because, when it comes to Kendo UI, Webpack works very well, and SystemJS requires a bit more configuration. And when it comes to configuration, here there be dragons.
That’s why after examining the myriad of excellent starter kits and GitHub project templates out there, I recommend that you use the Angular CLI with Kendo UI.
Angular CLI
The Angular CLI is the official tool for getting up and running with Angular 2 and it’s built by some great folks in the community in conjunction with the Angular 2 team. I officially recommend it for several reasons:
- It generates what I believe to be the cleanest and simplest empty Angular 2 project;
- It uses Webpack and does a great job of configuring almost all of it for you;
- It has generators that you will definitely use since Angular 2 projects like to contain a LOT of files.
To install the Angular CLI, visit the docs and make sure you have the right versions of Node and npm installed. After that, it’s a simple matter of…
> npm install -g angular-cli
Note to Windows users: you will also need to have the C++ libraries installed with Visual Studio. If you do not have these libraries installed, simply try and create a new C++ project of any kind and Visual Studio will download and install them. They are huge. I am sorry.
Once the CLI is installed, you can create a new Angular 2 project with the
ng command.
> ng new kendo-ui-first-look --style=scss
This creates a new Angular 2 project and then tells you that it is “Installing packages for tooling via npm”. It installs all of the generated project's dependencies, which is a lot of packages. A lot. There are so many packages that it will take a non-trivial amount of time to complete this step, even on my Macbook Pro with an i7 and 16 gigs of RAM. I'm hoping this will get better as the CLI matures, and things like Yarn make me hopeful.
The
–style=scss flag specifies that we want a new Angular 2 project with SASS support. SASS is a CSS pre-processor that makes it really easy to include and override external CSS frameworks such as Bootstrap.
Once the project is created, you can run it with the
serve command.
> ng serve
If you examine the terminal or command prompt, you can see Webpack doing its thing.
At this point, the app is running, but how do you load it in your browser? If you scroll up just a bit in the terminal, you will see where it tells you the port on which the app is running.
And if you load that URL in your browser…
Awesome! Your app works. Or at least it says it does and computers don’t lie.
Let’s take a look at the project. Open up the directory where you created the project. Inside of that directory is a
src folder. If you open up the
app.component.ts file, you’ll see the Angular 2 component that has a property called
title. This
title property is bound in the
app.component.html file with the syntax
{{ title }}. If you were to change the value of
title in
app.component.ts, it will change the message that is displayed in the app without requiring a reload, so you can just leave this browser window running at all times.
Before we add Kendo UI to this application, we’re going to bring in Bootstrap as our CSS framework, since this is the framework that Kendo UI recommends and integrates seamlessly with.
Including Bootstrap
We’re going to include the SASS version of Bootstrap because the Angular CLI has tremendous SASS support built in and it makes it really easy to include third party CSS frameworks.
> npm install bootstrap-sass --save
This will copy Bootstrap from npm into your
node_modules folder. What we need is the Bootstrap CSS. We can include this with an
@import statement in the
styles.scss file.
$icon-font-path: "~bootstrap-sass/assets/fonts/bootstrap/";
@import "~bootstrap-sass/assets/stylesheets/bootstrap";
The first line sets the variable that points to the Bootstrap icon font. That variable is then used in the Bootstrap SASS file that is imported below. The Angular 2 CLI has all of the build steps for SASS already wired up, so this “just works”.
Note that when you write or include SASS in the
styles.scss file, these styles are available to the entire application. Angular 2 has a feature called Style Encapsulation that allows you to specify styles that are restricted to one or more components, but not the entire application. This is a powerful feature and I encourage you to watch this short presentation from Justin Schwartzenberger which explains this in graceful detail.
If you look at the app now, it looks similar, but the font has changed since Bootstrap normalizes the basic CSS properties such as font. It already looks a lot better!
At this point, we could use any Bootstrap CSS component. Change the contents of
app.component.html to the following:
<div class="container">
  <div>
    <h1>{{ title }}</h1>
  </div>
</div>
Now let’s add a Kendo UI Button to this application. Of course, you could use a Bootstrap button here, but, for the sake of learning how we include Kendo UI, we’re going with a Kendo UI button. Besides that, the default theme for Kendo UI For Angular 2 is pretty amazing.
First, you are going to need to register the Kendo UI npm endpoint. This is going to ask you to login with your Telerik username and password as well as an email address. If you don’t have one, you can register for one here.
> npm login --registry= --scope=@progress
Once you’ve logged in, you can install the Kendo UI Button component.
> npm install -S @progress/kendo-angular-buttons
Special thanks to @tj_besendorfer, who pointed out that installing Kendo UI widgets while running `ng serve` can cause issues with files not being copied properly because they are in use. If you run into an issue that looks somewhat like "The unmet dependencies are @progress/kendo-data-query@^0.2.0, and tslint@^3.0.0.", stop the development web server (`ng serve`), run `npm install`, and then run `ng serve` again.
This will install the Kendo UI Button component into the
@progress folder in your
node_modules directory. In order to use this button, you need to import it into whatever module you want to use it with. In our case, we have only one module, the
app.module.ts, so we’ll import it there.
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { HttpModule } from '@angular/http';

import { AppComponent } from './app.component';

// Import the Kendo UI Component
import { ButtonsModule } from '@progress/kendo-angular-buttons';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    FormsModule,
    HttpModule,
    // import the Kendo UI Component into the module
    ButtonsModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }
Lastly, we need to include the CSS that the Kendo UI Button requires. The Kendo UI Default theme is delivered via a separate NPM package.
> npm install -S @telerik/kendo-theme-default
We can then include it in
styles.scss the same way that we included Bootstrap.
/* Bootstrap CSS */
$icon-font-path: "~bootstrap-sass/assets/fonts/bootstrap/";
@import "~bootstrap-sass/assets/stylesheets/bootstrap";

/* Kendo UI CSS */
@import "~@telerik/kendo-theme-default/styles/packages/all";
Now the button can be used in the
app.component.html.
<div class="container">
  <div>
    <h1>{{ title }}</h1>
  </div>
  <div>
    <button kendoButton [primary]="true" (click)="buttonClicked()">Don't Click Me!</button>
  </div>
</div>
The button
click event is bound to an event handler called
buttonClicked. We need to add that event into the
app.component.ts file.
import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss']
})
export class AppComponent {
  title = 'app works!';

  // Kendo UI Button click event handler
  buttonClicked() {
    alert("Clickity Clack!")
  }
}
Let’s add another commonly used Kendo UI widget: the Kendo UI Dialog. This was previously known as the Kendo UI Window.
> npm install -S @progress/kendo-angular-dialog
Just like with the Kendo UI Button, import the Kendo UI Dialog component in the
app.module.ts file.
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { HttpModule } from '@angular/http';

import { AppComponent } from './app.component';

// Import the Kendo UI Components
import { ButtonsModule } from '@progress/kendo-angular-buttons';
import { DialogModule } from '@progress/kendo-angular-dialog';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    FormsModule,
    HttpModule,
    // import the Kendo UI Components into the module
    ButtonsModule,
    DialogModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }
Add the markup for a Kendo UI Dialog component to the
app.component.html file directly below the button.
<div class="container">
  <div>
    <h1>{{ title }}</h1>
  </div>
  <div>
    <button kendoButton [primary]="true" (click)="buttonClicked()">Don't Click Me!</button>
  </div>
  <kendo-dialog>
    I am a super simple Kendo UI Dialog!
  </kendo-dialog>
</div>
If you look at your app now, you will see the dialog component.
It would be better if the button opened the dialog since that’s how we normally use them. To do that, we need to set the
*ngIf property of the dialog to a boolean. This
*ngIf is controlling the visibility of the dialog. So if we set that attribute to a property whose value is false, the dialog will not display. If we toggle it to true, the dialog pops up and the background goes dark. In this case, I have chosen the property
dialogOpen, which hasn’t been created yet.
<div class="container">
  <div>
    <h1>{{ title }}</h1>
  </div>
  <div>
    <button kendoButton [primary]="true" (click)="buttonClicked()">Don't Click Me!</button>
  </div>
  <kendo-dialog *ngIf="dialogOpen" (close)="dialogClosed()">
    I am a super simple Kendo UI Dialog!
  </kendo-dialog>
</div>
This means that our
buttonClicked event simply needs to set a property called
dialogOpen to
true. The close event then toggles it back to false, and I’m changing the
title property as well just to show off the binding of Angular 2.
import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss']
})
export class AppComponent {
  title = 'app works!';
  dialogOpen = false;

  // Kendo UI Button click event handler
  buttonClicked() {
    this.dialogOpen = true;
  }

  dialogClosed() {
    this.dialogOpen = false;
    this.title = "Nice Job!";
  }
}
You’re Ready To Go!
With that, we’ve got a functional Angular 2 application complete with Kendo UI And Bootstrap and you’re ready to build—well—anything!
The Kendo UI For Angular 2 Beta features many of the most popular controls, including the Grid and Data Visualization. We’re on track for a Release Candidate in January which will include even more of your favorite components, with many more to come early next year. We know that you would prefer to have all of these components right now, and honestly, so would we! However, we have always believed in building the very best, and sometimes that takes more time than we would like, but we believe that it will be well worth the wait.
For more information, check out our official Getting Started Guide, as well as the Beta components and demos.
Source: https://dzone.com/articles/using-kendo-ui-with-angular-2-1
Jason Smith commented on COUCHDB-431:
-------------------------------------
Alex, thanks for your thoughts. Some feedback:
I disagree that CORS is not hard security. It is hard security because same-origin is the
primary protector of all data on the web. Without same-origin restrictions, any site you visit
could reach your data on any other site you visit. It is also hard security in that it is
both difficult and important to get right.
I am hoping for a fail-safe system.
It seems reasonable that the database admin should control the CORS permissions of that database.
For a /_config change, that requires the server admin.
The _config imposes practical problems too. It is not a general system registry: all the data
must go section-key-value. There is no way to have a per-database config except to use an
entire new section as a namespace, with each DB name inside it. IMO I don't like that at all.
The only place I know of for per-database config is the _security object. Since CORS is about
security, it's logical to place it there (where the db admin can modify it).
Finally, the original idea that I heard about CORS was to allow people to specify any header.
I don't think that is fail-safe. I have no idea what headers people use out there. It seems
impossible to evaluate the security of permitting any header for any response.
Finally, CORS headers are generally generated dynamically. The only exception is the wildcard
header which for CouchDB would be a very dangerous setting. That means any code from any site
on any browser can access your couch without the user knowing but (potentially) with the user's
couch credentials. Therefore, I am hoping for a whitelist to specify, "yes,
is my own website and I trust all its code to access this couch".
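A whitelist like the one Jason describes could conceivably live in the per-database _security object. A purely hypothetical shape (the "cors" field and its keys are illustrative, not an implemented CouchDB format) might be:

```json
{
  "admins":  { "names": ["dbadmin"], "roles": [] },
  "members": { "names": [], "roles": [] },
  "cors": {
    "origins": ["https://app.example.com"],
    "allow_credentials": true,
    "allow_headers": ["Content-Type", "If-Match"]
  }
}
```

Keeping the origin list and allowed headers here would let the database admin, rather than the server admin, control which sites may reach the database, matching the fail-safe goal above.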
> Support cross domain XMLHttpRequest (XHR) calls by implementing Access Control spec
> -----------------------------------------------------------------------------------
>
> Key: COUCHDB-431
> URL:
> Project: CouchDB
> Issue Type: New Feature
> Components: HTTP Interface
> Affects Versions: 0.9
> Reporter: James Burke
> Assignee: Randall Leeds
> Priority: Minor
> Attachments::
Source: http://mail-archives.apache.org/mod_mbox/couchdb-dev/201105.mbox/%3C1683324868.55057.1306816787460.JavaMail.tomcat@hel.zones.apache.org%3E
Selection sorts are a very common method of sorting. Here is a selection sort that is implemented in Python.
def selection_sort(items):
    for i in range(len(items) - 1):
        # Assume that the smallest item is located at index i
        smallest = i
        # Now loop through the rest of the list
        for j in range(i + 1, len(items)):
            # Is our item at index j smaller than our smallest item?
            if items[j] < items[smallest]:
                smallest = j
        # Now swap elements to perform the sort
        temp = items[smallest]
        items[smallest] = items[i]
        items[i] = temp


if __name__ == '__main__':
    items = ['Bob Belcher', 'Linda Belcher', 'Gene Belcher',
             'Tina Belcher', 'Louise Belcher']
    print(items, '\n')
    print('Sorting...')
    selection_sort(items)
    print(items)
Selection sorts use nested loops to iterate over a list. We check whether the item at index j is less than the current smallest item; if it is, we record j as the new smallest index.
Once we complete the nested loop, we can perform a swap. Our driver program demonstrates this with strings, but thanks to Python's duck typing system, this will sort any object that implements
__lt__.
When run, we get this output
['Bob Belcher', 'Linda Belcher', 'Gene Belcher', 'Tina Belcher', 'Louise Belcher']

Sorting...
['Bob Belcher', 'Gene Belcher', 'Linda Belcher', 'Louise Belcher', 'Tina Belcher']
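Since the comparison is just items[j] < items[smallest], any class that implements __lt__ can be sorted this way. A quick sketch using a hypothetical Burger class (the class and its data are invented for illustration):

```python
def selection_sort(items):
    # Same algorithm as above, repeated here so the sketch is self-contained
    for i in range(len(items) - 1):
        smallest = i
        for j in range(i + 1, len(items)):
            if items[j] < items[smallest]:
                smallest = j
        items[smallest], items[i] = items[i], items[smallest]


class Burger:
    def __init__(self, name, price):
        self.name = name
        self.price = price

    def __lt__(self, other):
        # Order burgers by price, cheapest first
        return self.price < other.price


menu = [Burger('New Bacon-ings', 6.95),
        Burger('Foot Feta-ish', 5.50),
        Burger('Poutine on the Ritz', 7.25)]
selection_sort(menu)
print([b.name for b in menu])  # cheapest burger first
```

Because the sort only ever calls __lt__, the Burger objects come back ordered by price without any other changes to the algorithm.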
Source: https://stonesoupprogramming.com/2017/05/08/selection-sort-python/
Release 2.3 Mon May 6 16:18:02 EST 2002
This is a another release of the pydns code, as originally written by Guido van Rossum, and with a hopefully nicer API bolted over the top of it by Anthony Baxter <anthony@interlink.com.au>.
This code is released under a Python-style license.
I'm making this release because there hasn't been a release in a heck of a long time, and it probably deserves one. I'd also like to do a substantial refactor of some of the guts of the code, and this is likely to break any code that uses the existing interface. So this will be a release for people who are using the existing API...
There are several known bugs/unfinished bits
- processing of AXFR results is not done yet.
- doesn't do IPv6 DNS requests (type AAAA)
- docs, aside from this file
- all sorts of other stuff that I've probably forgotten.
- MacOS support for discovering nameservers
- the API that I evolved some time ago is pretty ugly. I'm going to re-do it, designed this time.
Stuff it _does_ do:
- processes /etc/resolv.conf - at least as far as nameserver directives go.
- tries multiple nameservers.
- nicer API - see below.
- returns results in more useful format.
- optional timing of requests.
- default 'show' behaviour emulates 'dig' pretty closely.
To use:
import DNS
reqobj = DNS.Request(args)
reqobj.req(args)
args can be a name, in which case it takes that as the query, and/or a series of keyword/value args. (see below for a list of args)
when calling the 'req()' method, it reuses the options specified in the DNS.Request() call as defaults.
options are applied in the following order:
- those specified in the req() call or, if not specified there,
- those specified in the creation of the Request() object or, if not specified there,
- those specified in the DNS.defaults dictionary
name servers can be specified in the following ways:
- by calling DNS.DiscoverNameServers(), which will load the DNS servers from the system's /etc/resolv.conf file on Unix, or from the Registry on windows.
- by specifying it as an option to the request
- by manually setting DNS.defaults['server'] to a list of server IP addresses to try
- XXXX It should be possible to load the DNS servers on a mac os machine,
from where-ever they've squirrelled them away
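On Unix, DNS.DiscoverNameServers() essentially boils down to scanning resolv.conf for nameserver directives. A rough standalone sketch of that parsing step (illustrative only, not the library's actual implementation):

```python
def parse_nameservers(resolv_conf_text):
    """Collect the IPs from 'nameserver' directives, resolv.conf style."""
    servers = []
    for line in resolv_conf_text.splitlines():
        line = line.strip()
        # Comments and other directives (search, domain, ...) are ignored
        if line.startswith('nameserver'):
            parts = line.split()
            if len(parts) >= 2:
                servers.append(parts[1])
    return servers


sample = """# sample resolv.conf
nameserver 192.168.0.1
nameserver 10.0.0.53
search example.com
"""
print(parse_nameservers(sample))  # ['192.168.0.1', '10.0.0.53']
```

The real library then feeds the discovered addresses into DNS.defaults['server'] so subsequent requests try them in order.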
name="host.do.main"   # the object being looked up
qtype="SOA"           # the query type, eg SOA, A, MX, CNAME, ANY
protocol="udp"        # "udp" or "tcp" - usually you want "udp"
server="nameserver"   # the name of the nameserver. Note that you might
                      # want to use an IP address here
rd=1                  # "recursion desired" - defaults to 1.
other: opcode, port, ...
There's also some convenience functions, for the lazy:
to do a reverse lookup:
>>> print DNS.revlookup("192.189.54.17")
yarrina.connect.com.au

to look up all MX records for an entry:
>>> print DNS.mxlookup("connect.com.au")
[(10, 'yarrina.connect.com.au'), (100, 'warrane.connect.com.au')]
Documentation of the rest of the interface will have to wait for a later date. Note that the DnsAsyncRequest stuff is currently not working - I haven't looked too closely at why, yet.
There's some examples in the tests/ directory - including test5.py, which is even vaguely useful. It looks for the SOA for a domain, checks that the primary NS is authoritative, then checks the nameservers that it believes are NSs for the domain and checks that they're authoritative, and that the zone serial numbers match.
see also README.guido for the original docs.
- bugs/patches to the tracker on SF -
-
Source: https://bitbucket.org/jaraco/pydns/src/a9e63f944ed4?at=r230
query is an isomorphic interface for working with Qe - Query envelopes.
There are two distinct parts of
query:
An example query:
query()
  .find()
  .on('villains')             // Resource to query
  .where('power').gte(50)     // Match conditions
  .and('speed').lt(99)        // also supports `.or()`
  .limit(10)                  // Number of results
  .offset(25)                 // Result to skip
  .done( callback );          // `callback(err, results)`
npm install mekanika-query
Build Qe - Query envelopes:
var myq = query().find().on('villains').limit(5);
// -> {do:'find', on:'villains', limit:5}
Plug into Qe adapters with
.useAdapter( adptr ):
var superdb = someQeAdapter; // any Qe-speaking adapter
myq.useAdapter( superdb );
Invoke adapter with
.done( cb ):
var cb = function (err, results) {
  console.log(err, results);
};
myq.done( cb ); // Passes `myq.qe` to `myq.adapter.exec()`
query chains nicely. So you can do the following:
;
Go crazy.
Initiate a query:
query();           // -> new Query
query( adapter );  // Or with an adapter
Build up your Qe using the fluent interface methods that correspond to the Qe spec:
create(),
find(),
update(),
remove()
on()
ids(),
match()
populate()
limit(),
offset()
meta()
Qe are stored as
query().qe - so you can optionally assign Qe directly without using the fluent interface:
var myq = query();
myq.qe = { on:'villains', do:'find', limit:5 };
// Plug into an adapter and execute
myq.useAdapter( adapter );
myq.done( cb ); // `cb` receives (err, results)
`.do` actions

The available `.do` actions are provided as methods.
All parameters are optional (ie. empty action calls will simply set
.do to method name). Parameter descriptions follow:
- `body` is the data to set in the Qe `.body`. May be an array or a single object. Arrays of objects will apply multiple.
- `ids` is either a string/number or an array of strings/numbers to apply the action to. Sets the Qe `.ids` field.
- `cb` callback will immediately invoke `.done( cb )` if provided, thus executing the query (remember to set an adapter).
Available actions:
All methods can apply to multiple entities if their first parameter is an array. ie. Create multiple entities by passing an array of objects in
body, or update multiple by passing an array of several
ids.
Update/find/remove can all also .match on conditions. (See 'match')
`.match` conditions
Conditions are set using the following pattern:
.where( field ).<operator>( value )
Operators include:
- `.is()`
- `.not()`
Examples:
query().find().where('name').is('Mordecai');
// Match any record with `{name: 'Mordecai'}`
query().find().where('age').gte(21);
// Match records where `age` is 21 or higher
Multiple conditions may be added using either `.and()` or `.or()`:
// AND chain
query().find().where('type').is('wizard').and('power').gte(50);
// OR chain
query().find().where('type').is('wizard').or('type').is('knight');
To nest match container conditions see the
query.mc() method below.
query.mc()
The fluent
.where() methods are actually just delegates for the generalised
query.mc() method for creating
MatchContainer objects.
The Qe spec describes match containers as: {'$boolOp': [mo|mc, ...]}
The 'mc' array is made up of match objects (mo) of the form
{$field: {$op:$val}}
'mc' objects chain the familiar
.where() method and match operator methods. For example:
var mc = query.mc()
  .where('power').gte(50)
  .and('state').neq('scared');
// Generates Qe match container:
// {and: [ {power: {gte:50}}, {state: {neq:'scared'}} ]}
Which means, the fluent API expression:
;
Is identical to:
;
The upshot is nesting is fully supported, if not fluently. To generate a Qe that matches a nested expression as follows:
power > 30 && type == 'wizard' || type == 'knight'
A few approaches:
// Using 'where' and 'or' to set the base 'mc';// Directly setting .match and passing 'mc';
`.update` operators
query supports the following update operator methods (with their update object Qe output shown):
{$field: {inc: $number}}
{$field: {pull: $values}}
{$field: {push: $values}}
Where
field is the field on the matching records to update,
number is the number to increment/decrement and
values is an array of values to pull or push.
query can delegate execution to an adapter.
Which means, it can pass Qe to adapters and return the results.
To do this, call
.done( cb ) on a query that has an adapter set.
myq.done( cb ); // cb( err, results )
This passes the Qe for that query, and the callback handler to the adapter. The errors and results from the adapter are then passed back to the handler -
cb( err, results)
Specifically,
`query#done( cb )` delegates to `query.adapter.exec( query.qe, cb )`.
Pass an adapter directly to each query:
var myadapter = someQeAdapter; // any Qe-speaking adapter
query( myadapter );
This is sugar for the identical call:
query().useAdapter( myadapter );
See for more details on adapters.
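To make the adapter contract concrete, here is a stub adapter in plain Node-style JavaScript (no mekanika-query dependency; the exec(qe, cb) shape follows the delegation described above, and the stub's names and behaviour are purely illustrative):

```javascript
// A minimal stub adapter: anything with an exec(qe, cb) method.
// The Qe envelope below mirrors the README's example
// {do:'find', on:'villains', limit:5}.
const stubAdapter = {
  exec(qe, cb) {
    // Pretend datastore: just echo which action ran on which resource
    const results = { action: qe.do, resource: qe.on, limit: qe.limit };
    cb(null, results);
  }
};

const qe = { do: 'find', on: 'villains', limit: 5 };

// query#done(cb) would hand this same pair to adapter.exec(qe, cb)
stubAdapter.exec(qe, (err, results) => {
  console.log(err, results);
});
```

Swapping the stub for a real Qe adapter changes only what exec() does with the envelope; the query-side contract stays the same.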
query supports pre and post
.done(cb) request processing.
This enables custom modifications of Qe prior to passing to an adapter, and the custom processing of errors and results prior to passing these to
.done(cb) callback handlers. Note that middleware:
Pre-middleware enables you to modify the query prior to adapter execution (and trigger any other actions as needed).
Pre methods are executed before the Qe is handed to its adapter, and are passed
fn( qe, next ) with the current Qe as their first parameter, and the chaining method
next() provided to step through the queue (enables running asynchronous calls that wait on
next in order to progress).
To pass data between pre-hooks, attach to
qe.meta.
next() accepts one argument, which is treated as an error: it forces the query to halt and return cb( param ).
Pre hooks must call
next() in order to progress the stack:
function preHandler( qe, next ) {
  // Example modification of the Qe passed to the adapter
  qe.on += ':magic_suffix';
  // Go to next hook (if any)
  next();
}

myq.pre( preHandler ); // Adds `preHandler` to the pre-processing queue
Supports adding multiple middleware methods:
; // etc// OR;
Post-middleware enables you to modify results from the adapter (and trigger additional actions if needed).
Post middleware hooks are functions that accept
(err, results, qe, next) and must pass
next() the following params, either:
- `(err, results)` OR
- an `(Error)` object to throw
Failing to call
next() with either
(err,res) or
Error will cause the query to throw an
Error and halt processing.
Posts run after the adapter execution is complete, and are passed the
err and
res responses from the adapter, and
qe is the latest version of the Qe after
pre middleware.
Important note on Exceptions! Post middleware runs in an asynchronous loop, which means if your post middleware generates an exception, it will crash the process and the final query callback will fail to execute (or be caught). You should wrap your middleware methods in a
try-catchblock and handle errors appropriately.
You may optionally modify the results from the adapter. Simply return (the modified or not)
next(err, res) when ready to step to the next hook in the chain.
function postHandler( err, res, qe, next ) {
  try {
    err = 'My modified error';
    res = 'Custom results!';
    // Call your own external hooks
    myHook( qe );
    // MUST call `next(err, res)` to step chain
    // Can pass to further async calls
    if (hasAsyncStuffToDo) doAsyncThing( err, res, next );
    // Or just step sync:
    else next( err, res );
  }
  catch (e) {
    // Note 'return'. NOT 'throw':
    return next( new Error(e) ); // Cause query to throw this Error
  }
}

myq.post( postHandler ); // Adds `postHandler` to post-processing queue
Also supports adding multiple middleware methods:
; // etc// OR;
Ensure you have installed the development dependencies:
npm install
To run the tests:
npm test
To generate a
coverage.html report, run:
npm run coverage
If you find a bug, report it.
Released under the Mozilla Public License v2.0 (MPL-2.0)
Source: https://www.npmjs.com/package/mekanika-query
Hi,

While running a simple thread wake-up benchmark on a 4-way ia64 system I noticed an interesting behaviour. The program was stolen from Ian Wienand's pthreadbench suite. It is a simple producer/consumer program with 1 producer and N consumers. The thing is, with some values of N, the program runs almost 10 times slower with NPTL than with LinuxThreads.

With 1 consumer, NPTL has a slight advantage, and with both libraries, the 2 threads run on a single CPU. (CPU activity was monitored using xosview) With 2 consumers, NPTL uses 3 CPU, LT uses 2. NPTL is FOUR times faster. Actually it seems that LT always uses 2 CPU when there is more than 1 consumer. With 3 consumers, NPTL uses 4 CPU and is twice as fast as LT. From now on NPTL always seems to use 4 CPUs. With 4 or 5 consumers there is no clear winner. But with more consumers, NPTL is MUCH slower than LT. Even 25 times slower with 60 consumers...

This system is running Linux 2.5.67 and NPTL 0.36. I guess the performance problem is due to the fact that NPTL uses more CPUs than LT and therefore uses the cache with much less efficiency than LT.

My questions are:
- Is this problem a benchmark-only problem, unlikely to happen in real programs?
- If not, who is to be blamed? NPTL, or the kernel? Maybe just some tuning to do.

You can find the test program as an attached file. It takes one parameter, the number of consumers.

Simon.
/* thread benchmark header */
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
#include <signal.h>
#include <sys/time.h>
#include <assert.h>

void do_test(void);

int time_to_wait;
int nb_cons;

/*-- ... --*/
#ifdef DEBUG
        printf("EMPTIED!");
#endif
        pthread_cond_signal( &condition.empty );
        pthread_mutex_unlock( &condition.mutex );
    }
}

void do_test(void)
{
    pthread_t threads[100];
    int i = 0;

    /* have 10 workers */
    for ( ; i < nb_cons ; i++ )
        pthread_create( &threads[i], NULL, thread, (void*)NULL );

    /* fill a queue, signal to threads to empty it */
    while ( 1 ) {
        pthread_mutex_lock( &condition.mutex );
        /* if the queue is full, signal for worker to clean it */
        if ( condition.value ) {
            pthread_cond_broadcast( &condition.full );
            //pthread_cond_signal( &condition.full );
            pthread_cond_wait( &condition.empty , &condition.mutex);
        }
        pthread_mutex_unlock( &condition.mutex );

        /* fill it back up */
        pthread_mutex_lock( &condition.mutex );
#ifdef DEBUG
        printf("FILLED\n");
#endif
        condition.value = 5;
        pthread_mutex_unlock( &condition.mutex );
    }
}

/* global */
struct timeval start,end;
extern unsigned long things_done;
extern char * things;

/* on alarm print out results */
void on_alarm(int signo)
{
    struct timeval diff ;
    double diff_secs;
    /* grab things done before we continue */
    unsigned long stamp = things_done;

    gettimeofday(&end, NULL);
    timersub( &end, &start , &diff );
    diff_secs = diff.tv_sec + diff.tv_usec*1e-6;
    printf("%d %s in %g sec = ", stamp, things, diff_secs);
    printf("%g per second\n", stamp / diff_secs );
    exit(1);
}

/* main */
int main(int argc, char *argv[])
{
    static struct sigaction alarm_m;

    assert(argc > 1);
    //time_to_wait = atoi(argv[1]);
    time_to_wait=5;
    nb_cons = atoi(argv[1]);
    assert(nb_cons <= 100);
    printf("Testing during %d seconds with %d consumers\n",
           time_to_wait, nb_cons);

    /* setup alarm handler */
    alarm_m.sa_handler = on_alarm;
    sigfillset(&(alarm_m.sa_mask));
    sigaction(SIGALRM , &alarm_m, NULL);
    alarm( time_to_wait );

    gettimeofday( &start , NULL );
    while ( 1 )
        do_test();
}
Source: https://listman.redhat.com/archives/phil-list/2003-April/msg00074.html
The QScriptEngine class provides an environment for evaluating Qt Script code. More...
#include <QScriptEngine>
Inherits QObject.
This class was introduced in Qt 4.3.
The QScriptEngine class provides an environment for evaluating Qt Script code.
See the QtScript documentation for information about the Qt Script language, and how to get started with scripting your C++ application.
Use evaluate() to evaluate script code.
QScriptEngine myEngine; QScriptValue three = myEngine.evaluate("1 + 2");
evaluate() can throw a script exception (e.g. due to a syntax error); in that case, the return value is the value that was thrown (typically an Error object). You can check whether the evaluation caused an exception by calling hasUncaughtException(). In that case, you can call toString() on the error object to obtain an error message. The current uncaught exception is also available through uncaughtException().

Native (C++) functions can be wrapped so that they can be invoked from script code. Such functions must have the signature QScriptEngine::FunctionSignature. You may then pass the function as argument to newFunction(). Here is an example of a function that returns the sum of its first two arguments:
QScriptValue myAdd(QScriptContext *context, QScriptEngine *engine) { QScriptValue a = context->argument(0); QScriptValue b = context->argument(1); return QScriptValue(engine, a.toNumber() + b.toNumber()); }
To expose this function to script code, you can set it as a property of the Global Object:
QScriptValue fun = myEngine.newFunction(myAdd); myEngine.globalObject().setProperty("myAdd", fun);
Once this is done, script code can call your function in the exact same manner as a "normal" script function:
QScriptValue result = myEngine.evaluate("myAdd(myNumber, 1)");
You can define shared script functionality for a custom C++ type by creating your own default prototype object and setting it with setDefaultPrototype(); see also QScriptable.
Use fromScriptValue() to cast from a QScriptValue to another type, and toScriptValue() to create a QScriptValue from another value. You can specify how the conversion of C++ types is to be performed with qScriptRegisterMetaType() and qScriptRegisterSequenceMetaType().
See also QScriptValue and QScriptContext.
FunctionSignature is the function signature QScriptValue f(QScriptContext *, QScriptEngine *). A function with such a signature can be passed to QScriptEngine::newFunction() to wrap the function.

globalObject() returns this engine's Global Object.
The Global Object contains the built-in objects that are part of ECMA-262, such as Math, Date and String.
Returns true if the last script evaluation (whether direct or indirect) resulted in an uncaught exception; otherwise returns false.
The exception state is cleared every time a script function call is done in the engine, or when evaluate() is called.
See also uncaughtException(), uncaughtExceptionLineNumber(), and uncaughtExceptionBacktrace().

See also QScriptExtensionPlugin and Creating QtScript Extensions.
This is an overloaded member function, provided for convenience.
Creates a QtScript object of class Object.
The prototype of the created object will be the Object prototype object.
See also newArray() and QScriptValue::setProperty().
Creates a QtScript object that represents a QObject class, using the given metaObject and constructor ctor.
Enums of metaObject are available as properties of the created QScriptValue. When the class is called as a function, ctor will be called to create a new instance of the class.
See also newQObject().
Creates a QtScript object of class RegExp with the given regexp.
See also QScriptValue::toRegExp().
This is an overloaded member function, provided for convenience.
Creates a QtScript object of class RegExp with the given pattern and flags.
Warning: This function is not available with MSVC 6. Use qScriptValueFromQMetaObject() instead if you need to support that version of the compiler.
Sets the default prototype of the C++ type identified by metaTypeId to prototype.
See also defaultPrototype(), qScriptRegisterMetaType(), QScriptable, and Default Prototypes Example..
See also processEventsInterval().
Warning: This function is not available with MSVC 6. Use qScriptValueFromValue() instead if you need to support that version of the compiler.
See also fromScriptValue() and qScriptRegisterMetaType().
If you only want to define a common script interface for values of type T, and don't care how those values are represented, use setDefaultPrototype() instead of registering conversion functions with qScriptRegisterMetaType(). A registered toScriptValue() function typically builds an object and copies the value's fields into its properties:

QScriptValue toScriptValue(QScriptEngine *engine, const MyStruct &s)
{
    QScriptValue obj = engine->newObject();
    obj.setProperty("x", QScriptValue(engine, s.x));
    obj.setProperty("y", QScriptValue(engine, s.y));
    return obj;
}
https://doc.qt.io/archives/qtopia4.3/qscriptengine.html
void loop()
{
//Read analog inputs on IO1, pins 0-8, scaled to 0-300
for (int i = 0; i < 9; i++) {
  pots[i] = map(muxShield.analogReadMS(1, i), 0, 1023, 0, 300); //IO1, pin i
}
//Print the results as one comma-separated line
for (int i = 0; i < 8; i++) {
  Serial.print(pots[i]); Serial.print(",");
}
Serial.println(pots[8]);
// Give Unity time to read lines
delay(27);
}
#include "simpletools.h" // Include simple tools
#include "fdserial.h"
fdserial *unity;
fdserial *disp;
int main()
{
// (RX, TX, 0, Baud)
unity = fdserial_open(1, 0, 0, 57600);
disp = fdserial_open(31,30,0,57600);
char seps[] = " ,\t\n";
char store[8];
writeChar(disp, CLS);
dprint(disp, "Click this terminal, \n");
dprint(disp, "and type on keyboard...\n\n");
char c;
while(1)
{
c = fdserial_rxChar(unity);
//Cycle through string
for(int i = 0; i <8; i++){
//Test for start of input--- Result: Confirmed good
if(c == '!'){
i = 0;
//dprint(disp,"Start.\n");
}
store[i] = c;
//Test for end of input--- Result: Confirmed good
if(c == '#'){
// Reached the end, so print the results of storage
for(int j = 0; j < 8; j++){
dprint(disp, "%c", store[j]);
}
dprint(disp," END\n");
i = 8;
break;
}
}
}
}
It can be done on the prop in C. The ascii characters the arduino is sending need to be converted to integers or floats, and the result used for the math.
Life is unpredictable. Eat dessert first.
You can try this parser (quickly put together, but tested as working)... i.e. may not be the best 'C' code but provides a working example. You may need to test with real data input...
Note: The C tab on this website's input field provides proper 'code' formatting.
dgately
char *strtok(char *str, const char *delim)
Parameters
str − The contents of this string are modified and broken into smaller strings (tokens).
delim − This is the C string containing the delimiters. These may vary from one call to another.
You would still need to turn it into an integer.
A while loop, maybe: x = x*10 + ASCII byte - 48;
The strtok function cannot be used on constant strings
The identity of the delimiting character is lost
The strtok() function uses a static buffer while parsing, so it's not thread safe
The solution I chose seemed to give a more "insteructable" path for a new C user...
But, if you know what you are doing with strtok, and test for possible problems, by all means, use it!
dgately
It will change the token you are looking for to a zero, so the returned pointer will now point to a smaller chunk of text that is zero terminated.
This could work after strtok inserted the zero:
pnt = strtok(NULL, ",");
while (*pnt) x=x*10+*pnt++ -48;
Thus,
should be:
or:
Although, I do agree with Tony regarding the use of strtok if you are looking to add the 'string.h' lib to the code.
However, I would do something like:
// Declare a delimiter
const char s[4] = ",";
Then you could read in the first token as
char* token;
token = strtok(str,s);
if token is not 0, then read in the rest of the string as:
token = strtok(0, s);
You would only need to compare for the "#" to get the end of the input:
int ret;
ret = strcmp(token, "#");
Ex:
fdserial_rxFlush(ble);
Also, I like to check whether or not there is anything in the buffer before reading in the data:
Ex:
fdserial_rxPeek(ble)
This will not remove a character from the buffer like fdserial_rxChar does.
dgately
while (*s++ = *t++); // string copy until zero
me too!
Understood. It's just a matter of style. However, I have to correct myself since 'NULL' typically in C is defined as "((void *)0)" or 0 if '__cplusplus' is defined and is used with pointers. 'NUL' with a single 'L' would be equivalent to '\0'.
In your example you are looking for the end of a string not the end of the array.
If the array string had something like a '\0' in the middle of the string, the loop would stop before the end of the array. Ex: "This is \0 an array" will produce
"This is ".
To be on the safe side you should get the sizeof of the array and used that to parse through the array.
EX: Although this would give the size of the entire array.
Also, I believe you meant :
Yes, thanks for noticing that!
dgately
If it works with C# then why not just use that to run your code in the Unity Engine?
Apologies if this question is stupid, I am just starting out with Unity and am just a little curious to know about the game engine.
0,5,89,4,54,202,3,25,180
... or like this:
!0,5,89,4,54,202,3,25,180#
The latter, of course, is easier to deal with.
Since I find I can write a Spin program faster than the C compiler can compile anything I bash together in C -- and I've been working on a parser today so that the Propeller can connect to a show-control program called VenueMagic -- I'm submitting a Spin example, anyway. Being a C programmer, you should be able to convert this pretty easily as it doesn't rely on any special libraries or conversion functions. I used RealTerm to test because it allows me to build a string and then send it.
Hollywood, CA
It's Jon or JonnyMac -- please do not call me Jonny.
Well, I tried to do the conversion to C on my own. This code compiles but doesn't work (like the Spin version did). Perhaps you can fix it.
It took two hours and a double-shot of Jameson's whiskey, but I got it to work. I'm sure many of the C gurus will point out my ignorant flaws.
Hollywood, CA
It's Jon or JonnyMac -- please do not call me Jonny.
http://forums.parallax.com/discussion/167761/read-in-string-from-serial-and-parse-in-c
Not too long ago I read a question from someone who wanted to know how to make the Accordion control display more than one child at time. The response said to use a VBox.
This question and the response intrigued me, so I set out to create a control which would allow any number of its children to be open simultaneously. After some tinkering I came up with the Stack components: VStack and HStack, show in the following figures.
The VStack component looks very much like Accordion, except zero, one, or any number of its children can be visible at once. HStack is the horizontal equivalent.
These might not be the most useful components, but there are some interesting things you can use in your own components. You can download the source (a zip file Flex Builder 2 project) here: Stack Component Source.
How It’s Made
I created a base component called StackBase which extends UIComponent. StackBase has a single child: a Box container; VStack extends StackBase and makes the child a VBox and HStack extends StackBase and makes the child an HBox.
My goal was to capitalize on the layout management already built into the Box classes and to use "off-the-shelf" parts when possible, making the component easier write, understand, and maintain.
The DefaultProperty
I wanted the Stack components to behave just like any other Flex navigator component, such as Accordion. Meaning, I wanted to be able to use it like this:
<adobe:VStack ...>
    <mx:Canvas ...>
        <!-- canvas children here -->
    </mx:Canvas>
    <mx:Canvas ...>
        <!-- canvas children here -->
    </mx:Canvas>
    <!-- etc. -->
</adobe:VStack>
I used the [DefaultProperty] metadata tag to tell the Flex compiler which property should be used if none is specified. Now it might not look obvious, but those Canvas containers within the VStack definition do belong to a property – contents – of the StackBase class. Open StackBase.as and you’ll see what I mean:
[DefaultProperty("contents")]
public class StackBase extends UIComponent …
and further down you’ll see the contents property defined:
[ArrayElementType("mx.core.Container")]
public function set contents( value:Array ) : void …
The contents property is an Array and for the Stack components, it is an Array of mx.core.Container classes (or any class that extends Container, such as Canvas). Just try and put something other than a Container in stack and you’ll get the same error as if you were using a ViewStack or Accordion.
Creating the Content
The StackBase createChildren() method creates the Box. The contents – children of the Box – cannot be created at this time in the component life cycle because the contents property may not be set. A better place is in commitProperties().
In case you didn’t know, commitProperties() is called once all of the properties have been set and the component children created (or in response to invalidateProperties()). The children may not yet be visualized, but they are available to have their own properties set. This means the contents Array is set with the components to go into the Box and the Box is ready to accept children.
The process of creating the content actually involves creating a couple of additional components. The containers given in the MXML file are not made direct children of the Box. Instead, a Canvas is created for each one; it is given the content container plus a control to open and close it. This control is a StackHeaderButton, which is part of this package.
When the content has been created you have:
Box
    Canvas
        StackHeaderButton
        container-specified content
    Canvas
        StackHeaderButton
        container-specified content
    etc.
Sizing
The trick to this component is sizing the content properly. Suppose for example you have a VStack with 3 children. If all of them are visible, how much room do they take up? Suppose only one of them is visible?
The solution is that all of the open children evenly divide the available space, minus the space taken by the StackHeaderButtons. Using the example, when all three are visible they each use approximately 1/3 of the space. If one is closed, the remaining two each occupy 1/2 of the space. You are welcome to take this component and modify it to do something differently.
Interactivity
The StackHeaderButtons not only visually separate the content but they also open and close the content. When you click on a header, the child slides either open or closed and the remaining children have their sizes adjusted. This should appear fairly smooth because I used a Resize effect to do this.
My algorithm goes something like this:
First count the number of children in the content which are closed ("collapsed" in the code). Then take the space occupied by the Stack control and divide it evenly among the open children, subtracting the space taken by the StackHeaderButtons, of course.
As this is being done, a Resize effect is created for each child. After all, when one child closes, the open ones increase their size.
All of the Resize effects are placed into a Parallel effect so that all of the adjustments are done at once. When the calculations have completed the Parallel effect is played and the contents adjust to their new sizes.
VStack vs. HStack
I had originally planned this to be a vertical control, but after I saw how it all came together I decided to add in the HStack control. I changed StackBase to use some protected functions for determining the size and position of its child content and the HStack component overrides these functions and returns width instead of height or x instead of y.
Mostly the controls are the same and StackBase takes care of the majority of the work.
Skins
The StackHeaderButton uses a skin class for its appearance. A skin is a class whose sole job is to provide the visualization for a component. I started off using a simple Button for the header, but decided it was easier to rotate a Label than the label of a Button. You can change the appearance of the headers just by writing your own skin classes.
Summary
Even if you don’t find these controls useful themselves, use them as a guide to building your own components. I have to admit that using Box, Canvas, and Resize made the job easier, but if you want to write the whole thing from scratch, go for it. Just pay attention to the Flex framework component life cycle.
Some things of note in this component are:
[DefaultProperty] and [ArrayElementType] meta tags. These tags make it easier for people to use the component.
Resize and Parallel effects. You can make a whole lot happen all at once and make the control appealing to use.
Skins. Think about how your component is visualized and then write those separately as skins. This will make customizing its appearance easier and it separates the function of the component from its presentation.
Embedded Fonts. For the HStack to look correct, the labels on the StackHeaderButtons are rotated 90 degrees. They would be invisible if you didn’t use an embedded font for them. To make things speedy, the Flash Player uses system fonts for most of the text. But system fonts do not have vector paths (outlines) that describe the letters, so they cannot be rotated. By embedding a font you can rotate, skew, and scale text controls easily.
http://blogs.adobe.com/peterent/2007/04/03/the_stack_compo/
On 24.08.2021 12:50, Anthony PERARD wrote:
> Currently, the xen/Makefile is re-parsed several times: once to start
> the build process, and several more time with Rules.mk including it.
> This makes it difficult to reason with a Makefile used for several
> purpose, and it actually slow down the build process.
I'm struggling some with what you want to express here. What does
"to reason" refer to?
> So this patch introduce "build.mk" which Rules.mk will use when
> present instead of the "Makefile" of a directory. (Linux's Kbuild
> named that file "Kbuild".)
>
> We have a few targets to move to "build.mk", identified by them being
> built via "make -f Rules.mk" without changing directory.
>
> As for the main targets like "build", we can have them depend on
> their underscore-prefix targets like "_build" without having to use
> "Rules.mk" while still retaining the check for unsupported
> architecture. (Those main rules are changed to be single-colon as
> there should only be a single recipe for them.)
>
> With nearly everything needed to move to "build.mk" moved, there is a
> single dependency left from "Rules.mk": $(TARGET), which is moved to
> the main Makefile.
I'm having trouble identifying what this describes. Searching for
$(TARGET) in the patch doesn't yield any obvious match. Thinking
about it, do you perhaps mean the setting of that variable? Is
moving that guaranteed to not leave the variable undefined? Or in
other words is there no scenario at all where xen/Makefile might
get bypassed? (Aiui building an individual .o, .i, or .s would
continue to be fine, but it feels like something along these lines
might get broken.)
> @@ -279,11 +281,13 @@ export CFLAGS_UBSAN
>
> endif # need-config
>
> -.PHONY: build install uninstall clean distclean MAP
> -build install uninstall debug clean distclean MAP::
> +main-targets := build install uninstall clean distclean MAP
> +.PHONY: $(main-targets)
> ifneq ($(XEN_TARGET_ARCH),x86_32)
> - $(MAKE) -f Rules.mk _$@
> +$(main-targets): %: _%
> + @:
Isn't the conventional way to express "no commands" via
$(main-targets): %: _% ;
?
> --- a/xen/Rules.mk
> +++ b/xen/Rules.mk
> @@ -9,8 +9,6 @@ include $(XEN_ROOT)/Config.mk
> include $(BASEDIR)/scripts/Kbuild.include
>
>
> -TARGET := $(BASEDIR)/xen
> -
> # Note that link order matters!
Could I talk you into removing yet another blank line at this occasion?
> @@ -36,7 +34,9 @@ SPECIAL_DATA_SECTIONS := rodata $(foreach a,1 2 4 8 16, \
> rodata.cst$(a)) \
> $(foreach r,rel rel.ro,data.$(r).local)
>
> -include Makefile
> +# The filename build.mk has precedence over Makefile
> +mk-dir := .
What's the goal of this variable? All I can spot for now it that ...
> +include $(if $(wildcard $(mk-dir)/build.mk),$(mk-dir)/build.mk,$(mk-dir)/Makefile)
... this is harder to read than
include $(if $(wildcard ./build.mk),./build.mk,./Makefile)
which could be further simplified to
include $(if $(wildcard build.mk),build.mk,Makefile)
and then maybe altered to
include $(firstword $(wildcard build.mk) Makefile)
> --- /dev/null
> +++ b/xen/build.mk
> @@ -0,0 +1,58 @@
> +quiet_cmd_banner = BANNER $@
> +define cmd_banner
> + if which figlet >/dev/null 2>&1 ; then \
> + echo " Xen $(XEN_FULLVERSION)" | figlet -f $< > $@.tmp; \
> + else \
> + echo " Xen $(XEN_FULLVERSION)" > $@.tmp; \
> + fi; \
> + mv -f $@.tmp $@
> +endef
> +
> +.banner: tools/xen.flf FORCE
> + $(call if_changed,banner)
> +targets += .banner
To make the end of the rule more easily recognizable, may I ask that
you either insert a blank line after the rule or that you move the +=
up immediately ahead of the rule.
https://lists.xenproject.org/archives/html/xen-devel/2021-10/msg00458.html
Can someone please help with this code? I am going to write a factorial function using a recursive definition. My professor told me that we are going to create three screens.

The first screen should be a MAIN MENU, with a RECURSION option above and an EXIT button below. The second screen should show the meaning of recursion, some background information, and the types of recursion, with a text box labeled "ENTER YOUR CHOICE:" plus EXIT and CLOSE buttons. The last screen shows the meaning and uses of recursion and its syntax; below that it asks "DO YOU WANT TO CONTINUE: YES OR NO". If yes, it prompts "ENTER NUMBER:" and lets you try again with another number; if no, it goes back to the main menu.

Please help with this project! Thank you!
public double factorialRecursion(int number) { if (number <= 1) return 1; else return number * factorialRecursion(number - 1); }
Check out more 3 ways to calculate factorial of a number - program for factorial using recursion, while and for loop in c#
Here is an example that calculates the factorial number using recursion
public class FactorialExample{ public static long factorial(int n){ if(n <= 1) return 1; else return n * factorial(n - 1); } public static void main(String [] args){ int num=5; System.out.println(factorial(num)); } }
http://www.roseindia.net/answers/viewqa/Java-Beginners/26895-factorial-using-recursive.html
Oct 26th, 2015
With our User Retention Reports you can dive into your user loyalty. Depending on the type of application you monitor, you can select daily, weekly or monthly granularity. You can also go back in history and verify to what extent you have managed to increase user loyalty. You can easily compare cohorts and e.g. measure how well your specific marketing campaigns work. With this data you can better understand how to drive your business further.
Sept 14, 2015.
July 6, 2015
Being committed to our cross-platform support we have now released official versions of Telerik Analytics monitors for NativeScript and Xamarin. The API is the uniform API offered across all platforms. With these releases we now support 15 different platforms!
June 28, 2015
We have created a new view named "Exception Overview":
The view presents an analysis of all exceptions you have ever received. Based on this we identify those exceptions that most likely need attention from your developers. You can immediately see e.g. those exceptions that affect most of your end-users. Or which exception types occur the most.
May 29, 2015
You can now share your dashboards with people that don't have a login to Telerik Analytics! You can e.g. show interesting KPIs on your intranet, or you can share data with your resellers. You can also share your success with your clients. Simply create a sharing URL and place this on your website, intranet or email it to your peers.
May 9, 2015
Jan. 19, 2015
Aug. 27, 2014
May 25, 2014
Mar. 17, 2014
Feb. 18, 2014
Feb. 11, 2014
Jan. 25, 2014
EQATEC Application Analytics has been rebranded to Telerik Analytics since it is now closely integrated with many of the Telerik products. No changes have been made to namespaces to avoid breaking your applications.
Telerik Analytics is now part of the Telerik Platform - our new end-to-end mobile development platform for creating web, hybrid and native apps, which we announced January 28th, 2014.
Post your feedback via the Analytics UserVoice portal or the Public forums
See the updates feed
Explore Telerik Platform
http://www.telerik.com/support/whats-new/analytics
I have a specific problem that I'm working out that involves a lot of XML processing. My set of requirements is perhaps a bit more involved than we should try to tackle for a first go at this. But I throw them out to start the discussion...
In my project, I use XML-encoded data in two distinct contexts: internal to the program, and externally. Internally, I'm typically working with a well-known and fixed data schema and don't want to incur the overhead of formal validation (a well-formed XML document is fine). The project additionally sources and sinks XML data from external sources. It would be nice to be able to validate these against a schema.
The boost::xml library should be flexible enough to allow for both non-validating and validating deserialization of XML data.
Currently I use the MIT/X-licensed non-validating [expat] XML Parser Toolkit written by James Clark to generate a series of callbacks that I use to build a directed graph representation of the document using the BGL. I'll expound further on this approach if it's not met with overwhelming resistance. It's super fast, and super flexible. Using BGL's visitor concept, I can convert the graph representation created by my expat callbacks directly into a structure containing STL containers. Because of the way I've implemented the visitor (and the mapping tables it uses), essentially I get light-weight schema validation without ever formally dealing with an external schema document (the schema is fixed when I create the maps passed to the visitor).
Currently I just skim the output structure off and free the graph when I'm done. However, keeping the object around and maintaining the data in a graph is an approach that should be carefully considered because it makes several other features I'd like to see in boost::xml quite easy to implement.
Some have suggested that Spirit, and not expat, be used. I haven't used Spirit yet. I'm concerned that Sprit will be much larger than expat. Personally, I don't see the problem with using a small, and proven, non-validating parser as a front end but am interested in hearing what other people have to say about it. (Note that I have my own ideas about how to validate against a schema that use BGL visitors and maps - this is why expat works for me - all I care about is that the XML document is well-formed and expat tells me this. Parenthetically, validation against a schema in my scheme requires that the maps fed to the BGL visitor be created from the schema document - Spirit is probably an excellent choice for parsing the schema document in order to create these maps).
That's enough for now. Hack up this page and add your comments. A boost::xml library would be very useful to me and many others I think.
When I think of an 'XML library', I think of much more than just parsing xml files. The DOM API in particular involves in-memory manipulation of a document tree, and the specs suggest a specifically optimized internal structure to make access efficient.
So what's really at stake here is a tree interface and implementation that can support DOM-like manipulation (node insertion, removal, xpath-based node lookup, etc., etc.). The parser is only a small part of it.
As XML and co. is quite a huge set of specs, I wouldn't dare to suggest to create yet another implementation. Rather, I'm suggesting that a C++-like API is built that can wrap existing implementations, such as libxml2 ()
-- Stefan Seefeld
- Chris
Hi Chris, I'm not argueing about the possibility to internally represent a dom tree using BGL. However, lots of tree manipulations can be highly optimized taking the semantical specifics of xml and related standards into account (xpath, xml namespaces, xinclude, xlink, etc.). I doubt you can get as efficient (speed and memory wise) with a generic graph library as you can get with a domain specific implementation such as libxml2.
[actually, it may be an interesting experience: the examples I include in my submission are fairly small. Could you rewrite them with a dom tree (manually) built using BGL ? Could you measure performance for things like xpath lookup or node insertion (respecting all the specs such as namespace adjustments etc.) ]
-- Stefan Seefeld
... lots of tree manipulations can be highly optimized taking the semantical specifics of xml and related standards into account (xpath, xml namespaces, xinclude, xlink, etc.)
Stefan, it seems to me that we're ultimately dealing with a tree of vertices that correspond to entities in the XML document. Remember that BGL is generic like the STL is generic. That is, once you compile the container it stores specific types of data in an extremely efficient manner (underlying storage is provided by STL containers). I'm having trouble imagining any traversal, edge/vertex insert/remove operation that might be required by any of the XML-related specifications that you cite that couldn't be handled with extreme efficiency and elegance by the BGL.
Could you rewrite them with a dom tree (manually) built using BGL?
Yes - I'm swamped right now trying to get a product demo running but will be able to do this early in June. No big deal to work up some simple examples that use expat to create the graph and visitor algorithms to operate on it. It will be a small amount of work to extract my current stuff from the context that it's in currently and package it as a little toy for us to play with. Note: all these navigation, indexing related XML specifications I think are fairly trivial to get going. I don't have a good story about how to do XSLT transforms yet though. Something to think about.
This is a great discussion. Keep it going.
- Chris
Hi,
We started to work on an XML library based on the C++ iostreams specification two and a half years ago, a little before we discovered the boost libraries. In the meantime, the library has come to a point where the main features of the base layer have been developed and simultaneously we have become enthusiastic users of the boost libraries. Slowly, we have come to the conclusion that we should submit the library to boost.
We do not think that the work can undergo a formal review yet but we feel it has reached a stage where we can reasonably expect comments, criticism and maybe help from other developers.
The library is called XiMoL (XML input/output) and is compliant with the C++/STL streams specification. It introduces a new type of streams (xiostream) that derives from wiostream and that is used for XML input/output. The library is divided in three parts.
A first part tackles character encoding. We rely on the iconv library from GNU for the raw functionality, which has been wrapped as a facet called codecvt. It might be possible that the GNU licence is unacceptable for boost and we could then change the underlying library, which would not affect the interface of the facet.
A second part of the library is dedicated to XML parsing. This part could have benefited from Spirit but it was written long before the parser generator appeared on our radar-screen.
Finally, the third part is the API of the library, which consists mainly in functions for stream parsing.
For those who are interested a CUJ article is due for publication and for those who cannot wait XiMoL is available as a CVS module on SourceForge.
We are looking forward to any suggestion and critics.
Florent Tournois and Cyril Godart.
Revision 2004.05.09 by People/ChrisRussell
In the interest of wrapping up a dangling thread: I started this discussion by proffering ideas about using James Clark's expat C library to drive the creation of a parse tree stored in a BGL graph container. I have recently started working with Spirit and see that this library has much to offer. I need to get further into Spirit before I decide just how bad my BGL-based XML parser solution is. At a minimum, expat is history in favor of a Spirit-based front-end for my current implementation.
I still believe that BGL can be usefully applied to this problem. But it's also likely that Spirit can handle the entire task as well. I'm not prepared to comment on which technique is preferable in terms of performance and code readability - I just haven't gotten that far with Spirit yet.
Parenthetically, my current expat / BGL-based XML parser implementation is part of a larger project destined for SourceForge in the hopefully not-too-distant future (see my profile for more info). Once this larger project is released, I will be able to refer directly to my implementation source in CVS and would be happy to debate the merits of various approaches to XML document parsing then.
- Chris
http://www.crystalclearsoftware.com/cgi-bin/boost_wiki/wiki.pl?BoostXMLDiscussion
This post shows how to obtain the list of sensors and how to use one of them (i.e. the barometer sensor). We want to create an app that shows the current pressure.
Using sensors in Android
When we develop an Android app that needs a specific sensor in order to run, we have two different choices:
- Specify the sensor in the AndroidManifest.xml
- Detect the sensor list and check if the one we are interested in is available
If we specify it in the AndroidManifest.xml, we simply need to add this line:
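The manifest line itself was lost in extraction. A declaration along these lines is the likely shape (the feature name android.hardware.sensor.barometer is the standard one, but treat the snippet as an assumption rather than the article's exact line):

```xml
<!-- Assumed snippet: declaring the barometer as a required feature means
     the Play Store filters the app out on devices without that sensor. -->
<uses-feature
    android:name="android.hardware.sensor.barometer"
    android:required="true" />
```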
and once we have selected ‘User feature’, we have:
Alternatively, we can retrieve the sensor list at runtime, which requires a bit of code:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    // Get the reference to the sensor manager
    sensorManager = (SensorManager) getSystemService(Service.SENSOR_SERVICE);
    // Get the list of sensors
    List<Sensor> sensorList = sensorManager.getSensorList(Sensor.TYPE_ALL);
    List<Map<String, String>> sensorData = new ArrayList<Map<String, String>>();
    for (Sensor sensor : sensorList) {
        Map<String, String> data = new HashMap<String, String>();
        data.put("name", sensor.getName());
        data.put("vendor", sensor.getVendor());
        sensorData.add(data);
    }
}
First we get the reference to the SensorManager, used to handle sensors, and then we get the sensor list. In this case we want all the sensors present in our smartphone, so we use Sensor.TYPE_ALL. If we want just one type, we can filter the list by passing the type of sensor we are looking for. For example, if we want all the barometer (pressure) sensors we can use:
List<Sensor> sensorList = sensorManager.getSensorList(Sensor.TYPE_PRESSURE);
Once we have the list we can simply show it using a ListView and a SimpleAdapter. The result (on my smartphone) is:
What now? We can get several pieces of information from the Sensor class: for example the vendor, the sensor resolution, and the min and max range. Keep in mind that the range can vary among different sensors. Once we have the list we can check whether the smartphone supports our sensor. Now that we have our sensors, we want to get information from them.
Sensor Events
To get information from a sensor there's a simple method: register a listener. First we have to select the sensor we are interested in and then register our listener. In our case, we are interested in the barometer sensor, so we have:
// Look for barometer sensor
SensorManager snsMgr = (SensorManager) getSystemService(Service.SENSOR_SERVICE);
Sensor pS = snsMgr.getDefaultSensor(Sensor.TYPE_PRESSURE);
snsMgr.registerListener(this, pS, SensorManager.SENSOR_DELAY_UI);
At line 4 we register our listener. Notice that the last parameter represents how fast we want to be notified when the value measured by the sensor changes. There are several possible values, but note that a notification rate that is too fast can have side effects on your app. To register a class as a listener we simply implement the SensorEventListener interface, for example:
public class PressActivity extends Activity implements SensorEventListener {
    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
    }
    @Override
    public void onSensorChanged(SensorEvent event) {
        float[] values = event.values;
        pressView.setText("" + values[0]);
    }
}
At line 3 we override a method that is called when the accuracy changes; this parameter represents the confidence level of the values we get from the sensor. The other (more interesting) method is onSensorChanged, which is called when the value changes. In this case we simply take the first value and show it in a TextView. The result is shown below:
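The value shown is the raw pressure in hPa. As a standalone aside (not from the original tutorial), such a reading can be converted to an approximate altitude with the international barometric formula; Android exposes the same conversion as SensorManager.getAltitude(). A minimal plain-Java sketch, with our own class and method names:

```java
// Sketch: pressure (hPa) to approximate altitude (metres) via the
// international barometric formula h = 44330 * (1 - (p / p0)^(1/5.255)).
public class Altitude {
    // Standard atmosphere pressure at sea level, in hPa.
    public static final float SEA_LEVEL_HPA = 1013.25f;

    public static float metres(float seaLevelHpa, float pressureHpa) {
        return 44330f * (1f - (float) Math.pow(pressureHpa / seaLevelHpa, 1.0 / 5.255));
    }

    public static void main(String[] args) {
        // At sea-level pressure the altitude is ~0 m.
        System.out.println(Altitude.metres(SEA_LEVEL_HPA, SEA_LEVEL_HPA));
        // A lower reading such as 900 hPa corresponds to roughly 1 km.
        System.out.println(Altitude.metres(SEA_LEVEL_HPA, 900f));
    }
}
```

In a real app you would feed event.values[0] into this helper (or call SensorManager.getAltitude directly).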
For example, a typical application can show the pressure trend to know whether the sun will shine or we will have clouds.
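As a sketch of that idea (a hypothetical helper, not part of the tutorial; the threshold and names are our own choices), a trend can be classified by comparing the oldest and newest readings in a window, since a sustained drop in pressure often precedes bad weather:

```java
import java.util.List;

// Sketch: classify the barometric trend over a window of readings (hPa).
public class PressureTrend {
    public static String classify(List<Float> readings, float thresholdHpa) {
        // Compare newest reading against the oldest one in the window.
        float delta = readings.get(readings.size() - 1) - readings.get(0);
        if (delta > thresholdHpa) return "rising";    // often fair weather
        if (delta < -thresholdHpa) return "falling";  // possible clouds/rain
        return "steady";
    }

    public static void main(String[] args) {
        System.out.println(classify(List.of(1013.0f, 1011.5f, 1009.0f), 1.0f));
    }
}
```

Each onSensorChanged call could append to the window, dropping readings older than a few hours.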
Source code available soon.
can i have the source code to get sensor data,
if u can it will be big help for me,
because im doing research project and i cant get sensor data properly if u can please send me those source code to my mail : harshagayan90@gmail.com
Thanks!!!
Hi, Could I have the source code? I am working on my senior project now.
Thanks
http://www.javacodegeeks.com/2013/09/android-sensor-tutorial-barometer-sensor.html/comment-page-1/
Editing Todo Items (8:57) with Jason Seifer
It's time to add the functionality to edit todo items! We can create them but we may want to change the content at some point. In this video we'll write the tests and code for editing todo items.
Code Snippets
Update URL Options:
def url_options
  { todo_list_id: params[:todo_list_id] }.merge(super)
end
That will include the todo_list_id param in every URL generated in the controller.
- 0:00
Okay.
- 0:00
So, when we last left off, we had done
- 0:02
our feature specs for to-do items and creating them.
- 0:07
Now, let's go ahead and do editing them as well.
- 0:11
So, let's go ahead and create a new file here.
- 0:14
Just gonna save it as edit spec.
- 0:18
I can grab a little bit of that right there,
- 0:22
paste that in and we'll say editing to do items.
- 0:28
Okay, looking good there. Now, in order to edit to do items let's go
- 0:32
ahead and create one before all these tests with another let statement.
- 0:39
And the let statement will actually grab the to
- 0:41
do list from line four and create it there.
- 0:45
So let's go ahead and save it, editing these is successful with valid content.
- 0:51
[BLANK_AUDIO]
- 0:56
[SOUND] So we'll visit our to do list,
- 1:01
like we did before. But this time, within to do items,
- 1:06
[BLANK_AUDIO]
- 1:13
that's where we'll click the edit link. [SOUND]
- 1:19
And then, we'll fill in content
- 1:26
with instead of milk, Lots of Milk.
- 1:34
And, we'll click the button save.
- 1:40
And then we'll look and make sure that the page has the content saved to-do
- 1:46
item. And then what we'll do is, we'll
- 1:51
reload the to-do item and make sure that the title is Lots of Milk.
- 1:56
[SOUND]
- 2:04
Okay.
- 2:08
Now, let's go ahead and run this and see what happens.
- 2:15
Okay, and it's saying it can't find the CSS to do items one.
- 2:21
Now, this makes sense, because we haven't updated out to do list item page yet.
- 2:26
So let's open up our view,
- 2:30
and we'll say. [SOUND]
- 2:35
We'll use that DOM ID helper that we had before.
- 2:39
That'll let us go within this particular to do item.
- 2:47
Okay, run that again and see what happens.
- 2:51
Oh, and it looks like we added an extra s on there.
- 2:56
Try that again and make sure that works.
- 2:59
Okay, unable to find link edit, that makes sense.
- 3:02
We didn't add the edit link in. Let's go ahead and do that.
- 3:08
So I'm going to move this down here. And I'm going to say, Link_to "Edit".
- 3:14
And this time we're going to say edit_todo_list_todo_item_path(todo_item).
- 3:21
Now this is going to fail, but I'm going
- 3:23
to show you something cool in just a second.
- 3:27
All right, saying no route matches all of this, saying todo_list
- 3:32
ID, and it looks like it's being sent in the todo_item.
- 3:36
So if we wanted to get this to work, we could send in todo_list and todo_item,
- 3:43
and that would take care of sending in the todo_list ID and the todo_item ID.
- 3:48
Okay.
- 3:48
So that gets us past that particular error,
- 3:51
but I'm going to show something really cool.
- 3:54
If we go into the controller which we have to do anyway.
- 3:59
So here is the create method.
- 4:01
If we make the edit method.
- 4:04
[SOUND]
- 4:11
Save that, and it'll say edit couldn't be found.
- 4:13
[SOUND]
- 4:21
Okay, we'll create that template. Run this one more time.
- 4:24
[BLANK_AUDIO]
- 4:29
Okay, so it's looking for the form, but let
- 4:31
me show you this cool little thing that we can
- 4:34
do, in order to not have to type the to
- 4:36
do list every single time while we're in a controller.
- 4:40
Or in a view.
- 4:41
If we create this URL options method.
- 4:44
Every single time it's looking for to do list ID.
- 4:47
We can give it the ID of the to do list because we already have it.
- 4:51
So, if we say
- 4:58
to do list ID is the params to do list ID. And we add in .merge super.
- 5:06
Every single time it generates a URL for us, that will be called.
- 5:10
So, if we go back and run this again, we should
- 5:14
see the same error message, which we'll fix in a minute.
- 5:18
And now we can go out, and remove that. The error message shouldn't change.
- 5:25
Okay,
- 5:25
and it doesn't.
- 5:26
Normally we would re-factor something like that later But, I
- 5:29
thought that's pretty cool and we wanna check it out now.
- 5:33
Okay so unable to find the field content.
- 5:37
So that means it's looking for a form here.
- 5:39
So let's go ahead and grab our new form and copy and paste it into our edit page.
- 5:46
Now let's go ahead and run it and see what happens.
- 5:50
So it says the first
- 5:50
argument in the form can't be nil or empty.
- 5:53
Well, that's because we haven't done anything in the edit action here.
- 5:56
So, let's go ahead and we can copy and paste the finding of the to do list.
- 6:03
And, this time, for edit, we're going to find it by params, by date.
- 6:10
Cause we're editing rather than creating a new one.
- 6:13
Run that again.
- 6:15
Okay, the action update could not be found, because we haven't made it yet.
- 6:22
So I'm just gonna go ahead and copy and paste it in here.
- 6:26
And what we're doing is finding the to-do
- 6:30
list, and finding the to-do list item And then,
- 6:33
if we are able to update the attributes
- 6:36
with the item params that we did down here.
- 6:39
We save it and redirect,
- 6:40
or else we render the edit action again.
- 6:43
So let's go ahead and run that and see what happens.
- 6:48
Oh, and it's saying undefined method title for to do item.
- 6:51
And that's in the edit spec on line 23.
- 6:55
And it's because we are looking for the content and not the title.
- 7:01
All right, that passes. Great.
- 7:05
Let's go ahead and write another test here.
- 7:08
Saying it's successful with valid content, and then let's go ahead and say.
- 7:13
That it fails or is unsuccessful with no content.
- 7:26
So we'll take that out of there. And then this time.
- 7:30
We expect the page to not have content saved to do list item.
- 7:35
We'll expect the page to have the content, content can't be blank.
- 7:42
And we'll reload the to do item, and make sure that it didn't change.
- 7:46
The content is still milk.
- 7:50
Run that again. Okay.
- 7:57
And we can do this one more time. It's unsuccessful with not enough content.
- 8:07
And we have already added these validations on to
- 8:10
do item, so they should still be making sense here.
- 8:14
In these tests
- 8:19
and it looks like we did not update
- 8:24
our failure message. Alright, run that again. Okay, that looks good. Let's
- 8:32
go ahead and commit. Looks like I forgot to save that file.
- 8:36
I'm just gonna add everything here.
- 8:45
Added the specs and code for editing to-do items.
- 8:50
Alright, well that all looks good.
- 8:53
In our next video we're gonna clean up a little bit of our code.
https://teamtreehouse.com/library/build-a-todo-list-application-with-rails-4/build-a-todo-list-application-with-rails-4/editing-todo-items
capnp-rpc-unix
See LICENSE.md for details.
Contents
Why does my connection stop working after 10 minutes?
How can I return multiple results?
Can I create multiple instances of an interface dynamically?
How can I debug reference counting problems?
How can I import a sturdy ref that I need to start my vat?
How can I release other resources when my service is released?
Is there an interactive version I can use for debugging?
Can I set up a direct 2-party connection over a pre-existing channel?
How can I use this with Mirage?
Overview
Cap'n Proto is a capability-based RPC system with bindings for many languages.
Some key features:
APIs are defined using a schema file, which is compiled to create bindings for different languages automatically.
Schemas can be upgraded in many ways without breaking backwards-compatibility.
Messages are built up and read in-place, making it very fast.
Messages can contain capability references, allowing the sender to share access to a service. Access control is handled automatically.
Messages can be pipelined. For example, you can ask one service where another one is, and then immediately start calling methods on it. The requests will be sent to the first service, which will either handle them (in the common case where the second service is in the same place) or forward them until you can establish a direct connection.
Messages are delivered in E-Order, which means that messages sent over a reference will arrive in the order in which they were sent, even if the path they take through the network gets optimised at some point.
This library should be used with the capnp-ocaml schema compiler, which generates bindings from schema files.
Status
RPC Level 2 is complete, with encryption and authentication using TLS and support for persistence.
The library has unit tests and AFL fuzz tests that cover most of the core logic.
It is used as the RPC system in ocaml-ci.
The default network provided supports TCP and Unix-domain sockets, both with or without TLS.
For two-party networking, you can provide any bi-directional byte stream (satisfying the Mirage flow signature)
to the library to create a connection.
You can also define your own network types.
Level 3 support is not implemented yet, so if host Alice has connections to hosts Bob and Carol and passes an object hosted at Bob to Carol, the resulting messages between Carol and Bob will be routed via Alice.
Until that is implemented, Carol can ask Bob for a persistent reference (sturdy ref) and then connect directly to that.
Installing
To install, you will need a platform with the capnproto package available (e.g. Debian >= 9). Then:
opam depext -i capnp-rpc-unix
Structure of the library
The code is split into several packages:
capnp-rpc contains the logic of the Cap'n Proto RPC protocol, but does not depend on any particular serialisation. The tests in the test directory test the logic using a simple representation where messages are OCaml data-structures (defined in capnp-rpc/message_types.ml).
capnp-rpc-lwt instantiates the capnp-rpc functor using the Cap'n Proto serialisation for messages and Lwt for concurrency.
capnp-rpc-net adds networking support, including TLS.
capnp-rpc-unix adds helper functions for parsing command-line arguments and setting up connections over Unix sockets. The tests in test-lwt test this by sending Cap'n Proto messages over a Unix-domain socket.
capnp-rpc-mirage is an alternative to -unix that works with Mirage unikernels.
Libraries that consume or provide Cap'n Proto services should normally depend only on capnp-rpc-lwt, since they shouldn't care whether the services they use are local or accessed over some kind of network. Applications will normally want to use capnp-rpc-net and, in most cases, capnp-rpc-unix.
Tutorial
This tutorial creates a simple echo service and then extends it.
It shows how to use most of the features of the library, including defining services, using encryption and authentication over network links, and saving service state to disk.
A basic echo service
Start by writing a Cap'n Proto schema file.
For example, here is a very simple echo service:
interface Echo {
  ping @0 (msg :Text) -> (reply :Text);
}
This defines the Echo interface as having a single method called ping, which takes a struct containing a text field called msg and returns a struct containing another text field called reply.
Save this as echo_api.capnp and compile it using capnp:
$ capnp compile echo_api.capnp -o ocaml
echo_api.capnp:1:1: error: File does not declare an ID. I've generated one for you. Add this line to your file: @0xb287252b6cbed46e;
Every interface needs a globally unique ID.
If you don't have one, capnp will pick one for you, as shown above.
Add the line to the start of the file to get:
@0xb287252b6cbed46e;

interface Echo {
  ping @0 (msg :Text) -> (reply :Text);
}
Now it can be compiled:
$ capnp compile echo_api.capnp -o ocaml
echo_api.capnp --> echo_api.mli echo_api.ml
The next step is to implement a client and server (in a new echo.ml file) using the generated Echo_api OCaml module.
For the server, you should inherit from the generated Api.Service.Echo.service class:
module Api = Echo_api.MakeRPC(Capnp_rpc_lwt)

open Lwt.Infix
open Capnp_rpc_lwt

let local =
  let module Echo = Api.Service.Echo in
  Echo.local @@ object
    inherit Echo.service

    method ping_impl params release_param_caps =
      let open Echo.Ping in
      let msg = Params.msg_get params in
      release_param_caps ();
      let response, results = Service.Response.create Results.init_pointer in
      Results.reply_set results ("echo:" ^ msg);
      Service.return response
  end
The first line (module Api) instantiates the generated code to use this library's RPC implementation.
The service object must provide one OCaml method for each method defined in the schema file, with _impl on the end of each one.
There's a bit of ugly boilerplate here, but it's quite simple:
The Api.Service.Echo.Ping module defines the server-side API for the ping method.
Ping.Params is a reader for the parameters.
Ping.Results is a builder for the results.
msg is the string value of the msg field.
release_param_caps releases any capabilities passed in the parameters. In this case there aren't any, but remember that a client using some future version of this protocol might pass some optional capabilities, and so you should always free them anyway.
Service.Response.create Results.init_pointer creates a new response message, using Ping.Results.init_pointer to initialise the payload contents. response is the complete message to be sent back, and results is the data part of it.
Service.return returns the results immediately (like Lwt.return).
The client implementation is similar, but uses Api.Client instead of Api.Service.
Here, we have a builder for the parameters and a reader for the results.
Api.Client.Echo.Ping.method_id is a globally unique identifier for the ping method.
module Echo = Api.Client.Echo

let ping t msg =
  let open Echo.Ping in
  let request, params = Capability.Request.create Params.init_pointer in
  Params.msg_set params msg;
  Capability.call_for_value_exn t method_id request >|= Results.reply_get
Capability.call_for_value_exn sends the request message to the service and waits for the response to arrive. If the response is an error, it raises an exception.
Results.reply_get extracts the reply field of the result.
We don't need to release the capabilities of the results, as call_for_value_exn does that automatically.
We'll see how to handle capabilities later.
With the boilerplate out of the way, we can now write a main.ml to test it:
open Lwt.Infix

let () =
  Logs.set_level (Some Logs.Warning);
  Logs.set_reporter (Logs_fmt.reporter ())

let () =
  Lwt_main.run begin
    let service = Echo.local in
    Echo.ping service "foo" >>= fun reply ->
    Fmt.pr "Got reply %S@." reply;
    Lwt.return_unit
  end
Here's a suitable dune file to compile the schema file and then the generated OCaml files (which you can now delete from your source directory):
(executable
 (name main)
 (libraries lwt.unix capnp-rpc-lwt logs.fmt)
 (flags (:standard -w -53-55)))

(rule
 (targets echo_api.ml echo_api.mli)
 (deps echo_api.capnp)
 (action (run capnp compile -o %{bin:capnpc-ocaml} %{deps})))
The service is now usable:
$ opam depext -i capnp-rpc-lwt
$ dune exec ./main.exe
Got reply "echo:foo"
This isn't very exciting, so let's add some capabilities to the protocol...
Passing capabilities
@0xb287252b6cbed46e;

interface Callback {
  log @0 (msg :Text) -> ();
}

interface Echo {
  ping      @0 (msg :Text) -> (reply :Text);
  heartbeat @1 (msg :Text, callback :Callback) -> ();
}
This version of the protocol adds a heartbeat method.
Instead of returning the text directly, it will send it to a callback at regular intervals.
The new heartbeat_impl method looks like this:
method heartbeat_impl params release_params =
  let open Echo.Heartbeat in
  let msg = Params.msg_get params in
  let callback = Params.callback_get params in
  release_params ();
  match callback with
  | None -> Service.fail "No callback parameter!"
  | Some callback ->
    Service.return_lwt @@ fun () ->
    Capability.with_ref callback (notify ~msg)
Note that all parameters in Cap'n Proto are optional, so we have to check for callback not being set (data parameters such as msg get a default value from the schema, which is "" for strings if not set explicitly).
Service.return_lwt fn runs fn () and replies to the heartbeat call when it finishes. Here, the whole of the rest of the method is the argument to return_lwt, which is a common pattern.
notify callback msg just sends a few messages to callback in a loop, and then releases it:
let (>>!=) = Lwt_result.bind  (* Return errors *)

let notify callback ~msg =
  let rec loop = function
    | 0 -> Lwt.return @@ Ok (Service.Response.create_empty ())
    | i ->
      Callback.log callback msg >>!= fun () ->
      Lwt_unix.sleep 1.0 >>= fun () ->
      loop (i - 1)
  in
  loop 3
Exercise: create a Callback submodule in echo.ml and implement the client-side Callback.log function (hint: it's very similar to ping, but use Capability.call_for_unit because we don't care about the value of the result and we want to handle errors manually).
To write the client for Echo.heartbeat, we take a user-provided callback object and put it into the request:
let heartbeat t msg callback =
  let open Echo.Heartbeat in
  let request, params = Capability.Request.create Params.init_pointer in
  Params.msg_set params msg;
  Params.callback_set params (Some callback);
  Capability.call_for_unit_exn t method_id request
Capability.call_for_unit_exn is a convenience wrapper around Capability.call_for_value_exn that discards the result.
main.ml, we can now wrap a regular OCaml function as the callback: () = Lwt_main.run begin let service = Echo.local in run_client service end
Step 1: The client creates the callback.
Step 2: The client calls the heartbeat method, passing the callback as an argument.
Step 3: The service receives the callback and calls the log method on it.
Exercise: implement Callback.local fn (hint: it's similar to the original ping service, but pass the message to fn and return with Service.return_empty ()).
And testing it should give (three times, at one second intervals):
$ ./main
Callback got "foo"
Callback got "foo"
Callback got "foo"
Note that the client gives the echo service permission to call its callback service by sending a message containing the callback to the service.
No other access control updates are needed.
Note also a design choice here in the API: we could have made the Echo.heartbeat function take an OCaml callback and wrap it, but instead we chose to take a service and make main.ml do the wrapping.
The advantage of doing it this way is that main.ml may one day want to pass a remote callback, as we'll see later.
This still isn't very exciting, because we just stored an OCaml object pointer in a message and then pulled it out again.
However, we can use the same code with the echo client and service in separate processes, communicating over the network...
Networking
Let's put a network connection between the client and the server.
Here's the new main.ml (the top half is the same as before):

let secret_key = `Ephemeral
let listen_address = `TCP ("127.0.0.1", 7000)

let start_server () =
  let config = Capnp_rpc_unix.Vat_config.create ~secret_key listen_address in
  let service_id = Capnp_rpc_unix.Vat_config.derived_id config "main" in
  let restore = Capnp_rpc_net.Restorer.single service_id Echo.local in
  Capnp_rpc_unix.serve config ~restore >|= fun vat ->
  Capnp_rpc_unix.Vat.sturdy_uri vat service_id

let () =
  Lwt_main.run begin
    start_server () >>= fun uri ->
    Fmt.pr "Connecting to echo service at: %a@." Uri.pp_hum uri;
    let client_vat = Capnp_rpc_unix.client_only_vat () in
    let sr = Capnp_rpc_unix.Vat.import_exn client_vat uri in
    Sturdy_ref.with_cap_exn sr run_client
  end
You'll need to edit your dune file to add a dependency on capnp-rpc-unix in the (libraries ...) line, and also:
$ opam depext -i capnp-rpc-unix
Running this will give something like:
$ dune exec ./main.exe Connecting to echo service at: capnp://sha-256:3Tj5y5Q2qpqN3Sbh0GRPxgORZw98_NtrU2nLI0-Tn6g@127.0.0.1:7000/eBIndzZyoVDxaJdZ8uh_xBx5V1lfXWTJCDX-qEkgNZ4 Callback got "foo" Callback got "foo" Callback got "foo"
Once the server vat is running, we get a "sturdy ref" for the echo service, which is displayed as a "capnp://" URL.
The URL contains several pieces of information:
The sha-256:3Tj5y5Q2qpqN3Sbh0GRPxgORZw98_NtrU2nLI0-Tn6g part is the fingerprint of the server's public key. When the client connects, it uses this to verify that it is connected to the right server (not an imposter). Therefore, a Cap'n Proto vat does not need to be certified by a CA (and cannot be compromised by a rogue CA).
127.0.0.1:7000 is the address to which clients will try to connect to reach the server vat.
eBIndzZyoVDxaJdZ8uh_xBx5V1lfXWTJCDX-qEkgNZ4 is the (base64-encoded) service ID. This is a secret that both identifies the service to use within the vat, and also grants access to it.
The server side
The let secret_key = `Ephemeral line causes a new server key to be generated each time the program runs, so if you run it again you'll see a different capnp URL.
For a real system you'll want to save the key so that the server's identity doesn't change when it is restarted.
You can use let secret_key = `File "secret-key.pem" for that. Then the file secret-key.pem will be created automatically the first time you start the service, and reused on future runs.
It is also possible to disable the use of encryption using Vat_config.create ~serve_tls:false ....
That might be useful if you need to interoperate with a client that doesn't support TLS.
listen_address tells the server where to listen for incoming connections.
You can use `Unix path for a Unix-domain socket at path, or `TCP (host, port) to accept connections over TCP.
For TCP, you might want to listen on one address but advertise a different one, e.g.
let listen_address = `TCP ("0.0.0.0", 7000)      (* Listen on all interfaces *)
let public_address = `TCP ("192.168.1.3", 7000)  (* Tell clients to connect here *)

let start_server () =
  let config = Capnp_rpc_unix.Vat_config.create ~secret_key ~public_address listen_address in
In start_server:

let service_id = Capnp_rpc_unix.Vat_config.derived_id config "main" creates the secret ID that grants access to the service. derived_id generates the ID deterministically from the secret key and the name, which means the ID will be stable as long as the server's key doesn't change. The name used ("main" here) isn't important - it just needs to be unique.
let restore = Restorer.single service_id Echo.local configures a simple "restorer" that answers requests for service_id with our Echo.local service.
Capnp_rpc_unix.serve config ~restore creates the service vat using the previous configuration items and starts it listening for incoming connections.
Capnp_rpc_unix.Vat.sturdy_uri vat service_id returns a "capnp://" URI for the given service within the vat.
The client side
After starting the server and getting the sturdy URI, we create a client vat and connect to the sturdy ref.
The result is a proxy to the remote service via the network that can be used in
exactly the same way as the direct reference we used before.
Separate processes
The example above runs the client and server in a single process.
To run them in separate processes we just need to split main.ml into separate files and add some command-line parsing to let the user pass the URL.
Edit the dune file to build a client and server:
(executables
 (names client server)
 (libraries lwt.unix capnp-rpc-lwt logs.fmt capnp-rpc-unix)
 (flags (:standard -w -53-55)))

(rule
 (targets echo_api.ml echo_api.mli)
 (deps echo_api.capnp)
 (action (run capnp compile -o %{bin:capnpc-ocaml} %{deps})))
Here's a suitable server.ml:
open Lwt.Infix
open Capnp_rpc_net

let () =
  Logs.set_level (Some Logs.Warning);
  Logs.set_reporter (Logs_fmt.reporter ())

let cap_file = "echo.cap"

let serve config =
  Lwt_main.run begin
    let service_id = Capnp_rpc_unix.Vat_config.derived_id config "main" in
    let restore = Restorer.single service_id Echo.local in
    Capnp_rpc_unix.serve config ~restore >>= fun vat ->
    match Capnp_rpc_unix.Cap_file.save_service vat service_id cap_file with
    | Error `Msg m -> failwith m
    | Ok () ->
      Fmt.pr "Server running. Connect using %S.@." cap_file;
      fst @@ Lwt.wait ()  (* Wait forever *)
  end

open Cmdliner

let serve_cmd =
  Term.(const serve $ Capnp_rpc_unix.Vat_config.cmd),
  let doc = "run the server" in
  Term.info "serve" ~doc

let () = Term.eval serve_cmd |> Term.exit
The cmdliner term Capnp_rpc_unix.Vat_config.cmd provides an easy way to get a suitable Vat_config based on command-line arguments provided by the user.
And here's the corresponding client.ml:

let connect uri =
  Lwt_main.run begin
    let client_vat = Capnp_rpc_unix.client_only_vat () in
    let sr = Capnp_rpc_unix.Vat.import_exn client_vat uri in
    Capnp_rpc_unix.with_cap_exn sr run_client
  end

open Cmdliner

let connect_addr =
  let i = Arg.info [] ~docv:"ADDR" ~doc:"Address of server (capnp://...)" in
  Arg.(required @@ pos 0 (some Capnp_rpc_unix.sturdy_uri) None i)

let connect_cmd =
  let doc = "run the client" in
  Term.(const connect $ connect_addr), Term.info "connect" ~doc

let () = Term.eval connect_cmd |> Term.exit
To test, start the server running:
$ dune exec -- ./server.exe \
    --capnp-secret-key-file key.pem \
    --capnp-listen-address tcp:localhost:7000
Server running. Connect using "echo.cap".
With the server still running in another window, run the client using the echo.cap file generated by the server:
$ dune exec ./client.exe echo.cap
Callback got "foo"
Callback got "foo"
Callback got "foo"
Note that we're using Capnp_rpc_unix.with_cap_exn here instead of Sturdy_ref.with_cap_exn.
It's almost the same, except that it displays a suitable progress indicator if the connection takes too long.
Pipelining
Let's say the server also offers a logging service, which the client can get from the main echo service:
interface Echo {
  ping      @0 (msg :Text) -> (reply :Text);
  heartbeat @1 (msg :Text, callback :Callback) -> ();
  getLogger @2 () -> (callback :Callback);
}
The implementation of the new method in the service is simple - we export the callback in the response in the same way we previously exported the client's callback in the request:
method get_logger_impl _ release_params =
  let open Echo.GetLogger in
  release_params ();
  let response, results = Service.Response.create Results.init_pointer in
  Results.callback_set results (Some service_logger);
  Service.return response
Exercise: create a service_logger that prints out whatever it gets (hint: use Callback.local).
The client side is more interesting:
let get_logger t =
  let open Echo.GetLogger in
  let request = Capability.Request.create_no_args () in
  Capability.call_for_caps t method_id request Results.callback_get_pipelined
We could have used call_and_wait here (which is similar to call_for_value but doesn't automatically discard any capabilities in the result).
However, that would mean waiting for the response to be sent back to us over the network before we could use it.
Instead, we use callback_get_pipelined to get a promise for the capability from the promise of the getLogger call's result.
Note: the last argument to call_for_caps is a function for extracting the capabilities from the promised result.
In the common case where you just want one and it's in the root result struct, you can just pass the accessor directly, as shown.
Doing it this way allows call_for_caps to release any unused capabilities in the result automatically for us.
We can test it as follows:
let run_client service =
  let logger = Echo.get_logger service in
  Echo.Callback.log logger "Message from client" >|= function
  | Ok () -> ()
  | Error (`Capnp err) -> Fmt.epr "Server's logger failed: %a" Capnp_rpc.Error.pp err
This should print (in the server's output) something like:
Service logger: Message from client
In this case, we didn't wait for the
getLogger call to return before using the logger.
The RPC library pipelined the
log call directly to the promised logger from its previous question.
On the wire, the messages are sent together, and look like:
What is your logger?
Please call the object returned in answer to my previous question (1).
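The queue-until-resolved behaviour behind pipelining can be sketched in a few lines of plain OCaml. All names below are invented for illustration; the real mechanism lives inside the CapTP layer of the library:

```ocaml
(* A stdlib-only sketch of the idea behind pipelining: messages sent to an
   unresolved promise are queued, and flushed once the promise resolves. *)
type target = string -> unit   (* a "capability" that accepts log messages *)

type promise = {
  mutable state : [ `Pending of string list | `Resolved of target ];
}

let make_promise () = { state = `Pending [] }

(* Sending to a promise either queues the message or forwards it. *)
let send p msg =
  match p.state with
  | `Pending q -> p.state <- `Pending (msg :: q)
  | `Resolved t -> t msg

(* Resolving flushes the queued messages, oldest first. *)
let resolve p t =
  match p.state with
  | `Resolved _ -> invalid_arg "already resolved"
  | `Pending q -> p.state <- `Resolved t; List.iter t (List.rev q)

let () =
  let log = Buffer.create 16 in
  let logger = make_promise () in
  send logger "early ";                    (* sent before getLogger returns *)
  resolve logger (Buffer.add_string log);  (* the getLogger answer arrives *)
  send logger "late";                      (* forwarded directly *)
  assert (Buffer.contents log = "early late")
```

The real implementation does this per connection and also has to handle embargoes when a promise resolves to a local object, but the ordering guarantee is the same: calls queued on the promise are delivered before calls made after resolution.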
Now, let's say we'd like the server to send heartbeats to itself:
let run_client service =
  Capability.with_ref (Echo.get_logger service) @@ fun callback ->
  Echo.heartbeat service "foo" callback
Here, we ask the server for its logger and then (without waiting for the reply), tell it to send heartbeat messages to the promised logger (you should see the messages appear in the server process's output).
Previously, when we exported our local
callback object, it arrived at the service as a proxy that sent messages back to the client over the network.
But when we send the (promise of the) server's own logger back to it, the RPC system detects this and "shortens" the path;
the capability reference that the
heartbeat handler gets is a direct reference to its own logger, which
it can call without using the network.
These optimisations are very important because they allow us to build APIs like this with small functions that can be composed easily.
Without pipelining, we would be tempted to clutter the protocol with specialised methods like
heartbeatToYourself to avoid the extra round-trips most RPC protocols would otherwise require.
Hosting multiple sturdy refs
The
Restorer.single restorer used above is useful for vats hosting a single sturdy ref.
However, you may want to host multiple sturdy refs,
perhaps to provide separate "admin" and "user" capabilities to different clients,
or to allow services to be created and persisted as sturdy refs dynamically.
To do this, we can use
Restorer.Table.
For example, we can extend our example to provide sturdy refs for both the main echo service and the logger service:
let write_cap vat service_id cap_file =
  match Capnp_rpc_unix.Cap_file.save_service vat service_id cap_file with
  | Error (`Msg m) -> failwith m
  | Ok () -> Fmt.pr "Wrote %S.@." cap_file

let serve config =
  let make_sturdy = Capnp_rpc_unix.Vat_config.sturdy_uri config in
  let services = Restorer.Table.create make_sturdy in
  let echo_id = Capnp_rpc_unix.Vat_config.derived_id config "main" in
  let logger_id = Capnp_rpc_unix.Vat_config.derived_id config "logger" in
  Restorer.Table.add services echo_id Echo.local;
  Restorer.Table.add services logger_id (Echo.Callback.local callback_fn);
  let restore = Restorer.of_table services in
  Lwt_main.run begin
    Capnp_rpc_unix.serve config ~restore >>= fun vat ->
    write_cap vat echo_id "echo.cap";
    write_cap vat logger_id "logger.cap";
    fst @@ Lwt.wait () (* Wait forever *)
  end
Exercise: add a
log.exe client and use it to test the
logger.cap printed by the above code.
Implementing the persistence API
Cap'n Proto defines a standard Persistence API which services can implement
to allow clients to request their sturdy ref.
On the client side, calling
Persistence.save_exn cap will send a request to
cap
asking for its sturdy ref. For example, after connecting to the main echo service and
getting a live capability to the logger, the client can request a sturdy ref like this:
let run_client service =
  let callback = Echo.get_logger service in
  Persistence.save_exn callback >>= fun uri ->
  Fmt.pr "The server's logger's URI is %a.@." Uri.pp_hum uri;
  Lwt.return_unit
If successful, the client can use this sturdy ref to connect directly to the logger in future.
If you try the above, it will fail with
Unimplemented: Unknown interface 17004856819305483596UL.
To add support on the server side, we must tell each logger instance what its public address is
and have it implement the persistence interface.
The simplest way to do this is to wrap the
Callback.local call with
Persistence.with_sturdy_ref:
module Callback = struct
  ...
  let local sr fn =
    let module Callback = Api.Service.Callback in
    Persistence.with_sturdy_ref sr Callback.local @@ object
      ...
    end
Then pass the
sr argument when creating the logger (you'll need to make it an argument to
Echo.local too):
let logger_id = Capnp_rpc_unix.Vat_config.derived_id config "logger" in
let logger_sr = Restorer.Table.sturdy_ref services logger_id in
let service_logger = Echo.Callback.local logger_sr @@ Fmt.pr "Service log: %S@." in
Restorer.Table.add services echo_id (Echo.local ~service_logger);
Restorer.Table.add services logger_id service_logger;
After restarting the server, the client should now display the logger's URI,
which you can then use with
log.exe log URI MSG.
Creating and persisting sturdy refs dynamically
So far, we have been providing a static set of sturdy refs.
We can also generate new sturdy refs dynamically and return them to clients.
We'll normally want to record each new export in some kind of persistent storage
so that the sturdy refs still work after restarting the server.
It is possible to use
Table.add for this.
However, that requires all capabilities to be loaded into the table at start-up,
which may be a performance problem.
Instead, we can create the table using
Table.of_loader.
When the user asks for a sturdy ref that is not in the table,
it calls our
load function to load the capability dynamically.
The function can use a database or the filesystem to look up the resource.
You can still use
Table.add to register additional services, as before.
Let's extend the ping service to support multiple callbacks with different labels.
Then we can give each user a private sturdy ref to their own logger callback.
Here's the interface for a
DB module that loads and saves loggers:
module DB : sig
  include Restorer.LOADER

  val create : make_sturdy:(Restorer.Id.t -> Uri.t) -> string -> t
  (** [create ~make_sturdy dir] is a database that persists services in [dir]. *)

  val save_new : t -> label:string -> Restorer.Id.t
  (** [save_new t ~label] adds a new logger with label [label] to the store
      and returns its newly-generated ID. *)
end
There is a
Capnp_rpc_unix.File_store module that can persist Cap'n Proto structs to disk.
First, define a suitable Cap'n Proto data structure to hold the information we need to store.
In this case, it's just the label:
struct SavedLogger {
  label @0 :Text;
}

struct SavedService {
  logger @0 :SavedLogger;
}
Using Cap'n Proto for this makes it easy to add extra fields or service types later if needed
(
SavedService.logger can be upgraded to a union if we decide to add more service types later).
We can use this with
File_store to implement
DB:
struct
  module Store = Capnp_rpc_unix.File_store

  type t = {
    store : Api.Reader.SavedService.struct_t Store.t;
    make_sturdy : Restorer.Id.t -> Uri.t;
  }

  let hash _ = `SHA256

  let make_sturdy t = t.make_sturdy

  let load t sr digest =
    match Store.load t.store ~digest with
    | None -> Lwt.return Restorer.unknown_service_id
    | Some saved_service ->
      let logger = Api.Reader.SavedService.logger_get saved_service in
      let label = Api.Reader.SavedLogger.label_get logger in
      let callback msg = Fmt.pr "%s: %S@." label msg in
      let sr = Sturdy_ref.cast sr in
      Lwt.return @@ Restorer.grant @@ Callback.local sr callback

  let save t ~digest label =
    let open Api.Builder in
    let service = SavedService.init_root () in
    let logger = SavedService.logger_init service in
    SavedLogger.label_set logger label;
    Store.save t.store ~digest @@ SavedService.to_reader service

  let save_new t ~label =
    let id = Restorer.Id.generate () in
    let digest = Restorer.Id.digest (hash t) id in
    save t ~digest label;
    id

  let create ~make_sturdy dir =
    let store = Store.create dir in
    { store; make_sturdy }
end
Note: to avoid possible timing attacks, the
load function is called with the digest of the service ID rather than with the ID itself. This means that even if the load function takes a different amount of time to respond depending on how much of a valid ID the client guessed, the client will only learn the digest (which is of no use to them), not the ID.
The file store uses the digest as the filename, which avoids needing to check the ID the client gives for special characters, and also means that someone getting a copy of the store (e.g. an old backup) doesn't get the IDs (which would allow them to access the real service).
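The digest-keyed lookup described above can be sketched with the stdlib alone. The real library hashes with SHA-256; the stdlib's `Digest` module (MD5) is used here only to stay dependency-free, and all names are invented for illustration:

```ocaml
(* A sketch of digest-keyed service lookup. *)
let digest_of_id id = Digest.to_hex (Digest.string id)

(* The table (or on-disk store) is keyed by digest, so the secret ID
   never appears in the store or in its filenames. *)
let table : (string, string) Hashtbl.t = Hashtbl.create 16

let add id service = Hashtbl.replace table (digest_of_id id) service

(* [load] receives the digest directly, never the ID itself. *)
let load digest = Hashtbl.find_opt table digest

let () =
  add "secret-swiss-number" "logger";
  assert (load (digest_of_id "secret-swiss-number") = Some "logger");
  assert (load "not-a-valid-digest" = None)
```

Because the client must present the full ID to compute a matching digest, a timing difference in the lookup can leak at most the digest, which does not help an attacker recover the ID.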
The main
serve function then uses
Echo.DB to create the table:
let serve config =
  (* Create the on-disk store *)
  let make_sturdy = Capnp_rpc_unix.Vat_config.sturdy_uri config in
  let db = Echo.DB.create ~make_sturdy "/tmp/store" in
  (* Create the restorer *)
  let services = Restorer.Table.of_loader (module Echo.DB) db in
  let restore = Restorer.of_table services in
  (* Add the fixed services *)
  let echo_id = Capnp_rpc_unix.Vat_config.derived_id config "main" in
  let logger_id = Capnp_rpc_unix.Vat_config.derived_id config "logger" in
  let logger_sr = Restorer.Table.sturdy_ref services logger_id in
  let service_logger = Echo.service_logger logger_sr in
  Restorer.Table.add services echo_id (Echo.local ~service_logger);
  Restorer.Table.add services logger_id service_logger;
  (* Run the server *)
  Lwt_main.run begin
    ...
Add a method to let clients create new loggers:
interface Echo {
  ping @0 (msg :Text) -> (reply :Text);
  heartbeat @1 (msg :Text, callback :Callback) -> ();
  getLogger @2 () -> (callback :Callback);
  createLogger @3 (label :Text) -> (callback :Callback);
}
The server implementation of the method gets the label from the parameters,
adds a saved logger to the database,
and then "restores" the saved service to a live instance and returns it:
method create_logger_impl params release_params =
  let open Echo.CreateLogger in
  let label = Params.label_get params in
  release_params ();
  let id = DB.save_new db ~label in
  Service.return_lwt @@ fun () ->
  Restorer.restore restore id >|= function
  | Error e -> Error (`Capnp (`Exception e))
  | Ok logger ->
    let response, results = Service.Response.create Results.init_pointer in
    Results.callback_set results (Some logger);
    Capability.dec_ref logger;
    Ok response
You'll need to pass
db and
restore to
Echo.local too to make this work.
The client can call
createLogger and then use
Persistence.save to get the sturdy ref for it:
let run_client service =
  let my_logger = Echo.create_logger service "Alice" in
  let uri = Persistence.save_exn my_logger in
  Echo.Callback.log_exn my_logger "Pipelined call to logger!" >>= fun () ->
  uri >>= fun uri ->  (* Wait for results from [save] *)
  Fmt.pr "The new logger's URI is %a.@." Uri.pp_hum uri;
  Lwt.return_unit
Notice the pipelining here.
The client sends three messages in quick succession: create the logger, get its sturdy ref, and log a message to it.
The client receives the sturdy ref and prints it in a total of one network round-trip.
Exercise: Implement
Echo.create_logger. You should find that the new loggers still work after the server is restarted.
Summary
Congratulations! You now know how to:
Define Cap'n Proto services and clients, independently of any networking.
Pass capability references in method arguments and results.
Stretch capabilities over a network link, with encryption, authentication and access control.
Configure a vat using command-line arguments.
Pipeline messages to avoid network round-trips.
Persist services to disk and restore them later.
Further reading
capnp_rpc_lwt.mli and s.ml describe the OCaml API.
The Cap'n Proto schema file format documentation shows how to build more complex structures, and its "Evolving Your Protocol" section explains how to change the schema without breaking backwards compatibility. Stack Overflow is a good place to ask questions (tag them as "capnp").
The capnp-ocaml site explains how to read and build more complex types using the OCaml interface.
E Reference Mechanics gives some insight into how distributed promises work.
Why does my connection stop working after 10 minutes?
Cap'n Proto connections are often idle for long periods of time, and some networks automatically close idle connections.
To avoid this, capnp-rpc-unix sets the
SO_KEEPALIVE option when connecting to another vat,
so that the initiator of the connection will send a TCP keep-alive message at regular intervals.
However, TCP keep-alives are sent after the connection has been idle for 2 hours by default,
and this isn't frequent enough for e.g. Docker's libnetwork,
which silently breaks idle TCP connections after about 10 minutes.
A typical sequence looks like this:
A client connects to a server and configures a notification callback.
The connection is idle for 10 minutes. libnetwork removes the connection from its routing table.
Later, the server tries to send the notification and discovers that the connection has failed.
After 2 hours, the client sends a keep-alive message and it too discovers that the connection has failed.
It establishes a new connection and retries.
On some platforms, capnp-rpc-unix (>= 0.9.0) is able to reduce the timeout to 1 minute by setting the
TCP_KEEPIDLE socket option.
On other platforms, you may have to configure this setting globally (e.g. with
sudo sysctl net.ipv4.tcp_keepalive_time=60).
How can I return multiple results?
Every Cap'n Proto method returns a struct, although the examples in this README only use a single field.
You can return multiple fields by defining a method as e.g.
-> (foo :Foo, bar :Bar).
For more complex types, it may be more convenient to define the structure elsewhere and then refer to it as
-> MyResults.
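For example, a hypothetical schema combining both styles might look like this (the SearchResults and Index names are invented for illustration):

```capnp
# Multiple inline result fields, and a separately-named result struct.
struct SearchResults {
  total @0 :UInt32;
  items @1 :List(Text);
}

interface Index {
  lookup @0 (query :Text) -> (foo :Text, bar :UInt32);
  search @1 (query :Text) -> SearchResults;
}
```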
Can I create multiple instances of an interface dynamically?
Yes. e.g. in the example above we can use
Callback.local fn many times to create multiple loggers.
Just remember to call
Capability.dec_ref on them when you're finished so that they can be released
promptly (but if the TCP connection is closed, all references on it will be freed anyway).
Using
Capability.with_ref makes it easier to ensure that
dec_ref gets called in all cases.
Can I get debug output?
First, always make sure logging is enabled so you can at least see warnings.
The
main.ml examples in this document enable some basic logging.
If you turn up the log level to
Debug, you'll see lots of information about what is going on.
Turning on colour in the logs will help too - see
test-bin/calc.ml for an example.
Many references will be displayed with their reference count (e.g. as
rc=3).
You can also print a capability for debugging with
Capability.pp.
CapTP.dump will dump out the state of an entire connection,
which will show you what services you’re currently importing and exporting over the connection.
If you override your service’s
pp method, you can include extra information in the output too.
Use
Capnp_rpc.Debug.OID to generate and display a unique object identifier for logging.
How can I debug reference counting problems?
If a capability gets GC'd with a non-zero ref-count, you should get a warning.
For testing, you can use
Gc.full_major to force a check.
If you try to use something after releasing it, you'll get an error.
But the simple rule is: any time you create a local capability or extract a capability from a message,
you must eventually call
Capability.dec_ref on it.
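The discipline can be illustrated with a toy reference-counted wrapper. This is a stdlib-only sketch; the real `Capability` module's behaviour is more involved, and all names here are invented:

```ocaml
(* A toy reference count: every owner must eventually call [dec_ref]. *)
type cap = { mutable rc : int; release : unit -> unit }

let make release = { rc = 1; release }

let inc_ref c =
  if c.rc = 0 then invalid_arg "capability already released";
  c.rc <- c.rc + 1

let dec_ref c =
  if c.rc = 0 then invalid_arg "capability already released";
  c.rc <- c.rc - 1;
  if c.rc = 0 then c.release ()

(* [with_ref] guarantees the dec_ref happens even if [f] raises. *)
let with_ref c f = Fun.protect ~finally:(fun () -> dec_ref c) (fun () -> f c)

let () =
  let released = ref false in
  let c = make (fun () -> released := true) in
  inc_ref c;                    (* e.g. a second owner takes a reference *)
  dec_ref c;
  assert (not !released);       (* one reference still outstanding *)
  with_ref c (fun _ -> ());     (* consumes the last reference *)
  assert !released
```

Using the `with_ref` shape is why the library's `Capability.with_ref` is safer than a manual `dec_ref`: the release runs on every exit path.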
How can I import a sturdy ref that I need to start my vat?
Let's say you have a capnp service that internally requires the use of another capnp service:
Here, creating the
Frontend service requires a sturdy ref for the
Backend service.
But this sturdy ref must be imported into the frontend vat.
Creating the frontend vat requires passing a restorer, which needs
Frontend!
The solution here is to construct
Frontend with a promise for the sturdy ref, e.g.
let run_frontend backend_uri =
  let backend_promise, resolver = Lwt.wait () in
  let frontend = Frontend.make backend_promise in
  let restore = Restorer.single id frontend in
  Capnp_rpc_unix.serve config ~restore >|= fun vat ->
  Lwt.wakeup resolver (Capnp_rpc_unix.Vat.import_exn vat backend_uri)
How can I release other resources when my service is released?
Override the
release method. It gets called when there are no more references to your service.
Is there an interactive version I can use for debugging?
The Python bindings provide a good interactive environment.
For example, start the test service above and leave it running:
$ ./_build/default/main.exe
Connecting to server at capnp://insecure@127.0.0.1:7000
[...]
Note that you must run without encryption for this, and use a non-secret ID:
let config = Capnp_rpc_unix.Vat_config.create ~serve_tls:false ~secret_key listen_address in
let service_id = Restorer.Id.public "" in
Run
python from the directory containing your
echo_api.capnp file and do:
import capnp
import echo_api_capnp

client = capnp.TwoPartyClient('127.0.0.1:7000')
echo = client.bootstrap().cast_as(echo_api_capnp.Echo)
Importing a module named
foo_capnp will load the Cap'n Proto schema file
foo.capnp.
To call the
ping method:
echo.ping("From Python").wait()
<echo_api_capnp:Echo.ping$Results reader (reply = "echo:From Python")>
To call the heartbeat method, with results going to the server's own logger:
echo.heartbeat("From Python", echo.getLogger().callback).wait()
Service logger: "From Python"
To call the heartbeat method, with results going to a Python callback:
class CallbackImpl(echo_api_capnp.Callback.Server):
    def log(self, msg, _context):
        print("Python callback got %s" % msg)

echo.heartbeat("From Python", CallbackImpl())
capnp.wait_forever()
Python callback got From Python
Python callback got From Python
Python callback got From Python
Note that calling
wait_forever prevents further use of the session, however.
Can I set up a direct 2-party connection over a pre-existing channel?
The normal way to connect to a remote service is using a sturdy ref, as described above.
This uses the network to open a new connection to the server, or reuses an existing connection
if there is one. However, it is sometimes useful to use a pre-existing connection directly.
For example, a process may want to spawn a child process and communicate with it
over a socketpair. The calc_direct.ml example shows how to do this:
$ dune exec -- ./test-bin/calc_direct.exe
parent: application: Connecting to child process...
parent: application: Sending request...
child: application: Serving requests...
child: application: 21.000000 op 2.000000 -> 42.000000
parent: application: Result: 42.000000
parent: application: Shutting down...
parent: capnp-rpc: Connection closed
parent: application: Waiting for child to exit...
parent: application: Done
How can I use this with Mirage?
Note:
capnp uses the
stdint library, which has C stubs and
might need patching to work with the Xen backend (the C stubs exist because OCaml doesn't have native unsigned integer support).
Here is a suitable
config.ml:
open Mirage

let main =
  foreign
    ~packages:[package "capnp-rpc-mirage"; package "mirage-dns"]
    "Unikernel.Make"
    (random @-> mclock @-> stackv4 @-> job)

let stack = generic_stackv4 default_network

let () = register "test" [main $ default_random $ default_monotonic_clock $ stack]
This should work as the
unikernel.ml:
open Lwt.Infix
open Capnp_rpc_lwt

module Make (R : Mirage_random.S) (C : Mirage_clock.MCLOCK) (Stack : Mirage_stack.V4) = struct
  module Mirage_capnp = Capnp_rpc_mirage.Make (R) (C) (Stack)

  let secret_key = `Ephemeral
  let listen_address = `TCP 7000
  let public_address = `TCP ("localhost", 7000)

  let start () () stack =
    let dns = Mirage.Network.Dns.create stack in
    let net = Mirage_capnp.network ~dns stack in
    let config = Mirage_capnp.Vat_config.create ~secret_key ~public_address listen_address in
    let service_id = Mirage_capnp.Vat_config.derived_id config "main" in
    let restore = Restorer.single service_id Echo.local in
    Mirage_capnp.serve net config ~restore >>= fun vat ->
    let uri = Mirage_capnp.Vat.sturdy_uri vat service_id in
    Logs.app (fun f -> f "Main service: %a" Uri.pp_hum uri);
    Lwt.wait () |> fst
end
Contributing
Conceptual model
An RPC system contains multiple communicating actors (just ordinary OCaml objects).
An actor can hold capabilities to other objects.
A capability here is just a regular OCaml object pointer.
Essentially, each object provides a
call method, which takes:
some pure-data message content (typically an array of bytes created by the Cap'n Proto serialisation), and
an array of pointers to other objects (providing the same API).
The data part of the message says which method to invoke and provides the arguments.
Whenever an argument needs to refer to another object, it gives the index of a pointer in the pointers array.
For example, a call to a method that transfers data between two stores might look something like this:
- Content:
  - InterfaceID: xxx
  - MethodID: yyy
  - Params:
    - Source: 0
    - Target: 1
- Pointers:
  - <source>
  - <target>
A call also takes a resolver, which it will call with the answer when it's ready.
The answer will also contain data and pointer parts.
On top of this basic model the Cap'n Proto schema compiler (capnp-ocaml) generates a typed API, so that application code can only generate or attempt to consume messages that match the schema.
Application code does not need to worry about interface or method IDs, for example.
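A toy version of this untyped layer can be written in a few lines of plain OCaml. Everything here is invented for illustration and is not the capnp-rpc API:

```ocaml
(* An answer has a data part and a pointers part, and a capability is
   just an object with a [call] method taking the same two parts. *)
type answer = { data : bytes; pointers : capability list }
and capability = < call : bytes -> capability list -> answer >

(* A trivial service that echoes its data part back. *)
let echo : capability = object
  method call data _pointers = { data; pointers = [] }
end

(* A proxy wraps another capability; a network proxy would serialise the
   call instead of printing it. *)
let logging (underlying : capability) : capability = object
  method call data pointers =
    Printf.printf "call: %d bytes, %d pointers\n"
      (Bytes.length data) (List.length pointers);
    underlying#call data pointers
end

let () =
  let reply = (logging echo)#call (Bytes.of_string "ping") [] in
  assert (Bytes.to_string reply.data = "ping")
```

The typed layer generated by capnp-ocaml sits on top of exactly this shape: it fills in the interface and method IDs in the data part and maps argument capabilities to indices in the pointers part.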
This might seem like a rather clumsy system, but it has the advantage that such messages can be sent not just within a process,
like regular OCaml method calls, but also over the network to remote objects.
The network is made up of communicating "vats" of objects.
You can think of a Unix process as a single vat.
The vats are peers - there is no difference between a "client" and a "server" at the protocol level.
However, some vats may not be listening for incoming network connections, and you might like to think of such vats as clients.
When a connection is established between two vats, each can choose to ask the other for access to some service.
Services are usually identified by a long random secret (a "Swiss number") so that only authorised clients can get access to them.
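Generating such a secret can be sketched as follows. This is illustrative only: the real library derives IDs from a cryptographically secure RNG, not the stdlib `Random` module, and the function name is invented:

```ocaml
(* A "Swiss number" is just a long random secret naming a service. *)
let generate_swiss_number () =
  String.init 32 (fun _ -> "0123456789abcdef".[Random.int 16])

let () =
  Random.self_init ();
  let id = generate_swiss_number () in
  Printf.printf "swiss number: %s\n" id;
  assert (String.length id = 32)
```

Knowing the number is what grants access, which is why sturdy-ref URIs containing it must be kept as secret as a password.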
The capability they get back is a proxy object that acts like a local service but forwards all calls over the network.
When a message is sent that contains pointers, the RPC system holds onto the pointers and makes each object available over that network connection.
Each vat only needs to expose at most a single bootstrap object,
since the bootstrap object can provide methods to get access to any other required services.
All shared objects are scoped to the network connection, and will be released if the connection is closed for any reason.
The RPC system is smart enough that if you export a local object to a remote service and it later exports the same object back to you, it will switch to sending directly to the local service (once any pipelined messages in flight have been delivered).
You can also export an object that you received from a third-party, and the receiver will be able to use it.
Ideally, the receiver should be able to establish a direct connection to the third-party, but
this isn't yet implemented and instead the RPC system will forward messages and responses in this case.
Building
To build:
git clone
cd capnp-rpc
opam pin add -ny .
opam depext -t capnp-rpc-unix capnp-rpc-mirage
opam install --deps-only -t .
make test
If you have trouble building, you can use the Dockerfile shown in the CI logs (click the green tick on the main page).
Testing
Running
make test will run through the tests in
test-lwt/test.ml, which run some in-process examples.
The calculator example can also be run across two Unix processes.
Start the server with:
$ dune exec -- ./test-bin/calc.exe serve \
    --capnp-listen-address unix:/tmp/calc.socket \
    --capnp-secret-key-file=key.pem
Waiting for incoming connections at:
capnp://sha-256:LPp-7l74zqvGcRgcP8b7-kdSpwwzxlA555lYC8W8prc@/tmp/calc.socket
Note that
key.pem does not need to exist. A new key will be generated and saved if the file does not yet exist.
In another terminal, run the client and connect to the address displayed by the server:
dune exec -- ./test-bin/calc.exe connect capnp://sha-256:LPp-7l74zqvGcRgcP8b7-kdSpwwzxlA555lYC8W8prc@/tmp/calc.socket/
You can also use
--capnp-disable-tls if you prefer to run without encryption
(e.g. for interoperability with another Cap'n Proto implementation that doesn't support TLS).
In that case, the client URL would be
capnp://insecure@/tmp/calc.socket.
Fuzzing
Running
make fuzz will run the AFL fuzz tester. You will need to use a version of the OCaml compiler with AFL support (e.g.
opam sw 4.04.0+afl).
The fuzzing code is in the
fuzz directory.
The tests set up some vats in a single process and then have them perform operations based on input from the fuzzer.
At each step it selects one vat and performs a random (fuzzer-chosen) operation out of:
Request a bootstrap capability from a random peer.
Handle one message on an incoming queue.
Call a random capability, passing randomly-selected capabilities as arguments.
Finish a random question.
Release a random capability.
Add a capability to a new local service.
Answer a random question, passing a randomly-selected capability as the response.
The content of each call is a (mutable) record with counters for messages sent and received on the capability reference used.
This is used to check that messages arrive in the expected order.
The tests also set up a shadow reference graph, which is like the regular object capability reference graph except that references between vats are just regular OCaml pointers (this is only possible because all the tests run in a single process, of course).
When a message arrives, the tests compare the service that the CapTP network handler selected as the target with the expected target in this simpler shadow network.
This should ensure that messages always arrive at the correct target.
In future, more properties should be tested (e.g. forked references, that messages always eventually arrive when there are no cycles, etc).
We should also test with some malicious vats (that don't follow the protocol correctly).
MusicForMellons
How can I set AppFullscreen at pageload? I tried:
import { AppFullscreen } from 'quasar'

export default {
  mounted () {
    this.toggleFullscreen()
  },
  methods: {
    toggleFullscreen () {
      AppFullscreen.toggle()
    }
  }
}
But does not work…
AppFullscreen is just a wrapper over the Fullscreen API. Some platforms do not support it well (iOS, or Android on a Samsung S4, are just two examples I stumbled upon). If you target those platforms, it's better to use a Cordova plugin for this (there are examples available, though I haven't tested them myself) rather than relying on the Web Fullscreen API.
I am a bit puzzled. I am building a webapp, not a hybrid (Cordova) app. Would the cordova plugin you mention still be of use in my case?
AppFullscreen does not use Cordova fullscreen plugin. It uses the Web Fullscreen API, so yes you can use it for a webapp.
Ok…, I was referring to the Cordova Plugin you mentioned, but I think your answer implies that is not for webapps then. Thanks.
Cordova plugins are for Cordova apps only. And Web APIs are usable in both webapps and Cordova and Electron.
@MusicForMellons curious what you ended up doing here? I'm finding that if I use mounted() or created() for AppFullscreen.request() it doesn't work. However, if I put it into a button it does work.
Ok, thanks for the feedback @MusicForMellons! It is definitely pretty spotty based on the browser implementation.
ITDefense 2008 Next Week
Sunday, 20. January 2008, 20:42:27
linux, programming and security
Sunday, 20. January 2008, 20:37:53
$ host localhost.opera.com
localhost.opera.com has address 127.0.0.1
Sunday, 26. August 2007, 20:04:04
$ gcc avtag.c open.c vxfuzz.c -o vxfuzz -Wl,-rpath,$PWD -L. -llnxfv
$ ./vxfuzz input.exe
[*] vxfuzz $Version: $
[*] ----------------------------
[*] PRNG SEED: 0x12cc65c1
[*] AVInitialise();
[*] AVScanObject();
[!] signal 11, attempting to dump progress.
[+] Signal caught, dumping file to vx.out.
$ uvscan --secure ./vx.out
Segmentation fault
Tuesday, 17. July 2007, 08:19:41
I'm not a big fan of the anti virus industry, apart from selling the ridiculous idea of blacklisting malicious content, trying to improve security by giving attackers hundreds of thousands more lines of code to explore is just naive. You should question any vendor who is trying to sell you something with the promise that adding more code will improve your security, good security is always inversely proportional to the amount of code exposed to attackers. This is why firewalls are good security; they reduce the amount of code attackers can reach. This is why IDS and anti virus are bad security; they increase the amount of code attackers can reach.
(note: I'm sure I'll get some responses from people who make their money peddling this crap, or who have bought into the marketing from these vendors explaining what an idiot I am).
McAfee, however, have really managed to annoy me. Some readers may recall CVE-2006-6474, an insecure DT_RPATH in McAfee VirusScan. We reported this issue to McAfee in December, and they still haven't issued a fix to their suckers^Wcustomers. What would be funny (if it wasn't causing me work) is that a company that makes its money analyzing executables (which presumably requires hiring at least a few engineers who understand how they work) is having great difficulty understanding this simple bug.
The problem is that when McAfee compiled their product, they specified that the DT_RPATH should include the working directory.

$ objdump -p uvscan | grep RPATH
  RPATH  /lib:/usr/lib:/usr/local/lib:.
Note the "." at the end; this would have been specified by something like:

$ gcc -Wl,-rpath,/lib:/usr/lib:/usr/local/lib:. -o uvscan ...
This tells the dynamic linker that it should search for required libraries in the working directory. Therefore, if a file matching one of the NEEDED tags can be found in the working directory, the dynamic loader will open it and execute the code from within it.
This is very easy to exploit; here is an example:

$ cat uvscan.c
#include <stdio.h>

void __attribute__((constructor)) init()
{
    fprintf(stderr, "owned.\n");
    return;
}
$ gcc -fPIC -shared -o liblnxfv.so.4 uvscan.c
$ uvscan liblnxfv.so.4
owned.
Using this attack on automated systems that use uvscan would be trivially easy, or simply convince a user to download the file, or tar archive containing the file.
eg:

$ tar -zxf cool-thing.tar.gz
$ cd cool-thing
$ uvscan *
owned
The solution is very simple: although they didn't realise it, they didn't mean to specify ".", they meant "$ORIGIN", which is a shortcut interpreted by ld.so meaning the location of the binary. To fix the problem, they simply have to make that change to their build scripts, a simple one-line fix.
When I explained this to McAfee, their response was:
McAfee disagrees with your statement that this is a "high" severity issue, as the privilege of the executed code is not raised from the privileges of the executing user.
I don't even know where to begin explaining whats wrong with this statement. Firstly, the privilege is most definitely raised, from "remote attacker" without any privileges to "executing user". How can a company that makes their money blacklisting malicious executables not understand that when a remote attacker sends you a malicious file, executing that file without intending to is most definitely a "privilege escalation" attack?
It's incomprehensible how moronic this company is.
Since this issue has been reported, at least two or three times a month I'm contacted by one of their customers asking me to explain the problem or how they can workaround it. I emailed McAfee and asked them if they want any help fixing this issue, as I'm sick of doing their technical support for them, but they never responded.
In protest of this, I'm thinking of starting a Month of McAfee Bugs (MOMcB) project, which perhaps will get their attention and force them to start taking security problems in their products seriously. Modeled on the same format of previous MO?B projects, I would report a new flaw in McAfee VirusScan everyday for a month, including an exploit (if it wouldn't be too time consuming). Finding these flaws is ridiculously easy, I've already found several using fuzzing (I assume most (if not all) will also affect their windows product).
Here you go, you can have this one for free:

    $ uvscan --secure vstestcase.003
    Segmentation fault
Feel free to try it on windows and let me know if it works.
Unfortunately their scanner is way too slow to make fuzzing feasible, presumably due to expensive licensing and self-checking. Luckily, the actual scanning takes place in a DSO they ship with their product, so all I had to do was reverse engineer their API and make a quick frontend more suitable for fuzzing.
This wasn't easy, they have a very bizarre API that involves passing stacks of tags around. I eventually understood enough to make it work.
So a minimal uvscan replacement would look something like this...

    /* initialise the scanner */
    state = t_init(0x00070008);
    state = t_push(state, 0x0064000C, sizeof(void *), nop);

    /* datfile locations */
    state = t_push(state, 0x0068000C, sizeof(scandat), scandat);
    state = t_push(state, 0x0069000C, sizeof(namesdat), namesdat);
    state = t_push(state, 0x006A000C, sizeof(cleandat), cleandat);

    AVInitialise(t_get(state), buf, buf->len, 0, 0);

    /* first dword of buf is now a state cookie */
    o = buf;

    /* construct scan instructions */
    object = t_init(0x000D0008);

    /* callback that's passed results */
    object = t_push(object, 0x0064000C, sizeof(void *), results);

    /* scan options */
    object = t_push(object, 0x01F6000C, 0, 0);
    object = t_push(object, 0x0193000C, 0, 0);
    object = t_push(object, 0x0190000C, 0, 0);
    object = t_push(object, 0x0191000C, 0, 0);
    object = t_push(object, 0x0250000C, 0, 0);
    object = t_push(object, 0x01f6000C, 0, 0);
    object = t_push(object, 0x0194000C, 0, 0);
    object = t_push(object, 0x0194000C, 0, 0);
    object = t_push(object, 0x01F6000C, 0, 0);
    object = t_push(object, 0x0005000C, 0, 0);

    /* target */
    object = t_push(object, 0x00CB000C, strlen(target) + 1, target);

    AVScanObject(*(o + 1), t_get(object), tmp, strlen(target) + 1, NULL);
Once I had this working, writing a quick fuzzer around it was trivial. I'll write another blog post about how I reverse engineered it, with some example code snippets.
So, what does the community think? I can easily find enough bugs for a MOMcB, should I start one? Once the project is over, I would release the fuzzer I've written so that anyone interested can find even more bugs. If anyone is interested in getting involved, or would like a copy of the fuzzer to start their own project, let me know.
Sunday, 8. July 2007, 11:52:06.
Monday, 28. May 2007, 22:13:21
    #include <string.h>
    #include <stdio.h>
    #include <locale.h>
    #include <ctype.h>

    int main(int argc, char **argv)
    {
        char buf[128];
        int len;

        /* initialise the buffer */
        memset(buf, '\0', sizeof(buf));

        /* setup locale for toupper() */
        setlocale(LC_ALL, "");

        /* check an argument was specified */
        if (argc >= 2) {
            /* get the first 10 characters */
            len = snprintf(buf, sizeof(buf), "you entered: %.10s", argv[1]);

            /* check for non-printing characters */
            while (len--)
                buf[len] = isprint(buf[len]) ? toupper(buf[len]) : '?';

            /* output string */
            fprintf(stdout, "%s\n", buf);
        }

        return 0;
    }
Scroll down for discussion.
snprintf() won't overrun the buf buffer, so specifying too much data won't cause a crash:
    $ ./a.out `perl -e 'print "A"x"1024"'`; echo $?
    YOU ENTERED: AAAAAAAAAA
    0
However, the return code from snprintf() isn't checked, and it can return -1 for a number of reasons, including malloc() failing, or invalid multibyte characters in the string. If any of the users of this application are using a utf8 locale, they could be vulnerable to a security issue.
    $ LC_ALL=en_GB.utf8 ./a.out `printf "\x80foobar"`; echo $?
    Segmentation fault
    139
It's worth remembering that a number of common libraries will call setlocale() for you in their constructors, so even if you didn't intend to, it could be called for you. Using the return code from snprintf() or sprintf() without testing for failure could result in multiple security vulnerabilities, so it's always worth checking even if you don't modify the locale.
Monday, 5. March 2007, 15:37:08
Friday, 29. December 2006, 22:47:45
$ find /usr/bin -type f -perm -4000 -or -perm -2000
$ mkdir ~/.suids $ ln /bin/su ~/.suids
# mount -o bind /tmp /tmp # mount -o remount,bind,nosuid /tmp /tmp
Thursday, 21. December 2006, 19:10:51
    #include <stdlib.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int a, b;

        if (argc != 3)
            return 1;

        a = atoi(argv[1]);
        b = atoi(argv[2]);

        return b ? a / b : 0;
    }
    $ ./a.out 42 0; echo $?
    0

    $ ./a.out -2147483648 -1; echo $?
    Floating point exception (core dumped)
    136
Friday, 15. December 2006, 04:43:51
http://my.opera.com/taviso/blog/
The Smoker Beggar
As they say, beggars can't be choosers; in fact, beggars take what they can get.
A beggar on the street can make one cigarette out of every 6 cigarette butts he finds. After one whole day of searching and checking public ashtrays the beggar finds a total of n cigarette butts.
You have to find the number of cigarettes he can make and smoke from the butts he found.
Constraints
1 ≤ T ≤ 100
1 ≤ n ≤ 10^13
Input
The first line contains T — the number of test cases.
Each test case contains a number n-the number of cigarette butts he found.
Output
Print the number of cigarettes the beggar can smoke.
    #include <iostream>
    using namespace std;

    int main()
    {
        // n can be up to 10^13, so the butt counts need 64-bit integers
        long long a, b, sum, n;
        int i = 1;

        cin >> n;                    // number of test cases
        while (n--) {
            sum = 0;
            cin >> a;                // cigarette butts found
            while (a >= 6) {
                b = a % 6;           // butts left over after rolling
                a = a / 6;           // cigarettes made and smoked
                sum += a;
                a += b;              // their butts rejoin the pile
            }
            cout << "Case " << i++ << ": " << sum << endl;
        }
        return 0;
    }
https://coderinme.com/the-smoker-beggar-hackerearth-coderinme/
[clojuresque][cg] is now a plugin for [Gradle][], which adds [Clojure][clj] support. … The `ns` form is the first in the file. Comments may precede the form. The symbol is allowed to be fully qualified: `clojure.core/ns`. … [Clojars][cr]. …

    group = 'example.group'
    version = '1.0.0'
    …
    configureClojarsDeploy(uploadArchives)

A small walkthrough: …

This can be even more simplified by registering clojuresque with your local Gradle installation. Put the clojuresque jar in `${GRADLE_HOME}/lib` and add the following line to `${GRADLE_HOME}/plugin.properties`:

    clojure=clojuresque.ClojurePlugin

From now on you can skip the whole `buildscript` stuff and just use `usePlugin('clojure')` to load the plugin.

## Filter

In the filesets you can specify filters with `include` resp. `exclude`. This is fine for mostly file based languages. However clojure is strongly based on namespaces. Therefore the clojure part of the source sets also supports `includeNamespace` and `excludeNamespace`, which can be used to filter on the namespace name. Eg. to exclude examples from the final jar one could use:

    sourceSets.main.clojure {
        ex…

… an `ueberjar` task with

    ueberjar.enabled = true

Then invoking `gradle ueberjar` will create a jar file with all runtime dependencies included.

## Issues

This is **alpha** software! Expect problems! Please report issues in the bugtracker at [the lighthouse tracker][lh]. Or email them to me.

--
Meikel Brandmeyer <mb@kotka.de>
Frankfurt am Main, January 2010

[Gradle]:
[Groovy]:
[clj]:
[cg]:
[lh]:
[cr]:
[hudson]:
[antbug]:
[aot]:
https://bitbucket.org/myfreeweb/clojuresque/src/5cb061a790e9/?at=stable
In this article, I will show how to set a simple multipage website with esbuild. The code used here is similar to the one I used in this webpack 5 example, so you can use those two articles to compare both bundlers - in both speed & ease of use.
Use cases
There are multiple reasons you would like to do it. You can have multiple single-page apps in one repository and share the build setup to speed things up. Another reason would be to make sure the build creates chunks that are reusable between apps. Or, you have a legacy HTML+JS application with many HTML files & you want to keep separate JS for each of them.
Code
First, we have 2 simple HTML files on the top level of the project:
a.html:
<html>
  <head>
    <meta http-
    <title>A part of our website</title>
    <link rel="shortcut icon" href="#" />
    <div id="view"></div>
    <script type="module" src="./dist/a.js"></script>
  </head>
  <body>
    <h1>Page A</h1>
    <p id="core"></p>
    <p id="a">(a placeholder)</p>
    <p id="b">(b placeholder)</p>
    <p><a href="b.html">Go elsewhere</a></p>
  </body>
</html>
b.html:
<html>
  <head>
    <meta http-
    <title>B part of our website</title>
    <link rel="shortcut icon" href="#" />
    <div id="view"></div>
    <script type="module" src="./dist/b.js"></script>
  </head>
  <body>
    <h1>Page B</h1>
    <p id="core"></p>
    <p id="a">(a placeholder)</p>
    <p id="b">(b placeholder)</p>
    <p><a href="a.html">Go elsewhere</a></p>
  </body>
</html>
Each of the files imports its JS file, and as we are using ESM output, we need to import it with `<script type="module" ...>`. All paragraphs `<p id="...">` are meant to be updated by our JS & to let us see if all works as expected.
JavaScript
We have 3 simple files for testing our setup. Each file is using jQuery, so we can see if the chunks are created as expected - with the big library included in the shared file, and individual files small.
Besides that, the files are rather trivial:
./src/core.js:
import $ from "jquery";

$("#core").html("Core module");
./src/a.js:
import $ from "jquery";
import "./core";

$("#a").html("A file, file A");
./src/b.js:
import $ from "jquery";
import "./core";

$("#b").html("B yourself");
Dependencies
Before installing the dependencies, let's first create an npm package:
$ npm init -y Wrote to /home/marcin/workspace/github/esbuild-multipage/package.json: { "name": "esbuild-multipage", "version": "1.0.0", "description": "Example repo for an article about multipage with esbuild", "main": "index.js", ...
Then you can install your dependencies with:
$ npm install --save jquery esbuild added 2 packages, and audited 3 packages in 826ms found 0 vulnerabilities
Build script
The build script is similar to what I used in the lazy loading example. If your configuration becomes more complicated, at some point it will make sense to move it to a build script - as I show here.
To have your build available with `npm run build`, add to `package.json`:

{
  ...
  "scripts": {
    ...
    "build": "esbuild src/a.js src/b.js --bundle --outdir=dist --splitting --format=esm"
  }
  ...
}
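If you later move to a build script as mentioned above, the same configuration can be expressed through the esbuild JS API, which mirrors the CLI flags. This is a sketch only: the file name `build.mjs` is my choice, and the `esbuild` package must be installed for it to run.

```javascript
// build.mjs - JS-API equivalent of the CLI command above
import * as esbuild from "esbuild";

await esbuild.build({
  entryPoints: ["src/a.js", "src/b.js"],
  bundle: true,
  outdir: "dist",
  splitting: true,
  format: "esm",
});
```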
The values in the command:

- `src/a.js` & `src/b.js` - the entry points of the application
- `--bundle` - bundle mode of esbuild
- `--outdir=dist` - the folder where all the built code goes
- `--splitting` - we turn on the experimental splitting behavior
- `--format=esm` - another requirement for splitting to work - as of now, it's only working with es-modules output
Splitting
Splitting - making shared chunks for our entry points to share - is still work-in-progress. You can read more about it in the documentation. If you get into trouble because of it, just remove `--splitting` from the build command.
Running build
So, how is our code doing in the end:
$ npm run build

> esbuild-multipage@1.0.0 build
> esbuild src/a.js src/b.js --bundle --outdir=dist --splitting --format=esm

  dist/chunk-TTCANJWN.js  226.2kb
  dist/a.js                  190b
  dist/b.js                  186b

⚡ Done in 23ms
Pretty sweet, I would say!
Summary
In this article, we have seen how to build a multipage website with esbuild. Stay tuned for other esbuild & javascript articles.
https://dev.to/marcinwosinek/how-to-build-a-multipage-website-with-esbuild-5bja
Generic submatrix view adapter used internally in the OpenCLMatrixDomain. More...
#include <opencl-domain.h>
Generic submatrix view adapter used internally in the OpenCLMatrixDomain.
NULL constructor.
Constructor from an existing .
Constructor from an existing Matrix and dimensions.
Constructor from an existing SubmatrixAdapter.
Constructor from an existing submatrix and dimensions.
Get the number of rows in the matrix.
Get the number of columns in the matrix.
Get the stride of the matrix.
Set the entry at (i, j).
Get a writeable reference to an entry in the matrix.
Get a read-only individual entry from the matrix.
Get an entry and store it in the given value.
This form is more in the LinBox style and is provided for interface compatibility with other parts of the library.
Access the parent matrix.
https://linalg.org/linbox-html/class_lin_box_1_1_submatrix_adapter.html
C program to create a file called emp.txt and store information about a person, in terms of his name, age and salary.
/***********************************************************
* You can use all the programs on
* for personal and learning purposes. For permissions to use the
* programs for commercial purposes,
* contact info@c-program-example.com
* To find more C programs, do visit
* and browse!
*
* Happy Coding
***********************************************************/
#include <stdio.h>

int main(void)
{
    FILE *fptr;
    char name[20];
    int age;
    float salary;

    fptr = fopen("emp.txt", "w");   /* open for writing */
    if (fptr == NULL)
    {
        printf("File could not be opened\n");
        return 1;
    }

    printf("Enter the name\n");
    scanf("%19s", name);            /* width limit keeps name[] safe */
    fprintf(fptr, "Name = %s\n", name);

    printf("Enter the age\n");
    scanf("%d", &age);
    fprintf(fptr, "Age = %d\n", age);

    printf("Enter the salary\n");
    scanf("%f", &salary);
    fprintf(fptr, "Salary = %.2f\n", salary);

    fclose(fptr);
    return 0;
}

/* Please note that you have to open the file called emp.txt in the directory */
http://c-program-example.com/2011/10/c-program-for-file-operations.html
How to Read this Lecture
We use dynamic programming in many applied lectures, such as …
The objective of this lecture is to provide a more systematic and theoretical treatment, including algorithms and implementation while focusing on the discrete case.
Code
The code discussed below was authored primarily by Daisuke Oyama.
Discrete DPs
Loosely speaking, a discrete DP is a maximization problem with an objective function of the form

$$
\mathbb{E} \sum_{t = 0}^{\infty} \beta^t r(s_t, a_t) \tag{1}
$$

The set of feasible state-action pairs is

$$
\mathit{SA} := \{(s, a) \mid s \in S, \; a \in A(s)\}
$$
$$
(Q_\sigma^t r_\sigma)(s) = \mathbb E [ r(s_t, \sigma(s_t)) \mid s_0 = s ] \quad \text{when } \{s_t\} \sim Q_\sigma \tag{2}
$$

The policy value function for the policy $ \sigma $ is defined by

$$
v_{\sigma}(s) = \sum_{t=0}^{\infty} \beta^t (Q_{\sigma}^t r_{\sigma})(s) \qquad (s \in S)
$$
The optimal value function, or simply value function, is the function $ v^*\colon S \to \mathbb{R} $ defined by

$$
v^*(s) = \max_{\sigma \in \Sigma} v_{\sigma}(s) \qquad (s \in S)
$$

A policy $ \sigma $ is called $ w $-greedy if

$$
\sigma(s) \in \operatorname*{arg\,max}_{a \in A(s)} \left\{ r(s, a) + \beta \sum_{s' \in S} w(s') Q(s, a, s') \right\} \qquad (s \in S)
$$

The Bellman operator for a policy $ \sigma $ is

$$
T_{\sigma} v = r_{\sigma} + \beta Q_{\sigma} v
$$

In other words, $ v^* $ is the unique fixed point of $ T $, and

- $ \sigma^* $ is an optimal policy function if and only if it is $ v^* $-greedy

By the definition of greedy policies given above, this means that

$$
\sigma^*(s) \in \operatorname*{arg\,max}_{a \in A(s)} \left\{ r(s, a) + \beta \sum_{s' \in S} v^*(s') Q(s, a, s') \right\} \qquad (s \in S)
$$
Solving Discrete DPs
Now that the theory has been set out, let’s turn to solution methods.
The code for solving discrete DPs is available in ddp.py from the QuantEcon.py code library.

$$
s' = a + U \quad \text{where} \quad U \sim U[0, \ldots, B]
$$

$$
Q(s, a, s') :=
\begin{cases}
\frac{1}{B + 1} & \text{if } a \leq s' \leq a + B \\
0 & \text{otherwise}
\end{cases} \tag{3}
$$
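Before turning to the library routines, the transition law (3) and value iteration can be sketched directly in NumPy. Everything below (the sizes B and M, the feasibility rule, and the log reward) is an illustrative assumption of this sketch, not the lecture's benchmark model:

```python
import numpy as np

B, M, beta = 2, 4, 0.9          # small illustrative sizes
S = np.arange(B + M + 1)        # states 0, ..., B + M
A = np.arange(M + 1)            # actions 0, ..., M
n, m = len(S), len(A)

R = np.full((n, m), -np.inf)    # -inf marks infeasible pairs
Q = np.zeros((n, m, n))
for s in S:
    for a in A:
        if a <= s:                            # assumed feasibility rule
            R[s, a] = np.log(s - a + 1)       # hypothetical reward
            Q[s, a, a:a + B + 1] = 1 / (B + 1)   # equation (3)

# value iteration: v <- Tv, where (Tv)(s) = max_a r(s,a) + beta E[v(s')]
v = np.zeros(n)
for _ in range(1000):
    v = np.max(R + beta * (Q @ v), axis=1)
```

Since action a = 0 is always feasible, every row has at least one finite entry and the -inf placeholders never win the maximization.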
The following code sets up these objects for us
import numpy as np
import quantecon as qe

… use of a sparse matrix for Q.
(An example of using sparse matrices is given in the exercises below)
The call signature of the second formulation is DiscreteDP(R, Q, β, …). In the dynamic programming lecture, we solve a benchmark model that has an analytical solution to check we can replicate it numerically.
The exercise is to replicate this solution using DiscreteDP.
import scipy.sparse as sparse import matplotlib.pyplot as plt %matplotlib inline from quantecon import compute_fixed_point from quantecon.markov import DiscreteDP
Setup
Details of the model can be found in the lecture on optimal growth.
As in the lecture, …
Action (indexed by) a is feasible at state (indexed by) s if and only if grid[a] < f(grid[s]) (zero consumption is not allowed because of the log utility).
Thus the Bellman equation is:

$$
v(k) = \max_{0 < k' < f(k)} u(f(k) - k') + \beta v(k'),
$$

where …

    Iteration    Distance     Elapsed (seconds)
    ---------------------------------------------
    …            …            1.521e-03
    2            4.070e+00    2.254e-03

    Iteration    Distance     Elapsed (seconds)
    ---------------------------------------------
    1            5.518e+00    9.720e-04
    2            4.070e+00    1.778e-03
    3            3.866e+00    2.489e-03
    4            3.673e+00    3.195e-03

    Iteration    Distance     Elapsed (seconds)
    ---------------------------------------------
    1            5.518e+00    8.931e-04
    2            4.070e+00    1.684e-03
    3            3.866e+00    2.437e-03
    4            3.673e+00    3.184e-03
    5            3.489e+00    3.932e-03
    6            3.315e+00    4.675e-03
/home/qebuild/anaconda3/lib/python3.7/site-packages/quantecon/compute_fp.py:151: RuntimeWarning: max_iter attained before convergence in compute_fixed_point warnings.warn(_non_convergence_msg, RuntimeWarning)
Dynamics of the Capital Stock
Finally, let us work on Exercise 2, where we plot the trajectories of the capital stock for three different discount factors, $ 0.9 $, $ 0.94 $, and $ 0.98 $, with initial condition $ k_0 = 0.1 $.
discount_factors = (0.9, 0.94, 0.98)
k_init = 0.1

… the value function and an $ \varepsilon $-optimal policy function (unless iter_max is reached).
See also the documentation for DiscreteDP.
https://lectures.quantecon.org/py/discrete_dp.html
Some time ago I did some work on a web application built on Sitecore which used the new application cache introduced in HTML5. Application cache essentially makes your web site/application or parts of it available offline and thus can also speed up your application in certain scenarios as content is cached locally.
Using application cache with Sitecore and content that is often changed, however, introduced a few hurdles to overcome. One of them was to make sure that the client always got the newest content from Sitecore and that is what this (relatively) short blog post is going to be about.
We need to do something whenever a file is saved in Sitecore.
HTML5 Application Cache
Before we get to that part, let me just roughly explain how application cache works and you will see what the problem is.
You start out by referencing a manifest file in your html like this:
<html manifest="/manifest.appcache"> ... </html>
The file can be named anything you like but
.appcache is usually used a the file extension. Just make sure that your web server knows how to handle the extension. You might need to add it’s mimetype to your
Web.config like below for it to work correctly.
<system.webServer>
  <staticContent>
    <mimeMap fileExtension=".appcache" mimeType="text/cache-manifest"/>
  </staticContent>
</system.webServer>
By specifying the manifest attribute the current page is automatically cached by your browser and available without an internet connection. The browser will now always load the cached version even if it is actually online – it won’t even ask the server if it has been modified or anything. The browser will periodically check the manifest file for changes (the files content has to have changed for it to update the cached pages) and if it has been changed then refetch the page from the server.
The manifest file can contain references to other files that will then follow the same behaviour. By specifying all the files your web application is dependent on in the manifest file, your application will basically be downloaded when the user hits the front page that references the manifest.
But now it will never ask the server for updated versions of the cached pages, unless the content of the manifest file changes. Additionally, if the cache headers for the cached pages allows the pages to be cached, changing the manifest won’t actually update the files unless they have expired.
So be sure to disable caching for any files that is part of your manifest, if you want to make sure they are updated whenever the manifest is changed.
Read my earlier post about how to easily disable caching for specific files or folders here.
If you want to know more about application cache and format of the manifest file then take a look at this tutorial.
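For reference, a minimal manifest for a setup like this could look as follows. The file paths are illustrative; the version comment on the second line is the part that the save handler shown later in this post increments.

```
CACHE MANIFEST
# Version 1

CACHE:
/index.html
/scripts/app.js
/styles/app.css

NETWORK:
*
```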
Updating the manifest whenever an item is saved in Sitecore
Because pages are not refetched from the server – even if the browser is actually online – when content is updated in Sitecore, the clients will still be seeing the old content and herein lies the problem we needed to solve.
We need to make a change to the manifest file whenever an item is saved to force the clients to refetch the cached pages from our server with the updated content.
Luckily this is pretty easy in Sitecore. All we need to do is add a processor to the saveUI pipeline.
The Code
Let’s write the code to actually update the manifest file.
public class UpdateManifest
{
    public void Process(SaveArgs args)
    {
        Assert.ArgumentNotNull(args, "args");

        SaveArgs.SaveItem[] savedItems = args.Items;

        // Abort if there are no items being saved
        if (!savedItems.Any())
            return;

        // Abort if the item doesn't exist anymore
        var item = Client.ContentDatabase
            .GetItem(savedItems[0].ID, savedItems[0].Language, savedItems[0].Version);
        if (item == null)
            return;

        // Check if the saved item is relevant to us.
        // In my case I only needed to update the manifest
        // if the item was in a specific folder.
        if (item.Paths.FullPath.Contains("/sitecore/content/service-app/"))
        {
            UpdateManifestFile();
        }
    }

    public void UpdateManifestFile()
    {
        try
        {
            var path = Sitecore.IO.FileUtil.MapPath("~/manifest.appcache");
            var lines = File.ReadAllLines(path);

            // Find current version in comment on line 2
            // and increment it if it exists
            var match = Regex.Match(lines[1], @"# Version (\d+)");
            if (match.Success)
            {
                int version;
                int.TryParse(match.Groups[1].Value, out version);
                lines[1] = "# Version " + (++version);
                File.WriteAllLines(path, lines);
            }
        }
        catch (Exception ex)
        {
            Log.Warn("Could not update version of 'manifest.appcache'", ex, this);
        }
    }
}
That is all there is to it. We just make sure that an item is actually being saved, that it exists and that the saved item is in a specific part of the Sitecore content tree, as we only need to this for that specific tree. Updating the file is just incrementing a version number in a comment line at the second line in the file.
Now we just need to have this code run as part of the saveUI pipeline. To do this we just create a new config file UpdateManifestOnSave.config with the following content.

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <processors>
      <saveUI>
        <processor mode="on" type="[NAMESPACE].UpdateManifest, [ASSEMBLY]" />
      </saveUI>
    </processors>
  </sitecore>
</configuration>
The file should be placed under App_Config/Include and it is then automatically patched into the configuration by Sitecore.

Now our manifest file will be modified whenever an item is saved in Sitecore (and is located under /sitecore/content/service-app/) and thereby forces the clients to refetch the cached pages from the server - now with updated content. As I mentioned earlier, make sure that your web server has disabled caching for the files included in your manifest to make sure that the browser actually gets the updated files.
In this case my site is using the master database and not the usual web database, so my items are never published and I'm therefore hooking into the saveUI pipeline. If your site is using the web database you should instead hook into the publishItem pipeline. Your class containing your code then needs to derive from Sitecore.Publishing.Pipelines.PublishItem.PublishItemProcessor.
Conclusion
With Sitecore’s pipelines it’s quite easy to hook into different parts of Sitecore and customize your solution.
In this blog post you’ve seen how this can be used to hook into the
saveUI pipeline and in this case update the manifest file. In another project we are renaming the saved item from a selected start and end date on the item, so the editors get a better overview in the Sitecore tree without manually having to rename items depending on their current content.
The stuff in this blog post can easily be applied to other pipelines as well in more or less the same way. If you are interested to know which other pipelines Sitecore has, you can have a look in your
Web.config under
configuration/sitecore/pipelines and
configuration/sitecore/processors.
https://blog.krusen.dk/hooking-sitecores-save-pipeline/
SAN FRANCISCO--(BUSINESS WIRE)--BRS Media's dotFM released its ".FM Top 100 Hits of 2017", the year's top .FM sites and brands.
dotFM’s year-end top 100 ranking, “.FM Top 100 Hits of 2017” lists the top sites and brands from the past year under dotFM. The top 100 hits represent some of the most recognizable and innovative brands in streaming media and social entertainment today. The .FM Top 100 Hits of 2017 chart is available at:
“It’s amazing and thrilling to see the wide diversity within the .FM namespace, the originality of dotFM clients is both inspiring and refreshing,” remarked George T. Bundy, Chairman & CEO of BRS Media Inc. "For well over 20 years, the .FM TLD’s innovative, cutting edge Brand Registry Services have evolved to meet the growing demand and creativity of our clientele. Today, our comprehensive portfolio of registrants not only include broadcasters, Internet radio, podcasters and the music community, but also interactive companies, premier social media ventures and streaming entrepreneurs worldwide.”
Highlights from the 2017 ranking: The majority of the new adds this year are associated with Podcasting like Anchor.fm, Zencast.fm, Fireside.fm & Al Jazeera’s Jetty.fm. The US and Russia markets continue to be strong, while markets like Brazil with jb.fm and Poland with Planeta.fm showed great growth.
BRS Media has pioneered the 'multimedia' domain space since launching the .FM & .AM Top Level Domains in 1998. Since that time, the .FM Brand Registry Service has evolved to meet the growing demand and creativity of the clientele. .FM domains are available through most ICANN Accredited Registrars or any worldwide .FM Registrar Partners like: Go Daddy, Hover, Name.com, Dynadot, Network Solutions, Gandi.net, United Domains and more. Information about .FM …
https://www.businesswire.com/news/home/20180129005222/en/BRS-Media%E2%80%99s-dotFM-Releases-Ranking-Top-.FM
When we have two classes where one extends another and these two classes have a method with the same name, parameters and return type (say, sample), the method in the subclass overrides the method in the superclass.

That is, since this is inheritance, if we instantiate the subclass, a copy of the superclass's members is created in the subclass object, and thus both methods are available to the object of the subclass.

But if you call the method (sample), the sample method of the subclass will be executed, overriding the superclass's method.
class Super {
   public void sample() {
      System.out.println("Method of the superclass");
   }
}

public class OverridingExample extends Super {
   @Override
   public void sample() {
      System.out.println("Method of the subclass");
   }

   public static void main(String args[]) {
      Super obj1 = new OverridingExample();
      OverridingExample obj2 = new OverridingExample();
      obj1.sample();   // dynamic dispatch picks the subclass method
      obj2.sample();
   }
}

Method of the subclass
Method of the subclass
While overriding −
Both methods should be in two different classes and, these classes must be in an inheritance relation.
Both methods must have the same name, same parameters and, same return type else they both will be treated as different methods.
The method in the child class must not have higher access restrictions than the one in the superclass. If you try to do so it raises a compile-time exception.
If the super-class method throws certain exceptions, the method in the sub-class should throw the same exception or its subtype (can leave without throwing any exception).
Therefore, you cannot override two methods that exist in the same class, you can just overload them.
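Overloading within one class can be sketched as follows (the class and method names here are mine, chosen for illustration): the methods share a name but differ in parameter lists, so the compiler picks one at compile time from the argument types.

```java
class OverloadingExample {
    // Same name, different parameter lists: these are overloads,
    // resolved at compile time, not overrides.
    static int sample(int a) {
        return a * 2;
    }

    static String sample(String a) {
        return a + a;
    }

    public static void main(String[] args) {
        System.out.println(sample(21));     // calls the int version
        System.out.println(sample("ab"));   // calls the String version
    }
}
```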
https://www.tutorialspoint.com/is-it-possible-to-override-a-java-method-of-one-class-in-same
Counting Sort succeeds by constructing a much smaller set of k values in which to count the n elements in the set. Given a set of n elements, Bucket Sort constructs a set of n buckets into which the elements of the input set are partitioned; Bucket Sort thus reduces its processing costs at the expense of this extra space. If a hash function, hash(Ai), is provided that uniformly partitions the input set of n elements into these n buckets, then Bucket Sort as described in Figure 4-18 can sort, in the worst case, in O(n) time. You can use Bucket Sort if the following two properties hold:
The input data must be uniformly distributed for a given range. Based on this distribution, n buckets are created to evenly partition the input range.
The buckets must be ordered. That is, if i<j, then elements inserted into bucket bi are lexicographically smaller than elements in bucket bj.
Bucket Sort is not appropriate for sorting arbitrary strings, for example; however, it could be used to sort a set of uniformly distributed floating-point numbers in the range [0,1).
Once all elements to be sorted are inserted into the buckets, Bucket Sort extracts the values from left to right using Insertion Sort on the contents of each bucket. This orders the elements in each respective bucket as the values from the buckets are extracted from left to right to repopulate the original array.
Context
Bucket Sort is the fastest sort when the elements to be sorted can be uniformly partitioned using a fast hashing function.
Forces
If storage space is not important and the elements admit to an immediate total ordering, Bucket Sort can take advantage of this extra knowledge for impressive cost savings.
Solution
In the C implementation for Bucket Sort, shown in Example 4-11, each bucket stores a linked list of elements that were hashed to that bucket. The functions numBuckets and hash are provided externally, based upon the input set.
Example 4-11. Bucket Sort implementation in C
extern int hash(void *elt);
extern int numBuckets(int numElements);

/* linked list of elements in bucket. */
typedef struct entry {
    void *element;
    struct entry *next;
} ENTRY;

/* maintain count of entries in each bucket and pointer to its first entry */
typedef struct {
    int size;
    ENTRY *head;
} BUCKET;

/* Allocation of buckets and the number of buckets allocated */
static BUCKET *buckets = 0;
static int num = 0;

/** One by one remove and overwrite ar */
void extract (BUCKET *buckets, int(*cmp)(const void *,const void *),
              void **ar, int n) {
    int i, low;
    int idx = 0;

    for (i = 0; i < num; i++) {
        ENTRY *ptr, *tmp;
        if (buckets[i].size == 0) continue;   /* empty bucket */

        ptr = buckets[i].head;
        if (buckets[i].size == 1) {
            ar[idx++] = ptr->element;
            free (ptr);
            buckets[i].size = 0;
            continue;
        }

        /* insertion sort where elements are drawn from linked list and
         * inserted into array. Linked lists are released. */
        low = idx;
        ar[idx++] = ptr->element;
        tmp = ptr;
        ptr = ptr->next;
        free (tmp);

        while (ptr != NULL) {
            int i = idx-1;
            while (i >= low && cmp (ar[i], ptr->element) > 0) {
                ar[i+1] = ar[i];
                i--;
            }
            ar[i+1] = ptr->element;
            tmp = ptr;
            ptr = ptr->next;
            free(tmp);
            idx++;
        }
        buckets[i].size = 0;
    }
}

void sortPointers (void **ar, int n,
                   int(*cmp)(const void *,const void *)) {
    int i;
    num = numBuckets(n);
    buckets = (BUCKET *) calloc (num, sizeof (BUCKET));

    for (i = 0; i < n; i++) {
        int k = hash(ar[i]);

        /** Insert each element and increment counts */
        ENTRY *e = (ENTRY *) calloc (1, sizeof (ENTRY));
        e->element = ar[i];
        if (buckets[k].head == NULL) {
            buckets[k].head = e;
        } else {
            e->next = buckets[k].head;
            buckets[k].head = e;
        }
        buckets[k].size++;
    }

    /* now read out and overwrite ar. */
    extract (buckets, cmp, ar, n);
    free (buckets);
}
For numbers drawn uniformly from [0,1), Example 4-12 contains sample implementations of the hash and numBuckets functions to use.
Example 4-12. hash and numBuckets functions for [0,1) range
static int num;

/** Number of buckets to use is the same as the number of elements. */
int numBuckets(int numElements) {
    num = numElements;
    return numElements;
}

/**
 * Hash function to identify bucket number from element. Customized
 * to properly encode elements in order within the buckets. Range of
 * numbers is from [0,1), so we subdivide into buckets of size 1/num;
 */
int hash(double *d) {
    int bucket = num*(*d);
    return bucket;
}
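To see the partition-then-extract structure end to end, here is a compact, self-contained sketch (not the book's code — the function name is hypothetical, and buckets are growable arrays rather than the linked lists of Example 4-11):

```c
#include <stdlib.h>

/* Compact Bucket Sort for doubles in [0,1). Each value is hashed to
 * bucket floor(n * value); buckets are then drained left to right,
 * insertion-sorting values back into the original array. */
void bucket_sort_doubles(double *ar, int n) {
    double **bucket = calloc(n, sizeof *bucket);
    int *size = calloc(n, sizeof *size);
    int *cap  = calloc(n, sizeof *cap);
    int i, b, idx = 0;

    /* Partition phase: O(n). */
    for (i = 0; i < n; i++) {
        b = (int)(n * ar[i]);
        if (size[b] == cap[b]) {                   /* grow bucket */
            cap[b] = cap[b] ? 2 * cap[b] : 4;
            bucket[b] = realloc(bucket[b], cap[b] * sizeof **bucket);
        }
        bucket[b][size[b]++] = ar[i];
    }

    /* Extract phase: insertion sort while writing values back. */
    for (b = 0; b < n; b++) {
        for (i = 0; i < size[b]; i++) {
            double v = bucket[b][i];
            int j = idx - 1;
            while (j >= 0 && ar[j] > v) { ar[j + 1] = ar[j]; j--; }
            ar[j + 1] = v;
            idx++;
        }
        free(bucket[b]);
    }
    free(bucket); free(size); free(cap);
}
```

Because the hash preserves order between buckets, the backward scan in the insertion step rarely crosses a bucket boundary, which is what keeps the expected cost linear.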
The buckets could also be stored using fixed arrays that are reallocated when the buckets become full, but the linked list implementation is about 30-40% faster.
Analysis
In the sortPointers function of Example 4-11, each element in the input is inserted into its associated bucket based upon the provided hash function; this takes linear, or O(n), time. The elements in the buckets are not sorted, but because of the careful design of the hash function, we know that all elements in bucket bi are smaller than the elements in bucket bj, if i<j.
As the values are extracted from the buckets and written back into the input array, Insertion Sort is used when a bucket contains more than a single element. For Bucket Sort to exhibit O(n) behavior, we must guarantee that the total time to sort each of these buckets is also O(n). Let's define ni to be the number of elements partitioned in bucket bi. We can treat ni as a random variable (using statistical theory). Now consider the expected value E[ni] of ni. Each element in the input set has probability p=1/n of being inserted into a given bucket because each of these elements is uniformly drawn from the range [0,1). Therefore, E[ni]=n*p=n*(1/n)=1, while the variance Var[ni]=n*p*(1-p)=(1-1/n). It is important to consider the variance since some buckets will be empty, and others may have more than one element; we need to be sure that no bucket has too many elements. Once again, we resort to statistical theory, which provides the following equation for random variables:

E[X2] = Var[X] + (E[X])2
From this equation we can compute the expected value of ni2. This is critical because it is the factor that determines the cost of Insertion Sort, which runs in a worst case of O(n2). We compute E[ni2]=(1-1/n)+1=(2-1/n), which shows that E[ni2] is a constant. This means that when we sum up the costs of executing Insertion Sort on all n buckets, the expected performance cost remains O(n).
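That expectation is easy to check empirically. The sketch below (not from the book; the function name is hypothetical) throws n keys uniformly into n buckets and averages ni2 over the buckets across many trials; the result hovers near 2 - 1/n:

```c
#include <stdlib.h>
#include <string.h>

/* Monte Carlo check of E[n_i^2]: distribute n keys uniformly into n
 * buckets, average n_i^2 across buckets, repeat for `trials` rounds.
 * The analysis predicts a result near 2 - 1/n. */
double mean_sq_bucket_count(int n, int trials, unsigned seed) {
    int *count = calloc(n, sizeof *count);
    double total = 0.0;
    srand(seed);
    for (int t = 0; t < trials; t++) {
        long sq = 0;
        memset(count, 0, n * sizeof *count);
        for (int i = 0; i < n; i++)
            count[rand() % n]++;               /* uniform bucket choice */
        for (int b = 0; b < n; b++)
            sq += (long)count[b] * count[b];
        total += (double)sq / n;               /* mean n_i^2 this trial */
    }
    free(count);
    return total / trials;
}
```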
Variations
In Hash Sort, each bucket reflects a unique hash code value returned by the hash function used on each element. Instead of creating n buckets, Hash Sort creates a suitably large number of buckets k into which the elements are partitioned; as k grows in size, the performance of Hash Sort improves. The key to Hash Sort is a hashing function hash(e) that returns an integer for each element e such that hash(ai)≤hash(aj) if ai≤aj.
The hash function hash(e) defined in Example 4-13 operates over elements containing just lowercase letters. It converts the first three characters of the string into a value (in base 26), and so for the string "abcdefgh," its first three characters ("abc") are extracted and converted into the value 0*676+1*26+2=28. This string is thus inserted into the bucket labeled 28.
Example 4-13. hash and numBuckets functions for Hash Sort
/** Number of buckets to use. */
int numBuckets(int numElements) {
    return 26*26*26;
}

/**
 * Hash function to identify bucket number from element. Customized
 * to properly encode elements in order within the buckets.
 */
int hash(void *elt) {
    return (((char*)elt)[0] - 'a')*676
         + (((char*)elt)[1] - 'a')*26
         + (((char*)elt)[2] - 'a');
}
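To sanity-check the arithmetic in the worked example ("abc" maps to 28), the same formula can be restated as a standalone helper (hypothetical name, identical math):

```c
/* Radix-26 hash over the first three lowercase letters, restated
 * standalone so the example values can be checked directly. */
int hash3(const char *s) {
    return (s[0] - 'a') * 676 + (s[1] - 'a') * 26 + (s[2] - 'a');
}
```

Only the first three characters matter, so "abc" and "abcdefgh" land in the same bucket, and the full range of bucket labels runs from 0 ("aaa") to 17575 ("zzz").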
The performance of Hash Sort for various bucket sizes and input sets is shown in Table 4-5. We show comparable sorting times for Quicksort using the median-of-three approach for selecting the pivotIndex.
Table 4-5. Sample performance for Hash Sort with different numbers of buckets, compared with Quicksort (in seconds)
Note that with 17,576 buckets, Hash Sort outperforms Quicksort for n>8,192 items (and this trend continues with increasing n). However, with only 676 buckets, once n>32,768 (for an average of 48 elements per bucket), Hash Sort begins its inevitable slowdown with the accumulated cost of executing Insertion Sort on increasingly larger sets. Indeed, with only 26 buckets, once n>256, the running time of Hash Sort roughly quadruples each time the problem size doubles, showing how too few buckets leads to O(n2) performance.
https://www.safaribooksonline.com/library/view/algorithms-in-a/9780596516246/ch04s08.html
Building a Custom eBay Watch List Using the SOAP API
If you're like me, you're always keeping an eye on what's happening on eBay. Whenever I find something that I'm interested in tracking, I use eBay's Watch List feature to remember it for me. However, I find when I'm watching lots of items it's not so easy to sort, filter, and display them as I want. That's where eBay Web Services comes in. What I did was write a program to retrieve all my watch list items using SOAP. Now I have complete control to customize the display and add the feature I want to the interface.
In this article I'll walk you through how I did this using ASP.NET 2.0. You'll learn the basics of eBay Web Services and see how easy it is to customize eBay.
To get started, register with the eBay Developers Program. Signing up for the program gives you access to the Sandbox, eBay's test environment that allows you to test your applications before taking them live. For this example, we want to pull down the real eBay Watch List data, so you need to get access to eBay's production environment. Register for access to the production eBay environment by submitting the Self-Certification Form. Once you complete this form you'll receive an email with instructions on how to retrieve your production developer keys. Follow those instructions, and once you've retrieved your keys save them in a text file for later use.
Once you have access to the production environment, create an authentication token for your eBay username. (Sign up for eBay first if you haven't done so.) eBay Web Services uses authentication tokens for user authentication to keep your user data secure. This system allows authorized third party applications to make Web Services calls on your behalf without those third parties having access to your eBay username and password. Use the Single User Authentication Token tool to create an authentication token for your test user. In the tool, select the Production environment and enter the production developer keys you retrieved earlier. Go through the flow, and when you're done save the authentication token in the same place as you saved your developer keys.
Now that you have all of that stuff out of the way, we're ready to go. eBay offers access to its web services through SOAP, XML, SDKs for .NET and Java, and even a REST API. For this example we'll be using the SOAP API. eBay exposes a WSDL file that you use to generate wrapper code for the SOAP API. In Visual Studio 2005 or Visual Web Developer 2005 Express Edition, first create a new Web Site, and then import the eBay WSDL file by right-clicking the web site root in the Solution Explorer, then selecting Add Web Reference...:
In the Add Web Reference dialog, load the WSDL URL:
It takes a few minutes to load the WSDL and generate the stubs from it. Once the WSDL is loaded, click Add Reference. If you get impatient with this process, just think: all this generated code is saving me time!
The example we are creating displays the contents of your eBay watch list in a simple table. Create a new Web Form, and in the <div> element insert the following table:
<asp:Table ID="Table1" runat="server">
  <asp:TableRow>
    <asp:TableCell>Time Left</asp:TableCell>
    <asp:TableCell>Title</asp:TableCell>
    <asp:TableCell>Current Price</asp:TableCell>
    <asp:TableCell>Item ID</asp:TableCell>
  </asp:TableRow>
</asp:Table>
The table headers are declared in the markup. The rest of the table rows with the watch list data will be added dynamically once the web service call is made.
The generated code allows you to make calls to any of the 100+ methods exposed by the eBay SOAP API. Once you have figured out how to make one eBay Web Service call, the rest is easy. The Page_Load method in the code behind file contains the following code that sets up the call (you'll find the complete code in the download that accompanies this article):
Visual C#
string endpoint = "";
string callName = "GetMyeBayBuying";
string siteId = "0";
string appId = "YOUR_APPID"; // use your app ID
string devId = "YOUR_DEVID"; // use your dev ID
string certId = "YOUR_CERTIFICATE"; // use your cert ID
string version = "437";
// Build the request URL
string requestURL = endpoint
+ "?callname=" + callName
+ "&siteid=" + siteId
+ "&appid=" + appId
+ "&version=" + version
+ "&routing=default";
// Create the service
eBayAPIInterfaceService service = new eBayAPIInterfaceService();
// Assign the request URL to the service locator.
service.Url = requestURL;
// Set credentials
service.RequesterCredentials = new CustomSecurityHeaderType();
service.RequesterCredentials.eBayAuthToken = "YOUR_TOKEN"; // use your token
service.RequesterCredentials.Credentials = new UserIdPasswordType();
service.RequesterCredentials.Credentials.AppId = appId;
service.RequesterCredentials.Credentials.DevId = devId;
service.RequesterCredentials.Credentials.AuthCert = certId;
Visual Basic
Dim endpoint As String = ""
Dim callName As String = "GetMyeBayBuying"
Dim siteId As String = "0"
Dim appId As String = "YOUR_APPID" 'TODO: Enter your AppID
Dim devId As String = "YOUR_DEVID" 'TODO: Enter your DevID
Dim certId As String = "YOUR_CERTIFICATE" 'TODO: Enter your CertID
Dim version As String = "437"
' Build the request URL
Dim requestURL As String = endpoint _
& "?callname=" & callName _
& "&siteid=" & siteId _
& "&appid=" & appId _
& "&version=" & version _
& "&routing=default"
' Create the service
Dim service As New eBayAPIInterfaceService()
With service
' Assign the request URL to the service locator.
.Url = requestURL
' Set credentials
.RequesterCredentials = New CustomSecurityHeaderType()
.RequesterCredentials.eBayAuthToken = "YOUR_TOKEN" 'TODO: Enter your production token
.RequesterCredentials.Credentials = New UserIdPasswordType()
With .RequesterCredentials.Credentials
.AppId = appId
.DevId = devId
.AuthCert = certId
End With
End With
Replace the "YOUR" strings with the developer keys and authentication token that you retrieved earlier. This code sets the SOAP header with the authentication information and creates the URL parameters that are passed in.
The one line in this code that will change from call to call is this:
Visual C#
string callName = "GetMyeBayBuying";
Visual Basic
Dim callName As String = "GetMyeBayBuying"
For whatever call you are making, set the callName accordingly.
The GetMyeBayBuying call retrieves all sorts of information about items you have purchased, items in your watch list, items you are bidding on, and more. The call to GetMyeBayBuying is very simple:
Visual C#
// Make the call to GetMyeBayBuying
GetMyeBayBuyingRequestType buyingRequest = new GetMyeBayBuyingRequestType();
ItemListCustomizationType watchListOptions = new ItemListCustomizationType();
watchListOptions.Sort = ItemSortTypeCodeType.TimeLeft;
buyingRequest.WatchList = watchListOptions;
buyingRequest.Version = version;
GetMyeBayBuyingResponseType buyingResponse = service.GetMyeBayBuying(buyingRequest);
ItemType[] items = buyingResponse.WatchList.ItemArray;
Visual Basic
' Make the call to GetMyeBayBuying
Dim buyingRequest As New GetMyeBayBuyingRequestType()
Dim watchListOptions As New ItemListCustomizationType()
watchListOptions.Sort = ItemSortTypeCodeType.TimeLeft
buyingRequest.WatchList = watchListOptions
buyingRequest.Version = version
Dim buyingResponse As GetMyeBayBuyingResponseType = _
service.GetMyeBayBuying(buyingRequest)
Dim items() As ItemType = buyingResponse.WatchList.ItemArray
The instance of GetMyeBayBuyingRequestType contains the call input options. To specify that we want to retrieve the contents of the watch list, we set the WatchList property with the ItemListCustomizationType options that we have created. We also set the Version property to the same version string as we are passing in the URL. The eBayAPIInterfaceService object contains a method for each call exposed by the SOAP API, and each method takes a "request type" object as a parameter. We pass in the request object that we've created and set the result to an instance of GetMyeBayBuyingResponseType. Finally, we extract the watch list items from the WatchList.ItemArray property of the response object, and set it to an array of ItemType.
Once we have retrieved the items, it is up to us to display the information we want or make further calculations:
Visual C#
// Add table rows
for (int i = 0; i < items.Length; i++)
{
TableCell timeLeftCell = new TableCell();
DateTime endTime = items[i].ListingDetails.EndTime;
DateTime now = DateTime.Now;
// Display that the item has ended, or how much time is left
if (endTime < now)
{
timeLeftCell.Text = "Ended";
}
else
{
TimeSpan timeLeft = endTime - now;
string timeLeftDisplay = "";
int days = timeLeft.Days;
if (days > 0)
timeLeftDisplay += days.ToString() + " days, ";
int hours = timeLeft.Hours;
if (hours > 0)
timeLeftDisplay += hours.ToString() + " hours";
int minutes = timeLeft.Minutes;
if (minutes > 0 && days == 0)
timeLeftDisplay += ", " + minutes.ToString() + " min";
int seconds = timeLeft.Seconds;
if (days == 0 && hours == 0)
timeLeftDisplay += ", " + seconds.ToString() + " sec";
timeLeftCell.Text = timeLeftDisplay;
}
// Display the item title
TableCell titleCell = new TableCell();
titleCell.Text = items[i].Title;
// Display the current price
TableCell currentPriceCell = new TableCell();
currentPriceCell.Text = items[i].SellingStatus.CurrentPrice.Value.ToString();
// Display the Item ID
TableCell itemIDCell = new TableCell();
itemIDCell.Text = "<a href=\""
+ items[i].ItemID + "\">" + items[i].ItemID + "</a>";
TableRow row = new TableRow();
if ((i % 2) == 1)
{
row.BackColor = System.Drawing.Color.LightGray;
}
row.Cells.Add(timeLeftCell);
row.Cells.Add(titleCell);
row.Cells.Add(currentPriceCell);
row.Cells.Add(itemIDCell);
Table1.Rows.Add(row);
}
Visual Basic
' Add table rows
For i As Integer = 0 To items.Length - 1
Dim timeLeftCell As New TableCell()
Dim endTime As DateTime = items(i).ListingDetails.EndTime
Dim now As DateTime = DateTime.Now
' Display that the item has ended, or how much time is left
If endTime < now Then
timeLeftCell.Text = "Ended"
Else
Dim timeLeft As TimeSpan = endTime - now
Dim timeLeftDisplay As String = String.Empty
Dim days As Integer = timeLeft.Days
If days > 0 Then
timeLeftDisplay &= days.ToString() & " days, "
End If
Dim hours As Integer = timeLeft.Hours
If hours > 0 Then
timeLeftDisplay &= hours.ToString() & " hours"
End If
Dim minutes As Integer = timeLeft.Minutes
If minutes > 0 And days = 0 Then
timeLeftDisplay &= ", " & minutes.ToString() & " min"
End If
Dim seconds As Integer = timeLeft.Seconds
If days = 0 And hours = 0 Then
timeLeftDisplay &= ", " & seconds.ToString() & " sec"
End If
timeLeftCell.Text = timeLeftDisplay
End If
' Display the item title
Dim titleCell As New TableCell()
titleCell.Text = items(i).Title
' Display the current price
Dim currentPriceCell As New TableCell()
currentPriceCell.Text = items(i).SellingStatus.CurrentPrice.Value.ToString()
' Display the Item ID
Dim itemIDCell As New TableCell()
itemIDCell.Text = "<a href=""" _
& items(i).ItemID & """>" & items(i).ItemID & "</a>"
Dim row As New TableRow()
If (i Mod 2) = 1 Then
row.BackColor = System.Drawing.Color.LightGray
End If
With row.Cells
.Add(timeLeftCell)
.Add(titleCell)
.Add(currentPriceCell)
.Add(itemIDCell)
End With
Table1.Rows.Add(row)
Next
The most complex code in this example actually has nothing to do with the web service call: the end time is passed back as a DateTime, but we want to show how much time is left for each item. First, for items that have not already ended, we calculate a TimeSpan between now and the end time. Then, we apply some simple heuristics on that TimeSpan to create a more friendly string representation. Modern platforms like .NET make it easy to get data from web services, so you'll often find yourself spending much more time manipulating the data that you've retrieved.
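The branching rules can be stated compactly outside ASP.NET as well; the C sketch below is a hypothetical helper (not from the article) that mirrors the display logic above, quirks included:

```c
#include <stdio.h>
#include <string.h>

/* Mirrors the article's time-left display rules: days shown if > 0,
 * hours if > 0, minutes only when no full day remains, seconds only
 * when neither days nor hours remain. */
void format_time_left(char *out, size_t cap,
                      int days, int hours, int minutes, int seconds) {
    size_t len = 0;
    out[0] = '\0';
    if (days > 0)
        len += snprintf(out + len, cap - len, "%d days, ", days);
    if (hours > 0)
        len += snprintf(out + len, cap - len, "%d hours", hours);
    if (minutes > 0 && days == 0)
        len += snprintf(out + len, cap - len, ", %d min", minutes);
    if (days == 0 && hours == 0)
        len += snprintf(out + len, cap - len, ", %d sec", seconds);
}
```

For example, one day and two hours left renders as "1 days, 2 hours", while three hours and five minutes renders as "3 hours, 5 min".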
Each cell in the table row is represented by a TableCell object. For each field in each item that we retrieve, a TableCell is created and its Text property is populated with what we want to display. We then create a TableRow object and call the Add method on its Cells property. And last, we add the row to the Rows property on the Table1 object that is declared on the form.
That's it! With one simple form and little more than 100 lines of code, you have a working application that integrates with eBay Web Services to do something useful. In a future article I'll show you how to integrate other calls and use more sophisticated XML processing to add unique features not available in the standard eBay watch list.
Looks like from the error you're attempting to subtract 2 date objects.
Switch to a DateTime object and you'll be able to do that. Something along the lines of
DateTime.Now - start
i am getting "application name invalid"..error...i passed all four parameters i.e APPID,DEVID,CERTID and AUTHtOKEN...but it fires above error
@daivagna and @haojian jin: are you sure you have set the eBayAuthToken property using your eBay auth token?
If you're still having problems, contact us via the contact link at the top of the blog. I'll email you from there.
I am having the same problem as daivagna.
There is an error called "application name invalid".
Very good sample, it's working fine. But can anybody help me get the order details using the same process? I have tried a lot but I am not getting any response.
please help me. ...
Hi. I have a question. How can I build the search function like the one in eBay Desktop? Which API can lead me there? Waiting for a response; I will very much appreciate your help if you can contact me by email.
Does anybody tried this on Visual Studio 2008 ? If I try to build it, the build hangs and takes 100% resources.
On 2005 eveything is OK.
@Victor: This happened to me when I was adding the reference, but it built fine under VS 2008. The ebay API takes a bit to download. That could be it. I think it took me about 15+ minutes when I did it.
1) Hi! I am confuse with the callName.
string callName = "GetMyeBayBuying";
Where can i find the call name?
2) I had a "Application name invalid." Server error page.
System.NullReferenceException: Object reference not set to an instance of an object.
ItemType items = buyingResponse.WatchList.ItemArray;
i am having problem with this line..
please help!!
very urgent!!
thanks
The following line of code is giving me a hell of a time:
eBayAPIInterfaceService service = new eBayAPIInterfaceService();
Yields ->
The type or namespace name 'eBayAPIInterfaceService' could not be found (are you missing a using directive or an assembly reference?)
I can't find any resources to explain where/what the assembly is.
@Xavier: No clue why it does that, you may want to contact Alan who wrote the article.
@simster: Your watchlist may be null, which may imply you got zero search results, from a quick glance.
@Barry: Are you copying code or did you use the source we provided? If you are just copying code, did you include the web service then at the top of the class include the using block to say You're using the ebay API?
@Barry:
This worked for me.
using ebayTest.com.ebay.developer;
ebayTest is my project name. My web service name is com.ebay.developer too.
This is from a fresh project too.
@Coding4Fun
I am copying the C# code from the blocks above.
I add the Service Reference from
And as far as namespaces, the only 3 I can find are:
using eBay.ServiceReference;
using eBay.Properties;
using eBay;
However I might be missing something, I can't find the eBayAPIInterfaceService object in the entire eBay namespace
I'm using VS 2008, .NET 3.5 if that would make a difference
I just tried the C# version from the download link above.
I am using VS 2008 Express.
I created a New Web site, imported the WSDL file as above.
Copied the source code from the download.
Inserted my AUTHID etc.
When compiling, I get one error on line 11
Line 11 is in Default.aspx.cs
using eBayWebServices;
Error 1 The type or namespace name 'eBayWebServices' could not be found (are you missing a using directive or an assembly reference?)
Hi
I left a question a few days ago and it has not shown up.
Any reason why?
@Marty: Sorry, I was at Mix 08 and am only now catching up. When you created your web service, what did you name it? That is what your using block for that should be. The author of this article renamed his from the default generated to "eBayWebService"
Do me a favor, type "using " then hit control-space. I'll bet you'll see one of the items in the list be the item that you named your web service.
HI,
Figured you must have been away.
I am new to all this so be gentle with me.
When creating the Web service, is that when you are doing the
Add Web Reference?
Where do I type the using CRTL-space?
Marty
@Marty: Email me by clicking the contact button up on the top and we'll take this offline from this thread. I'll be able to respond quicker then I'll post a response on this thread on how we fixed the issue.
I'll be able to do a screen cast possibly to help out and talk you through it.
@Desperate Student TT: Put in a break point, I bet your WatchList is null.
You'll need to do more defensive coding and verify that buyingResponse.WatchList != null before moving forward
I have this error!!
Object reference not set to an instance of an object.
Line 51: ItemType[] items = buyingResponse.WatchList.ItemArray;
HELP PLS T.T
solved this problem
Line 51: ItemType[] items = buyingResponse.WatchList.ItemArray
replace it with this code
if (buyingResponse.WatchList != null)
{
items = buyingResponse.WatchList.ItemArray;
}
and if (items != null), put this 'if' check around the entire 'add table rows' code
@Hemanth one question is are you sure what you're querying has values getting returned?
Does the source code download work?
Getting null values in response then how could i get items, where i have given valid production appid, devid & certid with relative usertoken.
Please guide me
https://channel9.msdn.com/coding4fun/articles/Building-a-Custom-eBay-Watch-List-Using-the-SOAP-API
Objective
Write a small JavaScript script that blinks a LED on a STM32F board by interacting with a couple of ARM mbed C HAL functions.
It should work on other targets supported by ARM mbed.
Background
There are many reasons why you might want to add a scripting functionality to your embedded MCU.
This project focuses on those who want to embed a JavaScript VM into their existing C firmware, e.g. in order to quickly customize the core business logic without having to reflash the device.
In this project, I'll choose V7, an embedded JavaScript VM I co-authored along with my teammates at Cesanta.
The V7 library is distributed as just two files: v7.c and v7.h. In order to use the V7 library all you need to do is to import those two files in your project.
V7 is written in portable C, and can thus easily be used from C++. V7 is platform independent; here we'll show how to embed it in ARM mbed projects and to build it on an STM32F4 device, but it should work on other targets too.
TL;DR: show me the code
Here is the full source code:
#include "mbed.h"
#include "v7.h"

DigitalOut led_green(LED1);

enum v7_err js_set_led(struct v7 *v7, v7_val_t *res) {
    led_green = !v7_get_bool(v7, v7_arg(v7, 0));
    return V7_OK;
}

enum v7_err js_wait(struct v7 *v7, v7_val_t *res) {
    wait(v7_get_double(v7, v7_arg(v7, 0)));
    return V7_OK;
}

int main() {
    struct v7 *v7 = v7_create();
    v7_set_method(v7, v7_get_global(v7), "setLed", &js_set_led);
    v7_set_method(v7, v7_get_global(v7), "wait", &js_wait);
    v7_exec(v7, "while(true) {" \
                "  setLed(false);" \
                "  wait(0.5); " \
                "  setLed(true);" \
                "  wait(0.5); " \
                "}", NULL);
}
You can easily clone the project in the ARM mbed online compiler or command-line tools; just open:
We'll break down this project and show how to install the required dependencies.
Before JavaScript
Let's see first how a simple blinker would look like without the scripting part:
#include "mbed.h"

DigitalOut led_green(LED1);

int main() {
    while(true) {
        led_green = false;
        wait(0.2);
        led_green = true;
        wait(0.8);
    }
}
You can now compile this code in the mbed online compiler or, alternatively, you can use the mbed-cli tool.
Enter JavaScript
Let's first add the V7 library. I packaged the v7.c and v7.h files in a mbed code library, so it's easier to add them to your project and to get updates.
Or, if you use the command-line tool:
mbed add
Now, the library will be built into your firmware. But in order to use it you need to include its header:
#include "v7.h"
Then, inside your main function, you have to create an instance of the virtual machine:
struct v7 *v7 = v7_create();
This v7 variable holds the whole VM state and needs to be passed around when calling V7 API functions.
So, how do we run a script?
v7_exec(v7, "print('hello from javascript')", NULL);
The last parameter of v7_exec is used to grab the result of a JavaScript expression. We'll ignore that.
JavaScript and devices
Awesome! We managed to run some JavaScript code!
But, how can we make that JavaScript code do something useful on the device? V7 is just a generic JS VM library, it doesn't know anything about your embedded board, its API etc.
Let's first write a simple JS script that uses two simple functions we're going to export from our device's SDK:
while(true) {
    setLed(false);
    wait(0.2);
    setLed(true);
    wait(0.8);
}
We can invoke it inline with v7_exec, or we can load it from a file on flash storage, or download it from network:
v7_exec(v7, "while(true) {" \
            "  setLed(false);" \
            "  wait(0.2); " \
            "  setLed(true);" \
            "  wait(0.8); " \
            "}", NULL);
So, let's register a couple functions that allow us to interact with the SDK at hand:
v7_set_method(v7, v7_get_global(v7), "setLed", &js_set_led);
v7_set_method(v7, v7_get_global(v7), "wait", &js_wait);
This code registers two global JavaScript functions, setLed and wait, that can be invoked from your scripts as if they were native functions.
Their implementation is written in C. Let's see how it works by looking closely at how js_wait is implemented:
enum v7_err js_wait(struct v7 *v7, v7_val_t *res) {
    wait(v7_get_double(v7, v7_arg(v7, 0)));
    return V7_OK;
}
- v7_arg(v7, 0): takes the first argument passed to wait(). It returns a v7_val_t value, which represents a generic JavaScript value.
- v7_get_double(v7, val): extracts the C double floating-point value from a v7_val_t
You can read more about the V7 API here.
https://www.hackster.io/mkm/run-javascript-on-stm32-with-mbed-0590e3
So I don't want the algorithm to run through all the stocks out there, but I want it to use a handful of different stocks I give it each day to trade on. How would I go about doing this?
Thanks
Joshua -
See. Alternatively, whatever process you are using to pick the stocks potentially could be coded into an algo to run on Quantopian.
Thanks Grant, I'm playing around with the CSV fetcher and I think this will do the trick. How can I view data from my csv in my Notebook to make sure it's working?
You can output limited data to the log. There is also a debugger, which might be helpful. Another approach is to use the record function and then view data in the research platform, but this has limitations, and probably isn't best for your problem. I'd start with output to the log.
Just try:
print my_output
If you are using a Pandas DataFrame, then you may want to use my_output.head(5) and my_output.tail(5) just to confirm that everything is there, as expected.
Alright, I've got that part down now. I have one column for the "Date" and another for the "Symbol" of the stocks I want to trade. How can I take the stock symbols in my CSV and use their SIDs to trade with? Would I put this code in the pre_func?
import talib
import pandas

def preview(df):
    log.info(df.head())
    # Get stocks from CSV here?
    return df

# Setup our variables
def initialize(context):
    context.stocks = symbols('SPY', 'QQQ')  # These will need to come from CSV
    fetch_csv('',
              pre_func = preview,
              post_func = None,
              date_column = 'Date',
              date_format = '%m/%d/%y',
              timezone = 'UTC')
Joshua -
Before you get too deep into this, I'm wondering what your objective might be? If you are aiming to get an allocation from Quantopian, I'm not sure they are so keen on fetcher, and may even not allow it (e.g. trading $50M with a link to an external google doc sounds like a non-starter). Also, note that Quantopian no longer supports retail trading (see).
You may want to consider doing everything within the Quantopian API, using the supplied data.
Thanks Grant, I didn't realize Quantopian was shutting down retail trading.
My objective is this:
- Each morning I create my own watchlist of stocks that I want to trade. I would input these tickers into the CSV, and when certain criteria are met the algo would execute trades on those stocks.
I am not necessarily shooting for allocation from Quantopian on this algo, it is more for personal experimentation. If I use the Quantopian API to filter through stocks, I would not be able to personally pick out the ones I want traded each day. That's why I haven't gone down that road so far. I have the CSV loaded and can log the stock symbols, but I don't know how to pull the symbols out of that dataframe and use their SIDs for trading.
I'm not so familiar with this end of Quantopian, and would have to dig into it. I'm not planning to use fetcher down the road, so maybe someone else can lend a hand here.
If you know the complete universe you are trading in, it can be put into the algo explicitly as a list (e.g. using symbols). Then, point-in-time, your fetcher file would specify which stocks to trade, in the master list. There's probably a better way of doing things, but this should work.
https://www.quantopian.com/posts/how-to-use-your-own-stocks-with-quantopian
On 2012-05-31 09:38, Paolo Bonzini wrote: > Il 31/05/2012 00:53, Luigi Rizzo ha scritto: >> The image contains my fast packet generator "pkt-gen" (a stock >> traffic generator such as netperf etc. is too slow to show the >> problem). pkt-gen can send about 1Mpps in this configuration using >> -net netmap in the backend. The qemu process in this case takes 100% >> CPU. On the receive side, i cannot receive more than 50Kpps, even if i >> flood the bridge with a a huge amount of traffic. The qemu process stays >> at 5% cpu or less. >> >> Then i read on the docs in main-loop.h which says that one case where >> the qemu_notify_event() is needed is when using >> qemu_set_fd_handler2(), which is exactly what my backend uses >> (similar to tap.c) > > The path is a bit involved, but I think Luigi is right. The docs say > "Remember to call qemu_notify_event whenever the [return value of the > fd_read_poll callback] may change from false to true." Now net/tap.c has > > static int tap_can_send(void *opaque) > { > TAPState *s = opaque; > > return qemu_can_send_packet(&s->nc); > } > > and (ignoring VLANs) qemu_can_send_packet is > > int qemu_can_send_packet(VLANClientState *sender) > { > if (sender->peer->receive_disabled) { > return 0; > } else if (sender->peer->info->can_receive && > !sender->peer->info->can_receive(sender->peer)) { > return 0; > } else { > return 1; > } > } > > So whenever receive_disabled goes from 0 to 1 or can_receive goes from 0 to 1, > the _peer_ has to call qemu_notify_event. 
> In e1000.c we have
>
>     static bool e1000_has_rxbufs(E1000State *s, size_t total_size)
>     {
>         int bufs;
>         /* Fast-path short packets */
>         if (total_size <= s->rxbuf_size) {
>             return s->mac_reg[RDH] != s->mac_reg[RDT] || !s->check_rxov;
>         }
>         if (s->mac_reg[RDH] < s->mac_reg[RDT]) {
>             bufs = s->mac_reg[RDT] - s->mac_reg[RDH];
>         } else if (s->mac_reg[RDH] > s->mac_reg[RDT] || !s->check_rxov) {
>             bufs = s->mac_reg[RDLEN] / sizeof(struct e1000_rx_desc) +
>                    s->mac_reg[RDT] - s->mac_reg[RDH];
>         } else {
>             return false;
>         }
>         return total_size <= bufs * s->rxbuf_size;
>     }
>
>     static int
>     e1000_can_receive(VLANClientState *nc)
>     {
>         E1000State *s = DO_UPCAST(NICState, nc, nc)->opaque;
>
>         return (s->mac_reg[RCTL] & E1000_RCTL_EN) && e1000_has_rxbufs(s, 1);
>     }
>
> So as a conservative approximation, you need to fire qemu_notify_event
> whenever you write to RDH, RDT, RDLEN and RCTL, or when check_rxov becomes
> zero. In practice, only RDT, RCTL and check_rxov matter. Luigi, does this
> patch work for you?
>
> diff --git a/hw/e1000.c b/hw/e1000.c
> index 4573f13..0069103 100644
> --- a/hw/e1000.c
> +++ b/hw/e1000.c
> @@ -295,6 +295,7 @@ set_rx_control(E1000State *s, int index, uint32_t val)
>      s->rxbuf_min_shift = ((val / E1000_RCTL_RDMTS_QUAT) & 3) + 1;
>      DBGOUT(RX, "RCTL: %d, mac_reg[RCTL] = 0x%x\n", s->mac_reg[RDT],
>             s->mac_reg[RCTL]);
> +    qemu_notify_event();
>  }
>
>  static void
> @@ -922,6 +923,7 @@ set_rdt(E1000State *s, int index, uint32_t val)
>  {
>      s->check_rxov = 0;
>      s->mac_reg[index] = val & 0xffff;
> +    qemu_notify_event();

This still looks like the wrong tool: Packets that can't be delivered are
queued. So we need to flush the queue and clear the blocked delivery there.
qemu_flush_queued_packets appears more appropriate for this. Conceptually,
the backend should be responsible for kicking the iothread as needed.

Jan
https://lists.gnu.org/archive/html/qemu-devel/2012-05/msg04417.html
06-26-2009 09:42 AM
Hi,
one of my big problems when developing for the ml403 is that I never managed to make the USB work. In particular, I never managed to get the c67x00 driver to work in Linux, although I managed to make the usb keyboard example code (from EDK) run together with the SystemACE (they share some lines) with no OS. One of the reasons for upgrading to arch=powerpc has been this. A little question to begin with (I might come with more of these later) is: how should the device tree look so as to load the c67x00 hcd driver? I have tried with something like:
xps_epc_0: usb@80800000 {
    compatible = "cypress,c67x00";
    interrupt-parent = <&xps_intc_0>;
    interrupts = < 0 2 >;
    reg = < 0x80800000 0x10000 >;
    xlnx,family = "virtex4";
} ;
I have tried playing with some of the things there, with no luck. I managed to load it before with xparameters.h, but I don't know that much about how device trees work in this regard.
This is for me one of the parts I liked least about using the ml403: the lack of support for the USB. Of course there is an Xapp to make it work, but alone, without the SystemACE, it is for some of us useless. If I manage to make it work I wouldn't mind writing a document about it (hw/sw instructions), since there doesn't seem to be any information on the whole internet about it.
Any ideas?
Thx!
06-26-2009 10:26 AM
What does "no luck" mean? Does linux report anything about USB when it comes up? I've not used that USB chip before, but I have used another vendor's hooked up via the EPC, and your .dts entry looks similar to what I used - assuming "interrupts" is set correctly.
Terry
06-29-2009 02:06 AM - edited 06-29-2009 06:01 AM
With no luck I mean that the driver doesn't load. I am using a merged branch between DENX+Adeos 2.6.29.4 (Xenomai) and Xilinx development. This is my boot sequence:
Linux/PowerPC load: console=ttyUL0 root=/dev/xsa2 noinitrd rootfstype=ext3 rw
Finalizing device tree... flat tree at 0x5e9300
Using Xilinx Virtex machine description
Linux version 2.6.29.4 (xxx@xxx) (gcc version 4.2.2) #6 PREEMPT Fri Jun 26 17:47:04 CEST 2009
Zone PFN ranges:
DMA 0x00000000 -> 0x00004000
Normal 0x00004000 -> 0x00004000
Movable zone start PFN for each node
early_node_map[1] active PFN ranges
0: 0x00000000 -> 0x00004000
MMU: Allocated 1088 bytes of context maps for 255 contexts
Built 1 zonelists in Zone order, mobility grouping on. Total pages: 16256
Kernel command line: console=ttyUL0 root=/dev/xsa2 noinitrd rootfstype=ext3 rw
Xilinx intc at 0x81800000 mapped to 0xfdfff000
PID hash table entries: 256 (order: 8, 1024 bytes)
clocksource: timebase mult[d55555] shift[22] registered
I-pipe 2.6-02: pipeline enabled.
Console: colour dummy device 80x25
console [ttyUL0] enabled
Dentry cache hash table entries: 8192 (order: 3, 32768 bytes)
Inode-cache hash table entries: 4096 (order: 2, 16384 bytes)
Memory: 60644k/65536k available (3808k kernel code, 4828k reserved, 144k data, 244k bss, 152k init)
SLUB: Genslabs=10, HWalign=32, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
Calibrating delay loop... 593.92 BogoMIPS (lpj=296960)
Mount-cache hash table entries: 512
net_namespace: 520 bytes
NET: Registered protocol family 16
bio: create slab <bio-0> at 0
XGpio: /plb@0/gpio@81460000: registered
XGpio: /plb@0/gpio@81440000: registered
XGpio: /plb@0/gpio@81420000: registered
XGpio: /plb@0/gpio@81400000: registered
usbcore: registered new interface driver usbfs
usbcore: registered new interface driver hub
usbcore: registered new device driver usb
I-pipe: Domain Xenomai registered.
Xenomai: hal/powerpc started.
Xenomai: real-time nucleus v2.4.8 (Lords Of Karma) loaded.
Xenomai: starting native API services.
Xenomai: starting POSIX services.
Xenomai: starting RTDM services.
Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
fuse init (API version 7.11)
msgmni has been set to 118
io scheduler noop registered
io scheduler anticipatory registered
io scheduler deadline registered
io scheduler cfq registered (default)
84000000.serial: ttyUL0 at MMIO 0x84000003 (irq = 16) is a uartlite
brd: module loaded
loop: module loaded
xsysace 83600000.sysace: Xilinx SystemACE revision 1.0.12
xsysace 83600000.sysace: capacity: 4001760 sectors
xsa: xsa1 xsa2 xsa3
Xilinx SystemACE device driver, major=254
xilinx_emaclite 81000000.ethernet: Device Tree Probing
xilinx_emaclite 81000000.ethernet: MAC address is now 2: 0: 0: 0: 0: 0
eth0 (): not using net_device_ops yet
xilinx_emaclite 81000000.ethernet: Xilinx EmacLite at 0x81000000 mapped to 0xC5080000, irq=18
Generic platform RAM MTD, (c) 2004 Simtec Electronics
xilinx-xps-spi 81818000.xps-spi: at 0x81818000 mapped to 0xC503A000, irq=19
usbmon: debugfs is not available
usbcore: registered new interface driver cypress_cy7c63
mice: PS/2 mouse device common for all mice
i2c /dev entries driver
Device Tree Probing 'i2c'
81620000.i2c #0 at 0x81620000 mapped to 0xC50A0000, irq=20
Device Tree Probing 'i2c'
81600000.i2c #0 at 0x81600000 mapped to 0xC50C0000, irq=21
usbcore: registered new interface driver usbhid
usbhid: v2.6:USB HID core driver
TCP cubic registered
NET: Registered protocol family 17
RPC: Registered udp transport module.
RPC: Registered tcp transport module.
kjournald starting. Commit interval 5 seconds
EXT3 FS on xsa2, internal journal
EXT3-fs: recovery complete.
EXT3-fs: mounted filesystem with ordered data mode.
VFS: Mounted root (ext3 filesystem) on device 254:2.
Freeing unused kernel memory: 152k init
The device-tree script doesn't generate anything for the cypress, so I added it myself from some references that I found on the linuxppc-dev mailing lists. To me it clearly doesn't seem to load. I have assigned interrupt 0 to the HPI interrupt from the cypress. The signals to the FPGA are muxed based on some glue logic I found in the ML403 reference design.
06-29-2009 03:30 AM - edited 06-29-2009 03:33 AM
One more thing. When I managed to load the driver in PPC, I had something like:
/*
* Cypress USB C67x00 shortcut macro for single instance
*/
#define XPAR_C67x00_USB(num) { \
.name = "c67x00", \
.id = num, \
.num_resources = 2, \
.resource = (struct resource[]) { \
{ \
.start = XPAR_C67X00_USB_PRH##num##_BASEADDR, \
.end = XPAR_C67X00_USB_PRH##num##_BASEADDR + 0xf, \
.flags = IORESOURCE_MEM, \
}, \
{ \
.start = XPAR_OPB_INTC_0_SYSTEM_USB_HPI_INT_PIN_INTR, \
.end = XPAR_OPB_INTC_0_SYSTEM_USB_HPI_INT_PIN_INTR, \
.flags = IORESOURCE_IRQ, \
}, \
}, \
.dev.platform_data = &(struct c67x00_platform_data) { \
.sie_config = C67X00_SIE1_HOST | C67X00_SIE2_PERIPHERAL_A, \
.hpi_regstep = 0x02, /* A0 not connected on 16bit bus */ \
}, \
}
But I don't know exactly how that should translate into .dts notation...
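For what it's worth, here is a sketch of how that platform data might translate into a .dts node. The cypress,* property names are hypothetical - the stock driver has no OF bindings, so they would only mean something once the driver is patched to parse them:

```dts
xps_epc_0: usb@80800000 {
    compatible = "cypress,c67x00";
    reg = < 0x80800000 0x10 >;           /* matches BASEADDR .. BASEADDR + 0xf */
    interrupt-parent = <&xps_intc_0>;
    interrupts = < 0 2 >;

    /* Hypothetical properties mirroring c67x00_platform_data;
     * values are placeholders, not tested bindings. */
    cypress,sie-config = < 4 >;          /* SIE1 host, SIE2 peripheral */
    cypress,hpi-regstep = < 2 >;         /* A0 not connected on 16-bit bus */
};
```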
06-30-2009 03:29 AM
Well, I will answer this question myself. Looking into the driver I have found that there is no Open Firmware (OF) support for this, so the system is not going to load c67x00 from the device tree without it. I am working on a patch for it right now; whenever I have it working I'll try to publish it for review.
If someone is thinking about doing it (or has done it), please let me know.
02-28-2011 06:01 AM
Hello,
I am trying to modify the dts file because, like you, I am trying to use the CY7C67300 driver, but the controller is not recognised by Linux.
Is your patch done?
Do you have information about the .dts file?
How can I know the interrupt port number between the Cypress controller and the interrupt controller (xps_intc)? It's not indicated in the mhs file...
Thank you.
Regards
Pierre
02-28-2011 06:35 AM
Hi,
yes, I got this working some time ago. However, I don't have the patch right now and won't have it until next week (out of the office at the moment). Remind me here in 1.5 weeks and I'll try to get it.
02-28-2011 06:38 AM
Hi,
Ok thank you very much
Regards
Pierre
03-09-2011 06:53 AM
Hi,
I have begun modifying the c67x00 driver for ".dts" support, but I have some problems with the interrupt.
Have you found your patch?
Thank you
Pierre
06-10-2011 02:54 AM - edited 06-10-2011 02:55 AM
I am trying to use the Cypress USB controller in host mode as well. The current kernel does not support DTS initialisation of the c67x00 driver - you can compile it, but it never gets loaded. The patch everyone seems to implement bypasses the DTS init and just hard-wires the driver to start.
I found a more elegant way: I modified the driver to accept parameters from the DTS and register itself with the platform loader, so it gets started like other drivers. While that seems to work, I also have issues with the interrupt on the Cypress/xps_epc device.
In my case the interrupt seems to keep firing. Switching from edge to level triggering does not seem to change anything. Does anybody have a solution?
BTW my own thread is over here: Use ML605 Cypress USB controller as EHCI host in linux
06-15-2011 08:33 AM
I solved my problem, see the thread I linked before for information and a patch to "fix" the Cypress driver to use the DTS properly.
06-15-2011 11:12 PM
Hello,
Ok thanks for the "patch".
I succeeded in using the c67x00 driver, but without using the device tree. I will try the "patch" to get device-tree support.
Have you fixed your interrupt error?
I get very slow USB bandwidth... Have you had problems with reading, writing, or bandwidth?
Best regards.
Pierre
06-16-2011 01:01 AM
The interrupt error was due to unconnected pins which prevented the clearing of the interrupt.
The bandwidth is not extremely high (due to the lack of DMA, I think I read that somewhere) but enough to drive a USB headset for example.
06-16-2011 01:07 AM
Ok,
The bandwidth for me was 600 kbytes/s, but I had writing errors, so I reduced the value TOTAL_FRAME_BW in the driver from 12000 to 4096. After that, I obtained a bandwidth of 120 kbytes/s, but without reading or writing errors.
Thanks.
Pierre
04-16-2012 06:45 AM
hi
I have the same problem as you.
I'm working with the ML507 FPGA with the PowerPC processor,
and I'm trying to use the USB port with the Linux 2.6 kernel,
but I have a problem with the generation of the device tree.
Could you please give me some information on how to make the USB port work with the Cypress c67x00?
thanks
05-08-2012 10:39 AM
08-12-2012 08:14 PM
I have the same problem as you do.
I tried almost every way I found on the forum, although there are not many.
Have you solved the problem?
Thank you!
Xiangyu
https://forums.xilinx.com/t5/Embedded-Linux/Device-Tree-for-c67x00/m-p/42751
Bot::BasicBot::Pluggable - extended simple IRC bot for pluggable modules
version 0.98
  # with all defaults.
  my $bot = Bot::BasicBot->new();

  # with useful options. pass any option
  # that's valid for Bot::BasicBot.
  my $bot = Bot::BasicBot::Pluggable->new(
      channels    => ["#bottest"],
      server      => "irc.example.com",
      port        => "6667",
      nick        => "pluggabot",
      altnicks    => ["pbot", "pluggable"],
      username    => "bot",
      name        => "Yet Another Pluggable Bot",
      ignore_list => [qw(hitherto blech muttley)],
  );
There's a shell script installed to run the bot.
$ bot-basicbot-pluggable --nick MyBot --server irc.perl.org
Then connect to the IRC server, /query the bot, and set a password. See Bot::BasicBot::Pluggable::Module::Auth for further details.
There are two useful ways to create a Pluggable bot. The simple way is:
  # Load some useful modules.
  my $infobot_module = $bot->load("Infobot");
  my $google_module  = $bot->load("Google");
  my $seen_module    = $bot->load("Seen");

  # Set the Google key (see).
  $google_module->set("google_key", "some google key");

  $bot->run();
The above lets you run a bot with a few modules, but not change those modules during the run of the bot. The complex, but more flexible, way is as follows:
  # Load the Loader module.
  $bot->load('Loader');

  # run the bot.
  $bot->run();
This takes less code up front, but needs further setup once the bot has joined a server. The Loader module lets you talk to the bot in-channel and tell it to load and unload other modules. The first one you'll want to load is the 'Auth' module, so that other people can't load and unload modules without permission. Then you'll need to log in as an admin and change the default password, per the following /query:
  !load Auth
  !auth admin julia
  !password julia new_password
  !auth admin new_password
Once you've done this, your bot is safe from other IRC users, and you can tell it to load and unload other installed modules at any time. Further information on module loading is in Bot::BasicBot::Pluggable::Module::Loader.
  !load Seen
  !load Google
  !load Join
The Join module lets you tell the bot to join and leave channels:
  <botname>, join #mychannel
  <botname>, leave #someotherchannel
The perldoc pages for the various modules will list other commands.
Bot::BasicBot::Pluggable started as Yet Another Infobot replacement, but now is a generalised framework for writing infobot-type bots that lets you keep each specific function separate. You can have separate modules for factoid tracking, 'seen' status, karma, googling, etc. Included default modules are listed below. Use perldoc Bot::BasicBot::Pluggable::Module::<module name> for help on their individual terminology.
  Auth    - user authentication and admin access.
  DNS     - host lookup (e.g. nslookup and dns).
  Google  - search Google for things.
  Infobot - handles infobot-style factoids.
  Join    - joins and leaves channels.
  Karma   - tracks the popularity of things.
  Loader  - loads and unloads modules as bot commands.
  Seen    - tells you when people were last seen.
  Title   - gets the title of URLs mentioned in channel.
  Vars    - changes module variables.
The way the Pluggable bot works is very simple. You create a new bot object and tell it to load various modules (or, alternatively, load just the Loader module and then interactively load modules via an IRC /query). The modules receive events when the bot sees things happen and can, in turn, respond. See perldoc Bot::BasicBot::Pluggable::Module for the details of the module API.
Creates a new bot. Apart from the additional attributes loglevel and logconfig, it is identical to the new method in Bot::BasicBot. Please refer to their accessors for documentation.
Loads a module for the bot by name, from ./ModuleName.pm or ./modules/ModuleName.pm (in that order) if one of these files exists, falling back to Bot::BasicBot::Pluggable::Module::$module if not.
Reloads the module $module - equivalent to unloading it (if it's already loaded) and reloading it. Will stomp the old module's namespace - warnings are expected here. Not totally clean - if you're experiencing odd bugs, restart the bot if possible. Works for minor bug fixes, etc.
Removes a module from the bot. It won't get events any more.
Returns the handler object for the loaded module $module. Used, e.g., to get the 'Auth' handler to check if a given user is authenticated.
Returns a list of the names of all loaded modules as an array.
Returns a list of all available modules, whether loaded or not.
Adds a handler object with the given name to the queue of modules. There is no order specified internally, so adding a module earlier does not guarantee it'll get called first. Names must be unique.
Remove a handler with the given name.
Returns the bot's object store; see Bot::BasicBot::Pluggable::Store.
Logs all of its arguments at loglevel info. Please do not use this function in new code; it is simply provided as a fallback for old modules.
Returns the bot's loglevel, or sets it if an argument is supplied. It expects trace, debug, info, warn, error or fatal as value.
Returns the bot's configuration file for logging. Please refer to Log::Log4perl::Config for the configuration file format. Setting this to a different file after calling init() has no effect.
Returns or set
Calls the named $method on every loaded module with that method name.
Returns help for ModuleName when the message is 'help ModuleName'. If no message has been passed, returns a list of all possible handlers to return help for.
Runs the bot. POE core gets control at this point; you're unlikely to get it back.
During the make, make test, make install process, POE will moan about its kernel not being run. This is a Bot::BasicBot problem, apparently. Reloading a module causes warnings as the old module gets its namespace stomped. Not a lot you can do about that. All modules must be in the Bot::Pluggable::Module:: namespace. Well, that's not really a bug.
Bot::BasicBot::Pluggable is based on POE, and really needs the latest version. Because POE is like that sometimes. You also need POE::Component::IRC. Oh, and Bot::BasicBot. Some of the modules will need more modules, e.g. Google.pm needs Net::Google. See the module docs for more details.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Mike Eldridge <diz@cpan.org>
I am merely the current maintainer; however, the AUTHOR heading is traditional.
Bot::BasicBot was written initially by Mark Fowler, and worked on heavily by Simon Kent, who was kind enough to apply some patches we needed for Pluggable. Eventually. Oh, yeah, and I stole huge chunks of docs from the Bot::BasicBot source too. I spent a lot of time in the mozbot code, and that has influenced my ideas for Pluggable. Mostly to get round its awfulness.
Various people helped with modules. Convert was almost ported from the infobot code by blech. But not quite. Thanks for trying... blech has also put a lot of effort into the chump.cgi & chump.tem files in the examples/ folder, including some /inspired/ calendar evilness.
And thanks to the rest of #2lmc who were my unwilling guinea pigs during development. And who kept suggesting totally stupid ideas for modules that I then felt compelled to go implement. Shout.pm owes its existence to #2lmc.
http://search.cpan.org/~diz/Bot-BasicBot-Pluggable-0.98/lib/Bot/BasicBot/Pluggable.pm
Use Intel C/C++ compilers V12.0.1.107 or a higher version to resolve an error when preprocessing and then compiling C++ code that includes the math.h or mathimf.h header
By Eugeny Gladkov (Intel), published on December 20, 2011
Version:
Intel® C/C++ compilers V11.0 and earlier, and Intel® C/C++ compilers V12.0 built before 2010/10/29.
Product:
Intel® C/C++ compiler
Operating System:
Linux and MAC OS* X
Problem Description:
C++ code that uses the math.h or mathimf.h header file cannot be compiled by Intel® C/C++ compilers V11.0 and earlier if it was first preprocessed to a separate file and then compiled. The reason is that, in C++, the exception specification of the functions corresponding to the ISO C classification macros differs between the GNU and Intel® math headers.
Consider as an example the following test.cpp file:
#include <math.h>
int foo() {
float x=1.0;
return (int)expf(x);
}
It can be preprocessed to file test1.cpp by the following command:
icc test.cpp -EP -o test1.cpp
But the new file fails to compile:
icc -c test1.cpp
It produces errors like:
test1.cpp(1659): error: omission of exception specification is incompatible with previous function "__fpclassifyf" (declared at line 1146)
extern int __fpclassifyf ( float __x );
Resolution Status:
The problem has been fixed in Intel® C/C++ compilers built after 2010/10/28, i.e. V12.0.1.107 or higher. See /en-us/articles/intel-composer-xe for download.
https://software.intel.com/es-es/articles/use-intel-cc-compilers-v1201107-or-higher-version-to-resolve-error-when-preprocessing-and
#include <unistd.h>

On error, −1 is returned, and errno is set appropriately.
Depending on the filesystem, other errors can be returned. The more general errors are listed below:
EACCES Search permission is denied on a component of the path prefix. (See also path_resolution(7).)

EFAULT path points outside your accessible address space.

EIO    An I/O error occurred.

ELOOP  Too many symbolic links were encountered in resolving path.

ENAMETOOLONG
       path is too long.

ENOENT The file does not exist.

ENOMEM Insufficient kernel memory was available.

ENOTDIR
       A component of path is not a directory.

EPERM  The caller has insufficient privilege.
A child process created via fork(2) inherits its parent's root directory. The root directory is left unchanged by execve(2).
FreeBSD has a stronger jail() system call.
chdir(2), pivot_root(2), path_resolution(7)
http://manpages.courier-mta.org/htmlman2/chroot.2.html
Hi, I’m John Sheehan, Partner Architect on the Windows Development team.
We really appreciate you building apps for the preview releases. Your feedback helps us make Windows 8 great. Of course, building on a preview means that you need to make updates to your apps for each preview release – that’s what this post is all about, migrating your projects from the Developer Preview to the Consumer Preview. I’m going to highlight some of the changes here, but for a detailed guide to the changes, you can download the white paper on migrating your apps from //Build to Windows 8 Consumer Preview from the Dev Center.
When you start thinking about migrating your apps to the Consumer Preview, I’m sure some of you are wondering why we chose to make some of these changes. I can personally assure you that we take every change seriously. Some improvements are made based on direct feedback we hear: a feature is confusing so we make it easier, or it lacks some capability you told us you need. Other times, after we complete a feature and start using it ourselves, we realize it just didn’t land where we wanted it to, so we take what we learned and make it better. There are many factors we consider. Rest assured, we carefully think through every decision, with the goal of creating a great platform for your Metro style apps.
I had to go through the migration process with the Connect 4 app I built on the Developer Preview. I know it takes a bit of work to do the migration. But if you follow the steps outlined in the post and document, you’ll be up and running pretty quickly.
So, let’s dive in!
While it may be tempting to keep your existing project and try to migrate it to the Consumer Preview, enough has changed since the Developer Preview that it's best to start with a new project. For example, in Visual Studio there were a number of changes in project definition files: the JavaScript project extension was renamed to .jsproj, and the import statement changed in .csproj/.vbproj files. Your existing project won't even open because of these changes. After you start a new project, you can move pieces of your old project into the new one.
These steps are a good guideline for migrating your code. You can also find these steps and many more migration details in //Build to Windows 8 Consumer Preview. (I’ll mention this several more times before you're done reading this post!)
By following these steps, you’ll naturally incorporate many of the changes into your app’s code. Now let’s discuss some specific changes that can affect your code as you move it into the new templates.
First, I’d like to describe some changes to the basic programming model that affect developers in any programming language.
The manifest is the DNA of your app. As we make changes in the platform, they often have an impact on the structure of the manifest. Given the number of changes in the manifest, it will likely be easiest to start with the new manifest that gets created when you create your new project and use the manifest editor to modify this new manifest, rather than trying to port your existing manifest.
In the Developer Preview, all async methods were cold start. When you called an async method, you got back an async operation object. You registered for completion and progress callbacks (if applicable) on this operation object and then called IAsyncInfo.Start. The problem with this model was that the call to Start was redundant. As a developer you could reasonably expect that the async operation starts when you make the initial method call.
To make the async model more intuitive, we changed it to a hot start model. In the Consumer Preview, when you call an async method, you get back an async operation object but you don't need to call Start. Instead, the operation is implicitly started when the async method is called. Because we don’t need Start anymore, we removed it from IAsyncInfo.
If you have already been using .then() (JavaScript) or await (C#), this change doesn’t affect you.
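The hot-start behavior has a close analogue in plain JavaScript Promises, whose executor runs as soon as the promise is created. This is an analogy, not WinRT code:

```javascript
// Analogy only: a plain Promise starts "hot", like Consumer Preview
// async operations. The function body runs as soon as it is called;
// there is no separate start() step before attaching .then().
function startDownloadAsync(log) {
    log.push("started");               // runs immediately on the call
    return Promise.resolve("done");
}

const log = [];
const op = startDownloadAsync(log);    // no op.start() needed
console.log(log[0]);                   // already "started" before any .then()
op.then(function (result) {
    log.push(result);                  // completion callback chains on
});
```

Under the old cold-start model, the first log entry would only appear after an explicit Start call; with hot start, calling the method is enough.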
Additionally, we added PPL tasks to make async programming easier in C++. I recommend that you take a look at the tutorial in //Build to Windows 8 Consumer Preview and migrate your async code to the PPL model.
Windows Runtime APIs can give your app access to system resources, such as file handles and network sockets. These resources are limited and often the user or other apps can’t use them when your app is accessing them. Your app is responsible for freeing these resources after it’s done using them. However in the Developer Preview it was difficult to explicitly free these resources and so many apps held on to them longer than necessary.
In the Consumer Preview, Windows Runtime APIs that access these system resources can control their lifetimes. For example, in JavaScript these WinRT objects expose a close method and in C# they implement the IDisposable interface. With lifetime management exposed directly on these WinRT APIs, it is now much easier to free system resources when your app is done using them. Use these new lifetime management capabilities to reduce the resource consumption of your app and to make sure your customers always have access to their system resources, like files, when your app is not using them.
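The pattern looks roughly like this in JavaScript; openFakeFile and its members are made-up stand-ins for a WinRT object that exposes close():

```javascript
// Sketch of the new lifetime pattern: objects wrapping scarce system
// resources expose close() (IDisposable in C#). Everything here is an
// illustrative stand-in, not a real WinRT class.
function openFakeFile(name) {
    let open = true;
    return {
        read() {
            if (!open) { throw new Error("closed"); }
            return "data:" + name;
        },
        close() { open = false; },     // releases the underlying handle
        get isOpen() { return open; },
    };
}

const f = openFakeFile("notes.txt");
try {
    f.read();                          // use the resource
} finally {
    f.close();                         // free it deterministically
}
```

The try/finally shape is the point: the handle is released as soon as the app is done, rather than whenever garbage collection happens to run.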
We received feedback from you that the COM threading model underlying WinRT was confusing, because it introduced considerations that don’t exist in other programming environments. Some of the issues were:
To fix these issues, we changed the threading model for WinRT objects. At a high level, the changes are:
We introduced many improvements to the contracts in the Consumer Preview. These improvements come in the form of changes to the APIs, functionality, manifest registrations, and UI. Contracts like Search, Share, Settings, the File Picker, etc. have all been improved in one way or another. For example we added a new File Picker contract, FileSavePickerActivatedEventArgs, that allows your app to act as a Save As target. This is an incredibly powerful feature – with it you can build a picker that lets users open and save files to your cloud as simply as if they were on the local disk. To accommodate this change, we renamed the File Picker contract in the Consumer Preview to FileOpenPickerActivatedEventArgs.
For contracts that are supported in Visual Studio the easiest way to incorporate these changes is to use the new item Template to create the contract from scratch. You can then add your existing code that supports the contract to the new template.
A number of APIs relied on URI protocol schemes to access content in the app’s package or in the app’s ApplicationData state locations. These APIs include resource tags in Metro style apps written in HTML/CSS/JS, Live Tiles, the ResourceLoader APIs, the XAML WebView control, and the file Storage APIs.
We updated protocol names to make them consistent across all Metro style apps and Windows 8 integration points. We also renamed these protocol schemes:
  Developer Preview scheme    Consumer Preview scheme
  ms-wwa://                   ms-appx://
  ms-wwa-web://               ms-appx-web://
  localfolder://              ms-appdata://
Additionally, XAML apps are now restricted to using supported protocol schemes, like ms-appx://, to access resources.
Several changes in the Consumer Preview are specific to Metro style apps written in HTML/CSS/JS. Here are some of the notable changes.
The JavaScript and HTML controls available in the Developer Preview have undergone many changes in response to your feedback. Now it's easier to add the controls to your app, and the methods for hooking controls up to content are more intuitive. Some notable controls that have changed and will require updates are the ListView, AppBar, and FlipView. For example, you can no longer use an ArrayDataSource to populate a ListView. Instead, you now use a WinJS.Binding.List to populate your ListViews. Binding.List makes it much easier to work with your ListView's in-memory data.
Again, //Build to Windows 8 Consumer Preview has the full set of control changes.
Previously, you could navigate within the top-level document of your app from the locally-packaged StartPage to a web-based URL. This prevented your app from interacting with any of the important notifications, such as suspend and resume, because these events are Windows Runtime events and, for security reasons, WinRT objects are inaccessible from the web context. In the Consumer Preview, you are no longer able to navigate to content other than that which is in the local context. In other words, it must come from your app package and be referenced via the ms-appx:// protocol scheme.
Consequently, you may need to reorganize your app logic to rely on an iframe for loading your web content, keeping a single persistent top-level document from the local context always in memory.
In the Developer Preview, the navigation model in HTML/CSS/JS Metro style apps relied on fragment loading APIs for navigating to different pages within an app. This model was fragile and forced you to write a lot of code to handle things like control initialization and page state.
In the Consumer Preview, we introduced a high-level page control in the Windows Library for JavaScript (WinJS) for loading content within a page. Additionally, we updated the Visual Studio templates to use these page controls. In most cases, page controls bypass the need to deal with the fragment loading APIs directly. This makes navigating across your HTML fragments much easier.
Page controls build on top of the fragment loader. They provide an actual object that backs rendered fragments, give you a place to store state, and handle parenting the fragment for you. There is a WinJS control backing your fragment—attached to the parented DOM element—which provides it with a well-defined lifecycle. You can also add arbitrary methods or state to this control object.
JavaScript is very tolerant of unhandled exceptions, stopping execution of any further code in the function containing the exception, but otherwise continuing on in a way that is often unnoticeable. When this happens, your app is no longer in a predictable state. This means that data that your app is relying on may not be valid, or your UI may end up in a broken state. In a web browser, this may be acceptable because the user can refresh the page. But, in a Metro style app, the user must be able to run your app for weeks without ever needing to close and reopen it.
As such, an unhandled JavaScript exception now logs an error message to the event log and terminates the app.
If you have developed XAML Metro style apps, you’ll notice some changes in the Consumer Preview specific to your programming languages.
In the Consumer Preview, we’ve made significant changes for C++ developers to make data binding XAML UI to custom C++ classes much simpler. When you annotate your class with the Windows.UI.Xaml.Data.Bindable attribute, your class becomes visible to the binding engine and you no longer have to implement the interface on this class. This significantly reduces your code overhead.
If your XAML Metro style app uses navigation APIs, such as Windows.UI.Xaml.Controls.Frame.Navigate or Windows.UI.Xaml.Navigation.NavigationEventArgs.Type, you’ll need to make some quick changes. These APIs now accept a Type object as the target, rather than the string name representation of the class. Check out //Build to Windows 8 Consumer Preview for the full list of affected APIs.
We made numerous changes to Windows.UI.Xaml.Controls.ApplicationBar functionality to make it more consistent with the user experience for Metro style apps. These changes also remove the overhead of you having to worry about implementation details to match the Metro style experience.
One major change is that you can place an AppBar within your app using the new Windows.UI.Xaml.Controls.Page.TopAppBar and BottomAppBar properties. We recommend that you use these new properties rather than placing your AppBars directly within your app’s layout. We added, renamed, or removed several other AppBar properties.
Semantic zoom is the term used across the Windows 8 platform when the user can zoom in or out on content and change its context. For example, zooming in on a collection of photos might change the photos from small thumbnails to large previews complete with names, dates, etc. The JumpViewer control enabled semantic zoom in the Developer Preview. It has been renamed to the SemanticZoom control. The new name better reflects the user experience that you provide when you implement one of these controls in your app.
In addition to the changes called out in this post, many APIs in the Windows Runtime and the Windows Library for JavaScript have changed. For example, in the Windows Runtime there are changes in the Networking, Media, System, UI, Storage, Devices, ApplicationModel, Globalization, Graphics, and Data namespaces. While many of these changes are minor, you’ll want to take care when migrating your code so that you make all the necessary changes in your app.
I look forward to your comments. If you have detailed “how do I …” questions I suggest you post them on the developer forums and we’ll be there to help you figure it out.
-- John Sheehan, Partner Architect, Windows Development
http://blogs.msdn.com/b/windowsappdev/archive/2012/03/08/migrating-your-apps-from-developer-preview-to-consumer-preview.aspx
PMDAPROMETHEUS(1) General Commands Manual PMDAPROMETHEUS(1)
pmdaprometheus - Prometheus PMDA
$PCP_PMDAS_DIR/prometheus/pmdaprometheus [-D] [-n] [-c config] [-d domain] [-l logfile] [-r root] [-t timeout] [-u user]
pmdaprometheus is a Performance Metrics Domain Agent (PMDA) which creates PCP metrics from Prometheus endpoints, which provide HTTP based access to application metrics.

The default config directory is $PCP_PMDAS_DIR/prometheus/config.d/, see ``CONFIGURATION SOURCES'' below. The default URL fetch timeout is 2 seconds. The default user, if not specified with the -u option, is the current user.

If the -n option is given, the list of configuration files will not be sorted prior to processing. This list is sorted by default but that can be expensive if there are a large number of configuration files (URLs and/or scripts).

If the -D option is given, additional diagnostic messages will be written to the PMDA log file, which is $PCP_LOG_DIR/pmcd/prometheus.log by default (see also -l below). In addition, the metric prometheus.control.debug controls the same debug flag and can be set with the following command:

    pmstore prometheus.control.debug value

where value is either 1 (to enable verbose log messages) or 0 (to disable verbose log messages). This is particularly useful for examining the http headers passed to each fetch request, filter settings and other processing details that are logged when the debugging flag is enabled.

The -d option may be used to override the default performance metrics domain number, which defaults to 144. It is strongly recommended not to change this. The domain number should be different for every PMDA on the one host, and the same domain number should be used for the pmdaprometheus PMDA on all hosts. See also the -r option, which allows the root of the dynamic namespace to be changed from the default prometheus.

The -l option may be used to specify logfile as the destination for PMDA messages instead of the default, $PCP_LOG_DIR/pmcd/prometheus.log. As a special case, logfile may be "-" to send messages to the stderr stream instead, e.g. -l-.
This would normally be the stderr stream for the parent process, pmcd(1), which may itself have redirected stderr. This redirection is normally most useful in a containerized environment, or when using dbpmda(1). The -r option allows the root of the dynamic namespace to be changed to root from the default, prometheus. In conjunction with other command line options, this allows pmdaprometheus to be deployed as a different PMDA with distinct metrics namespace and metrics domain on the same host system. Note that all PMDAs require a unique domain number so the -d option must also be specified. Use of the -r option may also change the defaults for some other command line options, e.g. the default log file name and the default configuration directory.
As it runs, pmdaprometheus periodically and recursively scans the $PCP_PMDAS_DIR/prometheus/config.d directory (or the directory specified with the -c option), looking for source URL files (*.url) and executable scripts or binaries. Any files that do not have the .url suffix or are not executable are ignored - this allows documentation files such as "README" and non-executable "common" script function definitions to be present without being considered as config files.

A remote server does not have to be up or stay running - the PMDA tolerates remote URLs that may come and go over time. The PMDA will relay data and metadata when/if they are available, and will return errors when/if they are down. PCP metric IDs, internal and external instance domain identifiers are persisted and will be restored when individual metric sources become available and/or when the PMDA is restarted.

In addition, the PMDA checks directory modification times and will rescan for new or changed configuration files dynamically. It is not necessary to restart the PMDA when adding, removing or changing configuration files.
Each file with the .url suffix found in the config directory or a sub-directory contains one complete HTTP or HTTPS URL at which pmdaprometheus can reach a Prometheus endpoint. Local file access is also supported with a conventional file://somepath/somefile URL, in which case somepath/somefile should contain prometheus formatted metric data.

The first line of a .url config file should be the URL, as described above. Subsequent lines, if any, are prefixed with a keyword that can be used to alter the http GET request. A keyword must end with ':' (colon) and the text extends to the end of the line. Comment lines that start with # and blank lines are ignored. The only currently supported keywords are HEADER: and FILTER:.

HEADER: headername: value ... to end of line
    Adds headername and its value to the headers passed in the http GET request for the configured URL. An example configuration file that provides 3 commonly used headers and an authentication token might be:

        # this is a comment
        HEADER: Accept: text/html
        HEADER: Keep-Alive: 300
        HEADER: Connection: keep-alive
        HEADER: Authorization: token ABCDEF1234567890

    As mentioned above, header values extend to the end of the line. They may contain any valid characters, including colons. Multiple spaces will be collapsed to a single space, and leading and trailing spaces are trimmed. A common use for headers is to configure a proxy agent and the assorted parameters it may require.

FILTER: INCLUDE METRIC regex or FILTER: EXCLUDE METRIC regex
    Dynamically created metric names that match regex will be either included or excluded in the name space, as specified. The simple rule is that the first matching filter regex for a particular metric name is the rule that prevails. If no filter regex matches (or there are no filters), then the metric is included by default, i.e. the default filter if none are specified is

        FILTER: INCLUDE METRIC .*

    This is backward compatible with older versions of the configuration file that did not support filters.

Multiple FILTER: lines would normally be used, e.g. to include some metrics but exclude all others, use FILTER: EXCLUDE METRIC .* as the last of several filters that include the desired metrics. Conversely, to exclude some metrics but include all others, use FILTER: EXCLUDE METRIC regex. In this case it's not necessary (though doesn't hurt) to specify the final FILTER: INCLUDE METRIC .* because, as stated above, any metric that does not match any filter regex will be included by default.

Label filtering uses similar FILTER: syntax and semantics. FILTER: EXCLUDE LABEL regex will delete all labels matching regex from all metrics defined in the configuration file. The same rules as for metrics apply for labels too - an implicit rule

        FILTER: INCLUDE LABEL .*

applies to all labels that do not match any earlier filter rule.

Caution is needed with label filtering because by default, all labels are used to construct the PCP instance name. By excluding some labels, the instance names will change. Excluding all labels for a particular metric changes that metric to be singular, i.e. to have no instance domain. In addition, by excluding some labels, different instances of the same metric may become duplicates. When such duplicates occur, the last duplicate instance returned by the endpoint URL prevails over any earlier instances. For these reasons, it is recommended that label filtering rules be configured when the configuration file is first defined, and not changed thereafter. If a label filtering change is required, the configuration file should be renamed, which effectively defines a new metric, with the new (or changed) instance naming.

Unrecognized keywords in configuration files are reported in the PMDA log file but otherwise ignored.
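The first-match-wins filter semantics can be sketched in a few lines of Python. This is an illustration only, not the PMDA's actual implementation; the rule-list representation and use of unanchored regex matching are assumptions.

```python
import re

# Illustrative first-match-wins filtering, as described above.
# rules is an ordered list of ("INCLUDE" | "EXCLUDE", regex) pairs.
def metric_included(name, rules):
    for action, pattern in rules:
        if re.search(pattern, name):
            return action == "INCLUDE"
    # No rule matched: the implicit default is FILTER: INCLUDE METRIC .*
    return True

# Include only loadavg metrics, exclude everything else.
rules = [("INCLUDE", r"^loadavg"), ("EXCLUDE", r".*")]
print(metric_included("loadavg", rules))      # True
print(metric_included("cpu_seconds", rules))  # False
print(metric_included("anything", []))        # True (default include)
```

Note how the trailing EXCLUDE .* rule only applies to names that did not already match an earlier INCLUDE rule, exactly as described in the text.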
Executable scripts present in the $PCP_PMDAS_DIR/prometheus/config.d directory or sub-directories will be executed and the stdout stream containing prometheus formatted metric data will be parsed as though it had come from a URL or file. The stderr stream from a script will be sent to the PMDA log file, which by default can be found in $PCP_LOG_DIR/pmcd/prometheus.log.

Note that scripted sources do not support label or metric filtering (as described above for URL sources) - they can simply do their own filtering in the script itself with sed(1), awk(1), or whatever tool is desired.

A simple example of a scripted config entry follows:

    #! /bin/sh
    awk '{
        print("# HELP loadavg local load average")
        print("# Type loadavg gauge")
        printf("loadavg {interval=\"1-minute\"} %.2f\n", $1)
        printf("loadavg {interval=\"5-minute\"} %.2f\n", $2)
        printf("loadavg {interval=\"15-minute\"} %.2f\n", $3)
    }' /proc/loadavg

This script produces the following Prometheus-formatted metric data when run:

    # HELP loadavg local load average
    # Type loadavg gauge
    loadavg {interval="1-minute"} 0.12
    loadavg {interval="5-minute"} 0.27
    loadavg {interval="15-minute"} 0.54

If the above script was saved and made executable in a file named $PCP_PMDAS_DIR/prometheus/config.d/local/system.sh then this would result in a new PCP metric named prometheus.local.system.loadavg which would have three instances for the current load average values: 1-minute, 5-minute and 15-minute.

Scripted config entries may produce more than one PCP leaf metric name. For example, the above "system.sh" script could also export other metrics such as CPU statistics, by reading /proc/stat on the local system. Such additional metrics would appear as peer metrics in the same PCP metric subtree. In the case of CPU counters, the metric type definition should be counter, not gauge. For full details of the prometheus exposition formats, see the Prometheus documentation.
All metrics from a file named JOB.* will be exported as PCP metrics with the prometheus.JOB metric name prefix. Therefore, the JOB name must be a valid non-leaf name for PCP PMNS metric names. If the JOB name has multiple dot-separated components, the resulting PMNS names will include those components and care is needed to ensure there are no overlapping definitions, e.g. metrics returned by JOB.response may overlap or conflict with metrics returned by JOB.response.time. Config file entries (URLs or scripts) found in subdirectories of the config directory will also result in hierarchical metric names. For example, a config file named $PCP_PMDAS_DIR/prometheus/config.d/mysource/latency/get.url will result in metrics being created (by fetching that source URL) below prometheus.mysource.latency.get in the PCP namespace. Scripts found in subdirectories of the config directory similarly result in hierarchical PCP metric names.
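The mapping from config file path to PCP metric name prefix described above can be sketched as follows. This is a hypothetical helper for illustration; the PMDA's real code differs.

```python
from pathlib import PurePosixPath

# Sketch of the naming rule described above: a config entry at
# <subdirs>/<JOB>.<suffix> yields metrics under prometheus.<subdirs>.<JOB>
def metric_prefix(relpath):
    p = PurePosixPath(relpath)
    job = p.name.rsplit(".", 1)[0]  # strip the .url or script suffix
    return ".".join(["prometheus", *p.parts[:-1], job])

print(metric_prefix("mysource/latency/get.url"))  # prometheus.mysource.latency.get
print(metric_prefix("local/system.sh"))           # prometheus.local.system
print(metric_prefix("job.response.url"))          # prometheus.job.response
```

The last example shows why JOB names with multiple dot-separated components can overlap with metrics from a neighbouring job.response.time source, as the text warns.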
As described above, changes and new additions can be made to files in the configuration directory without having to restart the PMDA. These changes are detected automatically and the PCP metric names below prometheus in the PMNS will be updated accordingly, i.e. new metrics will be dynamically added and/or existing metrics removed. In addition, pmdaprometheus honors the PMCD_NAMES_CHANGE pmFetch(3) protocol that was introduced in PCP version 4.0. In particular, if prometheus metrics are being logged by a PCP version 4.0 or later pmlogger(1), new metrics that appear as a result of changes in the PMDA configuration directory will automatically start to be logged, provided the root of the prometheus PMDA namespace is configured for logging in the pmlogger configuration file. See pmlogger(1) for details. An example of such a pmlogger configuration file is : log mandatory on 2 second { # log all metrics below the root of the prometheus namespace prometheus }
The PMDA maintains special control metrics, as described below. Apart from prometheus.control.debug, each of these metrics is a counter and has one instance for each configured metric source. The instance domain is adjusted dynamically as new sources are discovered. If there are no sources configured, the metric names are still defined but the instance domain will be empty and a fetch will return no values.

prometheus.control.calls
    total number of times each configured metric source has been fetched (if it's a URL) or executed (if it's a script), since the PMDA started.

prometheus.control.fetch_time
    total time in milliseconds that each configured metric source has taken to return a document, excluding the time to parse the document.

prometheus.control.parse_time
    total time in milliseconds that each configured metric source has taken to parse each document, excluding the time to fetch the document.

When converted to a rate, the calls metric represents the average fetch rate of each source over the sampling interval (time delta between samples). The fetch_time and parse_time counters, when converted to a rate, represent the average fetch and parsing latency (respectively) during the sampling interval.

The prometheus.control.debug metric has a singular value, defaulting to 0. If a non-zero value is stored into this metric using pmstore(1), additional debug messages will be written to the PMDA log file.
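For example, given two samples of the counters above, the average per-fetch latency over the interval can be computed like this. This is a sketch of the counter semantics, not part of the PCP tooling.

```python
# Derive average fetch latency from two samples of the counters above:
# delta of the fetch_time ms counter divided by delta of the calls counter.
def avg_fetch_latency_ms(fetch_time_ms, calls):
    """Each argument is a (previous, current) pair of counter samples."""
    d_time = fetch_time_ms[1] - fetch_time_ms[0]
    d_calls = calls[1] - calls[0]
    return d_time / d_calls if d_calls else 0.0

# 8 fetches took 240 ms in total during the sampling interval.
print(avg_fetch_latency_ms((1000, 1240), (50, 58)))  # 30.0
```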
pmdaprometheus and libpcp internals impose some numerical constraints about the number of sources (4095), metrics (1024) within each source, and instances for each metric (4194304).
Install the Prometheus PMDA by using the Install script as root:

    # cd $PCP_PMDAS_DIR/prometheus
    # ./Install

To uninstall, do the following as root:

    # cd $PCP_PMDAS_DIR/prometheus
    # ./Remove

pmdaprometheus is launched by pmcd(1) and should never be executed directly. The Install and Remove scripts notify pmcd when the agent is installed or removed. When scripts and .url files are added, removed or changed in the configuration directory, it is usually not necessary to restart the PMDA - the changes will be detected and managed on subsequent requests to the PMDA.
$PCP_PMDAS_DIR/prometheus/Install
    installation script for the pmdaprometheus agent

$PCP_PMDAS_DIR/prometheus/Remove
    undo installation script for the pmdaprometheus agent

$PCP_PMDAS_DIR/prometheus/config.d/
    contains URLs and scripts used by the pmdaprometheus agent as sources of prometheus metric data

$PCP_LOG_DIR/pmcd/prometheus.log
    default log file for error messages from pmdaprometheus

$PCP_VAR_DIR/config/144.*
    files containing internal tables for metric and instance ID number persistence (domain 144)

See also pmcd(1), pmlogger(1), pmstore(1), PMWEBAPI(3) and pmFetch(3).
Pages that refer to this page: pmlogger(1), pmdasenderror(3), pmwebapi(3)
http://man7.org/linux/man-pages/man1/pmdaprometheus.1.html
one question, would it be possible to add some of the introspection features from the Objective-C runtime, such as class_copyMethodList, etc, so that the list of available selectors is visible, perhaps when doing a dir(object)? as far as i can tell, ctypes won't let us call runtime methods directly, will it?
I see the option to not clear globals when running a script has gone away.... is this a change for good? i for one really liked having that feature off ... made hacking away at code much easier, since you could run the main script, then run some additional aux commands..
Interesting, I guess I haven't really tested this on a 32 bit device. Looks like I have to use objc_msgSend_stret there (which doesn't even exist in the 64-bit runtime for some reason).
that worked. i updated __call__ to check for Structure restype, and called objc_msgSend_stret. i suspect one may actually need to check the size of the struct, but this seems to work for most structs ive encountered.
No idea if this would be of any use, but have you tried to use cffi with the CTypesBackend? Might or might not be a more convenient interface than raw ctypes.
back on clearing globals, thought id share the workaround for stash:
create launch_stash.py which imports, rather than runs, stash, then creates a new instance.
since the module never gets cleared, it survives global clearing.
import stash
_stash = stash.StaSh()
_stash.run()
stash._stash = _stash
also, some may find this useful as an action menu script. instead of pressing Play to run a script, run this from the action menu, which executes the current script without clearing globals. that's at least a workaround if the option to not clear globals doesn't come back.
# execute script in editor, in current interpreter session without clearing globals
import editor
execfile(editor.get_path())
- Hackingroelz
New to the beta. Firstly, everything seems to run very smoothly on my iPad. However at some point the app stopped working completely: I couldn't open it anymore, I tried restarting my iPad but that didn't work. I had to reinstall it to use it again. It might've been caused by something in iOS 9 (beta 3) though. Also, I noticed that the MIDIPlayer doesn't seem to work properly in a UI action (even with ui.in_background, it's immediately stopped when the action function ends), although it can be fixed with an empty while loop looping until the end of the MIDI file is reached. Lastly, while it's not really a bug, it seems I have to open a new tab, tap Documentation and close the tab to open the documentation, which is a little bit unintuitive.
Anyway, nice work on the update, the tabbed editor and appex module are really useful!
https://forum.omz-software.com/topic/1971/pythonista-1-6-beta-160020/15
If you're a Web developer, you know you should target your Web applications to be “mobile first.” This means that your Web applications should look great on a mobile device, and good, if not great, on a desktop browser. There are many techniques you can use to help you develop for mobile. One technique is to use Bootstrap to give yourself responsive styles that change based on the device being used. Another technique, and the focus of this article, is to eliminate HTML tables.
HTML tables, when not used correctly, can cause problems with small screens such as those found on smart phones. This article shows you how to rework your HTML tables to fit better on mobile devices. In addition, you'll learn to use Bootstrap panel classes to completely eliminate HTML tables.
The Problem with HTML Tables
HTML tables are used by many Web developers because they're easy to program, and provide a way for users to see a lot of information like they would on a spreadsheet. But just because something is easy to use and conveys a lot of data, doesn't necessarily mean it's the best tool. There are many reasons why an HTML table isn't suitable for user consumption.
- A table presents too much data on the page, so the user has too much to concentrate upon.
- A user's eyes become fatigued after staring at rows and columns of data much more quickly than when data is spread out.
- It's hard for a user to distinguish between the data in each column because each column is uniform and nothing stands out.
- On a mobile device, the user frequently needs to pan right and left to see all the data. This leads to an annoyed user, and is very unproductive.
HTML Table on Desktop versus Mobile
In Figure 1, you see a list of tabular product data. This renders nicely on a normal desktop browser because the user has a lot of screen real-estate and they don't need to scroll left and right to see all the data.
Look at this same page rendered on a smart phone, as shown in Figure 2. The user is only able to see the left-most column of the table. If they don't know that they can scroll to the right, they're missing some important information. On some mobile browsers, the page may render the complete table, but it's so small that it's hard to read. Either way, the user is forced to interact with their phone to view the data. They must scroll left to right, or maybe pinch or spread with their fingers.
Create an MVC Project
If you wish to follow along creating the sample for this article, create a new MVC project using Visual Studio. Name the project AlternativeTable. Once you have a new MVC project, add three classes into the \Models folder. The names for each of these classes are Product, ProductManager, and ProductViewModel. Instead of using a database, create some mock data in the ProductManager class. The Product class is shown in the following code snippet:
public class Product
{
    public int ProductId { get; set; }
    public string ProductName { get; set; }
    public DateTime IntroductionDate { get; set; }
    public string Url { get; set; }
    public decimal Price { get; set; }
}
Several lines of the ProductManager class are shown in Listing 1. You need to add a few more Product objects into the list so you can display several rows of data. Or, see the sidebar for how to download the complete sample. You can then copy the ProductManager.cs class into the \Models folder to have several product objects to display while running this sample.
The last class is a view model that's called from the MVC Controller. Using an MVVM approach to development provides for a nice separation of concerns in your applications. It's also very easy to bind properties in your CSHTML pages to your view model classes. The ProductViewModel class is shown in the following code snippet.
public class ProductViewModel
{
    public List<Product> Products { get; set; } = new List<Product>();

    public void LoadProducts()
    {
        ProductManager mgr = new ProductManager();
        Products = mgr.Get();
    }
}
The MVC Controller
You need a MVC controller to load the data and feed that data to the CSHTML pages used to render your page of product data. Right mouse-click on the Controllers folder, select Add > Controller… Select MVC 5 Controller – Empty from the list of templates and click the Add button. Change the name to ProductController and click the OK button. Write the following code in the Index method.
public ActionResult Index()
{
    ProductViewModel vm = new ProductViewModel();
    vm.LoadProducts();
    return View(vm);
}
This method creates an instance of the ProductViewModel class and calls the LoadProducts method to build the Products property in the view model. The CSHTML page you are going to build uses the Products property of the View Model passed to it to build the HTML table.
The HTML Table View
Create a new folder under the \Views folder called \Product. Right mouse-click on this folder name and select Add > MVC 5 View Page with Layout (Razor) from the menu. Set the name to Index and click the OK button. When presented with a list of layouts, choose _Layout.cshtml from the dialog box and click the OK button. Write the code shown in Listing 2.
As you can see from Listing 2, there's nothing out of the ordinary for this table. You use Bootstrap table classes to help with styling the table. You loop through the collection of product data in the Products property of the ProductViewModel class. Each time through the loop, display the appropriate data from the Product class in each table cell.
https://www.codemag.com/Article/1803021/Eliminate-HTML-Tables-for-Better-Mobile-Web-Apps
Assembly.GetType Method (String, Boolean)
Updated: July 2010
Assembly: mscorlib (in mscorlib.dll)
Parameters
- name
- Type: System.String
The full name of the type.
- throwOnError
- Type: System.Boolean
true to throw an exception if the type is not found; false to return null.
Return Value
Type: System.Type
An object that represents the specified class.
Implements: _Assembly.GetType(String, Boolean)
This method only searches the current assembly instance. The name parameter includes the namespace but not the assembly. To search other assemblies for a type, use the Type.GetType(String) method overload, which can optionally include an assembly display name as part of the type name.
The throwOnError parameter only affects what happens when the type is not found. It does not affect any other exceptions that might be thrown. In particular, if the type is found but cannot be loaded, TypeLoadException can be thrown even if throwOnError is false.
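The throwOnError pattern - return null quietly or raise - has a direct analog in other runtimes. A hedged Python sketch follows, using importlib as a stand-in for the assembly; this is not the .NET API itself.

```python
import importlib

# Python analog of GetType(name, throwOnError): look a class up by name,
# and either raise or return None when it is not found.
def get_type(module_name, class_name, throw_on_error=False):
    try:
        return getattr(importlib.import_module(module_name), class_name)
    except (ImportError, AttributeError):
        if throw_on_error:
            raise
        return None

print(get_type("collections", "Counter"))     # <class 'collections.Counter'>
print(get_type("collections", "NoSuchType"))  # None
```

As with the .NET method, the flag only controls the not-found case; any other exception raised while loading would still propagate.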
http://msdn.microsoft.com/en-us/library/19y21115.aspx
Components and supplies
Necessary tools and machines
Apps and online services
About this project
Background
A breeder, a friend of mine, asked me for help to find a technique to warn her when one of her mares is giving birth. We could not find on the market any suitable device that would allow her to continue her other activities and alert her at any time, especially when she is not at the farm.
Requirements
1. The device shall be able to send an alert to a mobile phone, per SMS, per email…
2. The device shall be able to have access to a network at any time
3. The device shall be dust and moisture-proof
4. The device shall be shock-resistant
5. The device shall be able to monitor the pregnancy for at least 3 weeks
How the requirements are fulfilled
1. The device shall be able to send an alert to a mobile phone, per SMS, per email…
To give birth, a mare lies on her flank. The strategy is to detect this change of angle to trigger the alert.
The required detection range is from 45 to 90°. It was decided to use two ball tilt sensors, which also reduces power consumption.
This is not the calving season, so a reconstitution took place
The first alert is sent if the mare stays at least 30 seconds on her flank; two more alerts are sent at the one and two minute marks to ensure the breeder saw the messages. The duration of 30 seconds was chosen to avoid false alerts due to other movements of the horse.
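The timing rule can be simulated in a few lines. This is an illustration only; the device itself runs the Arduino sketch shown in the Code section, which fires at the 30, 60 and 120 second marks.

```python
# Alerts fire at the 30 s, 60 s and 120 s marks while the mare stays on
# her flank; a brief movement shorter than 30 s triggers nothing.
ALERT_MARKS_S = (30, 60, 120)

def alert_times(seconds_on_flank):
    return [t for t in ALERT_MARKS_S if seconds_on_flank >= t]

print(alert_times(20))   # [] - no false alert
print(alert_times(150))  # [30, 60, 120]
```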
The arduino MKRFox 1200 sends the message "HORSE" to the Sigfox backend.
I created a Callback to the Twilio platform that sends an SMS to the breeder.
Fanfan not being too cooperative for a minute, here's how it works.
2. The device shall be able to have access to a network at any time
For this project, the stable is 6 km away from the closest GSM relay, so the network is too weak to use an Arduino with a GSM card in a portable configuration. To overcome this issue, it was decided to check the coverage of Sigfox. The picture below shows that the Sigfox coverage using 3 stations is good.
3. The device shall be dust and moisture-proof
The mare may be outside, therefore the material for the housing of the device is selected to resist outdoor environment.
4. The device shall be shock-resistant
The housing material has been selected to have strong resistance. However, a horse is a very powerful animal and we cannot ensure that the housing will resist all treatments. To avoid risk related to the battery ignition in case of shock it was decided to ban the use of Li-ion or LiPo battery.
The use of mercury ball sensors was proscribed so as not to endanger the horse in case the device is damaged.
5. The device shall be able to monitor the pregnancy for at least 3 weeks
The breeder knows approximately the date of birth, and it is important to monitor the last 3 weeks of pregnancy. Since the Arduino MKR FOX 1200 draws about 650 μA in standby mode, 2 AAA 1250 mAh batteries allow a battery life of roughly 80 days.
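A quick back-of-the-envelope check of that figure, using the values from the text above:

```python
# Battery life estimate: capacity divided by standby draw.
standby_current_ma = 0.650  # 650 uA standby consumption
capacity_mah = 1250         # two AAA cells in series share one 1250 mAh capacity
hours = capacity_mah / standby_current_ma
days = hours / 24
print(round(days))  # 80
```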
The Board and wiring
Twilio
You need to create a twilio account
Enter the phone number that will receive the alerts.
Sigfoxbackend
Create your Sigfox account and associate your Arduino MKRFox1200
Select the device type
Select CALLBACKS, New, Custom callback
Enter the information:
Type : DATA, UPLINK
Channel : URL
Url pattern :
The AccountSID and AuthToken are in your Twilio account
Thanks to Jennifer, Fantasia, Zoe, Delphine and Elena for their help.
Code
PonyArduino
#include <SigFox.h>
#include <ArduinoLowPower.h>

const uint8_t SWITCH_PIN = 1;
const String payload = "HORSE!";
const uint8_t debug = false;
int Count = 0;   // Timer
int Circle = 0;  // Number of messages sent. After 3 messages, go to sleep

void setup() {
  if (debug == true) {
    Serial.begin(9600);
    while (!Serial) {};
  }
  if (!SigFox.begin()) {
    Serial.println("Shield error or not present!");
    return;
  }
  delay(200);
  // Send the module to the deepest sleep
  SigFox.end();
  // Attach switch pin and enable the interrupt
  pinMode(SWITCH_PIN, INPUT_PULLUP);
  // The mare lies on the side, switch closes the circuit
  LowPower.attachInterruptWakeup(SWITCH_PIN, CircuitClose, FALLING);
}

void loop() {
  LowPower.sleep();
  Circle = 0;
  Count = 0;
  // If the mare gets up, the switch opens and the module goes to sleep,
  // or after 3 messages sent the module goes to sleep
  delay(1000);
  while (Circle != 1) {
    if (digitalRead(SWITCH_PIN) == HIGH) {
      Circle = 1;
    }
    // Timer count
    delay(1000);
    Count++;
    // Send message after 30s, 60s and 120s
    if (Count == 30 || Count == 60) {
      sendString(payload);
    }
    if (Count == 120) {
      Circle = 1;
      sendString(payload);
    }
  }
}

void sendString(String str) {
  // Start the module
  SigFox.begin();
  // Wait at least 100 ms
  delay(100);
  // Clear all pending interrupts
  SigFox.status();
  delay(1);
  SigFox.beginPacket();
  SigFox.print(str);
  int ret = SigFox.endPacket();  // send buffer to the Sigfox network
  SigFox.end();
}

void CircuitClose() {
}
Custom parts and enclosures
Schematics
Submitted to Contest
Arduino MKR FOX 1200 Contest
Author
Published on November 2, 2017
The QMouseDriverPlugin class is an abstract base class for mouse driver plugins in Qtopia Core. More...
#include <QMouseDriverPlugin>
Inherits QObject.
The QMouseDriverPlugin class is an abstract base class for mouse driver plugins in Qtopia Core.
Note that this class is only available in Qtopia Core.
When a custom mouse driver is implemented as a plugin, Qtopia Core's implementation of the QMouseDriverFactory class will automatically detect the plugin and load the driver into the server application at runtime. See How to Create Qt Plugins for details.
See also QWSMouseHandler and QMouseDriverFactory.
Constructs a mouse driver plugin with the given parent.
Note that this constructor is invoked automatically by the Q_EXPORT_PLUGIN2() macro, so there is no need for calling it explicitly.
Destroys the mouse driver plugin.
Note that Qt destroys a plugin automatically when it is no longer used, so there is no need for calling the destructor explicitly.
Implement this function to create a driver matching the type specified by the given key and device parameters. Note that keys are case-insensitive.
See also keys().
Implement this function to return the list of valid keys, i.e. the mouse drivers supported by this plugin.
Qtopia Core provides ready-made drivers for several mouse protocols, see the pointer handling documentation for details.
See also create().
Changes to the System.Uri namespace in Version 2.0
Several changes were made to the System.Uri class. These changes fixed incorrect behavior, enhanced usability, and enhanced security.
Constructors:
All constructors that have a dontEscape parameter.
Methods:
For URI schemes that are known to not have a query part (file, ftp, and others), the '?' character is always escaped and is not considered the beginning of a Query part.
For implicit file URIs (of the form "c:\directory\file@name.txt"), the fragment character ('#') is always escaped unless full unescaping is requested or LocalPath is true.
UNC hostname support was removed; the IDN specification for representing international hostnames was adopted.
LocalPath always returns a completely unescaped string.
ToString does not unescape an escaped '%', '?', or '#' character.
Equals now includes the Query part in the equality check.
Operators "==" and "!=" are overridden and linked to the Equals method.
IsLoopback now produces consistent results.
The URI "" is no longer translated into "".
"#" is now recognized as a host name terminator. That is, "" is now converted to "".
A bug when combining a base URI with a fragment has been fixed.
A bug in HostNameType is fixed.
A bug in NNTP parsing is fixed.
A URI of the form HTTP:contoso.com now throws a parsing exception.
The Framework correctly handles userinfo in a URI.
URI path compression is fixed so that a broken URI cannot traverse the file system above the root.
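The query and fragment delimiter rules described above are not unique to System.Uri; generic URI parsers split on the same characters. As an illustration only (using Python's urllib.parse here, not the .NET API), '#' terminates the host/path and starts the fragment, while '?' starts the query:

```python
from urllib.parse import urlsplit

# '#' acts as a host name terminator: everything after it is the fragment.
parts = urlsplit("http://example.com#section")
print(parts.netloc, parts.fragment)   # example.com section

# '?' begins the query part; '#' still marks the fragment after it.
parts = urlsplit("http://example.com/path?x=1#top")
print(parts.query, parts.fragment)    # x=1 top
```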
|
http://msdn.microsoft.com/en-us/library/ms229708(v=vs.100).aspx
|
CC-MAIN-2014-23
|
refinedweb
| 236
| 62.24
|
I am trying to get into OOP, and I have written this little example program that I run in the shell. At the moment I have to type accountNumber = bankAccount("user name") to create an object. I want the user to input their name (which is an attribute of my bankAccount object), and I want the accountNumber to be generated automatically, starting from 1 and counting up as more users open accounts. I know the basics, i.e. how to get user input etc., but I'm not sure if this is the right way to be doing it.
Any help would be much appreciated.
Thanks
Rhys
- Code: Select all
import sys, os, time

class bankAccount():
    def __init__(self, name):
        self.name = name
        self.balance = float(0)
        # Note: the original used "print ("Name: "),self.name", which prints
        # only the label and silently discards the value (a Python 2 habit).
        print("Name:", self.name)
        print("Balance:", self.balance)

    def withdraw(self, amount):
        self.balance -= amount

    def deposit(self, amount):
        self.balance += amount

    def info(self):
        print("Name:", self.name)
        print("Balance:", u"\xA3", self.balance)
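One common way to get the auto-incrementing account number the poster asks for is a class attribute that counts instances. A sketch (the `_next_number` counter and the fixed names are illustrative, not part of the original code):

```python
class BankAccount:
    """Each new account automatically receives the next account number."""
    _next_number = 1  # class attribute: shared by all instances

    def __init__(self, name):
        self.name = name
        self.balance = 0.0
        self.number = BankAccount._next_number
        BankAccount._next_number += 1

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        self.balance -= amount


# The name would normally come from input("Enter your name: ");
# fixed strings are used here so the example runs non-interactively.
alice = BankAccount("Alice")
bob = BankAccount("Bob")
print(alice.number, bob.number)  # 1 2
```

This way the caller never has to invent the account number; each constructed object claims the next one.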
Green Thread - Java Beginners
Green Thread  Hello, can anyone explain the definition and use of green threads in Java? Thanks in advance.
Hi friend,
Green threads are threads scheduled by the Java virtual machine itself rather than by the operating system. Sun's JVM used this model on some platforms before switching to the native threading model in 1.2 and beyond. Green threads may have had an advantage on systems with poor native thread support, but modern JVMs use native threads.