#include <wx/thread.h>
(In the simple case of a single worker thread you could just call wxThread::Wait on it, but if there are several worker threads a condition already makes much more sense.)
Note that a call to wxCondition::Signal may happen before the other thread calls wxCondition::Wait and, just as with the pthread conditions, the signal is then lost and so if you want to be sure that you don't miss it you must keep the mutex associated with the condition initially locked and lock it again before calling wxCondition::Signal. Of course, this means that this call is going to block until wxCondition::Wait is called by another thread.
This example shows how a main thread may launch a worker thread which starts running and then waits until the main thread signals it to continue:
Of course, here it would be much better to simply use a joinable thread and call wxThread::Wait on it, but this example does illustrate the importance of properly locking the mutex when using wxCondition.
Destroys the wxCondition object.
The destructor is not virtual so this class should not be used polymorphically.
Returns true if the object had been initialized successfully, false if an error occurred.
Waits until the condition is signalled and the associated condition is true.
This is a convenience overload that may be used to ignore spurious awakenings while waiting for a specific condition to become true.
Equivalent to
The predicate would typically be a C++11 lambda:
Waits until the condition is signalled or the timeout has elapsed.
This method is identical to Wait() except that it returns, with the return code of wxCOND_TIMEOUT, as soon as the given timeout expires.
C programming tutorial
A Short Guide On C Programming:Contents
- Introduction
- What Is C
- Comparing C To Other Languages
- C & C++
- What Compiler Should I Use?
- Setting Up Your Compiler
- Your First Program
- Leaving Comments
- Error Debugging
- Variables and Constants
- Mathematical And Logical Operators
- Input and Output
- Functions
- Things You Must Remember
- Books I Recommend
- Bug Buster
- Answers
- Conclusion
INTRODUCTION:
I've seen a lot of people on the forums I post on asking for information on C programming because they're interested in learning it, and it normally takes a few posts before someone points them in the right direction. I decided to write this tutorial because there are very few like it on the internet (that I have seen), and I hope it is more helpful to the beginner programmer than the other tutorials out there. Keep in mind this is only a short guide to get you started.
This tutorial/information text was written by Vikas Nale!
WHAT IS C:
C is a medium-low level programming language that was originally developed by Dennis Ritchie at Bell Telephone Laboratories in 1972.
The language was created to design the UNIX operating system which was a lot more popular in 1972 than Windows, well actually Windows didn't even exist at that time.
Because C was such a powerful programming language it became widely used and people started making all sorts of programs with it.
C is called C because its predecessor was called B. Believe it or not, there is also a language called D; I haven't bothered to check it out though, as not many people code in it.
C is definitely one of the most widely used programming languages in the world today.
COMPARING C TO OTHER LANGUAGES:
You may be thinking: why should I learn C?
Well, there are many reasons. First, comparing C to Visual Basic: C should be your choice if you are looking to get really stuck into programming. Visual Basic programs are quite bloated and often need extra .dll files in order for certain parts of them to run on certain computers; including these files in Visual Basic applications can be a problem and, of course, makes the program larger than it needs to be.
When comparing C to Delphi, C should be your choice once again, because Delphi is nowhere near as popular as C, so books and source code examples for the language will be harder to find.
The only other language apart from C and C++ you might be considering is ASM (assembly). There's nothing wrong with ASM, and it is even lower level than C, however ASM is a very difficult language to learn, and coding a complex program takes a lot longer than it would in C. C should be your programming language of choice. Below I've listed the pros and cons of each popular programming language.
Visual Basic 6:
Pros: By far the easiest language to program in, simple program creation makes Visual Basic a breeze to program in, lots of source code and books on the internet available for Visual Basic programmers.
Cons: Visual Basic programs sometimes require extra .dll files to be included with applications, Visual Basic is not a portable language and can only be used on Windows.
Delphi:
Pros: Quite easy to program in, no extra .dll files needed.
Cons: Not a very popular programming language.
C & C++:
Pros: Probably the most used programming language out there; quite easy to learn and considered a standard programming language. Most programs you use are coded in C; even C compilers are usually coded in C.
Lots of source code available for C programs.
Cons: There aren't really any.
ASM:
Pros: The lowest-level programming language there is (short of writing raw machine code in hexadecimal or binary.)
Cons: ASM is by far the hardest language out there, coding in it is time consuming, and coding a decent GUI would be very difficult.
C & C++:
I'm sure you would have heard of C and C++, and I'm sure you're wondering what the difference between the two languages is.
C++ is an extension of C built around Object Oriented Programming; you cannot really learn C++ without learning C first.
Despite the fact that C++ came out, a lot of people still prefer to code in C rather than C++, and C remains a favorite among programmers. Note that C actually predates C++: C appeared in 1972 and C++ in the early 1980s, although both have been revised since (the most recent C standard at the time of writing dates from 1999).
C++ is meant to be an improved version of C; as you might already know, ++ in C is the increment operator, so the name C++ literally means an incremented, improved C.
If you were to program something in C++ instead of C you would typically be using OOP, where you design the program around the data it will hold.
If you're going to learn a programming language for a job, you will want to learn C++ as well as C, and maybe even C#, which is Microsoft's programming language for the .NET Framework.
WHAT COMPILER SHOULD I USE:
There are many compilers out there you could use, some popular ones are LCC W32, Dev C++ and Visual C++
Out of all of these I recommend LCC W32. This tutorial is based on LCC W32, but you should still be able to follow it even if you're using another compiler, because this is a C tutorial, and programming has little to do with which compiler you're using.
I have not tried Visual C++, but I do know it's expensive. Dev C++ I had problems with: when I tried to compile certain code, for some reason it just wouldn't compile, so of course that's why I'm using LCC W32.
Search Google for LCC W32 and you should find a download site. It's freeware, so there's no need to buy it or go looking for a cracked copy.
SETTING UP YOUR COMPILER:
For those of you who aren't that computer literate, I will go through how to start a project in LCC W32, step by step.
1. Open up LCC W32
2. Go to File, New, Project
3. It should come up with a menu where you can type in certain information, e.g. Project Name.
4. Enter your project name (it can be anything, it doesn't matter.)
5. In "Sources: Working Directory" put the location of the folder where you want your binary to be compiled and your source file (example.c) to be saved to, as with the project name this can be anything you like (anywhere on your hard drive.)
6. Click in the last box (Objects and Executables), you should notice LCC W32 should automatically create a location, it's best just to leave it at that, unless you want to save your binary (compiled versions) somewhere easier to find.
7. Go to Create
8. It should then ask you if you want to generate the application skeleton, Click NO.
9. It should then bring up a screen where you can save your .c file (where your code is going to be saved). Name this file whatever you want, although I must note that if you save it as example123.c, then when you first compile, the binary will be compiled as example123.exe.
10. Another screen should come up, click Ok.
11. Another screen should come up with lots of information that will probably confuse you, just leave it as it is and click Next.
12. Another screen should come up; here you might want to change the output file name, which is the name of the executable when you compile it. Under Type of Output you should change it to Console Application, because as a beginner C programmer you won't want to jump straight into coding a GUI.
13. Click Next, then click Next again.
14. You should now have a white field where you can type your code, congratulations, you have now setup your compiler for use.
YOUR FIRST PROGRAM:
#include <stdio.h>
int main()
{
printf("Hello Aelphaeis");
return 0;
}
Now whack that into your compiler and hit compile.
You have now compiled your first program. Traditionally most C programmers' first program is "Hello World", however I thought "Hello Aelphaeis" sounded a bit cooler.
Now ill explain what each part of the code does:
On the first line you have #include <stdio.h>; this tells the compiler to include the stdio.h header file in your application when compiling. stdio.h is needed for output to the screen.
On the second line you will see int main(), now this section of code is in virtually every C program there is, main() is the main part of the program, int is put before main() to tell the compiler the main() function will only return an integer (int = integer.)
You will notice on the 3rd line and last line there is "{" and "}" these brackets signal the beginning and end of a function, all functions begin with { and end with }.
On the fourth line you should see printf("Hello Aelphaeis");
printf is a function built into C; it's a function that prints text to the screen or other output device (the output device is usually the screen.)
"Hello Aelphaeis" is the text inside; notice it's in quotation marks, meaning it's NOT a variable, it's actually a text string.
And of course last you will notice the semicolon. A semicolon tells the compiler that that's the end of the statement; generally semicolons go at the end of each statement.
However you would NOT put a semicolon after int main().
Last of all you should notice return 0;. This returns the integer 0 from main() to the environment that ran the program; by convention a return value of 0 means the program finished successfully. Since main() is declared as returning an int, it should always return some integer value, and 0 is the usual choice.
LEAVING COMMENTS:
When programming, sometimes you may want to make your program open source, which means other people are going to be downloading your code. If they download your code they will want to know what it does, and they won't want to read through it line by line to find out. What can you do about that? (rhetorical question of course) You can leave comments in the code for them to read. An example would be:
/* This is a Hello Aelphaeis Program */
#include <stdio.h>
int main()
{
printf("Hello Aelphaeis");
return 0;
}
See
/* This is a Hello Aelphaeis Program */
Anything between the /* */ is a comment, this comment can also be multi-line another way of leaving a comment in is using
//example of comment in code
A // comment runs to the end of the line only. This style was borrowed from C++ and only became part of standard C in 1999, so some very old compilers reject it; that is why some C programmers avoid it.
And that's basically how to leave comments in your code for other people to read.
Tip: Do not leave too many comments; only a few that explain what the hard-to-understand parts of the program do.
ERROR DEBUGGING:
#include <stdio.h>
int main()
{
int age;
printf("Enter Your Age:");
scanf("%d", &age)
printf("Your age is %d", age);
return 0;
}
Put that into your compiler and hit compile, you should notice LCC W32 should come up with the following error message:
Error: C:\aelphaeis\aelphaeis.c 10 Syntax error; missing semicolon before "printf"
Now what does that tell you? If you don't know, click on the error message; it should highlight a line. It's telling you there is a missing semicolon before that line. Look, if you didn't already notice it, there is a missing semicolon after:
scanf("%d", &age)
Put the missing semicolon in at the end of it and hit compile. It should have compiled fine, and you should have learned the basics of error debugging. One thing you must remember: the compiler doesn't always pinpoint exactly where the error is, so sometimes you have to pinpoint it yourself.
Error reports in LCC W32 are great because they tell you a bit about what is wrong with the code, although as I said, sometimes you have to look hard before you notice the error. You will also find that a lot of the time fixing one error will fix all the other error messages you get from LCC W32.
You know what printf("Enter Your Age:"); does, don't you?
I'm sure you're wondering, though, about the line below it and about int age;.
Well, first I'll explain int age;: that declares the word "age" as an integer variable (a variable is basically a named piece of memory where data can be stored.)
scanf("%d", &age) is a bit more complicated. scanf is a function built into C which takes input from the keyboard (or other input device) and places it inside a variable.
"%d" tells scanf to read the input as an integer, and &age passes the address of the variable age, which tells scanf where to store the number it reads. (If you have a decent memory, we declared age as an integer variable at the beginning of main.)
So basically after scanf("%d", &age); is completed, the number the person entered is now stored in the integer variable "age" then you have the next line of code:
printf("Your age is %d", age);
You should understand what that does. If not: as with scanf you have a %d in there, telling printf to print an integer value; after the text you have a comma and then "age", which is the variable whose value is printed to the screen.
If you were to put:
printf("Your age is %d %d %d", age, age, age);
That would print the number 3 times; for each %d you have, you have to put a variable to be printed after the comma inside the brackets.
If you were to have the following code:
#include <stdio.h>
int age;
int mum;
int main()
{
printf("Enter Your Age:");
scanf("%d", &age);
printf("Enter Your Mum's Age:");
scanf("%d", &mum);
printf("Your age is %d and your Mum's age is %d", age, mum);
return 0;
}
Notice how in the line:
printf("Your age is %d and your Mum's age is %d", age, mum);
You have two %d, and you may be confused. What the program does is: when it reaches the first %d it prints the first variable listed after the format string; you will notice the integer variable age is first, so it prints that. Then when it comes to the next %d it looks for the second variable to print, which of course is the integer variable "mum".
When printing variables you must remember the spelling has to be correct, and so does the case. If you were to put:
printf("Your age is %d and your Mum's age is %d", age, MUM);
You would notice your compiler would generate two error messages,
Error: C:\aelphaeis\aelphaeis.c; 12 undeclared identifier 'MUM'
Error: C:\aelphaeis\aelphaeis.c 12 possible usage of 'MUM' before declaration.
If you click on the error message you should notice it highlights the line, from the error message you should be able to interpret what is wrong, it says undeclared identifier, but we declared the variable mum didn't we?
Of course what is wrong is that the word "mum" is in the wrong case, so the compiler generated an error. You and all the other programmers out there are lucky compilers have this sort of feature, because you may make a typo and not even notice it. All you have to do is change "MUM" to lowercase so the compiler understands it's the "mum" variable and not some other variable.
You should notice that if you fix the error and compile, there are no error messages at all; before, there were two. Fixing one part of the code got rid of both of them. Of course, when programming, if you get a lot of errors the chances are there are a few bugs in your code, not just one.
VARIABLES AND CONSTANTS:
You already know what a variable is; however, you do not know about the different types of variables and the differences between them, and you also don't know what constants are. So let's first take a look at the different types of variables.
Different variables are used for different purposes. If you wanted to store a number you would of course use an integer (assuming the number wasn't too large and didn't have decimal points.)
int (integer):
int (integers) can be declared simply by doing
int name;
An integer can hold any number from -32768 to 32767 (at minimum; on most modern compilers the range is much larger.) Generally integers should be used for any basic calculation.
Integer variables do not have decimal points.
short (short integer):
This is very similar to an int; you might be thinking, well why the hell are there two of them then?
The answer is that the sizes are implementation-defined: the C standard only guarantees that an int is at least as large as a short. On most modern computers an int is actually larger (typically 32 bits versus 16 bits for a short.) It's best just to use int, because int is more widely used than short.
You can declare a short integer by:
short name;
float:
A float variable can hold values roughly from 1.2E-38 up to 3.4E+38. In other words, when using very large numbers you will want to use a float, and if you're using numbers with decimal points you will also want to use a float variable.
float variables can be declared by doing:
float name;
char(character):
char variables are usually used to hold letters and digits. A char variable can only hold one character at a time, so if you wish to store a word you will have to use an array of char (not covered in this guide.)
character variables can be declared by:
char name;
long (long integer):
A long integer is just like a normal integer but it can hold more data than a normal integer (in case you hadn't already guessed.)
A long integer can hold between -2,147,483,648 and 2,147,483,647.
You can declare a long integer by:
long name;
unsigned char (Unsigned Character):
unsigned character variables can hold any character code from 0 to 255, in other words basically all the characters there are.
You can declare an Unsigned Character by:
unsigned char name;
unsigned int(Unsigned integer):
An unsigned integer can hold any number from 0 to 65535 you can declare an unsigned integer by:
unsigned int name;
unsigned short(Unsigned short integer):
As with short and int, the standard only guarantees that an unsigned int is at least as large as an unsigned short; on most modern computers unsigned int is actually larger.
unsigned long(Unsigned long integer):
A unsigned long integer can hold all numbers from 0 to 4,294,967,295
This variable should be used to hold large numbers.
This can be declared by:
unsigned long name;
double(Double Precision):
A Double Precision variable can hold values roughly from 2.2E-308 up to 1.8E+308.
In other words, you can store very large numbers, with more precision than a float, in a Double Precision variable.
You can also use these types for constants. You should make sure you use the appropriate type for the data it will hold; even though you can hold a small number in a larger variable it's best not to, so that your program is more efficient and uses up less RAM.
Below is some code that demonstrates variables and constants:
#include <stdio.h>
/* Aelphaeis 0wnz j00 */
#define pi 3.14
int diameter;
int answer;
int main()
{
printf("Enter the DIAMETER of your circle:");
scanf("%d", &diameter);
answer = diameter * pi;
printf("Circumference is: %d", answer);
return 0;
}
On the first line we have #include <stdio.h> if you have a decent memory you should remember what that does, it includes header files needed for output to the screen.
Below that we have /* Aelphaeis 0wnz j00 */ which is just a comment, telling you that i 0wn, lol.
Then on the third line we have #define pi 3.14. What this does is define "pi" as a constant equal to 3.14: the preprocessor simply replaces every occurrence of pi with 3.14 before compiling. 3.14, as you should know if you attended school regularly, is an approximation of PI.
Below that is
int diameter;
int answer;
This is declaring diameter and answer as integer variables.
Then we have int main() which of course is the main part of the program, the first line inside that main() is
printf("Enter the DIAMETER of your circle:");
That simply prints information to the screen, asking the user to input the diameter of his or her circle.
scanf("%d", &diameter);
I hope you remember what that does. scanf is used for receiving information from the input device connected to the computer, which in most cases is a keyboard. Inside the brackets we have "%d", &diameter: %d means an integer value, and &diameter means the value read will be stored in the variable diameter.
answer = diameter * pi;
The circumference of a circle is PI times its diameter, so the CPU will multiply the number stored in diameter by pi and store the result inside the integer variable answer.
I also must not forget to mention that there is no difference between
answer = diameter * pi;
and
answer = diameter*pi;
White space does not matter to your compiler.
The last important line we have is
printf("Circumference is: %d", answer);
Which prints "Circumference is:" and then the answer. If you look you will see %d again, which means an integer will be printed to the screen (because of the printf before it); after the text is a comma and then answer, which is the variable that holds the answer to the problem. If you don't understand what I just said, you might want to carefully look through the code again.
If you compile the program and then test it out, you should notice the answer it returns is a whole number with no decimal points. It doesn't take a genius to figure out why: the answer was stored inside an integer, which cannot hold decimal points (the fractional part is simply cut off, not rounded.) If you wanted decimal points in there you would have to use another type, like float.
Now you should have a fair idea of what variables and constants are and how to use them.
To sum this part of the text up: variables are for storing information in RAM; they're an allocation of memory where you can store data. A constant is a value that once defined cannot be changed, and constants are probably most often used in mathematical calculations.
There is something important I must note as well: there is a difference between declaring a variable outside of a function and inside one. If a variable is only needed within one function, it is best to declare and initialize it in that function. If the variable is going to be used in multiple functions, you can use what is called a Global Variable, which is declared below the #include lines and can be used in any function. Although you could use global variables for everything, it is strongly recommended that you do NOT, because it makes your code look sloppy, and professional programmers stay away from using too many Global Variables.
When declaring variables it is important to declare a legal variable name below are some rules to use when declaring a variable.
1. The name can only contain letters, digits and underscores "_"
2. The first character of the name must be a letter.
3. When using variables in programming, the case of each letter counts; for example test1 is different to Test1.
4. The variable can NOT be one of C's keywords, e.g. long or int.
When declaring a variable it's best to name it after whatever information it is going to hold (it can't be a C keyword though.)
Below are some illegal and legal C variable names (examples.)
long - illegal, one of C's keywords
hello - legal
test# - illegal, contains illegal character
123test - illegal, first character is a number
xxx_yyy - legal, pretty stupid name though.
Using typedef:
Are variable keywords hard to remember?
Well, you no longer have to worry, because of a feature C has built in called typedef.
Let's say you wanted to use something else instead of int to declare an integer variable. Using typedef you could do:
typedef int integer;
You could then use the word integer instead of int.
So if you were to declare a variable as an integer you could do:
integer example;
MATHEMATICAL AND LOGICAL OPERATORS:
While programming, at some point you're obviously going to write programs that require users to enter information, and of course this information will be compared to other information, as well as to things that you define in your program.
This chapter will also introduce you to the
if (x == y) statement.
Below is a program that compares two numbers entered by a user:
#include <stdio.h>
int one1;
int two2;
int main()
{
printf("Enter 2 Numbers:");
scanf("%d", &one1);
printf("Enter a second number:");
scanf("%d", &two2);
if (one1 > two2)
printf("First number is larger");
else
printf("Second number is larger");
return 0;
}
You should know what each part of the code does, except for maybe:
if (one1 > two2)
printf("First number is larger");
else
printf("Second number is larger");
Basically what this does is: if the value in the first variable, one1, is larger than the value in two2, then it prints "First number is larger". That happens if the condition is true; if it does not evaluate to true then the else branch runs:
else
printf("Second number is larger");
So now you should know basically how to compare two variables.
Let's now alter the code for it to compare something a different way.
#include <stdio.h>
int one1;
int main()
{
printf("Enter your age:");
scanf("%d", &one1);
if (one1 > 16)
printf("Your older than Vikas Nale");
else
printf("In j00 face I'm older than you!");
return 0;
}
What this does is ask the user for their age, then compares it to a number instead of comparing it to another variable.
Let's just alter that code again (the last part) so it does something slightly different.
if (one1 == 16)
printf("Your the same age as me");
else
printf("Your not the same age as me");
What the above code does is check if the user entered 16. Notice it uses "==" and not "=": in C programming, = assigns a value to a variable, while == compares two values. If you wish to compare something you have to use "==".
The comparison operators used in C are:
Equal "=="
Greater than ">"
Less than "<"
Greater than or equal to ">="
Less than or equal to "<="
Not equal "!="
Try making some short programs using these comparison operators.
So now that you know about comparison operators, you will want to be able to use them with more flexibility. This is where Logical Operators come into play.
AND &&
OR ||
NOT !
So now you know what they are, below is an example program where one or more of the Logical Operators are used.
#include <stdio.h>
int one1,two2;
int main()
{
printf("Enter a god damn number:");
scanf("%d", &one1);
printf("OK, now enter another:");
scanf("%d", &two2);
if (one1 == 5 && two2 == 5)
printf("Both numbers equal 5");
else
printf("One or both of the numbers do not equal 5");
return 0;
}
If you look at the program you should be able to see what it does. If not, well, maybe you should have been reading the rest of this text a bit better, but anyway I will explain it to you.
int one1,two2;
Declares one1 and two2 both as integer variables.
Then you have
printf("Enter a god damn number:");
Which asks the user to enter a number, then below that
scanf("%d", &one1);
Notice the %d, that means the data entered will be stored as an integer, then you have a comma and then &one1 which stores the information inside the variable one1.
Then we have the part which compares the two numbers.
if (one1 == 5 && two2 == 5)
That means: if the variable one1 equals 5 AND the variable two2 equals 5, then execute the following code. Be careful here: the shorter if (one1 && two2 == 5) would NOT test both numbers against 5; each side of && must be a complete condition of its own, and one1 on its own just means "one1 is nonzero".
printf("Both numbers equal 5");
If not (else) execute this code:
printf("One or both of the numbers do not equal 5");
Then of course you have the
return 0;
At the end of the code, telling main() to return 0, which by convention means the program finished successfully.
Now that you know how to use the AND operator (&&), you should also be able to use the OR operator (||).
Let's now have a play around with "!", which, if you have a decent memory, you should know means NOT.
Examine the following program:
#include <stdio.h>
int Aelphaeis;
int main()
{
printf("Enter a number:");
scanf("%d", &Aelphaeis);
if (Aelphaeis != 5)
printf("The number you entered is NOT 5");
else
printf("The number you entered IS 5");
return 0;
}
If you examine that code you should realize what it does. You might notice
if (Aelphaeis != 5)
And think: hey, why isn't that
if (Aelphaeis !== 5)
There is no !== operator in C: "not equal to" is written as the single operator !=. Likewise there is no "not greater than" operator; to express "not larger than 5" you would write !(Aelphaeis > 5), or more simply Aelphaeis <= 5.
You now should know about the if, else statement as well as AND, OR, NOT and equal to, not equal to, greater than, less than, greater than or equal to or less than or equal to.
For some tasks it's best to use C's conditional operator "?:" instead of the if statement. This is quite simple to use, as I will explain below.
example:
z = z ? 100 : 500;
The above means: if z is true (meaning z actually contains a nonzero value) then z is set to 100, else z is set to 500.
It would be similar to the following if statement.
if (z)
z = 100;
else
z = 500;
But using C's conditional operator is a shorter way of doing it.
Adding Subtracting & Incrementing, Decrementing:
During programming, it's more than likely that for one reason or another you will want to increase or decrease a number, or alter it somehow.
You can add, subtract, multiply or divide quite easily; below is some code showing an example.
#include <stdio.h>
int one1;
int two2;
int three3;
int four4;
int main()
{
printf("Enter a number:");
scanf("%d", &one1);
one1 += 5;
printf("That number plus 5 is: %d\n", one1);
printf("Enter another number:");
scanf("%d", &two2);
two2 -= 5;
printf("That number subtract 5 is: %d\n", two2);
printf("Enter another number:");
scanf("%d", &three3);
three3 *= 5;
printf("That number multiplied by 5 is: %d\n", three3);
printf("Please enter one more number:");
scanf("%d", &four4);
four4 /= 5;
printf("That number divided by 5 is %d\n", four4);
return 0;
}
When you run that program you may notice it drops the fractional part of numbers (integer division cuts the fraction off rather than rounding.) This is of course because we are using integer variables; if you don't want that to happen, use float variables. You would also have to change "%d" to "%f", because you would be inputting and outputting float values.
Now you know how to use addition, subtraction, multiplication and division you should learn how to increment and decrement a variable.
Here is a very short program that shows you how to do this:
#include <stdio.h>
int x = 7;
int main()
{
printf("Incrementing Number....");
x++;
printf("%d", x);
return 0;
}
By analyzing the program you can see x is declared as an integer and at the same time initialized: at initialization x is set equal to 7. Then the program prints "Incrementing Number....", increments the number (x++), then prints the number.
If you wished to decrement x all you would have to do is x--.
Mathematical Operator Precedence:
In an expression that contains more than one operator the order of each calculation will be worked out by the operators precedence.
Below is a list showing the precedence of operators in C.
Level Operator(s)
1 () [] -> .
2 ! ~ ++ -- *
3 * (multiplication) / %
4 + -
5 << >>
6 < <= > >=
7 == !=
8 & (bitwise AND)
9 ^
10 |
11 &&
12 ||
13 ?:
14 = += -= *= /= %= &= ^= |= <<= >>=
15 ,
Although some operators take precedence over others, some are on the same level as each other (for example, multiplication and division), in which case they are evaluated from left to right.
x = 3 + 3 * 4;
Would make x equal to 15, since multiplication has a higher precedence than addition: 3 * 4 is worked out first, giving 12, then 3 is added and the result is stored inside x, which by the way you would want to make an integer.
If you were using lots of different operators and you wanted to work out a calculation with your own precedence so the calculation ends up being correct you can use parentheses to enclose bits of your calculation for example.
x = (4 * 5) + 6
The first thing the code would do is multiply 4 times 5 before adding 6 to it.
That's just a basic example, using multiple parenthesis you can co-ordinate the precedence of a calculation, for example:
x = 3 * (2 * (8 + (6 / 2)))
6 / 2 is deeply nested inside the parentheses, therefore it would be the first thing to be calculated by your computer's CPU; after that, 8 would be added to it, then it would be multiplied by 2, then the whole lot would be multiplied by 3 and the answer stored inside x; again, you would want x to be an integer.
INPUT AND OUTPUT:
Well, you should have learned the basics of input and output by now, but to be able to do more programming you will of course need to know more about input and output. First we will start off explaining more about printf and scanf.
printf:
Ok printf is quite an easy function to use in C you know how to print text strings and you know how to even print integer variables, let's learn about printing other variables.
#include <stdio.h>
float x = 10, y = 12;
int main()
{
printf("x equals:%f\n", x);
printf("y equals:%f\n", y);
return 0;
}
You should be able to identify what the above code does: first we have the header file which is needed for output to the screen, then we have
float x = 10, y = 12;
This declares and initializes x and y and sets what data they contain.
The only other thing you should notice that is different its "%f", this is used in printf when you want to print a float variable to the screen.
There is one thing I most probably didn't mention earlier in this text: what if you wanted to write a very long sentence when writing your code, but didn't want the lines of your source file to be very wide?
You can use "\" at the end of a line to break lines while using printf; below is an example:
#include <stdio.h>
int main()
{
printf("Hello My Name \
Is Vikas Nale");
return 0;
}
scanf:
You know how to print float variables to the screen, and I'm sure you could probably figure out how to store float variables using scanf; just in case, below is some example code:
#include <stdio.h>
float x,y;
int main()
{
printf("Enter a number:");
scanf("%f", &x);
printf("Enter another number:");
scanf("%f", &y);
printf("Numbers you entered %f & %f", x, y);
return 0;
}
You should notice, if you compile that code and run it, that it prints out the two numbers you entered. If you entered 10 and 8 it would print them out as 10.000000 and 8.000000; this is because the variable you are using is a float variable, designed to hold numbers with decimal points. If you only want to hold small whole numbers, it's best to use an integer variable. Note that scanf is not limited to numerical data: with %c and %s it can also read characters and strings.
You know how to input and output float and integer variables using %f and %d; below I have listed the other specifiers you can use to print other variables.
%c char
%d integer
%ld long integer
%f float
%s char string
%u unsigned integer
%lu long unsigned integer
When printing out stuff using the examples in this text you may have noticed the \n, which you can use to make a new line. As you might expect, there are other escape sequences like \n you can use with printf; below I have listed them.
\n New line
\b Backspace
\a Bell
\\ Backslash
\t Horizontal tab
\? Question mark
\' Single quote
Ok now that you know about this stuff let's make a small program to test it out.
#include <stdio.h>
void main()
{
printf("\tHeterosexual\tHomosexual\n");
printf("\tAelphaeis\tBrownsun\n");
printf("\a\a\a\a\a");
}
You should have noticed that in the first and second printf I used \t; \t inserts a horizontal tab, which lines things up into columns. I made two columns, one with Heterosexual and one with Homosexual, then I put a name under each one.
Then under that I had \a\a\a\a\a, which made your computer beep if you compiled and ran the code.
Using puts function:
The puts function is a bit like printf except it is used only when you want to output a text string with no variables; it also automatically adds a newline at the end. Below is an example of using puts.
#include <stdio.h>
void main()
{
puts("This was printed using");
puts("The puts function");
}
FUNCTIONS:
C is a modular programming language, which means it uses functions, not just functions that are built into C but also user defined functions, which if you don't already know are functions that are coded by the programmer.
Below is a simple example showing a function:
#include <stdio.h>
void example();
int main()
{
printf("The next thing will be printed by a separate function\n");
example();
return 0;
}
void example()
{
printf("This was printed by example");
}
Ok, now I will explain the pieces of code that you haven't worked with before. The first bit that will seem unknown will be
void example();
Now this is the function prototype; function prototypes are placed above all the other functions, and state the type of value the function returns.
In the example code we had void example() this is because this function did not need to return any information to the function that called it or any other function (even though there were no other functions except main.)
Ok now that we know how to call on a function and make it execute code, let's now learn how to pass an argument to a function.
#include <stdio.h>
int half_of(int x);
int main()
{
int x, y;
printf("Please enter a number:");
scanf("%d", &x);
y = half_of(x);
printf("Half of %d is %d", x, y);
return 0;
}
int half_of(int x)
{
return (x/2);
}
Ok, if you look at the code you should know basically what it does; I will explain the hard parts.
First we have
int half_of(int x);
This of course is the function prototype. It declares that the function half_of returns an integer and that other functions can pass an integer argument to it; (int x) is the part that declares the parameter that can be passed.
int x, y;
Declares x & y as integer variables.
printf("Please enter a number:");
Prompts the user to enter a number.
scanf("%d", &x);
Stores the entered number as an integer inside the variable x.
y = half_of(x);
Calls the half_of function and passes the variable x to it; the returned data is stored inside the variable y.
printf("Half of %d is %d", x, y);
Prints the variable x, which stores the original number the person entered, then prints the variable y, which holds the number that the half_of function returned.
return (x/2);
Simply returns x divided by 2.
Ok now you know a bit about functions, let's take a look at a function which is a bit more complex than the ones we have already had a look at in this text:
#include <stdio.h>
int x, y;
int multiply(int x);
int main()
{
printf("Enter a number between 1 and 10:\n");
scanf("%d", &x);
if( x > 10 || x < 1)
{
printf("Enter a number between 1 and 10 IDIOT!\n");
}
else
{
y = multiply(x);
printf("Your number is %d", y);
}
return 0;
}
int multiply(int x)
{
if (x <= 5)
{
printf("Your number is less than 5\n");
return x;
}
else
{
x *= 10;
printf("Your number is larger than 5 and for no reason has been multiplied by 10\n");
return x;
}
}
Now lets analyze this code.
On the first line we have the #Include which includes the header files needed for output to the screen.
Then we have int x, y which of course declares x and y as integer variables and below that we have a function prototype
int multiply(int x);
The function is called multiply it can return integer variables and one integer variable can be passed to it.
After the function prototype we have
printf("Enter a number between 1 and 10:\n");
scanf("%d", &x);
Which prompts the user to enter a number between 1 and 10 then stores the entry inside the integer variable x.
After the information is stored inside the variable, the number is checked to see that it is NOT greater than 10 or less than 1; if either condition evaluates to true (|| means OR) then the following is printed to the screen:
Enter a number between 1 and 10 IDIOT!
If the number is between 1 and 10, the program then performs the following
y = multiply(x)
This passes the x variable to the function multiply, it is then returned and the returned data is then stored inside the integer variable y.
Now lets examine the multiply function:
if (x <= 5)
{
printf("Your number is less than 5\n");
return x;
}
Basically it checks if the number is less than 5, if true, it prints:
"Your number is less than 5" (without quotation marks)
it then returns x and the main() function prints
Your number is: x (x being a variable.)
If the if statement evaluates to false then the program goes to
else
{
x *= 10;
printf("Your number is larger than 5 and for no reason has been multiplied by 10\n");
return x;
}
This multiplies x by 10 (x *= 10) then prints:
"Your number is larger than 5 and for no reason has been multiplied by 10"
Then returns the variable x, which the program multiplied by 10, and then the main() function prints:
"Your number is: x" (x being a variable.)
THINGS YOU MUST REMEMBER:
1. A semicolon should go at the end of each statement.
2. Semicolons do not go after the first line (header) of a function definition.
3. For every function you call before it is defined, you must have a function prototype.
4. Remember to use the right type of variables.
5. When programming it is essential to include the right #include tags.
6. You must declare a variable before you can use it.
7. When using scanf remember to use "&" before the variable.
8. C is a portable language, but sometimes you might have to alter your code a bit for it to run on another operating system.
9. When using variables with functions you must pass the variable to the function from the calling function to the function that needs it.
10. Global variables are good but you should only use them when you have to.
11. The include tags should go at the top of your code.
12. There is a difference between upper and lowercase variables in C.
13. Comments are helpful but you don't want them everywhere.
14. C source files should be saved as example.c
15. Most importantly C for some people is quite hard to learn, take your time reading tutorials and such on C, it will take a while before you really get into programming.
16. This is a just a short guide, after reading this read some of the books i recommend.
BUG BUSTER:
Now that you know the very basics of C programming, it is time to test what you know. Below are some short programs; however, these programs contain errors, and it is your job to fix them up using your basic programming knowledge.
1.
#include stdio.h
int main()
{
printf("Hello World")
return 0;
}
2.
#include stdio.h
int main()
{
printf(Please enter a number)
scanf("%d", hello)
return 0;
}
3.
#include stdio.h
char hello
int main()
{
printf(Please enter a number between 1 and 10)
scanf("%d", hello)
print();
return 0;
}
int print(int print);
{
printf("%d", hello)
return 0;
}
4.
include# stdio.h
void main()
{
printf(Enter a number)
scanf("%d", hello)
print(hello)
return 0;
}
char print(int x)
{
printf(x);
return 0;
}
Now IF you were able to fix up all the errors in the above programs, you have successfully studied this guide, and you can now move on to a more advanced guide.
If not you might want to read through this text again and study it more heavily.
ANSWERS:
For those of you who just can't manage to find out what is wrong with one or more of the programs, below I have listed the errors in each one.
1.
1. No triangular brackets surrounding stdio.h
2. No semicolon after printf
2.
1. No triangular brackets surrounding stdio.h
2. No semicolon after printf
3. No quotation marks before and after text in printf
4. No ampersand (&) before variable hello
5. Variable hello is not declared as an integer
3.
1. No triangular brackets surrounding stdio.h
2. hello should be declared as an integer
3. No quotation marks before and after text in printf
4. No semicolon after printf
5. No ampersand (&) before variable hello
6. No semicolon after scanf
7. Function print does not have a prototype
8. The first line of the function print:
int print(int print)
should NOT have a semicolon after it.
9. No semicolon after printf in print function
4.
1. # is after include
2. stdio.h is not surrounded by triangular brackets
3. Should be int main() not void main()
4. No quotation marks in printf
5. No semicolon after printf
6. No semicolon after scanf
7. No ampersand (&) before hello
8. hello is not even a declared variable, should be declared
9. No semicolon after print(hello); in case you're confused, what this does is pass the variable hello to the function print, it's not meant to be printf.
10. char print should be int print
11. The function print has no prototype
BOOKS I RECOMMEND:
Primers Guide To C
Teach Yourself C In 21 Days
Practical C Programming, 3rd Edition
The C Programming Language
CONCLUSION:
I hope this text was of help to you and that you enjoyed it. Now that you know the very basics of C, you should make up your mind whether or not C is the right language for you.
Maybe one day i will write a full guide to C programming.
GREETZ TO:
htek, The Goon Squad(TGS-Security.com), syst3m 0f cha0s, The Media Assassins, Read101, HackJoeSite and Tomchu.
http://nvprojects.blogspot.com/p/c-programming.html | CC-MAIN-2019-09 | refinedweb | 8,079 | 60.99
As part of a recent project, I needed to create a WCF service, host it in IIS and then access it through a remote client. The client in this case implemented an interface that would allow the client to connect to other sites as well- the concrete implementation of the interface dealing with all the data.
There were common DTOs being used on the client side, and since I had complete control over both the client and server side code, I thought I would try to use RIA’s DomainService in a non-RIA app, to pull entities from the database, convert them to the common DTO type and then send them across the wire to the client.
Everything was off to a great start and then I noticed that after one particular (or so I thought) request, the WCF service was completely locked up. I had to either recompile my service or restart IIS with iisreset to get things working again.
The errors were quite painful to track down. Data was returned to the client just fine on the first call, but subsequent calls failed with the following exception: System.ServiceModel.CommunicationException.
After enabling WCF Tracing () I saw the following error:
The InnerException message was 'Type 'System.Globalization.GregorianCalendar' with data contract name 'GregorianCalendar:' is not expected. Add any types not known statically to the list of known types - for example, by using the KnownTypeAttribute attribute or by adding them to the list of known types passed to DataContractSerializer.'. Please see InnerException for more details.
After some head scratching, concluding the error wasn’t informative, scouring the net etc., etc., I found the following post:
Turns out you can’t sent abstract or virtual types over WCF! Whoops. In one of my objects, I was sending a CultureInfo object, which has a Calendar member. And while calendar is virtual, the concrete implementation being sent was a GregorianCalendar.
In my case, the fix would have been pretty easy
[DataContract]
[KnownType(typeof(CircleType))]
[KnownType(typeof(TriangleType))]
public class CompanyLogo2
{
    [DataMember]
    private Shape ShapeOfLogo;
    [DataMember]
    private int ColorOfLogo;
}
Use the KnownType attribute. But in other cases, like with System.Type, these are internal and you are basically out of luck.
So I ended up NOT sending my object, but rather just sending generated Entity objects- as these seem to be safer than what I could put together to send and crash WCF. | http://blogs.interknowlogy.com/tag/wcf-crash-servicemodelexception/ | CC-MAIN-2021-10 | refinedweb | 400 | 50.57 |
CQRS: Is it okay to return the result to the ICommandExecutor.Execute () method?
I have some thoughts on command design in CQRS. I want to hear your opinion on my thoughts. Thanks in advance! :)
CQRS has Commands and Command Executors. Sometimes we want the command executors to return some result after execution is complete. One possible solution is (C#):
public interface ICommandExecutor<TCommand>
{
    void Execute(TCommand cmd);
}

public interface ICommandExecutor<TCommand, TResult>
{
    TResult Execute(TCommand cmd);
}
Good. We use two command execution interfaces. Now let's see the client code:
var cmd = new MyCommand();
commandBus.Execute(cmd);           // execute, no result
commandBus.Execute<MyResult>(cmd); // execute, result
Yes, now we can return the executor's result. But the programmer can be confused when writing the above code: does this command return a result or not? To get the answer, the programmer needs to examine the source code of the framework to see whether there is an executor implementing ICommandExecutor<MyCommand> or one implementing ICommandExecutor<MyCommand, MyResult>. This is bad! Very confusing!
So, in my opinion we should DELETE ICommandExecutor<TCommand, TResult>. That is, I believe that command executors should always return void. The design ICommandExecutor<TCommand, TResult> is bad!
If we need to see what has changed after executing the command, we have to make a new query to the database after calling commandBus.Execute(cmd).
What do you think about this?
No need to add a second interface. I'm not sure if return values are appropriate for commands in CQRS, but I sometimes do it with my commands (though I don't follow CQRS). Instead of having a second interface, add an output property to the command.
public class CreateCustomerCommand
{
    // customer properties here

    // output property
    public Guid CustomerId { get; internal set; }
}
But keep in mind that commands with output properties can never run asynchronously.
If you really want to have an executor interface with a return value (which I don't recommend), check out this article. It is about implementing queries in a SOLID way, but it addresses the problem of defining a type-safe interface that allows you to return data.
BTW, in the previous example, the command can easily be made asynchronous if the property CustomerId is an input property. You allow the client to supply a new random Guid. This way, the client already has the identifier available and does not have to wait for the results to be available.
In CQRS, the command side should not return anything, as that violates the anatomy of the pattern. Your own thoughts on this are correct.
However, Greg Young often mentions Ack/Nack results for command operations (and they are used anyway); most messaging systems support such responses. The downside of expecting a result is that you cannot be completely asynchronous. I never felt the need for Ack/Nack, as one of the foundations of CQRS is that a command must always succeed, so there is no point in returning Ack/Nack.
Ask yourself what you need to return. What operation should return information that you don't already have on the sending/command side? Take the time to understand this before letting your commands become queries.
Strictly speaking, if you use the command pattern, it shouldn't return anything; Execute should always be void. You should use another kind of message (a query) to get any data.
source to share | https://daily-blog.netlify.app/questions/1892567/index.html | CC-MAIN-2021-43 | refinedweb | 568 | 57.16 |
doo
A library and Leiningen plugin to run cljs.test in many JS environments. For the Boot plugin, see boot-cljs-test.
...and I would have gotten away with it, too, if it wasn't for you meddling kids.
The latest stable release:
{:plugins [[lein-doo "0.1.10"]]}
To use doo you need to use [org.clojure/clojurescript "0.0-3308"] or newer.
Usage
Plugin
All arguments are optional provided there is a corresponding default under :doo in project.clj:
lein doo
lein doo {js-env}
lein doo {js-env} {build-id}
lein doo {js-env} {build-id} {watch-mode}
js-env can be any of chrome, chrome-headless, firefox, firefox-headless, ie, safari, opera, slimer, phantom, node, rhino, or nashorn. In the future it is planned to support v8, jscore, and others.
- Note that chrome-headless requires karma-chrome-launcher >= 2.0.0 and Chrome >= 59
- Note that firefox-headless requires karma-firefox-launcher >= 1.1.0 and Firefox >= 56
watch-mode (optional): either auto (default) or once, which exits with 0 if the tests were successful and 1 if they failed.
build-id is one of your cljsbuild profiles. For example test from:
:cljsbuild {:builds [{:id "test"
                      :source-paths ["src" "test"]
                      :compiler {:output-to "resources/public/js/testable.js"
                                 :main your-project.runner
                                 :optimizations :none}}]}
Notice that :main is set to the namespace your-project.runner where you define which test namespaces you want to run, using:
(ns your-project.runner
  (:require [doo.runner :refer-macros [doo-tests]]
            [your-project.core-test]
            [your-project.util-test]))

(doo-tests 'your-project.core-test
           'your-project.util-test)
doo.runner/doo-tests works just like cljs.test/run-tests but it places hooks around the tests to know when to start them and finish them. Since it is a macro that will be calling said namespaces, you need to require them in your-project.runner even if you don't call any of their functions. You can also call (doo.runner/doo-all-tests) which wraps cljs.test/run-all-tests to run tests in all loaded namespaces.
Notice that doo-tests needs to be called at the top level and can't be called inside a function (unless you explicitly call that function at the top level).
Then you can run:
lein doo slimer test
which starts a ClojureScript autobuilder for the test profile and runs slimerjs on it when it's done.
You can also call doo without a build-id (as in lein doo phantom) as long as you specify a Default Build in your project.clj.
Boot
doo is packaged as a Boot task in boot-cljs-test.
Library
To run a JavaScript file in your preferred runner you can directly call doo.core/run-script from Clojure:
(require '[doo.core :as doo])

(let [doo-opts {:paths {:karma "karma"}}
      compiler-opts {:output-to "out/testable.js"
                     :optimizations :none}]
  (doo/run-script :phantom compiler-opts doo-opts))
You can run doo.core/run-script with the following arguments:
(run-script js-env compiler-opts)
(run-script js-env compiler-opts opts)
where:
- js-env - any of :phantom, :slimer, :node, :rhino, :nashorn, :chrome, :chrome-headless, :firefox, :firefox-headless, :ie, :safari, or :opera
- compiler-opts - the options passed to the ClojureScript compiler when it compiled the script that doo should run
- opts - a map that can contain:
  - :verbose - bool (default true) that determines if the script's output should be printed and returned (verbose true) or only returned (verbose false).
  - :debug - bool (default false) to log internal events to standard out to aid debugging
  - :paths - a map from runners (keywords) to string commands for bash.
  - :exec-dir - a directory path (file) from where the runner should be executed. Defaults to nil, which resolves to the current dir
Setting up Environments
This is the hardest part and doo doesn't do it for you (yet?). Right now if you want to run slimer, phantom, node, or nashorn (which ships with JDK 8), you need to install them so that these commands work on the command line:
phantomjs -v
slimerjs -v
node -v
jjs -h
rhino -help
If you want to use a different command to run a certain runner, see Paths.
Remember that Rhino and Node don't come with a DOM so you can't call the window or document objects. They are meant to test functions and logic, not rendering.
Slimer & Phantom
If you want to run both, use lein doo headless {build-id} {watch-mode}.
Do not install Slimer with homebrew unless you know what you are doing. There are reports of it not working with ClojureScript when installed that way because of dated versions.
Note: Slimer does not currently throw error exit codes when encountering an error, which makes them unsuitable for CI testing.
Node
Some requirements:
- Minimum node version required: 0.12
- :output-dir is needed whenever you are using :none.
- :target :nodejs is always needed.
:node-test {:source-paths ["src" "test"]
            :compiler {:output-to "target/testable.js"
                       :output-dir "target"
                       :main example.runner
                       :target :nodejs}}
Karma
Installation
Karma is a comprehensive JavaScript test runner. It uses plugins to extend functionality. We are interested in several "launcher" plugins which start a browser on command. You might want any of:
- karma-chrome-launcher
- karma-firefox-launcher
- karma-safari-launcher
- karma-opera-launcher
- karma-ie-launcher
Alternatively, if you don't want doo to launch the browsers for you, you can always launch them yourself and navigate to
We also need to properly report cljs.test results inside Karma. We'll need a "framework" plugin:
- karma-cljs-test
Karma and its plugins are installed with npm. It is recommended that you install Karma and its plugins locally in the project's directory with npm install karma --save-dev. It is possible to install Karma and its plugins globally with npm install -g karma, but this is not recommended. It is not possible to mix local and global Karma and Karma plugins.
Karma provides a CLI tool to make running Karma simpler and to ease cross-platform compatibility. doo uses the CLI tool as the default runner; if you don't install it you will need to configure doo.
For local installation run:
npm install karma karma-cljs-test --save-dev
and install the Karma CLI tool globally with
npm install -g karma-cli
then install any of the launchers you'll use:
npm install karma-chrome-launcher karma-firefox-launcher --save-dev
npm install karma-safari-launcher karma-opera-launcher --save-dev
npm install karma-ie-launcher --save-dev
The --save-dev option informs npm that you only need the packages during development and not when packaging artifacts.
The installation will generate a node_modules folder with all the installed modules. It is recommended to add node_modules to your .gitignore.
If you are using lein-npm, follow their instructions.
Measuring coverage with Istanbul
It's possible to generate Istanbul coverage reports for JS files produced from CLJS.
To make it work two things are required.
Install the karma-coverage plugin:
npm install karma-coverage --save-dev
Add coverage settings to your project.clj:
:doo {:coverage {:packages [my-app.module]
                 :reporter {:check {:global {:statements 100}}}}}
The packages section is essential: it enables coverage configuration and defines which files will have coverage instrumentation.
By default the HTML reporter is enabled, which creates a coverage folder with the report, and there are no coverage requirements.
Anything under :reporter is passed as the coverageReporter config to Karma.
See Karma coverage for more details. See Reagent covered for a sample project configuration.
Non-standard Karma configuration
If you are using a local installation and/or node_modules is not located at the project root, you need to tell doo about it. Add this to your project.clj:
:doo {:paths {:karma "path/to/node_modules/karma/bin/karma"}}
:cljsbuild { your-builds }
and make sure that the file karma/bin/karma exists inside node_modules. If your package.json and node_modules folder are in the same directory as your project.clj, then you should use:
:doo {:paths {:karma "./node_modules/karma/bin/karma"}}
:cljsbuild { your-builds }
For more info on :paths see Paths.
Global installation will allow you to use Karma in all of your projects. The problem is that your project then won't explicitly declare that Karma is used for testing, which makes it harder for new contributors to set up.
In some systems (e.g. Ubuntu) you might need to run all npm commands as root: sudo npm install karma --save-dev
Karma Phantom and Karma Slimer (experimental)
To avoid starting a new Slimer/Phantom on every run while using auto, we can use Slimer/Phantom through Karma.
Install any of the launchers you'll use:
npm install karma-phantomjs-launcher --save-dev
npm install karma-slimerjs-launcher --save-dev
and call
lein doo karma-phantom test auto
lein doo karma-slimer test auto
If you are using once, the regular phantom/slimer runners are recommended.
Note: karma-slimer sometimes fails to close the running Slimer instance, which you need to close manually.
Electron (experimental)
After installing Electron install the launcher with
npm install karma-electron-launcher --save-dev
and call
lein doo electron test
Paths
You might want to use a different version of node, or the global version of Karma, or any other binary to run your tests for a given environment. You can configure those paths like so:
:doo {:paths {:node "user/local/bin/node12"
              :karma "./frontend/node_modules/karma/bin/karma"}}
:cljsbuild { your-builds }
Paths can also be used to pass command line arguments to the runners:
:doo {:paths {:phantom "phantomjs --web-security=false"
              :slimer "slimerjs --ignore-ssl-errors=true"
              :karma "karma --port=9881 --no-colors"
              :rhino "rhino -strict"
              :node "node --trace-gc --trace-gc-verbose"}}
Aliases
You might want to group runners and call them from the command line. For example, while developing you might only be interested in chrome and firefox, but you also want to test with safari before doing a deploy:
:doo {:alias {:browsers [:chrome :firefox]
              :all [:browsers :safari]}}
:cljsbuild { my-builds }
Then you can use:
lein doo browsers my-build # runs chrome and firefox
lein doo all my-build      # runs chrome, firefox, and safari
As you can see, aliases can be recursively defined: watch for circular dependencies or doo will bark.
The only built-in alias is :headless [:phantom :slimer].
Default Build
To save you one command line argument, lein-doo lets you specify a default build in your project.clj:
:doo {:build "some-build-id"
      :paths { ... }
      :alias { ... }}
:cljsbuild {:builds [{:id "some-build-id"
                      :source-paths ["src" "test"]
                      :compiler {:output-to "out/testable.js"
                                 :optimizations :none
                                 :main example.runner}}]}
Custom Karma configuration
You can supply arbitrary configuration options to Karma under the :karma {:config {}} key. For example, if you want to use karma-junit-reporter, do this:
{:doo {:karma {:config {"plugins" ["karma-junit-reporter"]
                        "reporters" ["progress" "junit"]
                        "junitReporter" {"outputDir" "test-results"}}}}}
The options are merged into Doo's Karma configuration. By default, array values are merged by appending. For example, in the example above, the value of "plugins" is appended to the list of plugins needed by Doo. Merging is implemented with meta-merge, so if you need more control, you can use ^:replace and ^:prepend metadata.
Custom Karma launchers
To add custom Karma launchers (e.g. as described in the Chrome Karma Plugin) you can add the following config entries to your project.clj as shown in the example below:
The plugin in the :launchers map should match an installed Karma plugin and the name should match a Karma launcher (possibly a custom one as shown in the following example). If needed, add "customLaunchers" configuration under the :config key.
You will then be able to run lein doo chrome-no-security from the command line.
:doo {:karma {:launchers {:chrome-no-security {:plugin "karma-chrome-launcher"
                                               :name "Chrome_no_security"}}
              :config {"customLaunchers" {"Chrome_no_security" {"base" "Chrome"
                                                                "flags" ["--disable-web-security"]}}}}}
Travis CITravis CI
To run on travis there is a sample
.travis.yml file in the example project: example/.travis.yml
(Currently only tested with PhantomJS.)
DevelopingDeveloping
To run the tests for doo, you need to have installed rhino, phantomjs, slimer, chrome, node, and firefox. You will also need to run
npm install in the
library directory.
LicenseLicense
This project started as a repackaging of cemerick/clojurescript.test, therefore much of the credit goes to Chas Emerick and contributors to that project.
Distributed under the Eclipse Public License either version 1.0 or (at your option) any later version. | https://libraries.io/clojars/lein-doo | CC-MAIN-2020-10 | refinedweb | 2,088 | 55.84 |
Joerg Knitter wrote: > Michael Krufky wrote: > >> On 12/7/05, Manu Abraham <abraham.manu at gmail.com> wrote: >> >>> Gregoire Favre wrote: >>> >>>> what's the current CVS ? >>>> >>>> Is it v4l-dvb ? >>> >> Yes. > > So, do I understand right that I don´t have to download dvb-kernel > anymore but v4l-dvb to get latest DVB drivers? YES. dvb-kernel and v4l-kernel have officially merged into the new, v4l-dvb cvs, > I haven´t updated them for ages because suddenly CVS versions only > worked with latest kernel versions. It is not trivial for a > non-developer to compile a new kernel that really works if the base > kernel is patched in several areas (thinking e.g. of a SuSE kernel - I > compiled a later kernel and starting this new one, hotplug suddenly > did not work anymore...) - but I know that this "compatibility issue" > has been fixed some weeks ago. Thanks for noticing... :-) ... Nobody has said anything yet about backwards-compatability, since I've enabled it. v4l-dvb cvs is *supposedly* backwards compatable with all kernel versions... Personally, I have not tested 2.4, but the goal is to make this work as well. #if kernel_version >= linux_version(2,6,12) build merged v4l + dvb #else build v4l with NO dvb support #endif ...Now, dvb DOES compile cleanly under 2.6.11 and 2.6.10, but there is a minor fix needed in video-buf-dvb.c in order for the hybrid drivers to work... As soon as that's fixed, we can extend dvb build by default on older kernel versions... There are also ways to get dvb to work in 2.4, but I have not investigated this. > Can I still compile the drivers in the build-2.6 directory and use > them with ./insmod.sh so that I don´t have to recompile the whole kernel? You have choices: a) use the v4l-dvb build environment: make make install (this will install the new modules into your /lib/modules/{uname -r} directory, and load them normally as if they were bundled with your kernel. b) you can use the insmod.sh script from dvb-kernel .... 
we havent moved it into v4l-dvb yet ... I'll put it on my to-do list.... The only difference is that in dvb-kernel, it was called build-2.6, in the new tree, these modules are built inside v4l-dvb/v4l/ ... (I agree that this should be renamed -- minor issue) > With kind regards Thank you for the comments! Keep them coming..... we want everyone to be happy with the merged tree. Cheers, Michael Krufky | https://www.linuxtv.org/pipermail/linux-dvb/2005-December/006779.html | CC-MAIN-2016-40 | refinedweb | 431 | 77.64 |
ConvertUTF8_to_UTF16 shows up at about 5-8% of warm start time. Our implementation is straightforward C code; I bet we can find a much faster implementation using SSE, NEON, whatever's available.
There is u8u16, but I don't think it's license is GPL-compatible.
Quick and dirty test: inBufSize: 2155589 Convert_1: 4.78326 ms Convert_dumb: 2.2116 ms Convert_dumb2: 2.8301 ms Convert_u8u16: 1.90388 ms the input is all ASCII, about 2MB worth. * Convert_1 is our current routine. * Convert_dumb just does u8->u16 straight; it's not a true utf8->utf16 conversion. * Convert_dumb2 walks the input string and looks for any high bits; if no high bits are set, it does a straight u8->u16 conversion, otherwise it calls Convert_1 (so it basically times the overhead of checking for any high bits set). * Convert_u8u16 uses u8u16. Convert_dumb could be accelerated even more using SSE; my impl is just a straight C impl. I'd suggest that we take the dumb2 approach in our current code and keep searching for something faster.
Er, sorry: Convert_1: 7.22484 ms Convert_dumb: 2.18996 ms Convert_dumb2: 2.84106 ms Convert_u8u16: 1.8791 ms The 4.7ms number was with some hacks applied.
Created attachment 390671 [details] [diff] [review] sample patch Convert_1: 6.70116 ms Convert_2: 2.73848 ms Convert_dumb: 2.12909 ms Convert_dumb2: 2.73505 ms Convert_u8u16: 1.77669 ms Here's an impl of Convert_2. Note that I'm trying to optimize for ascii here, and also that this isn't touching utf8->utf16 that happens with js, and maybe not for xbl/xml. Not sure. This needs some SSE2 love, and it also should probably do something with the work it did to figure out which initial parts of a string have no non-ascii chars by doing the fast copy for that segment...
Maybe mmoy can help?. A few questions: - Can I assume that in/out are aligned on dwords? - What is the typical number of characters to be processed?
(In reply to comment #6) >. Great! > A few questions: > > - Can I assume that in/out are aligned on dwords? I /think/ so. If not, we may be able to change things around so that it's always true. > - What is the typical number of characters to be processed? Good question. Someone should do some instrumentation and figure that out... would probably be helpful to know distribution of run lengths, and also for each run how many ascii chars before you hit a char with the high bit set.
(In reply to comment #0) > ConvertUTF8_to_UTF16 shows up at about 5-8% of warm start time. Our > implementation is straightforward C code; I bet we can find a much faster > implementation using SSE, NEON, whatever's available. How much data is being converted? 5% of 900ms is 45ms which would be like 12MB of data, which seems like a lot of text... Is all of that unique text or are we converting the same text more than once?
> How much data is being converted? Well, let's see. Quickly instrumenting the sync case in nsXBLService::FetchBindingDocument shows us loading about 630KB of UTF-8 encoded XBL at startup. Doing the same for nsStreamLoader::OnDataAvailable (which should really only be hit for CSS and maybe scripts) I see us loading about 1.1MB of data that way. Then there are the actual XUL documents, of course, not to mention whatever internal conversions we might be doing.
Created attachment 391128 [details] The text that we convert on startup About 900k worth, nothing obviously duplicated. I don't know if any of the stuff is unnecessary, but there sure is a lot of it.
I instrumented an image with the patch and it came up with 748671 bytes converted. But doing a lot of other things generates conversions too. As far as startup time goes, is precomputing the converted text a possibility?
Possibly; I looked into doing a quick-and-dirty iconv conversion of the relevant files, but that caused other problems since that switches the default charset of a bunch of documents (with a utf16 bom). There's also additional IO cost. But if someone can think of a way to do it without breaking anything, it'd be good to evaluate that cost.
However, a lot of this is JS modules and XBL, which we want to fastload...
I have an SSE2 implementation of the one to two byte conversion working but it uses MOVDQU to write the output doublequadwords. MOVDQU is an unaligned write so there's a performance penalty there. I did some additional instrumenting and it looks like both source and destination are doubleword aligned which means that I can add code to do aligned writes with a little effort. I have to add some code to turn this on for Mac OS X and Windows too.
Created attachment 394142 [details] [diff] [review] Original patch with SSE2 one byte to two byte code This patch has been tested on Mac OS X and Windows and it appears to work (returns correct results). There is special code to handle the cases where the offset between the source and destination is 0 or 8 after the source has been double quadword aligned. These two cases are seen about 99% of the time. Other alignments are handled by the movdqu (unaligned move) default code. Could someone run a performance test to determine if this code helps?
Warm start on my fast win7 box went from 343-344ms down to 332-334ms. Not a huge boost, but noticeable. I also added a block at the start of the high-bit-set checking to align the pointer to 4 bytes, so that we don't blow up on ARM. However... after just doing a bunch more startups, I started seeing some 342's again. Now I'm starting to doubt my measurement accuracy; I'll try this again on my slower Mac tomorrow. Also, u8u16 is really crazy fast! I'll plug your code into the hack benchmark that I wrote tomorrow as well to see how it compares.
On my mac -- goes from 825-843 to 780-790. I'll take 5-6%!
You could also probably use _mm_movemask_pi8 instead of & 0x80808080 to pick up another couple percent.
Created attachment 395742 [details] standalone benchmark Just added the sse code in to the benchmark -- it gets us in the right ballpark, g++ -O2: orig: 8.06701 ms chk: 2.87505 ms dumb_ascii: 2.26799 ms sse: 2.86604 ms sse_ascii: 2.23546 ms u8u16: 1.8919 ms "chk" is basically the same as the sse case, where it checks for high byte, but then instead of using sse it just does a simple byte-to-short copy loop. The two _ascii variants just do straight copies and don't do the high bit check. Kinda unfortunate that the sse variant is basically the same speed. But, it doesn't hurt. So, what I'm trying to say: let's get this patch in (but with the additional alignment fix). I've also attached the benchmark; apologies for horrible code. Some more bits: g++ -O2 -mtune=prescott (gcc 4.0.1): orig: 6.72993 ms chk: 2.87276 ms dumb_ascii: 2.23084 ms sse: 2.90849 ms sse_ascii: 2.23746 ms u8u16: 1.8933 ms similar for g++-4.2 with -mtune=core2. g++ -O3 -mtune=prescott (gcc 4.0.1): orig: 7.47574 ms chk: 2.87053 ms dumb_ascii: 2.23784 ms sse: 2.90917 ms sse_ascii: 2.24879 ms u8u16: 1.91424 ms (yes, slower than -O2, even worse slowdown with g++-4.2) g++ -Os -mtune=prescott: orig: 9.78382 ms chk: 2.86463 ms dumb_ascii: 2.25211 ms sse: 3.0451 ms sse_ascii: 2.33603 ms u8u16: 2.01193 ms
Created attachment 395743 [details] [diff] [review] patch with 4-byte alignment on check Just the 4-byte alignment at the start in this patch. Let's get this in, and someone can write the followup bug for linux (really, the only thing that's missing is having sse2_available equivalent defined somewhere on linux).
And while I'm here, here's the benchmark for real utf8 data (longer than the earlier set, so they're not directly comparable... this is 3145728 bytes, original was 2155589): g++ -O2: orig: 15.4942 ms chk: 15.4288 ms dumb_ascii: 3.77464 ms sse: 15.4654 ms sse_ascii: 3.78217 ms u8u16: 7.89827 ms g++ -O2 -mtune=prescott: orig: 14.4634 ms chk: 14.2659 ms dumb_ascii: 3.78667 ms sse: 14.2653 ms sse_ascii: 3.77649 ms u8u16: 7.92226 ms
Replacing "& 0x80808080" with "_mm_movemask_epi8(*(__m128i *) src)" gives me the following results on the benchmark (I used my own pure ASCII text, so obviously the results aren't directly comparable, and I commented out u8u16 since it doesn't seem to want to build on x64) Before kshuey@linux-5wgr:~/Downloads> ./t.o inBufSize: 512262 orig: 5.40125 ms chk: 2.95589 ms dumb_ascii: 2.64377 ms sse: 1.31553 ms sse_ascii: 0.82156 ms After kshuey@linux-5wgr:~/Downloads> ./t.o inBufSize: 512262 orig: 5.54781 ms chk: 2.9853 ms dumb_ascii: 2.5621 ms sse: 0.994294 ms sse_ascii: 0.81584 ms That works out to a relative savings of 25%. Given your number of 5-6% of startup time overall that shaves off a little over 1% more.
A few comments: A code comment that said something like "Use the real converter if there are any bytes with the high bit set, otherwise just expand ASCII to UTF-16" might have saved me a few minutes of head-scratching. The "return NS_OK" at the end doesn't seem like it's equivalent to what the other code does; don't you need to return NS_OK_UDEC_MOREOUTPUT in some cases? Finally, the UTF8-UTF16 conversion code in xpcom/string has been optimized a good bit more than this code (although it's also a little less tolerant of errors, since it's designed for "sanitized" data... although I think that's actually mostly been fixed recently). I suspect ConvertReal could be made a good bit faster; one thing I might try would be loading the member variables into locals at the start of the function and saving them back into the member variables at the end.
Shouldn't the |#endif ! | be |#endif // | instead? Why are you using two different *_SSE2 macros when you are really only using them together? Are you planning to address Linux (or in general, other GCC platforms) in a similar way (in another bug)? I hope I adapted your benchmark correctly for my Linux x86_64 system (taking the MAC_SSE2 route without u8u16), I get these numbers for comparison inBufSize: 918825 orig: 3.12586 ms chk: 2.68073 ms dumb_ascii: 0.815924 ms sse: 2.68309 ms sse_ascii: 0.446675 ms
I think that your patch should not define MAC_SSE2. gcc defines __SSE2__ for MacOS X and x86_64, so you should use __SSE2__ instead of your MacOS's define.
So I did some testing of vlad's benchmark from attachment 395742 [details] using attachment 391128 [details] as lipsum.txt. My understanding is that the changes in this patch are supposed to move us from "orig" to "sse". On my laptop (Linux, x86-64, Thinkpad), I see: orig: 2.97091 ms chk: 2.73285 ms dumb_ascii: 0.91355 ms sse: 2.74453 ms sse_ascii: 0.83265 ms on mrbkap's laptop (x86 Mac), I actually see chk and sse being slightly slower than orig (I guess the cost of the extra pass varies). Now, I wrote a patch to the original code to: * not use member variables during the conversion * add a nested inner loop and on my laptop I now get: orig: 1.34414 ms (I think these numbers only really have two significant figures, though.) So I think maybe it would be better to optimize the original code and avoid the two-pass approach.
Created attachment 395941 [details] [diff] [review] patch to standalone benchmark, optimizing original code
Except I realize I'm supposed to be testing with pure ASCII data, since vlad's test doesn't do any chunking. With that, on mrbkap's laptop (with SSE), I see: original: inBufSize: 2300000 orig: 8.06885 ms chk: 2.68913 ms dumb_ascii: 2.11991 ms sse: 2.10505 ms sse_ascii: 1.70828 ms with my patch: inBufSize: 2300000 orig: 2.92254 ms chk: 2.52946 ms dumb_ascii: 2.0997 ms sse: 1.9211 ms sse_ascii: 1.64253 ms
At the risk of further cluttering the bug with benchmarks and attachments ... I've been working on further optimizing the sse variation by eliminating the dual loop structure (i.e. combining the scanning loop and the processing loop). What I came up with is "sseoneloop". It scans and converts at the same time as long as it finds only ASCII. Once it finds non-ASCII it calls ConvertReal. Right now it calls ConvertReal to do the whole thing so theoretically it could be worse than orig (if you had a bunch of ASCII text with one multibyte character at the very end, runtime would be approximately sseoneloop + orig) but it's fairly simple to just call ConvertReal on what remains (given that you already have a nice character boundary). I'll clean up the code and attach it in patch form in a bit. As for numbers: orig, chk, dumb_ascii, and sse_ascii are all what they are in vlad's benchmark. dbaron is dbaron's code. sseoneloop is as above. Benchmark results: x86-64 linux inBufSize: 3073578 orig: 30.6071 ms chk: 18.392 ms dumb_ascii: 14.9552 ms dbaron: 20.4106 ms sse: 8.57576 ms sseoneloop: 4.90909 ms sse_ascii: 5.37327 ms I find it interesting that all of your numbers seem to come out with sse about equal to chk when every time I run it it runs almost twice as fast as chk. Also, I don't believe that there are even two significant figures in these results. From what I've seen the results fluctuate in a roughly one millisecond interval.
Created attachment 396042 [details] [diff] [review] Patch to benchmark adding Covert_dbaron and Convert_sseoneloop This patch adds dbaron's code to the benchmark and adds sseoneloop as described in my previous comment.
If you're on a 32 bit system you'll probably want to add u8u16 back in after you apply that patch, as I forgot to unremove it before diffing.
And with -O2 inBufSize: 3073578 orig: 14.5715 ms chk: 6.72878 ms dumb_ascii: 5.47907 ms dbaron: 6.89872 ms sse: 5.41505 ms sseoneloop: 0.82828 ms sse_ascii: 4.41576 ms And -O3 inBufSize: 3073578 orig: 14.3186 ms chk: 6.87903 ms dumb_ascii: 5.33305 ms dbaron: 6.39502 ms sse: 4.88152 ms sseoneloop: 0.706781 ms sse_ascii: 3.99238 ms
Created attachment 396067 [details] [diff] [review] Patch adding dbaron and sseoneloop to standalone benchmark Ignore the last optimized benchmarks, I was missing a piece of the routine (it was just throwing away its output and the compiler knew that). Corrected benchmarks: -O0 inBufSize: 3073578 orig: 30.5634 ms chk: 18.0327 ms dumb_ascii: 14.9367 ms dbaron: 20.4797 ms sse: 8.27054 ms sseoneloop: 5.60232 ms sse_ascii: 5.32999 ms -O2 inBufSize: 3073578 orig: 14.182 ms chk: 5.98159 ms dumb_ascii: 5.13692 ms dbaron: 6.23665 ms sse: 4.77082 ms sseoneloop: 3.89594 ms sse_ascii: 3.86942 ms -O3 inBufSize: 3073578 orig: 14.1013 ms chk: 6.66146 ms dumb_ascii: 4.79984 ms dbaron: 6.18853 ms sse: 4.75576 ms sseoneloop: 3.81217 ms sse_ascii: 3.801 ms Sorry for the incorrect numbers earlier. I should have realized it's absurd for it to perform faster than the version that doesn't even check to see if there are multibyte characters. The overhead of checking for those characters is incredibly small though, only 10 microseconds at -O3 according to this..
(In reply to comment #34) >. No problem. Sounds great. If you don't get around to it and want me to turn it into a patch just let me know. I probably couldn't get around to it until tomorrow, but would be happy to do it if wanted.
Created attachment 396576 [details] [diff] [review] final patch Ok, here is what I think is a good final patch. It includes dbaron's optimization as the core piece, and then two customized ascii run converters, one for SSE2, and one for ARM. Here's x86, with the SSE code... ascii: orig: 10.4181 ms (ok) dbaron: 4.79316 ms (ok) final: 2.34829 ms (ok) utf8 content (wikipedia utf8 page, so short html runs and long utf8 runs): orig: 15.1058 ms (ok) dbaron: 12.4791 ms (ok) final: 13.4203 ms (ok) so a little slower than the simple loop, due to a few extra bits of code for short runs. ARM: ascii: orig: 164.045 ms (ok) 006d 006f 0064 006f final: 80.1377 ms (ok) 006d 006f 0064 006f utf8: orig: 177.403 ms (ok) 006f 006e 0069 0063 dbaron2: 117.875 ms (ok) 006f 006e 0069 0063 (3MB in all cases)
Created attachment 397227 [details] [diff] [review] Make the UCS conversion faster for the non-ascii part Some nits: Instead of if ((*src & 0x80) == 0), one can also do if (*src >= 0). Also instead of if ((*in & 0xE0) == 0xC0), testing bits with if (*in & 0x40) is also faster. The first is sometimes (but not always!) done by a smart optimizing compiler, but the second not. Attached patch reduces the loop part for non-ascii from 88 ASM lines to 71 lines, for example from: ; Line 152 mov al, cl and al, 224 ; 000000e0H cmp al, 192 ; 000000c0H jne SHORT $LN25@Convert ; Line 155 and ecx, 31 ; 0000001fH shl ecx, 6 mov DWORD PTR [edx+12], ecx ; Line 156 mov BYTE PTR [edx+16], 1 ; Line 157 mov BYTE PTR [edx+17], 2 jmp $LN30@Convert to: ; Line 161 test cl, 32 ; 00000020H jne SHORT $LN25@Convert ; Line 164 and ecx, 31 ; 0000001fH shl ecx, 6 ; Line 165 mov BYTE PTR [edx+17], 2 jmp SHORT $LN45@Convert
Vlad, shouldn't it be "include <emmintrin.h>"? Also, are we handling support for this on linux at a later date? (presumably since there's no easy way to get at __sse2_available) Other than that looks good for me. I built firefox with it on windows 7 and I think it loads noticeably faster though I haven't done any quantitative measurements.
We have a few functions already in the tree that determine SSE2 functionality on Linux, e.g. CheckForSSE2() in jstracer.cpp and sse2_available() in qcms, so Linux on x86 shouldn't be a problem. As I hinted in comment 24, at least the Linux x86_64 case could very well be covered by MAC_SSE2.
Yeah, the issue with linux isn't getting it to work, it's figuring out where to put the detection code. I could just copy sse2_available somewhere, but was hoping to end up with a libxul-global spot... but maybe that's not feasible. I missed comment 24 -- I'll change the ifdefs to __SSE2__. The further optimizations in comment 37 we should also take, though merging the patches is going to become tricky; Alfred, can you remerge after I land the ascii bit?
(In reply to comment #37) > Instead of if ((*src & 0x80) == 0), one can also do if (*src >= 0). You can't do this; you're assuming than |char| == |signed char|, but |char| can be either |signed char| or |unsigned char|. (And I thought it was usually |unsigned char|.) > Also instead of if ((*in & 0xE0) == 0xC0), testing bits with if (*in & 0x40) is > also faster. But not equivalent, since you're not checking that that the bit below is unset (i.e., you want to require (*in & 0x40) && !(*in & 0x20)), which probably leads to security bugs.
(In reply to comment #41) > But not equivalent, since you're not checking that that the bit below is unset > (i.e., you want to require (*in & 0x40) && !(*in & 0x20)), which probably leads > to security bugs. Er, actually, given the order you check it, it is ok, so never mind.
(In reply to comment #38) > Also, are we handling support for this on linux at a later date? (presumably > since there's no easy way to get at __sse2_available) It actually looks like there are a few tricky things about this, but I filed bug 513422 on some changes that I think will get us a bit closer.
Comment on attachment 396576 [details] [diff] [review] final patch +#if defined(MAC_SSE2) || defined(WIN_SSE2) + +static inline void +Convert_ascii_run (const char *&src, + PRUnichar *&dst, + PRInt32 len) +{ + if (__sse2_available && len > 15) { Reversing the order of those two checks would avoid checking if sse2 is available for short strings. Not a big deal, but would save us a few instructions when converting short strings. r+sr=jst Checked in with the check reversed as suggested.
backed out due to maybe-orange, and I gotta sleep so can't watch the second cycle. will reland tomorrow if it ends up being not this patch's fault
re #40: I will look at my part after this is landed safely. re #41: we can always force the char pointer to be signed char within the function to be sure about the matching. Secondly testing &0x40 is indeed not the same, but in my patch first the high bit is checked and the next bit.
Created bug 514140 for the non-ascii part
(In reply to comment #46) > backed out due to maybe-orange, and I gotta sleep so can't watch the second > cycle. will reland tomorrow if it ends up being not this patch's fault These were all known intermittent oranges.
Maybe a silly question at this point in the game given that you just relanded this, but what optimizations option are we using on this now?
Whatever the default is, which I think is -Os -- we should probably switch to at least -O2 here, but that can be a separate bug.
Created attachment 398357 [details] [diff] [review] VC7.1 bustage fix
Comment on attachment 398357 [details] [diff] [review] VC7.1 bustage fix Pushed changeset 8bb68d6639b7 to mozilla-central.
Note, it seems that nsUnicharInputStream doesn't use this converter, but the converter from ConvertUTF8toUTF16 (xpcom/string/public/nsUTF8Utils.h) Also the CountValidUTF8Bytes function is not really needed as one can easily make the safe assumption that destLen = srcLen + 1 (see). The easiest way would be to change nsUnicharInputStream to use the nsUTF8ToUnicode converter, instead the one in nsUTF8Utils.h. ConvertUTF8toUTF16 is also used in: # xpcom/string/src/nsReadableUtils.cpp # toolkit/xre/nsWindowsRestart.cpp So, we should these also redirect use the optimized nsUTF8toUnicode converter, and remove the ConvertUTF8toUTF16 version completely.
That would mean moving nsUTF8toUnicode out of intl and into xpcom. All the things you list live in xpcom and can't depend on intl. I would be all for not having multiple UTF8-to-UTF16 conversion codepaths around...
(In reply to comment #55) > I would be all for not having multiple UTF8-to-UTF16 conversion codepaths > around... A hearty "me too"!! Note however that there are some functional differences between the two converters: ConvertUTF8toUTF16 is less fault-tolerant than nsUTF8toUnicode, and can only safely be used when the input is known to be valid UTF-8. See also bug 497204
Not blocking, but if a roll-up patch were to be nominated, we should take it.
file 548664 for neon support | https://bugzilla.mozilla.org/show_bug.cgi?id=506430 | CC-MAIN-2017-26 | refinedweb | 3,939 | 76.72 |
:
Looking at the TR1, it seems that shared_ptr and weak_ptr are
destined to be added to the <memory> include file. Currently, if you
have tr1 support for gcc, you need to include <tr1/boost_shared_ptr>.
My guess is that this path is specific to gcc. Should it be the case
that we add the definitions for vcl_shared_ptr (and vcl_weak_ptr) to
<vcl_memory.h> or should there be a new file like <vcl_shared_ptr.h>?
Also, I'm assuming I need to create some sort of CMake compiler test
to see if shared_ptr exists on the system. I'm not sure where to
begin with this. I figured I would model this after
VCL_USE_NATIVE_STL, but I can't even find where that CMake variable is
defined. Can anyone point me in the right direction?
Thanks,
Matt
On Fri, Mar 7, 2008 at 6:09 PM, Amitha Perera
<amithaperera@...> wrote:
> Matt Leotta wrote:
> > We are still sticking to the vcl_
> > prefix correct? So these would be vcl_shared_ptr and vcl_weak_ptr?
>
> I think they should be. Technically, they'd be in the std::tr1
> namespace, but vcl should gloss over that.
>
> My thought is that, in the vcl way, if std::tr1::shared_ptr exists[*],
> then vcl_shared_ptr should be a #define to that. Otherwise, it'll fall
> back to a boost-based implementation.
>
> I'd be willing to help.
>
> Amitha.
>
> [*] It does exist on many compilers. For example, g++ 4.1.3 has it.
> (And it happens to be a boost based implementation.)
>
>
View entire thread | https://sourceforge.net/p/vxl/mailman/message/18798833/ | CC-MAIN-2017-39 | refinedweb | 248 | 86.2 |
Eclipse is a free, powerful, and full-featured development environment that can be set up to work with AVR and Arduino. This page is very much a work in progress; please feel free to add to it or improve it.
Below is an explanation of how to set up Eclipse with WinAVR and the AVR-eclipse Eclipse plugin. There is an easier, 100% free and open-source way to use Eclipse; for details, see the PlatformIO section below.
PlatformIO is a free and open-source cross-platform code builder and library manager. It does not depend on any additional libraries or tools from the operating system, so you can use PlatformIO on anything from a PC to credit-card-sized computers (such as the Raspberry Pi, BeagleBone, or CubieBoard).

All instructions are described in the main documentation, Integration of PlatformIO with Eclipse IDE.
There are no current packages, but it's easy to install as described above. Create a directory such as /usr/local/pckg/eclipse and unpack the downloaded Eclipse archive there (get the Eclipse C/C++ release with all the Linux add-ons). Create a symlink in /usr/local/bin to the eclipse executable in the unpacked Eclipse directory. Installing plugins from inside Eclipse gets them installed in the user's home directory, under $HOME/.eclipse/.
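The manual steps above can be sketched as shell commands. The sketch below plays them out in a throwaway prefix directory with a stub launcher, so it runs without touching the real system; for an actual install, use / as the prefix and the real unpacked Eclipse archive.

```shell
# Sketch of the manual install layout described above, in a scratch prefix.
PREFIX="$(mktemp -d)"
mkdir -p "$PREFIX/usr/local/pckg/eclipse" "$PREFIX/usr/local/bin"

# Stand-in for the "eclipse" launcher found in the unpacked archive:
printf '#!/bin/sh\necho eclipse-launcher\n' > "$PREFIX/usr/local/pckg/eclipse/eclipse"
chmod +x "$PREFIX/usr/local/pckg/eclipse/eclipse"

# Symlink in /usr/local/bin so typing "eclipse" starts the IDE:
ln -s "$PREFIX/usr/local/pckg/eclipse/eclipse" "$PREFIX/usr/local/bin/eclipse"

"$PREFIX/usr/local/bin/eclipse"   # prints: eclipse-launcher
```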
zypper -v ar -k -f
zypper -v in avrdude avr-libc avr-example cross-avr-gcc cross-avr-insight
The avr-eclipse plugin cannot automatically find the WinAVR installed with the IDE. Therefore you need to either configure the Eclipse plugin to find WinAVR in the IDE location, or install WinAVR in the default location.
In Eclipse, select Window->Preferences, then click the AVR tree view entry.
Select OK to close the Preferences window.
Open the project properties (right-click the project and select Properties). Select AVR->AVRDude and go to the Programmer tab. Create a new programmer configuration using the New button.
One more step is needed (under Win7 with Eclipse 4.2, AVR plugin v2.4, and Arduino IDE v1.0.1) to get AVRDude working. When invoked from Eclipse, AVRDude cannot find its config file. To fix this, copy the AVRDude config file from:
"C:\path-to-arduino\hardware\tools\avr\etc\avrdude.conf" to
"C:\path-to-arduino\hardware\tools\avr\bin\avrdude.conf".
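The same fix can be scripted. The sketch below uses POSIX cp in a scratch tree so it runs anywhere; "path-to-arduino" and the stub config file are stand-ins, and on a real Windows install you would copy the actual file (e.g. with copy or Explorer).

```shell
# The avrdude.conf workaround above, sketched in a scratch tree.
ARDUINO="$(mktemp -d)/path-to-arduino"
mkdir -p "$ARDUINO/hardware/tools/avr/etc" "$ARDUINO/hardware/tools/avr/bin"
echo '# avrdude configuration (stub)' > "$ARDUINO/hardware/tools/avr/etc/avrdude.conf"

# Copy the config next to the avrdude binary so it is found when Eclipse
# invokes avrdude:
cp "$ARDUINO/hardware/tools/avr/etc/avrdude.conf" \
   "$ARDUINO/hardware/tools/avr/bin/avrdude.conf"
```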
You should now be able to compile and upload Arduino projects in Eclipse.
Note: As these are workspace settings you may have to redo this setup for each workspace you use.
Install the latest version of WinAVR. This should work "out of the box" with the latest version of the AVR plugin, as long as you install WinAVR to the default directory. If it doesn't, follow the instructions in Programming AVR with Eclipse and WinAVR.
There are three ways to get the Arduino core library:
The library source code is included in the Arduino IDE download, in the hardware/cores/arduino directory. At the very least, you will need the header files from that directory accessible to your Eclipse project:
* HardwareSerial.h * WProgram.h * wiring.h * WConstants.h * binary.h * pins_arduino.h * wiring_private.h
You will also need the .a static library.
The resulting program becomes substantially smaller when linking with a static library, so this is the only sensible option. (The link order matters too!)
This is the preferred method. However, I (Ecologisto) couldn't use it on Mac as I didn't find the specified files. The third option did work fine on mac though.
From any Arduino IDE project, get the core.a file in the compilation subdirectory. To generate this file you will need to create and verify a project in the Arduino IDE. It can be any program, such as the Blink program. Make sure that the correct target board is selected in the Arduino IDE.
The compilation subdirectory is a temporary folder. For example, in Windows 7, the object files might be stored in the following directory:
C:\Users\<username>\AppData\Local\Temp\buildXXXXXXXXXXXXXXXXXXXXX.tmp
And for Linux it will be:
/tmp/buildXXXXXXXXXXXXXXXXXXXXX.tmp/core.a
And on a Mac (>=10.5) it will be somwhere under:
/var/folders (Example: /var/folders/3V/3V-hlvHMEDCp4QCrrmPOKE+++TI/-Tmp-/build8628825626808181146.tmp)
The X's in the build...tmp directory are a hash that map to the project whose temporary files are held in the directory. The temporary files are named after the project, so you can verify that you're copying the correct library.
Copy the core.a file into your own project directory, and rename it to libArduinoCore.a. You can call it anything, so long as it starts with "lib" and ends with ".a". It might be a good idea to name the file after its target, e.g. libArduinoMegaCore.a, so that you can always tell what the library's target architecture is.
You can copy the entire contents of the hardware/cores/arduino directory into your Eclipse project so that it is compiled into the application every time. This requires that you use C++ projects, and the projects will take a bit longer to compile. This generally isn't a very good idea unless you're hacking the core code.
You can compile your own static library in Eclipse. The best way is to create this as a library project, and to define a build configuration for each combination of AVR CPU and clock frequency you have.
The sources are copied from the Arduino IDE. Download that, and unpack it somwhere.
328P_16MHz(no spaces!). Add a descriptive text. Copy settings from Debug. Click OK. Select the configuration just added, click Set active. Click OK.
328P_8MHz,
1280_16MHz.
${workspace_loc:/${ProjName}/src}. Click OK.
ln -s).
arduino_corethe name of the library project created earlier, and
srcthe name of the source directory in the library project.
; avr-nm -C -n ${OUTPUT} >${BuildArtifactFileBaseName}.symbol
This project setup is as close as possible to the Arduino IDE. On Linux it is possible to use the same source files for an Arduino sketch and an eclipse project (just don't edit them with both at the same time), and compile/upload either with eclipse or the Arduino IDE.
*.pdeof type C++ source file, and
-x c++" to the C++ compiler settings.
"${workspace_loc:/arduino_core/src}"
"${workspace_loc:/${ProjName}/arduinolib}"
"${workspace_loc:/${ProjName}/lib}"
"${workspace_loc:/arduino_core/328P_16MHz}"
-Wl,--gc-sectionsto the linker options.
#include <WProgram.h>, even the .pde files. It doesn't hurt with the IDE, which does it automatically.
When you try to build you get a prompt saying something like "The application was unable to start correctly (0xc0000142). [...]". The workaround described on (in german) worked fine for me: Download this file and replace the existant file in "<WinAVR installation path>\utils\bin\" and retry to build.
Another workaround in english is described here.
The C++ standard apparently requires this function when using certain language features. This is e.g. the case when using the Print class, or the Serial object, which probably means most Arduino programs. Interestingly, when compiling programs with the Arduino IDE this function remains undefined and no error is produced. It seems this can be achieved with eclipse AVR programs and the following linking order: user program files, user libraries, Arduino libraries, libarduino_core.a, libm.a.
If you do need this function, the most flexible is to put this into a file
pure_virtual.cpp in the project's source folder:
extern "C" void __cxa_pure_virtual() { #ifdef __AVR__ asm("cli"); #endif while (1) ; }
The contents of the __cxa_pure_virtual function can be any error handling code; this function will be called whenever a pure virtual function is called. This file can be excluded from being compiled into the project by right-clicking on its name, selecting properties, and checking "exclude from build".
Using new, delete, and dynamic memory management generally is not recommended for small embedded devices because it can become a resource hog. If you must have it, create a
.cpp file in the project source folder with this content:
#include <stdlib.h> __extension__ typedef int __guard __attribute__((mode (__DI__))); void * operator new(size_t size) { return malloc(size); } void operator delete(void * ptr) { free(ptr); } void * operator new[](size_t size) { return malloc(size); } void operator delete[](void * ptr) { if (ptr) free(ptr); } int __cxa_guard_acquire(__guard *g) {return !*(char *)(g);} void __cxa_guard_release (__guard *g) {*(char *)g = 1;} void __cxa_guard_abort (__guard *) {}
Turning on "pedantic" compiler warnings by adding '--pedantic' to AVR Compiler -> Miscellaneous -> Other flags and AVR C++ Compiler -> Miscellaneous -> Other flags could save some headache.
The last trap is in the fact that by default printf is not supporting float numbers. However because the C++ projects can be complex, it is wise to go to Project Options/Libraries section, you might want to include
but NOT libprintf_min, libscanf_min.
The Arduino environment issues a reset command prior to uploading. This reset command is essential for any kind of bootloader communication. If your hardware is not reset there is no way for avrdude to communicate with the bootloader. Any attempt will result in errors like:
"avrdude: stk500_2_ReceiveMessage(): timeout".
Newer versions of AVRDude (such as the one shipped with CrossPack) will also do this.
Instead of using programmer stk500v1 with AVRdude, use programmer arduino and it creates the reset automatically. In the project properties, change AVR -> AVRDude, Programmer tab, Programmer configuration. Edit the existing one or add a new one.
If you have an older version and are using Windows, you can overcome this problem with this little app. Attach:newavrdude.exe
To use this, you need to:
1) rename the avrdude.exe that is currently being used by eclipse to "realavrdude.exe" 2) Make sure that in the eclipse configuration for avrdude, you specify the port override. This should add something like "-P//./COM7" and it needs to be argument #3 (if this causes a problem for anyone, let me know and I'll build in some configuration stuff). 3) copy newavrdude.exe into the same directory as realavrdude.exe, and rename it to avrdude.exe
When using the Arduino IDE, you just have to define setup() and loop() functions and the IDE already has a main functional defined in the core library that calls setup() and loop() functions.
When setting up the Arduino core library as described above, it is functionally identical to the Arduino programming environment. That is, don't provide main(), but do define setup() and loop(). This is the easiest option.
If you must have your own main(), remove it from the arduino_core library. Eclipse does not define the main() function for you. However, you can use Arduino's default main() function by copying the main.cxx file from the hardware/cores/arduino directory in the Arduino IDE package. You might have to rename this file to main.c for Eclipse to recognize it as a code file. Linking in main.cxx will allow you just to define the setup() and loop() functions like you would in the Arduino IDE.
You must #include "WProgram.h" in your application program to gain access to the Arduino API.
If you choose to define your own main function, you must NEVER return from main(). I mean, you-MUST-NEVER-return-from-main. In human language, this could be translated as "the main() function must contain some kind of endless loop, because if it ends, the Arduino won't stop the program but will just keep reading random data as code". You must also always add a call to init(); as your first instructions. Not doing this will prevent any time-related functions from working.
So, your basic code will usually look like :
int main(void) { /* Must call init for arduino to work properly */ init(); /****************************/ /* Add your setup code here */ /****************************/ for (;;) { /****************************/ /*** write main loop here ***/ /****************************/ } // end for } // end main
Configure the correct USB serial port in AVRDude. It's one of /dev/ttyUSBN, with N being some digit. Best in the long run is to configure udev rules for your Arduino board serial numbers that create a symbolic link to the USB device currently being used by your board. The link name remains constant.
Since we either upload our code direct via a serial port as with the "Arduino Serial" or via USB (which also only simulates a serial port with the FTDI chip on the Arduino borad) we need to know which serial port our Board is connected to.
On Windows, the FTDI virtual COM port driver doesn't register its COM port until the Arduino board is plugged in. With the board plugged into USB, type "devmgmt.msc" into CMD to open the device manager. Go to "Ports (COM & LPT)" and find the "USB Serial Port (COMx)" where x is the number of the port connected to the Arduino. This COM port will usually be the same on your machine even if you reboot or unplug the Arduino, but might differ on different machines.
Go back to your project settings, and go the AVR/AVRDude page. Create a new programmer configuration. You only need to do this once; for other Arduino projects, you can reuse this configuration.
For Diecimila and Duemilanove, the "Programmer Hardware" is "ATMEL STK500 Version 1.x firmware". If you have a recent version of AVRDude, you can also choose "Arduino" which uses a protocol similar to STK500 1.x Version. For the Uno it should work as well.
Enter the name of your serial port in the "Override default port" field. Usually this is something like "/dev/cu.usbserialx" in *NIX and something like "//./COMx" in Windows. Override the default baud rate to 19200 for Arduinos based on the ATmega168, or 57600 for Arduinos based on the ATmega328p. If you receive an error similar to
avrdude: stk500_getsync(): not in sync: resp=0xe0
on a ATmega328p, try 115200 baud rate.
Once you've configured the programmer, save it, then go to the "Advanced" tab and check "Disable device signature check".
If you are using WinAVR-20090313 or earlier on Windows, then you will need to replace the default avrdude.exe and avrdude.conf in the WinAVR directory with the ones from the following directories in the Arduino IDE folder:
hardware\tools\avr\bin\avrdude.exe hardware\tools\avr\etc\avrdude.conf
If you've checked "AVR Dude" in the Additional tool in toolchain panel of the C/C++ Build Settings, your program will get upload at each build. Else, you can just right click your project in the Explorer and choose "AVR/Upload Project to Device" or click the AVR upload button.
If everything worked out fine, your Eclipse console will print something like
avrdude: verifying ... avrdude: 1280 bytes of flash verified avrdude done. Thank you. Finished building: avrdudedummy
If you have Target Management Project installed, just go to the Window menu / Show View / Other.. And choose Terminal/Terminal. Open it with a serial link and choose the serial port your Arduino is connected to. Notice you'll probably have to disconnect the terminal while uploading.
As the RXTX library is a troublemaker for many people, there is a workaround : Just start a SSH or Telnet (please be aware this may be a potential security leak) server on your computer and open it in the Eclipse terminal. Then, on any unix, just type :
cat /dev/your_serial_port/
The Arduino Blink example implemented in an Eclipse project can be found on github right here. It contains the Arduino Core Library and is therefore completely self contained, the Arduino IDE is not needed. All command line options are configured so that they are as similar to the Arduino IDE as possible creating equally small binaries. It can be imported into Eclipse pretty easily using its Git extension providing a good starting point for your application. An installation manual is available there too. | http://playground.arduino.cc/code/eclipse | CC-MAIN-2016-44 | refinedweb | 2,603 | 65.52 |
First we should know who is providing on-line web-services(WS). is one such site where we can find different on-line web-services like Stock Quote, IP Locator etc.
We can see WSDL file for each WS in the above site and using this WSDL we need to generate our client and to query the WS .
We can send request to the on-line Web-service and in turn we get response for the request from WS .
So For example if we have generated Stock Quote WS client code , in request we send a symbol (for EX: IBM) to request it's recent stock quote value.
So in response we should get it's latest value and other stuff.
Before we generate WS Client code for provided WSDL in Stock quote from .
Let's see how it works
SOAP UI is one of the easiest way to check this out.
Download SOAP UI ZIP and launch the SOAP UI .
After extracting you can run file : soapui.bat from it's extracted location Webservices_Stuff\SoapUI-5.0.0\bin
Create a new project for example StockQuote in SOAPUI and provide WSDL path taken from for Stock Quote
WSDL used :
On the left hand side in the SOAPUI new project is created with GetQuote and Request 1 etc.
If we double click the Request 1 we will see a window opened with two parts one with Request XML and other one Response XML.
There in the Request XML if set the ? symbol with IBM like one company symbol and click on run button on the top it gives us the response with
detailed information on the stock.
So now our next job is to create client code for the StockQuote in our eclipse .
So first let's download the AXIS2 project to help on this.
Download axis2-1.6.2-war.zip , from
And Create a user library to hold the required jars in path ...
Look for screenshot attached.
Now create a java project with name StockQuote and From Eclipse Run > RunConfigurations create new StockQuote run configuration.
In the main tab provide Project name as StockQuote and provide main class as
org.apache.axis2.wsdl.WSDL2Java
In the Arguments tab
under Program arguments
provide required arguments like
-o C:\Krish\MyDocs\WorkSpaces\My_Java_Workspace\StockQuote
-p com.demo.ws.stock.quote
-u
-uri
Now run this Run Configuration
Refresh the StockQuote Project from left navigation on Eclipse.
Now You should see new code file created for StockQuote Web-Service Client under Src directory.
Observe StockQuoteStub.java file.
to find out how to construct Request and Response objects
Now write your main client class ... sample file looks like below ..
Sample Main Code :
import java.rmi.RemoteException; import net.webservicex.; import net.webservicex.; import org.apache.axis2.AxisFault; import com.demo.ws.stock.quote.StockQuoteStub; public class MainClient { public static void main(String[] args) throws RemoteException { try { StockQuoteStub stub = new StockQuoteStub(); GetQuote gq = new GetQuote(); gq.setSymbol("IBM"); GetQuoteResponse resp = stub.getQuote(gq); System.out.println(resp.getGetQuoteResult()); } catch (AxisFault e) { e.printStackTrace(); } } }Sample Output after running this WS :
Wednesday, April 16, 2014 // Labels: Java // 0 comments //
0 comments to "Creating Web-Service Client for online webservices using AXIS2"
Visitors
Archives
- ▼ 2014 (11)
- ▼ April (2)
- ► 2013 ూ ... | http://www.krishnababug.com/2014/04/creating-web-service-client-for-online.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+KrishnasPage+%28Krishna%27s+page%29 | CC-MAIN-2017-39 | refinedweb | 543 | 65.12 |
- NAME
- VERSION
- DESCRIPTION
- AUTHOR
NAME
WWW::MenuGrinder::Extending - Writing your own grinder plugins.
VERSION
version 0.06
DESCRIPTION
This document explains some of the things you will need to know if you want to write plugins that extend the functionality of
WWW::MenuGrinder.
Plugins
A
WWW::MenuGrinder plugin is a Perl class that uses
Moose and consumes the role
WWW::MenuGrinder::Role::Plugin (or, much more likely, one of its derived roles), and conforms to a certain interface. For more in-depth information on a given plugin type, see the documentation for the corresponding role.
Plugin Names
If your plugin is named
WWW::MenuGrinder::Plugin::Foo users can easily load it by adding
"Foo" to their plugins list. However it's not necessary to use the global namespace. If you write a plugin that's specific to
MyApp, you might call it
MyApp::MenuGrinder::Plugin::Frobnicate, and indicate it in the plugins list by preceding the fully qualified class name with a
+ sign.
Plugin Dependencies
A plugin may declare its dependency on another plugin by providing a
plugin_depends method returning a list of strings. Each string in the list is a plugin name and is parsed in the same way as plugins given in the config. If a plugin
A depends on a plugin
B, it is guaranteed that
B will be loaded before
A's constructor is called (if
B can't be loaded, then
A's load will fail with a distinctive message), and that
B will come before
A in any processing chains (such as
mogrify).
Control Flow
Load-Time
Plugin Initialization
When the
initmethod is called, the grinder reads its config and loads each plugin specified in the config in turn. Plugin dependencies are resolved after the requiring plugin's class is loaded, but before the requiring plugin's constructor is called.
Plugin Verification
Plugins and Plugin roles may provide a
verify_pluginmethod, to be called after the plugin and its dependencies have all been initialized. The purpose of this method is to ensure that contracts (such as required methods of the MenuGrinder object or of the plugin itself) are met at load time, in order to avoid surprises later.
Once the plugins are loaded, the Loader plugin has its
loadmethod called. This method is expected to return a menu structure (conventionally a hashref) for all further plugins to work with.
Before Pre-Mogrify
With the menu loaded, each PreMogrifier plugin is given a chance to do initialization. Each plugin will have its
before_pre_mogrifymethod called, if that method exists. No arguments are passed and the method isn't permitted to modify the menu structure.
Pre-Mogrify
Next, PreMogrifier plugins do initial transformation of the menu object. Each plugin has its
pre_mogrifymethod called in turn on the menu object.
pre_mogrifycan modify the menu argument in-place, or completely recreate it; in either case it returns the menu object for the next plugin to process.
Item Pre-Mogrify
The bundled Visitor plugin does tree-traversal, and calls the
item_pre_mogrifymethod on any plugin consuming the ItemPreMogrifier role. This processing happens wherever
Visitoris loaded in the plugin chain -- this is immediately before the first
ItemPreMogrifierplugin, if it's not specified in the plugins list explicitly.
XXX write more about how this is called.
Request Time
Before Mogrify
As with
BeforePreMogrify, each Mogrifier plugin gets a chance to do per-request initialization by implementing the
before_mogrifymethod. This method takes no arguments and shouldn't modify the menu object.
Mogrify
Next, each Mogrifier plugin modifies the menu structure using information from the current request. Each
Mogrifierplugin in turn has its
mogrifymethod called with the menu object; as with
pre_mogrify, it should modify the menu object in place or copy it, and return the new object.
Item Mogrify
The bundled Visitor plugin does tree-traversal, and calls the
item_mogrifymethod on any plugin consuming the ItemMogrifier role. This processing happens wherever
Visitoris loaded in the plugin chain -- this is immediately before the first
ItemMogrifierplugin, if it's not specified in the plugins list explicitly.
XXX write more about how this is called.
Output
Lastly in the processing chain, the Output plugin is called. There is only one output plugin, and it is always last in the chain; therefore its output isn't required to be valid input to any other plugin, as with other plugins. The return value of the output plugin's
outputmethod will be returned from the grinder object's
get_menumethod.
Cleanup
Finally, the cleanup method is called for each plugin, if it exists, allowing the plugin to discard any state data that it no longer needs. Note that due to implementation details of web frameworks, the Cleanup phase might happen immediately before the request phase of the next request, rather than immediately after the current request.. | https://metacpan.org/pod/distribution/WWW-MenuGrinder/lib/WWW/MenuGrinder/Extending.pod | CC-MAIN-2016-22 | refinedweb | 794 | 51.28 |
If we follow the variation of some quantity over time, we are dealing with a time series. Time series are incredibly common: examples range from stock market movements to the tiny icon that constantly displays the CPU utilization of your desktop computer for the previous 10 seconds. What makes time series so common and so important is that they allow us to see not only a single quantity by itself but at the same time give us the typical “context” for this quantity. Because we have not only a single value but a bit of history as well, we can recognize any changes from the typical behavior particularly easily.
On the face of it, time-series analysis is a bivariate problem (see Chapter 3). Nevertheless, we are dedicating a separate chapter to this topic. Time series raise a different set of issues than many other bivariate problems, and a rather specialized set of methods has been developed to deal with them.
To get started, let’s look at a few different time series to develop a sense for the scope of the task.
Figure 4-1 shows the concentration of carbon dioxide (CO2) in the atmosphere, as measured by the observatory on Mauna Loa on Hawaii, recorded at monthly intervals since 1959.
This data set shows two features we often find in a time-series plot: trend and seasonality. There is clearly a steady, long-term growth in the overall concentration of CO2; this is the trend. In addition, there is also a regular periodic pattern; this is the seasonality. If we look closely, we see that the period in this case is exactly 12 months, but we will use the term “seasonality” for any regularly recurring feature, regardless of the length of the period. We should also note that the trend, although smooth, does appear to be nonlinear, and in itself may be changing over time.
Figure 4-1. Trend and seasonality: the concentration of CO2 (in parts per million) in the atmosphere as measured by the observatory on Mauna Loa, Hawaii, at monthly intervals.
Figure 4-2 displays the concentration of a certain gas in the exhaust of a gas furnace over time. In many ways, this example is the exact opposite of the previous example. Whereas the data in Figure 4-1 showed a lot of regularity and a strong trend, the data in Figure 4-2 shows no trend but a lot of noise.
Figure 4-3 shows the dramatic drop in the cost of a typical long-distance phone call in the U.S. over the last century. The strongly nonlinear trend is obviously the most outstanding feature of this data set. As with many growth or decay processes, we may suspect an exponential time development; in fact, in a semi-logarithmic plot (Figure 4-3, inset) the data follows almost a straight line, confirming our expectation. Any analysis that fails to account explicitly for this behavior of the original data is likely to lead us astray. We should therefore work with the logarithms of the cost, rather than with the absolute cost.
There are some additional questions that we should ask when dealing with a long-running data set like this. What exactly is a “typical” long-distance call, and has that definition changed over the observation period? Are the costs adjusted for inflation or not? The data itself also begs closer scrutiny. For instance, the uncharacteristically low prices for a couple of years in the late 1970s make me suspicious: are they the result of a clerical error (a typo), or are they real? Did the breakup of the AT&T system have anything to do with these low prices? We will not follow up on these questions here because I am presenting this example only as an illustration of an exponential trend, but any serious analysis of this data set would have to follow up on these questions.
Figure 4-2. No trend but relatively smooth variation over time: concentration of a certain gas in a furnace exhaust (in arbitrary units).
Figure 4-4 shows the development of the Japanese stock market as represented by the Nikkei Stock Index over the last 40 years, an example of a time series that exhibits a marked change in behavior. Clearly, whatever was true before the New Year’s Day 1990 was no longer true afterward. (In fact, by looking closely, you can make out a second change in behavior that was more subtle than the bursting of the big Japanese bubble: its beginning, sometime around 1985–1986.)
This data set should serve as a cautionary example. All time-series analysis is based on the assumption that the processes generating the data are stationary in time. If the rules of the game change, then time-series analysis is the wrong tool for the task; instead we need to investigate what caused the break in behavior. More benign examples than the bursting of the Japanese bubble can be found: a change in sales or advertising strategy may significantly alter a company’s sales patterns. In such cases, it is more important to inquire about any further plans that the sales department might have, rather than to continue working with data that is no longer representative!
After these examples that have been chosen for their “textbook” properties, let’s look at a “real-world” data set. Figure 4-5 shows the number of daily calls placed to a call center for a time period slightly longer than two years. In comparison to the previous examples, this data set has a lot more structure, which makes it hard to determine even basic properties. We can see some high-frequency variation, but it is not clear whether this is noise or has some form of regularity to it. It is also not clear whether there is any sort of regularity on a longer time scale. The amount of variation makes it hard to recognize any further structure. For instance, we cannot tell if there is a longer-term trend in the data. We will come back to this example later in the chapter.
After this tour of possible time-series scenarios, we can identify the main components of every time series:
Trend
Seasonality
Noise
Other(!)
The trend may be linear or nonlinear, and we may want to investigate its magnitude. The seasonality pattern may be either additive or multiplicative. In the first case, the seasonal change has the same absolute size no matter what the magnitude of the current baseline of the series is; in the latter case, the seasonal change has the same relative size compared with the current magnitude of the series. Noise (i.e., some form of random variation) is almost always part of a time series. Finding ways to reduce the noise in the data is usually a significant part of the analysis process. Finally, “other” includes anything else that we may observe in a time series, such as particular significant changes in overall behavior, special outliers, missing data—anything remarkable at all.
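In symbols (with trend T_t, seasonal component S_t, and noise e_t; this notation is ours, introduced only for illustration), the two forms of seasonality can be written as:

```latex
x_t = T_t + S_t + e_t          % additive: seasonal swing has constant absolute size
x_t = T_t \cdot S_t \cdot e_t  % multiplicative: seasonal swing scales with the current level
```

A practical consequence: taking logarithms turns a multiplicative decomposition into an additive one, which is one reason for working with logarithms as suggested for the phone-cost data above.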
Given this list of components, we can summarize what it means to “analyze” a time series. We can distinguish three basic tasks:
Description attempts to identify components of a time series (such as trend and seasonality or abrupt changes in behavior).

Prediction seeks to forecast future values.

Control in this context means the monitoring of a process over time with the purpose of keeping it within a predefined band of values—a typical task in many manufacturing or engineering environments.

We can distinguish the three tasks in terms of the time frame they address: description looks into the past, prediction looks to the future, and control concentrates on the present.
Most standard methods of time-series analysis make a number of assumptions about the underlying data.
Data points have been taken at equally spaced time steps, with no missing data points.
The time series is sufficiently long (50 points are often considered as an absolute minimum).
The series is stationary: it has no trend, no seasonality, and the character (amplitude and frequency) of any noise does not change with time.
Unfortunately, most of these assumptions will be more or less violated by any real-world data set that you are likely to encounter. Hence you may have to perform a certain amount of data cleaning before you can apply the methods described in this chapter.
If the data has been sampled at irregular time steps or if some of the data points are missing, then you can try to interpolate the data and resample it at equally spaced intervals. Time series obtained from electrical systems or scientific experiments can be almost arbitrarily long, but most series arising in a business context will be quite short and contain possibly no more than two dozen data points. The exponential smoothing methods introduced in the next section are relatively robust even for relatively short series, but somewhere there is a limit. Three or four data points don’t constitute a series! Finally, most interesting series will not be stationary in the sense of the definition just given, so we may have to identify and remove trend and seasonal components explicitly (we’ll discuss how to do that later). Drastic changes in the nature of the series also violate the stationarity condition. In such cases we must not continue blindly but instead deal with the break in the data—for example, by treating the data set as two different series (one before and one after the event).
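The interpolate-and-resample step just mentioned takes only a few lines. A minimal NumPy sketch (the sample times and values are invented for illustration) resamples an irregularly spaced series onto a daily grid by linear interpolation:

```python
import numpy as np

# Hypothetical observations taken at irregular times (in days).
t = np.array([0.0, 1.0, 2.5, 4.0, 4.5, 7.0])
y = np.array([10.0, 12.0, 11.0, 15.0, 14.0, 18.0])

# Resample onto an equally spaced daily grid by linear interpolation;
# np.interp fills in values between the observed points.
t_even = np.arange(0.0, 8.0, 1.0)          # 0, 1, ..., 7
y_even = np.interp(t_even, t, y)

print(y_even)
```

Linear interpolation is the simplest choice; smoother schemes (splines, for instance) may be preferable when the series varies rapidly between samples.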
An important aspect of most time series is the presence of noise—that is, random (or apparently random) changes in the quantity of interest. Noise occurs in many real-world data sets, but we can often reduce the noise by improving the apparatus used to measure the data or by collecting a larger sample and averaging over it. But the particular structure of time series makes this impossible: the sales figures for the last 30 days are fixed, and they constitute all the data we have. This means that removing noise, or at least reducing its influence, is of particular importance in time-series analysis. In other words, we are looking for ways to smooth the signal.
Figure 4-6. Simple and a Gaussian weighted moving average: the weighted average is less affected by sudden jumps in the data.
The simplest smoothing algorithm that we can devise is the running, moving, or floating average. The idea is straightforward: for any odd number 2k + 1 of consecutive points, replace the centermost value with the average over all the points in the window (here, the x_i are the data points and the smoothed value at position i is s_i):

\[ s_i = \frac{1}{2k+1} \sum_{j=-k}^{k} x_{i+j} \]
This naive approach has a serious problem, as you can see in Figure 4-6. The figure shows the original signal together with the 11-point moving average. Unfortunately, the signal has some sudden jumps and occasional large “spikes,” and we can see how the smoothed curve is affected by these events: whenever a spike enters the smoothing window, the moving average is abruptly distorted by the single, uncommonly large value until the outlier leaves the smoothing window again—at which point the floating average equally abruptly drops again.
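A minimal NumPy sketch (with invented data) reproduces the effect: a single outlier lifts the simple moving average for exactly as long as it sits inside the smoothing window.

```python
import numpy as np

def moving_average(x, window):
    """Simple (unweighted) moving average; window must be odd.
    The result is shorter than x, since the average is undefined
    within half a window of either edge."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

# A flat signal with one large spike.
x = np.zeros(31)
x[15] = 11.0
s = moving_average(x, 11)

# The outlier contributes 11/11 = 1.0 to every window it touches,
# so eleven consecutive averages jump abruptly from 0.0 to 1.0.
print(s)
```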
We can avoid this problem by using a weighted moving average, which places less weight on the points at the edge of the smoothing window. Using such a weighted average, any new point that enters the smoothing window is only gradually added to the average and then gradually removed again:

si = Σj wj xi+j,  with j running from –m to +m over a window of 2m + 1 points
Here the wj are the weighting factors. For example, for a 3-point moving average, we might use (1/4, 1/2, 1/4). The particular choice of weight factors is not very important provided they are peaked at the center, drop toward the edges, and add up to 1. I like to use the Gaussian function:

f(x, σ) = exp( –x^2 / (2σ^2) )
to build smoothing weight factors. The parameter σ in the Gaussian controls the width of the curve, and the function is essentially zero for values of x larger than about 3.5σ. Hence f(x, 1) can be used to build a 9-point kernel by evaluating f(x, 1) at the positions [–4, –3, –2, –1, 0, 1, 2, 3, 4]. Setting σ = 2, we can form a 15-point kernel by evaluating the Gaussian for all integer arguments between –7 and +7. And so on.
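Both averaging schemes take only a few lines of code. The sketch below is a plain-list illustration (the function names are my own; in practice one would use numpy.convolve or the scipy.signal machinery shown in the Workshop section instead):

```python
import math

def moving_average(x, window):
    # Simple moving average: replace each value by the mean of the
    # 'window' points centered on it (window should be odd).
    m = window // 2
    return [sum(x[i - m:i + m + 1]) / window for i in range(m, len(x) - m)]

def gaussian_weights(half_width, sigma):
    # Evaluate the (unnormalized) Gaussian at the integer offsets
    # -half_width .. +half_width, then divide by the sum so that the
    # weights add up to 1, as required.
    w = [math.exp(-0.5 * (j / sigma) ** 2)
         for j in range(-half_width, half_width + 1)]
    total = sum(w)
    return [v / total for v in w]

def weighted_moving_average(x, weights):
    # Weighted moving average with an arbitrary odd-length weight vector.
    m = len(weights) // 2
    return [sum(w * x[i - m + j] for j, w in enumerate(weights))
            for i in range(m, len(x) - m)]
```

With gaussian_weights(4, 1) this reproduces the 9-point σ = 1 kernel just described; a spike in the data then enters and leaves the average gradually rather than abruptly.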
All moving-average schemes have a number of problems.
They are painful to evaluate. For each point, the calculation has to be performed from scratch. It is not possible to evaluate weighted moving averages by updating a previous result.
Moving averages can never be extended to the true edge of the available data set, because of the finite width of the averaging window. This is especially problematic because often it is precisely the behavior at the leading edge of a data set that we are most interested in.
Similarly, moving averages are not defined outside the range of the existing data set. As a consequence, they are of no use in forecasting.
Fortunately, there exists a very simple calculational scheme that avoids all of these problems. It is called exponential smoothing or the Holt–Winters method. There are various forms of exponential smoothing: single exponential smoothing for series that have neither trend nor seasonality, double exponential smoothing for series exhibiting a trend but no seasonality, and triple exponential smoothing for series with both trend and seasonality. The term “Holt–Winters method” is sometimes reserved for triple exponential smoothing alone.
All exponential smoothing methods work by updating the result from the previous time step using the new information contained in the data of the current time step. They do so by “mixing” the new information with the old one, and the relative weight of old and new information is controlled by an adjustable mixing parameter. The various methods differ in terms of the number of quantities they track and the corresponding number of mixing parameters.
The recurrence relation for single exponential smoothing is particularly simple:

si = α xi + (1 – α) si–1
Here si is the smoothed value at time step i, and xi is the actual (unsmoothed) data at that time step. You can see how si is a mixture of the raw data and the previous smoothed value si–1. The mixing parameter α can be chosen anywhere between 0 and 1, and it controls the balance between new and old information: as α approaches 1, we retain only the current data point (i.e., the series is not smoothed at all); as α approaches 0, we retain only the smoothed past (i.e., the curve is totally flat).
Why is this method called “exponential” smoothing? To see this, simply expand the recurrence relation:

si = α xi + (1 – α) si–1 = α xi + α(1 – α) xi–1 + α(1 – α)^2 xi–2 + α(1 – α)^3 xi–3 + ...
What this shows is that in exponential smoothing, all previous observations contribute to the smoothed value, but their contribution is suppressed by increasing powers of the factor (1 – α). That observations further in the past are suppressed multiplicatively is characteristic of exponential behavior. In a way, exponential smoothing is like a floating average with infinite memory but with exponentially falling weights. (Also observe that the sum of the weights, Σj α(1 – α)^j, equals 1 as required by virtue of the geometric series Σi q^i = 1/(1 – q) for q < 1. See Appendix B for information on the geometric series.)
The results of the simple exponential smoothing procedure can be extended beyond the end of the data set and thereby used to make a forecast. The forecast is extremely simple:
xi+h = si
where si is the last calculated value. In other words, single exponential smoothing yields a forecast that is absolutely flat for all times.
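As a concrete illustration, here is a bare-bones Python sketch of the single-smoothing recurrence (s0 = x0 is one of the reasonable start-up choices discussed later in this section):

```python
def single_exponential_smoothing(x, alpha):
    # s_i = alpha * x_i + (1 - alpha) * s_{i-1}
    # Start-up: take the first smoothed value to be the first observation.
    s = [x[0]]
    for value in x[1:]:
        s.append(alpha * value + (1 - alpha) * s[-1])
    return s

def flat_forecast(s, h):
    # The forecast is completely flat: x_{i+h} = s_i for every h.
    return s[-1]
```

Setting alpha to 1 returns the raw data unchanged, and setting it to 0 returns a perfectly flat line at the first observation, exactly as described above.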
Single exponential smoothing as just described works well for time series without an overall trend. However, in the presence of an overall trend, the smoothed values tend to lag behind the raw data unless α is chosen to be close to 1; however, in this case the resulting curve is not sufficiently smoothed.
Double exponential smoothing corrects for this shortcoming by retaining explicit information about the trend. In other words, we maintain and update the state of two quantities: the smoothed signal and the smoothed trend. There are two equations and two mixing parameters:
si = αxi + (1 – α)(si–1 + ti–1)
ti = β(si – si–1) + (1 – β)ti–1
Let’s look at the second equation first. This equation describes the smoothed trend. The current unsmoothed “value” of the trend is calculated as the difference between the current and the previous smoothed signal; in other words, the current trend tells us how much the smoothed signal changed in the last step. To form the smoothed trend, we perform a simple exponential smoothing process on the trend, using the mixing parameter β. To obtain the smoothed signal, we perform a similar mixing as before but consider not only the previous smoothed signal but take the trend into account as well. The last term in the first equation is the best guess for the current smoothed signal—assuming we followed the previous trend for a single time step.
To turn this result into a forecast, we take the last smoothed value and, for each additional time step, keep adding the last smoothed trend to it:
xi+h = si + h ti
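The two coupled recurrences translate directly into code. This sketch uses t0 = x1 – x0 as the start-up value for the trend, which is one reasonable choice among several:

```python
def double_exponential_smoothing(x, alpha, beta):
    # s_i = alpha * x_i + (1 - alpha) * (s_{i-1} + t_{i-1})
    # t_i = beta * (s_i - s_{i-1}) + (1 - beta) * t_{i-1}
    s, t = [x[0]], [x[1] - x[0]]
    for value in x[1:]:
        s_new = alpha * value + (1 - alpha) * (s[-1] + t[-1])
        t_new = beta * (s_new - s[-1]) + (1 - beta) * t[-1]
        s.append(s_new)
        t.append(t_new)
    return s, t

def trend_forecast(s, t, h):
    # x_{i+h} = s_i + h * t_i: keep adding the last smoothed trend.
    return s[-1] + h * t[-1]
```

On perfectly linear data with these start-up values, the smoothed series tracks the data exactly and the forecast continues the line, which is a handy sanity check for any implementation.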
Finally, for triple exponential smoothing we add yet a third quantity, which describes the seasonality. We have to distinguish between additive and multiplicative seasonality. For the additive case, the equations are:

si = α(xi – pi–k) + (1 – α)(si–1 + ti–1)
ti = β(si – si–1) + (1 – β)ti–1
pi = γ(xi – si) + (1 – γ)pi–k
xi+h = si + h ti + pi–k+h
For the multiplicative case, they are:

si = α(xi / pi–k) + (1 – α)(si–1 + ti–1)
ti = β(si – si–1) + (1 – β)ti–1
pi = γ(xi / si) + (1 – γ)pi–k
xi+h = (si + h ti) pi–k+h
Here, pi is the “periodic” component, and k is the length of the period. I have also included the expressions for forecasts.
All exponential smoothing methods are based on recurrence relations. This means that we need to fix the start-up values in order to use them. Luckily, the specific choice for these values is not very critical: the exponential damping implies that all exponential smoothing methods have a short “memory,” so that after only a few steps, any influence of the initial values is greatly diminished. Some reasonable choices for start-up values are:

s1 = x1
and:

t1 = x2 – x1
For triple exponential smoothing we must provide one full season of values for start-up, but we can simply fill them with 1s (for the multiplicative model) or 0s (for the additive model). Only if the series is short do we need to worry seriously about finding good starting values.
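Putting the pieces together, here is a sketch of additive triple exponential smoothing. The start-up choices (zero seasonal components, the average per-step change over the first period as the initial trend) follow the simple recommendations above; they are not the only possibility:

```python
def triple_exponential_smoothing(x, period, alpha, beta, gamma, n_forecast):
    # Additive triple exponential smoothing (Holt-Winters).
    s = x[0]
    t = (x[period] - x[0]) / period   # average slope over the first period
    p = [0.0] * period                # one seasonal component per slot
    result = []
    for i, value in enumerate(x):
        s_prev = s
        s = alpha * (value - p[i % period]) + (1 - alpha) * (s + t)
        t = beta * (s - s_prev) + (1 - beta) * t
        p[i % period] = gamma * (value - s) + (1 - gamma) * p[i % period]
        result.append(s + p[i % period])
    # Forecast: level plus h trend steps plus the seasonal slot for that time.
    n = len(x)
    for h in range(1, n_forecast + 1):
        result.append(s + h * t + p[(n + h - 1) % period])
    return result
```

The series must be at least one full period long for the trend initialization to work; for the multiplicative variant, the subtractions and additions involving p become divisions and multiplications.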
The last question concerns how to choose the mixing parameters α, β, and γ. My advice is trial and error. Try a few values between 0.2 and 0.4 (very roughly), and see what results you get. Alternatively, you can define a measure for the error (between the actual data and the output of the smoothing algorithm), and then use a numerical optimization routine to minimize this error with respect to the parameters. In my experience, this is usually more trouble than it’s worth for at least the following two reasons. The numerical optimization is an iterative process that is not guaranteed to converge, and you may end up spending way too much time coaxing the algorithm to convergence. Furthermore, any such numerical optimization is slave to the expression you have chosen for the “error” to be minimized. The problem is that the parameter values minimizing that error may not have some other property you want to see in your solution (e.g., regarding the balance between the accuracy of the approximation and the smoothness of the resulting curve) so that, in the end, the manual approach often comes out ahead. However, if you have many series to forecast, then it may make sense to expend the effort and build a system that can determine the optimal parameter values automatically, but it probably won’t be easy to really make this work.
Finally, I want to present an example of the kind of results we can expect from exponential smoothing. Figure 4-7 shows a classical data set: the monthly number of international airline passengers (in thousands of passengers).[8] The graph shows the actual data together with a triple exponential approximation. The years 1949 through 1957 were used to “train” the algorithm, and the years 1958 through 1960 are forecasted. Note how well the forecast agrees with the actual data—especially in light of the strong seasonal pattern—for a rather long forecasting time frame (three full years!). Not bad for a method as simple as this.
On a recent consulting assignment, I was discussing monthly sales numbers with the client when he made the following comment: “Oh, yes, sales for February are always somewhat lower—that’s an after effect of the Christmas peak.” Sales are always lower in February? How interesting.
Sure enough, if you plotted the monthly sales numbers for the last few years, there was a rather visible dip from the overall trend every February. But in contrast, there wasn’t much of a Christmas spike! (The client’s business was not particularly seasonal.) So why should there be a corresponding dip two months later?
By now I am sure you know the answer already: February is shorter than any of the other months. And it’s not a small effect, either: with 28 days, February is about three days shorter than the other months (which have 30–31 days). That’s about 10 percent—close to the size of the dip in the client’s sales numbers.
When monthly sales numbers were normalized by the number of days in the month, the February dip all but disappeared, and the adjusted February numbers were perfectly in line with the rest of the months. (The average number of days per month is 365/12 = 30.4.)
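The adjustment itself is trivial; the point is to remember to make it. A sketch (with hypothetical sales totals, for a non-leap year):

```python
# Days per month in a non-leap year.
DAYS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def per_day_rate(monthly_totals):
    # Convert monthly totals into average daily rates; this removes the
    # artificial "February dip" caused by the shorter month.
    return [total / d for total, d in zip(monthly_totals, DAYS)]
```

A constant daily rate produces monthly totals that dip visibly in February; after the normalization, the rates come out flat, as they should.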
Whenever you are tracking aggregated numbers in a time series (such as weekly, monthly, or quarterly results), make sure that you have adjusted for possible variation in the aggregation time frame. Besides the numbers of days in the month, another likely candidate for hiccups is the number of business days in a month (for months with five weekends, you can expect a 20 percent drop for most business metrics). But the problem is, of course, much more general and can occur whenever you are reporting aggregate numbers rather than rates. (If the client had been reporting average sales per day for each month, then there would never have been an anomaly.)
This specific problem (i.e., nonadjusted variations in aggregation periods) is a particular concern for all business reports and dashboards. Keep an eye out for it!
The autocorrelation function is the primary diagnostic tool for time-series analysis. Whereas the smoothing methods that we have discussed so far deal with the raw data in a very direct way, the correlation function provides us with a rather different view of the same data. I will first explain how the autocorrelation function is calculated and will then discuss what it means and how it can be used.
The basic algorithm works as follows: start with two copies of the data set and subtract the overall average from all values. Align the two sets, and multiply the values at corresponding time steps with each other. Sum up the results for all time steps. The result is the (unnormalized) correlation coefficient at lag 0. Now shift the two copies against each other by a single time step. Again multiply and sum: the result is the correlation coefficient at lag 1. Proceed in this way for the entire length of the time series. The set of all correlation coefficients for all lags is the autocorrelation function. Finally, divide all coefficients by the coefficient for lag 0 to normalize the correlation function, so that the coefficient for lag 0 is now equal to 1.
All this can be written compactly in a single formula for c(k)—that is, the correlation function at lag k:

c(k) = Σi=1..N–k (xi – μ)(xi+k – μ) / Σi=1..N (xi – μ)^2
Here, N is the number of points in the data set. The formula follows the mathematical convention to start indexing sequences at 1, rather than the programming convention to start indexing at 0. Notice that we have subtracted the overall average μ from all values and that the denominator is simply the expression of the numerator for lag k = 0. Figure 4-8 illustrates the process.
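The procedure translates into only a few lines of code. This plain-list sketch is written for clarity; for long series one would use library routines such as numpy.correlate or scipy.signal.correlate instead:

```python
def autocorrelation(x, max_lag):
    # Normalized autocorrelation function: subtract the overall mean,
    # multiply the series with a copy of itself shifted by k steps,
    # sum the products, and divide by the lag-0 coefficient so c(0) = 1.
    n = len(x)
    mu = sum(x) / n
    d = [v - mu for v in x]
    c0 = sum(v * v for v in d)
    return [sum(d[i] * d[i + k] for i in range(n - k)) / c0
            for k in range(max_lag + 1)]
```

For a strictly periodic series, the function shows the expected secondary peak at a lag of one full period and dips below zero at half a period.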
The meaning of the correlation function should be clear. Initially, the two signals are perfectly aligned and the correlation is 1. Then, as we shift the signals against each other, they slowly move out of phase with each other, and the correlation drops. How quickly it drops tells us how much “memory” there is in the data. If the correlation drops quickly, we know that, after a few steps, the signal has lost all memory of its recent past. However, if the correlation drops slowly, then we know that we are dealing with a process that is relatively steady over longer periods of time. It is also possible that the correlation function first drops and then rises again to form a second (and possibly a third, or fourth,...) peak. This tells us that the two signals align again if we shift them far enough—in other words, that there is periodicity (i.e., seasonality) in the data set. The position of the secondary peak gives us the number of time steps per season.
Let’s look at a couple of examples. Figure 4-9 shows the correlation function of the gas furnace data in Figure 4-2. This is a fairly typical correlation function for a time series that has only short time correlations: the correlation falls quickly, but not immediately, to zero. There is no periodicity; after the initial drop, the correlation function does not exhibit any further significant peaks.
Figure 4-9. The correlation function for the exhaust gas data shown in Figure 4-2. The data has only short time correlations and no seasonality; the correlation function falls quickly (but not immediately) to zero, and there are no secondary peaks.
Figure 4-10 is the correlation function for the call center data from Figure 4-5. This data set shows a very different behavior. First of all, the time series has a much longer “memory”: it takes the correlation function almost 100 days to fall to zero, indicating that the frequency of calls to the call center changes more or less once per quarter but not more frequently. The second notable feature is the pronounced secondary peak at a lag of 365 days. In other words, the call center data is highly seasonal and repeats itself on a yearly basis. The third feature is the small but regular sawtooth structure. If we look closely, we will find that the first peak of the sawtooth is at a lag of 7 days and that all repeating ones occur at multiples of 7. This is the signature of the high-frequency component that we could see in Figure 4-5: the traffic to the call center exhibits a secondary seasonal component with 7-day periodicity. In other words, traffic is weekday dependent (which is not too surprising).
So far I have talked about the correlation function mostly from a conceptual point of view. If we want to proceed to an actual implementation, there are some fine points we need to worry about.
The autocorrelation function is intended for time series that do not exhibit a trend and have zero mean. Therefore, if the series we want to analyze does contain a trend, then we must remove it first. There are two ways to do this: we can either subtract the trend or we can difference the series.
Figure 4-10. The correlation function for the call center data shown in Figure 4-5. There is a secondary peak after exactly 365 days, as well as a smaller weekly structure to the data.
Subtracting the trend is straightforward—the only problem is that we need to determine the trend first! Sometimes we may have a “model” for the expected behavior and can use it to construct an explicit expression for the trend. For instance, the airline passenger data from the previous section describes a growth process, and so we should suspect an exponential trend (a exp(x/b)). We can now try guessing values for the two parameters and then subtract the exponential term from the data. For other data sets, we might try a linear or power-law trend, depending on the data set and our understanding of the process generating the data. Alternatively, we might first apply a smoothing algorithm to the data and then subtract the result of the smoothing process from the raw data. The result will be the trend-free “noise” component of the time series.
A different approach consists of differencing the series: instead of dealing with the raw data, we instead work with the changes in the data from one time step to the next. Technically, this means replacing the original series xi with one consisting of the differences of consecutive elements: xi+1 – xi. This process can be repeated if necessary, but in most cases, single differencing is sufficient to remove the trend entirely.
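Differencing is a one-line operation per pass; a minimal sketch (numpy.diff performs the same operation on arrays):

```python
def difference(x, order=1):
    # Replace the raw series by the step-to-step changes x_{i+1} - x_i.
    # Repeating the operation gives higher-order differencing.
    for _ in range(order):
        x = [x[i + 1] - x[i] for i in range(len(x) - 1)]
    return x
```

Note that single differencing removes a linear trend entirely (the differenced series becomes constant), and double differencing removes a quadratic one.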
Making sure that the time series has zero mean is easier: simply calculate the mean of the (de-trended!) series and subtract it before calculating the correlation function. This is done explicitly in the formula for the correlation function given earlier.
Another technical wrinkle concerns how we implement the sum in the formula for the numerator. As written, this sum is slightly messy, because its upper limit depends on the lag. We can simplify the formula by padding one of the data sets with N zeros on the right and letting the sum run from i = 1 to i = N for all lags. In fact, many computational software packages assume that the data has been prepared in this way (see the Workshop section in this chapter).
Figure 4-11. A filter chain: each filter applied to a signal yields another signal, which itself can be filtered.
The last issue you should be aware of is that there are two different normalization conventions for the autocorrelation function, which are both widely used. In the first variant, numerator and denominator are not normalized separately—this is the scheme used in the previous formula. In the second variant, the numerator and denominator are each normalized by the number of nonzero terms in their respective sum. With this convention, the formula becomes:

c(k) = [ (1/(N–k)) Σi=1..N–k (xi – μ)(xi+k – μ) ] / [ (1/N) Σi=1..N (xi – μ)^2 ]
Both conventions are fine, but if you want to compare results from different sources or different software packages, then you will have to make sure you know which convention each of them is following!
Until now we have always spoken of time series in a direct fashion, but there is also a way to describe them (and the operations performed on them) on a much higher level of abstraction. For this, we borrow some concepts and terminology from electrical engineering, specifically from the field of digital signal processing (DSP).
In the lingo of DSP, we deal with signals (time series) and filters (operations). Applying a filter to a signal produces a new (filtered) signal. Since filters can be applied to any signal, we can apply another filter to the output of the first and in this way chain filters together (see Figure 4-11). Signals can also be combined and subtracted from each other.
As it turns out, many of the operations we have seen so far (smoothing, differencing) can be expressed as filters. We can therefore use the convenient high-level language of DSP when referring to the processes of time-series analysis. To make this concrete, we need to understand how a filter is represented and what it means to “apply” a filter to a signal.
Each digital filter is represented by a set of coefficients or weights. To apply the filter, we multiply the coefficients with a subset of the signal. The sum of the products is the value of the resulting (filtered) signal:
This should look familiar! We used a similar expression when talking about moving averages earlier in the chapter. A moving average is simply a time series run through an n-point filter, where every coefficient is equal to 1/n. A weighted moving average filter similarly consists of the weights used in the expression for the average.
The filter concept is not limited to smoothing operations. The differencing step discussed in the previous section can be viewed as the application of the filter [1, –1]. We can even shift an entire time series forward in time by using the filter [0, 1].
The last piece of terminology that we will need concerns the peculiar sum of a product that we have encountered several times by now. It’s called a convolution. A convolution is a way to combine two sequences to yield a third sequence, which you can think of as the “overlap” between the original sequences. The convolution operation is usually defined as follows:

(w * x)n = Σk wk xn–k
Symbolically, the convolution operation is often expressed through an asterisk: y = w * x, where y, w, and x are sequences.
Of course, if one or both of the sequences have only a finite number of elements, then the sum also contains only a finite number of terms and therefore poses no difficulties. You should be able to convince yourself that every application of a filter to a time series that we have done was in fact a convolution of the signal with the filter. This is true in general: applying a filter to a signal means forming the convolution of the two. You will find that many numerical software packages provide a convolution operation as a built-in function, making filter operations particularly convenient to use.
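A direct (if inefficient) implementation of this sum makes the connection between filters and convolution explicit. The function below is a hand-rolled stand-in for numpy.convolve, written for clarity rather than speed:

```python
def convolve(w, x):
    # Full discrete convolution y = w * x, with y[n] = sum_k w[k] * x[n-k].
    y = [0.0] * (len(w) + len(x) - 1)
    for n in range(len(y)):
        for k in range(len(w)):
            if 0 <= n - k < len(x):
                y[n] += w[k] * x[n - k]
    return y
```

Applied with the filter [1, –1], the interior of the result reproduces the differenced series; a filter of three equal weights reproduces the 3-point moving average. The elements at either end of the result are exactly the edge effects discussed next.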
I must warn you, however, that the entire machinery of digital signal processing is geared toward signals of infinite (or almost infinite) length, which makes good sense for typical electrical signals (such as the output from a microphone or a radio receiver). But for the rather short time series that we are likely to deal with, we need to pay close attention to a variety of edge effects. For example, if we apply a smoothing or differencing filter, then the resulting series will be shorter, by half the filter length, than the original series. If we now want to subtract the smoothed from the original signal, the operation will fail because the two signals are not of equal length. We therefore must either pad the smoothed signal or truncate the original one. The constant need to worry about padding and proper alignment detracts significantly from the conceptual beauty of the signal-theoretic approach when used with time series of relatively short duration.
The scipy.signal package provides functions and operations for digital signal processing that we can use to good effect to perform calculations for time-series analysis. The scipy.signal package makes use of the signal processing terminology introduced in the previous section.
The listing that follows shows all the commands used to create graphs like Figure 4-5 and Figure 4-10, including the commands required to write the results to file. The code is heavily commented and should be easy to understand.
from scipy import *
from scipy.signal import *
from matplotlib.pyplot import *

filename = 'callcenter'

# Read data from a text file, retaining only the third column.
# (Column indexes start at 0.)
# The default delimiter is any whitespace.
data = loadtxt( filename, comments='#', delimiter=None, usecols=(2,) )

# The number of points in the time series. We will need it later.
n = data.shape[0]

# Finding a smoothed version of the time series:
# 1) Construct a 31-point Gaussian filter with standard deviation = 4
filt = gaussian( 31, 4 )

# 2) Normalize the filter through dividing by the sum of its elements
filt /= sum( filt )

# 3) Pad data on both sides with half the filter length of the last value
#    (The function ones(k) returns a vector of length k, with all elements 1.)
padded = concatenate( (data[0]*ones(31//2), data, data[n-1]*ones(31//2)) )

# 4) Convolve the data with the filter. See text for the meaning of "mode".
smooth = convolve( padded, filt, mode='valid' )

# Plot the raw data together with the smoothed data:
# 1) Create a figure, sized to 7x5 inches
figure( 1, figsize=( 7, 5 ) )

# 2) Plot the raw data in red
plot( data, 'r' )

# 3) Plot the smoothed data in blue
plot( smooth, 'b' )

# 4) Save the figure to file
savefig( filename + "_smooth.png" )

# 5) Clear the figure
clf()

# Calculate the autocorrelation function:
# 1) Subtract the mean
tmp = data - mean(data)

# 2) Pad one copy of data on the right with zeros, then form correlation fct
#    The function zeros_like(v) creates a vector with the same dimensions
#    as the input vector v but with all elements zero.
corr = correlate( tmp, concatenate( (tmp, zeros_like(tmp)) ), mode='valid' )

# 3) Retain only some of the elements
corr = corr[:500]

# 4) Normalize by dividing by the first element
corr /= corr[0]

# Plot the correlation function:
figure( 2, figsize=( 7, 5 ) )
plot( corr )
savefig( filename + "_corr.png" )
clf()
The package provides the Gaussian filter as well as many others. The filters are not normalized, but this is easy enough to accomplish.
More attention needs to be paid to the appropriate padding and truncating. For example, when forming the smoothed version of the data, I pad the data on both sides by half the filter length to ensure that the smoothed data has the same length as the original set. The mode argument to the convolve() and correlate() functions determines which pieces of the resulting vector to retain. Several modes are possible. With mode="same", the returned vector has as many elements as the largest input vector (in our case, as the padded data vector), but the elements closest to the ends would be corrupted by the padded values. In the listing, I therefore use mode="valid", which retains only those elements that have full overlap between the data and the filter—in effect, removing the elements added in the padding step.
Notice how the signal processing machinery leads in this application to very compact code. Once you strip out the comments and plotting commands, there are only about 10 lines of code that perform actual operations and calculations. However, we had to pad all data carefully and ensure that we kept only those pieces of the result that were least contaminated by the padding.
The Analysis of Time Series. Chris Chatfield. 6th ed., Chapman & Hall. 2003.
This is my preferred text on time-series analysis. It combines a thoroughly practical approach with mathematical depth and a healthy preference for the simple over the obscure. Highly recommended.
[8] This data is available in the “airpass.dat” data set from R. J. Hyndman’s Time Series Data Library at.
14 September 2012 05:00 [Source: ICIS news]
SINGAPORE (ICIS)--Here is Friday’s midday Asia markets summary.
CRUDE: WTI Oct $99.08/bbl, up 77 cents; BRENT Nov $116.55/bbl, up 67 cents
Crude futures rose in early Asian trade on the back of speculation that the latest stimulus measures by the US Federal Reserve will boost demand for oil.
NAPHTHA: $1,000.50-1,003.50/tonne CFR
Open-spec prices for the second-half October contracts rose in the morning in tandem with crude futures.
BENZENE: $1,245-1,255/tonne FOB
Discussions for prompt cargoes were limited. Second-half October-loading lots were offered at $1,230-1,235/tonne FOB Korea, while November-loading lots were offered at $1,225-1,240/tonne FOB Korea against bids at $1,205-1,215/tonne FOB Korea.
TOLUENE: $1,185-1,200/tonne FOB
Prices firmed at the low end of the range because of higher bids and offers in the market. Bids for November-loading lots were at $1,185/tonne FOB
ETHYLENE: $1,270-1,320/tonne CFR NE Asia, stable
A deal for an end-September loading 5,000 to 6,000 tonne cargo bound for Europe was heard concluded at $1,250/tonne FOB
PROPYLENE: $1,380-1,400/tonne CFR NE Asia, stable
Selling ideas were at around $1,400/tonne CFR NE Asia, while buying ideas were at $1,380-1,390/tonne CFR NE Asia
This example shows how to use a Try and Catch block to catch exceptions.
This example shows how to use a Try…Catch block to catch an OverflowException.
This code example is also available as an IntelliSense code snippet. In the code snippet picker, it is located in Visual Basic Language. For more information, see How to: Insert Snippets Into Your Code (Visual Basic).
Dim Top As Double = 5
Dim Bottom As Double = 0
Dim Result As Integer
Try
Result = CType(Top / Bottom, Integer)
Catch Exc As System.OverflowException
MsgBox("Attempt to divide by zero resulted in overflow")
End Try
This example requires:
A reference to the System namespace.
The following code example implements a Try...Catch block that handles Exception, IOException, and all the exceptions that derive from IOException.
Try
' Add code for your I/O task here.
Catch dirNotFound As System.IO.DirectoryNotFoundException
Throw dirNotFound
Catch fileNotFound As System.IO.FileNotFoundException
Throw fileNotFound
Catch pathTooLong As System.IO.PathTooLongException
Throw pathTooLong
Catch ioEx As System.IO.IOException
Throw ioEx
Catch security As System.Security.SecurityException
Throw security
Catch ex As Exception
Throw ex
Finally
' Dispose of any resources you used or opened in the Try block.
End Try
Add the code you want to execute to the Try block.
i am confused here on what to write can some 1 help out here - Java Beginners
i am confused here on what to write can some 1 help out here i don't quite understand how to code it so can some one help out
Thread Memory Usage in java - Java Beginners
++ program's memeory usage?
In other words,i want to compute a C++_program's memory usage in java,
Can I do...://
but,if I use
java bit
java bit what is java bit
Bitwise and Bit Shift Operators
In Java the bitwise and bit shift operators are used..., if the value of the sign bit is 1.
Now lets understand these operators in brief....,edit,delete. I saw Session id is loaded for
edit and delete action not for add .so am asking this Of course u can perform any operation without
confused about an error in my web application deploying to Tomcat - Java Server Faces Questions
confused about an error in my web application deploying to Tomcat ... with it:
I deploy... in production environments was not found on the java.library.path: C:\Program Files\Java
:( I am not getting Problem (RMI)
I am not getting Problem (RMI) When i am excuting RMI EXAMPLE 3,2
I am getting error daying nested exception and Connect Exception
Spring Usage - Spring
Spring Usage Hi
This is Chandra Mohan,
I want to work with Spring.
SO, i want to working with spring, what are the API's required i.e. how to set the classpath,jar files and Structure of the Spring project.
Please
Java Get Memory Usage
...;How to find the Memory Usage in Java?":
Java Code to Get Memory...
For more details on memory usage, see the article on Memory
Usage in Java
Java Array Usage
... in an array
for (int i = 0; i < months.length; i++ ) {
    System.out.println("month: " + months[i]);
}
Here, we have taken
java - Java Beginners
java im comfortable with c and c++ language concept. i also know core java little bit but now i m confused as i don't know from where shud i start... SHOULD I WALK?".
If ur comfortable with C++(core java) then its too easy
Usage of setDate() in prepared Statement - JDBC
Usage of setDate in prepared Statement Hi, I have created a jsp...() of prepared statement,the following error is displayed: setDate() not available in prepared statement. Basically, I need to accept the date dynamically unable to identify the error in my code
i am unable to identify the error in my code class Program
{
public static void main(String[] args)
{
BufferedReader br=new BufferedReader(new InputStreamReader(System.in));
System.out.println("enter
Java Syntax - Java Beginners
Java Syntax Hi!
I need a bit of help on this...
Can anyone tell... have read about arrayList but i am trying to see if i could implement something...://
Thanks
I wonder - Java Beginners
I wonder Write two separate Java?s class definition where the first one is a class Health Convertor which has at least four main members:
i. Attribute weight
ii. Attribute height
iii. A method to determine number
Not sure what I am missing ? Any ideas?
Not sure what I am missing ? Any ideas? import java.util.*;
public...)
{
if(str.length()==0)
return false;
for(int i=0;i<str.length();i++)
if (c==str.charAt(i));
return true
Beginners in Java
Beginners in Java Hi, I am beginners in Java, can someone help me... tutorials for beginners in Java with example?
Thanks.
Hi, want to be command over Java, you should go on the link and follow the various beginners
Inheritance Overloading - Java Beginners
();
}
}
But the output is:
1
2
7
Can anyone tell me, why the output is this.
I am...:// Overloading The problem is:
I have this program
doubt in inheritance program - Java Beginners
doubt in inheritance program how will we get the result 6
2 5 in the inheritance program in the given example i got 6 &2 but i am confused about 5
Java I/O - Java Beginners
Creating Directory Java I/O Hi, I wanted to know how to create a directory in Java I/O? Hi, Creating directory with the help of Java Program is not that difficult, go through the given link for Java Example Codehttp
Get Usage Memory Example
Get Usage Memory Example
... in understanding Get Usage
Memory. The class Memory Usage include the main method... ( ) - This method return you
the total amount of memory that your java
javascript time with am pm
javascript time with am pm How can I display current time with AM... in am and pm</title>
<script type="text/javascript">
var todayDate=new... seconds=todayDate.getSeconds();
var format ="AM";
if(hours>11)
{format="PM
Java for beginners
Java for beginners Java for beginners
Which is the best resource... through examples?
As a newbie Java developer, I just learned C++ and HTML. Now I am looking for the easy to learn Java tutorials. If you can suggest me the best
Java Syntax - Java Beginners
Java Syntax Hi!
I need a bit of help on this...
Can anyone tell... have read about arrayList but i am trying to see if i could implement something.../java/beginners/array_list_demo.shtml
Thanks
file i/o - Java Beginners
file i/o hiii,
i have to read some integers from a text file and store them in link list..
so please can i have source code for it.??
thanks
File I/O - Java Beginners
File I/O How to search for a specific word in a text file in java? Hi Arnab,
Please check the following code.
=====================
import java.io.File;
import java.io.BufferedReader;
import
Java - Java Beginners
Java public void run (IAction action)
{
//I need code here ...using Eclipse IDE
}
1.Note if I am clicking Button i want to open
Excel Report
2.Excel Report Location : C:\RSVR2\Audit\output\usage report 20081024
This is what i need - Java Beginners
This is what i need Implement a standalone procedure to read in a file containing words and white space and produce a compressed version of the file....
for this question i need just :
one function can read string like (I like
Good tutorials for beginners in Java
Good tutorials for beginners in Java Hi, I am beginners in Java... in details about good tutorials for beginners in Java with example?
Thanks.
... the various beginners tutorials related to Java
what is bit
what is bit what is bit
Connection pooling - Java Beginners
an application and I am now trying to implement the same through connection pooling. I... the application, I am getting the following error:
Package pack does not exist... am very much confused.. Plz provide me the solution asap....
Regards:34:39");
long l=ts2.getTime() +(100060602424);//add 24 days
Date d=new Date(l
i am inserting an image into database but it is showing relative path not absolute path
i am inserting an image into database but it is showing relative path not absolute path hi my first page.........
<html>
<head>...)
{
System.out.println(e);
}
%>
</body>
</html>
when i compiled it i
this is my javascript code and i am not understanding the mistake in this,please help me?
this is my javascript code and i am not understanding the mistake in this,please help me? <html>
<h2>Form Validation</h2>
<script language = "Javascript">
function checkEmail
how to validate javascriptcode n i am attaching file give validations
how to validate javascriptcode n i am attaching file give validations <%@page import="java.sql.SQLException"%>
<%@page import="com.rajsoft.CAF.util.DBconnection"%>
<%@page import="java.sql.Statement"%>
<
Java program - Java Beginners
Java program Dear maam/Sir,
I am a 2nd year Computer... Sign is: Virgo
Your Element is: Earth
Your Chinese Zodiac is: RAM
I am just confused on what commands should I use and at the same time, I am
how to validate javascriptcode n i am attaching file give validations
how to validate javascriptcode n i am attaching file give validations ...!=null)
{
int i=0;
if(i<...=(RoleVO) v1.get(i);
System.out.println("CAFHUB IS====>
java downloads - Java Beginners
java downloads hi friends,
i would like to download java1.5 .so...,
I am sending you a link. This like will help you.
Please visit for more information.
using java for cross platform application development - Java Beginners
java.But i am confused, java with which platform i have to use?Linux or windows?and anybody have idea how am i going to start writing the code?
please help me...using java for cross platform application development hi,
i
I/O Java
System.out.println(" Error in Concat:"+e);
}
}
}
I am not really sure why...I/O Java import java.io.File;
import java.io.FileNotFoundException...(File file) {
//if the file extension is .txt or .java return true, else
Tutorial For Java beginners
Tutorial For Java beginners I am beginners in Java. Is there any good tutorial and example for beginners in Java? I want to learn Java before Jan... of December 2013.
Check the Java Beginners Java Tutorial section for tutorials
Parameter month I Report - Java Beginners
Parameter month I Report hy,
I want to ask about how to make parameter in I Report, parameter is month from date. How to view data from i report... like Java/JSP/Servlet/JSF/Struts etc ...
Thanks
Java compilation error - Java Beginners
Java compilation error Hello,
i am getting an error while running simple core java program on command prompt.java is installed on my pc. the error is:
javac:file not found
usage:javac
Regards,
jyoti prakash
java beginners - Java Beginners
java beginners thanks for the suggestion
so I am sending the patteren in place of dots i want blank space
aaaaaaaaaaaaaaaa...{
public static void main(String[] args){
for(int i=1;i<=8;i
I need your help - Java Beginners
I need your help What is the purpose of garbage collection in Java, and when is it used? Hi check out this url :
Parameter month I Report - Java Beginners
Parameter month I Report ok, my problem now is in Report in java with I report. I want to give parameter month/year to my design in I Report... to I Report design.
Thank's Hi friend,
Code to help in solving
odject value - Java Beginners
odject value hello friends
i have one doubt on my coding.am posting my code here.
i want to print the value of object.But am confused...);
System.out.println(a.toString());
}
}
i got one output like: test@3e25a5.what
recursions - Java Beginners
recursions Can somebody help me with these four questions. Please. i am learning recursions in Java.
Question1. Trying to find the value...(6,8)? and little bit of working for me to understand
question3.
i need
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://roseindia.net/tutorialhelp/comment/9650 | CC-MAIN-2016-07 | refinedweb | 1,829 | 56.76 |
I generally subscribe to the attitude that premature optimizations are evil, but I strongly believe that a robust caching strategy should evolve alongside the rest of the system. Waiting too long makes it hard to cleanly and thoughtfully add caching. Besides, in my experience, a considered caching strategy generally means I worry less about performance in other areas - especially data access and data modelling. In other words, I can build those complex parts for maintainability, as opposed to having to worry about the cost of each individual query.
.NET developers are pretty cache-savvy - thanks largely in part to the powerful System.Web.Caching namespace and ASP.NET's simple to use OutputCaching capabilities. For that reason, and the fact that it tends to be very application specific, I don't want to go over how to decide what to cache, how to deal with synch issues, updates and so on. Instead, I specifically want to talk about Memcached.
System.Web.Caching
You're probably already familiar with Memcached - it's a highly efficient distributed caching system. It's used generously by all the big web 2.0 players (In may 2007 it was revealed that Facebook relies on 200 16GB quad-core dedicated Memcached servers). Interest in Memcached from the .NET community has been relatively low (although over the last year more and more people are talking about it). Frankly, if you're doing anything that requires horizontal scaling you're seriously shooting yourself in the foot by overlooking it. It runs on windows - although we run it on Linux and there's really no reason for you not to learn that too!
Fundamentally, there are two problems with the built-in cache. First, it's limited to the memory of a single system which happens to be shared with the rest of your application domain. Secondly, if you have two servers, each with their own in-memory cache, users are likely to see very weird synching issues. Memcached isn't as fast as in-memory caching, but will scale to virtually unlimited amount of memory. There isn't any redundancy of failover, simply memory spread across multiple servers.
The best part is that it literally takes seconds to get it up and running. First, download a windows build onto your development machine here. (look for the win32 binary of memcached). Unzip the package somewhere, I put mine in c:\program files\memcached\. Next, from the command line, run memcached -d install. This will install memcached as a service. You can run memcached -h for more command lines options. You'll need to start the service (I also changed my startup type to manual, but that's completely up to you).
c:\program files\memcached\
memcached -d install
memcached -h
The next step is to install the client library. I use suggest Enyim Memcached from CodePlex. The project comes with a sample configuration file, which you should be able to easily incorporate into your web.config or app.config. While developing, only put one server 127.0.0.1 on port 11211 (which is the default). You also need to add a reference to the two dlls.
Aside from that, you basically program against a simple API. You create an instance of MemcachedClient (it's thread-safe so you can use a singleton, or re-create it since it's inexpensive to create), and call Store, Get or Remove (or a few other useful methods) like you would the normal cache object. As I've blogged about before (here and here), I'm a fan of hiding all of this behind an interface to ease mocking and swapping.
MemcachedClient
Store
Get
Remove
Here's an example:
MemcachedClient client = new MemcachedClient();
client.Store(StoreMode.Set, "Startup", DateTime.Now, DateTime.Now.AddMinutes(20));
DateTime startup = client.Get<DateTime>("Startup");
client.Remove("Startup");
[Advertisement]
Pingback from Reflective Perspective - Chris Alcock » The Morning Brew #131
Definitely, memcached is the standard for high traffic websites which need to scale out. We're using it very successfully on Windows with the client developed by Enyim. It's fast and reliable.
Pingback from Dew Drop - July 8, 2008 | Alvin Ashcraft's Morning Dew
running this on 64-bit CentOS servers and using Enyim client rocks. we run 4 servers with 16G of cache each. Cacti has a nice monitoring plugin as well so you can see whats happening inside the cache as far as total items, evictions,etc.
Carl,
Check our Cacheonix if you want to scale reliably and expensively :)
Cacheonix is in beta right now and we give a free license for each new bug you will find.
You've been kicked (a good thing) - Trackback from DotNetKicks.com
Another free option if you don't want to rely on open source software (and there are many reasons why you might not such as the lack of any support contract) then you can just re-host the ASP.NET cache in a windows service as a distributed cache. It'll take you about 30 minutes to set up:
gregbeech.com/.../how-to-write-a-high-performance-distributed-cache-in-30-minutes.aspx
We run off a (not very) enhanced variant of the cache in this article.
Hi there,
I was wondering if you could give an example of how you would achieve what the sqldependency would do. For example have a DB trigger that will flush out the cache or something?
Regards DotnetShadow
Shadow:
I don't think you can. But I must say, the SqlDepedency thing has always been at-odds with DDD in my mind. I agree it's convenient and powerful, and possibly even the right approach to some applications (such as reporting).
However, rather than letting a change in data be the trigger for clearing the cache, I really think this ought to be caused by a given behaviour. In other words, it shouldn't be because a column within the UserAddresses table was changed that the cache is clear, but rather because user.Save() was called.
This is traditionally how other frameworks address dependency issues. They provide callbacks, such as AfterSave which allow you to erase the cache (or, like Rails, they provide Sweepers which Observe domain objects for changes, which provide greater abstraction).
The other nice thing about this, is that you've taken direct control over your caching strategy and aren't tied to database provider or a caching provider.
It will be interesting to see if Microsoft's memcached (Velocity) supports it though...
Programming links 07.11.2008
Pingback from Scale Cheaply - Memcached « vincenthome | http://codebetter.com/blogs/karlseguin/archive/2008/07/07/scale-cheaply-memcached.aspx | crawl-002 | refinedweb | 1,097 | 64.81 |
Stefan Hufnagl (hufnagl@de.ibm.com), Senior Consultant, IBM
26 Jul 2005
See the process of recording and running a simple functional test on a ClearQuest application. An example scenario is used to illustrate the concepts presented in this getting started article.
Editor's Note: This article was written using IBM® Rational® ClearQuest 2003 and IBM® Rational® Functional Tester 6.1 and 6.0.
Overview
IBM Rational ClearQuest is a highly configurable and customisable tool for change request management. With this tool, you can easily develop databases applications that are focused on special tasks. Developing ClearQuest applications depends heavily on the graphical user interface (GUI), and this is where automated testing can help. You can develop test scripts for common ClearQuest tasks, such as submit and state change. Modifying records testing and analyzing the results of the tests then becomes easy. An additional benefit of testing is that the ClearQuest developer can use the test scripts for presenting the application to a customer.
Because ClearQuest can be used with many different interfaces, it is necessary for developed applications to undergo extensive testing. Although the ClearQuest application should be tested in all GUIs (Windows, Unix, Web, Eclipse), this article is focused on testing in only two environments: the new ClearQuest Web interface and the native Windows client. Using two separate examples, this article describes how to automate the testing in these two environments using IBM Rational Functional Tester. We chose to use Rational Functional Tester because it seems to more easily recognize GUI objects than Rational Robot. The same is true for the native Windows client.
Prerequisites
To perform the tests in the examples, you must complete the following steps:
Acceptance Tests with Rational Functional Tester and the Rational ClearQuest Web Interface
This example shows how to create an acceptance test using Rational Functional Tester and the Rational ClearCase Web Interface. To prepare your machine for setting up the test script, complete the following steps:
Prepare Functional Tester to test the application
Start ClearQuest Web and record your first script
Instead of recording one large script that covers every activity, it is wiser to build several smaller scripts (modular). This means that you will create a separate script for login, log off, submit defects, queries and so on. In this first script you will record only the login procedure.
Playback of the recorded script
To play back the recorded script, close the ClearQuest Web window.
Extended script usage
For ease of maintenance and scalability, it would be reasonable to use several scripts. You could create a separate script for each of the following:
But how do you control and use the modular scripts? You can use a main script and insert the necessary scripts. To insert the scripts into the main script, review the following figure:
The result should look like this:
public class MainScript extends MainScriptHelper
{
/**
* Script Name : MainScript
* Generated : Mar 22, 2005 8:47:22 PM
* Description : Functional Test Script
* Original Host : WinNT Version 5.0 Build 2195 (S)
*
* @since 2005/03/22
* @author Administrator
*/
public void testMain(Object[] args)
{
callScript("CoreWorkflow.Login");
callScript("CoreWorkflow.Submit");
callScript("CoreWorkflow.LogOff");
}
}
Using Verification Points with Regular Expressions
Verification Points are very important, without them your test scripts do not have meaningful results. The next example uses a verification point with Regular Expressions. With the help of regular expressions, verification points turn from static to dynamic.
Before starting the ClearQuest example, an explanation of Regular Expression is needed. A Regular Expression is a formula for matching strings that adhere to a particular pattern. Regular Expressions are very powerful when used in Rational Functional Tester, but can sometimes be hard to understand. For more information about Regular Expressions, visit "A Tao of Regular Expressions."
After submitting a defect in Rational ClearQuest, you should receive a confirmation similar to the following figure. You could use this confirmation as a Verification Point, but you should be careful. The next time you replay the script the confirmation looks the same, except for the defect number. It is here that a Regular Expression takes place.
How do you verify this ClearQuest result message in Rational Functional Tester?
Functional Tests with the native ClearQuest Windows Client
Working with the native ClearQuest Windows Client is very similar to working with the ClearQuest Web interface. For this reason, we only mention the differences in this example. Please note: You can't use Rational Functional Tester with native ClearQuest without the .Net Runtime.
Follow these steps to prepare Functional Tester to test ClearQuest:
Summary
Despite the rumors that it is impossible to use Rational Functional Tester for testing the GUI of Rational tools like ClearQuest, the examples in this article show how simple it is. The examples in the article showed you how to:
These examples only begin to show you what Functional Tester can do. I plan to write a follow-up article that shows the power of the object map and takes a closer look at Rational Functional Tester Java Code.
Hopefully, you now have the skills to try Rational Functional Tester on your own ClearQuest application code, in your environment. Best of luck!
Acknowledgements
Thanks to André Kofaldt for all the technical support he gave me, and to Dr. Sternkicker for reviewing and providing thoughts on the paper.
Resources
About the author
Stefan Hufnagl is currently a Senior Consultant in the Rational Software Group at IBM, supporting the ClearCase, ClearQuest, and Functional Tester product lines. He has experience in the areas of Configuration and Change management, technical marketing, and software testing. He has been working with the Rational tools since 1998.
Rate this page
Please take a moment to complete this form to help us better serve you.
Did the information help you to achieve your goal?
Please provide us with comments to help improve this page:
How useful is the information? | http://www.ibm.com/developerworks/rational/library/05/726_test/index.html | crawl-002 | refinedweb | 974 | 53.51 |
Last.
Secondly,, eclipse has some major problems on karmic due to some changes. The fault is with eclipse, but it will be some time before any fixes work through to the eclipse based products I need to work with, so I downgraded.
Planet maemo: category "feed:85141068e640087e3494790d59181094"
Last.
I wrote a couple of weeks ago a little teaser about writing a custom cell renderer.
Once I got witter to the point that it had multiple views, I immediately wanted to have a nice way to switch between those views. In the first instance I just used buttons which have the advantage of being able to go direct to the view you want, but at the cost of screen space to show the button. Or alternatively needing to go via menus to get to the buttons.
Enter ‘gestures’ I wanted to be able to swipe left or right to switch views, much like on the multi-desktop of the N900. So I did some searching and eventually found reference to gPodder which is also written in python and introduced swipe gestures.
So i dug around the source and found that essentially they capture the position of a ‘pressed’ event and the position of the ‘released’ event and calculate the difference. If it’s over a certain threshold left or right then they trigger the appropriate method.
This seemed reasonable enough, but I couldn’t figure out what object was emitting those signals. As I looked into it I found something better.
The hildon pannableArea emits signals for horizontal scrolling and vertical scrolling. And it does so regardless of whether it will actually scroll.
What this means is that for witter, I use a pannableArea to do kinetic scrolling of the treeview which shows the tweets. There is no horizontal movement, but I can use the following:
pannedWindow.connect('horizontal-movement', self.gesture)
Then in the method gesture I get:
def gesture(self, widget, direction, startx, starty): if (direction == 3): #Go one way if (direction == 2): #Go rthe other
those numbers do have constants associated, but I haven’t figured out where I am supposed to reference them from, so I’m just using the numbers.
The cool thing about this is that it is quite selective about what constitutes horizontal movement. Going diagonally left and up or down does NOT trigger this signal.
So it’s a pretty nice way to switch between views. Now I need to figure out how to do the cool dragging of views like the desktop, rather than just a straight flip of views.
Posted in maemo, project, SoftwareEngineering Tagged: gestures, hildon, N900, pannablearea, Python, swipe, witter
As a teaser to a future post I thought I’d post an early screenshot of Witter using a custom cell renderer. This is about the first point at which my cell renderer is actually capable of showing tweets at all.
It completely lacks any layout of information, or colouring/sizing of text. But I wanted to put it up to a) contrast with when I’m done, and b) show that it took me nearly 200 lines of code, just to get this far…
Posted in maemo, project, SoftwareEngineering Tagged: custom cellrenderer, gtk, N900, treeview, witter
| http://maemo.org/news/planet-maemo/category/feed:85141068e640087e3494790d59181094/?org_openpsa_qbpager_net_nehmer_blog_index_page=2 | CC-MAIN-2017-09 | refinedweb | 538 | 67.99 |
Opened 8 years ago
Closed 8 years ago
#3746 closed (wontfix)
Add a wildcard object for use in objects.filter queries.
Description
Advantage: Greatly simplify a common usage with no breakage of old code, while making code more pythonic.
There should be a wildcard object to pass with keyword params to filter(), such that any keyword param is ignored which equals this wildcard object. This should not break any existing code.
Examples:
def people_report(request, kw):
for key in ('name', 'occupation'):
query_params = {}
if if key in kw and kw[key] != None:
query_params[key] = kw[key]
people = Person.objects.filter(query_params)
...
#COMPARED TO
def people_report(request, name=WILDCARD, occupation=WILDCARD):
people = Person.objects.filter(name=name, occupation=occupation)
...
#Implementation:
change django.db.models.manager.filter from:
def filter(self, *args, kwargs):
return self.get_query_set().filter(*args, kwargs)
TO:
def filter(self, *args, kwargs):
for key in kwargs:
if kwargs[key] == django.db.models.manager.WILDCARD:
del kwargs[key]
return self.get_query_set().filter(*args, kwargs)
I don't think this is worth it. It's only one extra line of code (using a list comprehension) to do the check by hand. Any "wildcard" value we pick can then not be used as a value for that field.
Accepted for a Design Decision.
& for readability, here's the code provided by Justin wikified:
Proposed patch: | https://code.djangoproject.com/ticket/3746 | CC-MAIN-2015-27 | refinedweb | 222 | 51.24 |
Use Dart 2 constants.
Remove upper case constants (#4) * Remove usage of upper-case constants. * update SDK version * remove stable from Travis config
Middleware for the
http package that transparently retries failing requests.
To use this, just create an
RetryClient that wraps the underlying
http.Client:
import 'package:http/http.dart' as http; import 'package:http_retry/http_retry.dart'; main() async { var client = new RetryClient(new http.Client()); print(await client.read("")); await client.close(); }
By default, this retries any request whose response has status code 503 Temporary Failure up to three retries. It waits 500ms before the first retry, and increases the delay by 1.5x each time. All of this can be customized using the
new RetryClient() constructor. | https://dart.googlesource.com/http_retry/+/refs/tags/0.1.1+1 | CC-MAIN-2021-04 | refinedweb | 119 | 63.25 |
/*
* MethodContactContact</code> object is acts as a contact that
* can set and get data to and from an object using methods. This
* requires a get method and a set method that share the same class
* type for the return and parameter respectively.
* @author Niall Gallagher
* @see org.simpleframework.xml.core.MethodScanner
*/
class MethodContact implements Contact {
/**
* This is the label that marks both the set and get methods.
*/
private Annotation label;
* This is the set method which is used to set the value.
*/
private MethodPart set;
* This is the get method which is used to get the value.
*/
private MethodPart get;
* This is the dependent types as taken from the get method.
private Class[] items;
* This represents the declaring class for this method.
private Class owner;
* This is the dependent type as taken from the get method.
private Class item;
* This is the type associated with this point of contact.
private Class type;
* This represents the name of the method for this contact.
private String name;
* Constructor for the <code>MethodContact</code> object. This is
* used to compose a point of contact that makes use of a get and
* set method on a class. The specified methods will be invoked
* during the serialization process to get and set values.
*
* @param get this forms the get method for the object
public MethodContact(MethodPart get) {
this(get, null);
}
* @param set this forms the get method for the object
public MethodContact(MethodPart get, MethodPart set) {
this.owner = get.getDeclaringClass();
this.label = get.getAnnotation();
this.items = get.getDependents();
this.item = get.getDependent();
this.type = get.getType();
this.name = get.getName();
this.set = set;
this.get = get;
}
* This is used to determine if the annotated contact is for a
* read only variable. A read only variable is a field that
* can be set from within the constructor such as a blank final
* variable. It can also be a method with no set counterpart.
*
* @return this returns true if the contact is a constant one
public boolean isReadOnly() {
return set == null;
* This returns the get part of the method. Acquiring separate
* parts of the method ensures that method parts can be inherited
* easily between types as overriding either part of a property.
* @return this returns the get part of the method contact
public MethodPart getRead() {
return get;
* This returns the set part of the method. Acquiring separate
* @return this returns the set part of the method contact
public MethodPart getWrite() {
return set;
*) {
T result = get.getAnnotation(type);
if(type == label.annotationType()) {
return (T) label;
}
if(result == null && set != null) {
return set.getAnnotation(type);
return result;
* type;
* item;
* This provides the dependent classes for the contact. This will
* typically represent a generic types for the actual type. For
* contacts that use a <code>Map</code> type this will be the
* generic type parameter for that map type declaration.
public Class[] getDependents() {
return items;
}
* This is the class that declares the contact. The declaring
* class is where the method represented has been defined. This
* will typically be a class rather than an interface.
* @return this returns the class the contact is declared within
public Class getDeclaringClass() {
return owner;
* This is used to acquire the name of the method. This returns
* the name of the method without the get, set or is prefix that
* represents the Java Bean method type. Also this decapitalizes
* the resulting name. The result is used to represent the XML
* attribute of element within the class schema represented.
* @return this returns the name of the method represented
public String getName() {
return name;
}
*{
Method method = get.getMethod();
Class type = method.getDeclaringClass();
if(set == null) {
throw new MethodException("Property '%s' is read only in %s", name, type);
set.getMethod().invoke get.getMethod().invoke(source);
* This is used to describe the contact as it exists within the
* owning class. It is used to provide error messages that can
* be used to debug issues that occur when processing a contact.
* The string provided contains both the set and get methods.
* @return this returns a string representation of the contact
public String toString() {
return String.format("method '%s'", name);
} | http://simple.sourceforge.net/download/stream/report/cobertura/org.simpleframework.xml.core.MethodContact.html | CC-MAIN-2018-05 | refinedweb | 678 | 56.86 |
Hi ruby-core. Here I am again, with a new revision of my monotonic patch. I
still hope that one day it will be integrated :)

= Problem

This is a subtle problem that mostly affects long-running processes. If the
computer's system clock is set back by a certain delta, the scheduled threads
will wait for that delta + their scheduled time before being activated again.
This is due to the fact that they use the epoch-based gettimeofday() time
source.

= Solution

The general solution is to use a monotonic clock, because monotonicity is the
guarantee that you won't go back in time. Those clocks are not epoch-based,
but that doesn't matter because thread scheduling only uses time deltas.

The implemented solution uses the POSIX clock_gettime() function, which has a
CLOCK_MONOTONIC flag. clock_gettime(), on most BSDs, should be in the libc.
On Linux, it is required to link ruby against librt.so, which is part of the
libc6 package. The Darwin (OS X) kernel does not implement the clock_gettime
function.

Since clock_gettime() doesn't guarantee that your system provides such a
clock, I also had to add an init function to check the clock availability,
plus the gettimeofday() fallback.
= Patch Index: eval.c =================================================================== --- eval.c (revision 16252) +++ eval.c (working copy) @@ -28,6 +28,12 @@ #define EXIT_FAILURE 1 #endif +#ifdef HAVE_UNISTD_H +#include <unistd.h> +#include <time.h> +#include <errno.h> +#endif + #include <stdio.h> #include "st.h" @@ -1340,6 +1346,7 @@ void Init_stack _((VALUE*)); void Init_heap _((void)); void Init_ext _((void)); +void Init_clock _((void)); #ifdef HAVE_NATIVETHREAD static rb_nativethread_t ruby_thid; @@ -1382,6 +1389,7 @@ rb_origenviron = environ; #endif + Init_clock(); Init_stack((void*)&state); Init_heap(); PUSH_SCOPE(); @@ -10245,13 +10253,32 @@ curr_thread->safe = level; } +static int clock_monotonic = 0; + +void +Init_clock() +{ +#ifdef CLOCK_MONOTONIC + struct timespec tp; + clock_monotonic = (clock_gettime(CLOCK_MONOTONIC, &tp) == 0); +#endif +} + /* Return the current time as a floating-point number */ static double timeofday() { - struct timeval tv; - gettimeofday(&tv, NULL); - return (double)tv.tv_sec + (double)tv.tv_usec * 1e-6; + if (clock_monotonic) { +#ifdef CLOCK_MONOTONIC + struct timespec tp; + clock_gettime(CLOCK_MONOTONIC, &tp); + return (double)tp.tv_sec + (double)tp.tv_nsec * 1e-9; +#endif + } else { + struct timeval tv; + gettimeofday(&tv, NULL); + return (double)tv.tv_sec + (double)tv.tv_usec * 1e-6; + } } #define STACK(addr) (th->stk_pos<(VALUE*)(addr) && (VALUE*)(addr)<th->stk_pos+th->stk_len) = QA Q: on what branch is your code based ? A: I used the ruby_1_8 branch Q: timeofday() might be used by some code that needs epoch-based time ? A: I have grep'ed trough the code but might have missed something Q: is there a similar solution for windows ? A: I don't know any Windows API that guarantees clock monotonicity. Q: why didn't you rename timeofday() ? A: It would hide the purpose of the patch. It is a good idea although. Q: how do I know if my ruby instance uses the monotonic clock ? 
(like I'm running a rails application on a vserver) A: use "strace" :) . I don't think there is a Scheduled class where I could publish that information.
on 01.05.2008 19:46
on 01.05.2008 20:27
. > Q: is there a similar solution for windows ? > A: I don't know any Windows API that guarantees clock monotonicity. The Windows API functions for monotonic time are GetTickCount and GetTickCount64. The latter is preferable since the former wraps around its 32-bit count every 50 days or so. -mental
on 01.05.2008 20:33
On Fri, 2 May 2008 03:26:53 +0900, MenTaLguY <mental@rydia.net> wrote: > The Windows API functions for monotonic time are GetTickCount and > GetTickCount64. The latter is preferable since the former wraps around > its 32-bit count every 50 days or so. (GetTickCount64 is specific to Windows Vista, though -- I wouldn't worry about Windows just yet, let's make sure that we have POSIX systems fixed first) -mental
on 01.05.2008 20:35
2008/5/1 MenTaLguY <mental@rydia.net>: >. Yes, this is also my belief. Another confirmation is that timeofday() has even moved in thread.c in ruby1.9 | http://www.ruby-forum.com/topic/151618 | crawl-001 | refinedweb | 677 | 67.04 |
Hi,
I am sorry if this issue has been addressed before, but otherwise I couldn't find it answered here or anywhere else (that might just mean my search skill isn't good enough, sorry again).
I have experienced problems with Semantic when working on generics in my Java program (at least this is the case in my program). If my class has generic type, for example,
public class Tree<V> {....
semantic-ia-fast-jump would complaint that it could not find suitable jump point even if I point to a legit method call (I find this a bit frustrating, since I use this feature a lot).
In JDEE class menu, I pretty much can't find anything for the Tree class.
But the above issues don't occur if Tree is not defined as generic class.
public class Tree{....
Does anyone have this issue? I am wondering if the Java parser isn't correctly parsing Java 1.5 syntax, but then looking at the parser code in wisent-java.wy, it claims to be able to parse correct Java 1.5. Though, it could be the problem with other part other than the parser, it there a quick fix for this issue? If not, I would hope the next release would solve the problem.
The cedet version I am using is the newest svn build, not the one that comes with emacs23.2.
Thanks!
--
___
K.S | http://sourceforge.net/p/cedet/mailman/attachment/AANLkTil3NXaAyY10y8q2nP25tHeHAtX09AD7Wrf3f_Ru@mail.gmail.com/1/ | CC-MAIN-2014-15 | refinedweb | 238 | 80.72 |
's so much more fun to develop when your database has real, interesting data. We do have a way to add some fake genuses into the database, but they're not very interesting. And when we need more dummy data - like users and genus notes - it's just not going to work well.
Nope - we can do better. I'm dreaming of a system where we can quickly re-populate our local database with a really rich set of fake data, or fixtures.
DoctrineFixturesBundle. This bundle is step 1 towards my dream. Copy the
composer require line and paste that into the terminal. But hold on! I also want to download something else:
nelmio/alice. That's just a normal PHP library, not a bundle. And it's going to make our fixtures amazing:
Tip
If you are on Symfony 3.2 or higher, you don't have to specify the DoctrineFixturesBundle version constraint
composer require --dev doctrine/doctrine-fixtures-bundle:2.3.0 nelmio/alice:2.1.4
Tip
Be sure to install version 2 of Alice, as version 3 has many changes:
$ composer require --dev nelmio/alice:2.1.4
Oh, and the
--dev flag isn't too important. It means that these lines will be added to the
require-dev section of
composer.json:
And that's meant for libraries that are only needed for development or to run tests.
When you deploy - if you care enough - you can tell composer to not download the libraries in this section. But frankly, I don't bother.
While Composer is communicating with the mothership, copy the
new bundle line and add it to
AppKernel. But put it in the section that's inside of the
dev
if statement:
This makes the bundle - and any services, commands, etc that it gives us - not available in the
prod environment. That's fine for us - this is a development tool - and it keeps the
prod environment a little smaller.
Anyways, this bundle gives us a new console command -
doctrine:fixtures:load. When we run that, it'll look for "fixture classes" and run them. And in those classes, we'll create dummy data.
Copy the example fixture class. In AppBundle, add a
DataFixtures/ORM directory. Then, add a new PHP class called - well, it doesn't matter - how about
LoadFixtures. Paste the example class we so aggressively stole from the docs and update its class name to be
LoadFixtures:
Clear out that
User code. We need to create Genuses.. and we have some perfectly good code in
newAction() we can steal to do that. Paste that it:
The
$manager argument passed to this function is the entity manager. Use it to persist
$genus and don't forget the
Genus
use statement. Oh, and only one namespace - whoops!
I know this is not very interesting yet - stay with me. To run this, head over to the terminal and run:
./bin/console doctrine:fixtures:load
This clears out the database and runs all of our fixture classes - we only have 1. Now, head back to the list page. Here is our one random genus. So it's kind of cool... but I know - totally underwhelming. Enter Alice: she makes fixtures fun again.
// } } | https://symfonycasts.com/screencast/symfony3-doctrine/dummy-data | CC-MAIN-2021-31 | refinedweb | 538 | 76.11 |
Ghost.exe – a generic host for OWIN applicationsGate, opensource, OWIN, programming, tech, web February 20th, 2012
Hello again! Now let’s talk about Ghost.exe.
As you may be aware – OWIN can be thought of as a port of Rack or WSGI specification to .NET, and Gate can be thought of as a reference implementation the of Rack utility and middleware library.
If you’ve used Rack then you’re probably familiar with the rackup executable. It provides a way to load a web site and run it on a web server that doesn’t provide it’s own executable. That is more or less the role Ghost.exe plays in the overall OWIN/Gate suite.
By now you’re thinking, “That’s awesome, stop typing and tell me how to get some of that!” and the easiest way is with Chocolatey – a nuget-based software distribution mechanism. The easiest way to get Chocolatey is from the VS Package Manager Console.
Step one: get Chocolatey
PM> Install-Package chocolatey Successfully installed 'chocolatey 0.9.8.14'. PM> Initialize-Chocolatey The repository is set up at 'C:\NuGet'. The packages themselves go to 'C:\NuGet\lib' (i.e. C:\NuGet\lib\yourPackageName). Run chocolatey /? for a list of functions. PM> Uninstall-Package chocolatey Successfully uninstalled 'chocolatey 0.9.8.14'.
The output text is heavily edited down – it does mention you may need to restart powershell for path changes to take effect. Actually – let’s leave VS and jump right to a new command prompt which should avoid that problem.
Step two: install ghost
C:\Users\lodejard>chocolatey install ghost ===================================================== Chocolatey (0.9.8.14) is installing ghost (from OR) to "C:\NuGet\lib" =====================================================
Step three: run ghost
This is using the OWIN web app we made in the last post.
C:\Users\lodejard>cd \Projects\Experiments\HelloEverything\HelloEverything C:\Projects\Experiments\HelloEverything\HelloEverything>ghost --server kayak Started at
Press Esc when you want to exit the Ghost.exe – but other than that you can browse to and see the quickstart web site we created in the last blog post in all its glory. This should work if you run Ghost with the current directory at the base path for a “web app” or “class library” project which (a) compiles it’s output to a bin directory, and (b) one of the assemblies has a public Startup class in that assembly’s base namespace.
You need to follow the (b) convention if you don’t name the Startup method on the command line. Otherwise you can start an alternate environment by providing that. In fact – let’s add simple trace logging to the Startup.Debug method. This can be edited right into the example file the quickstart added to the project.
public void Debug(IAppBuilder builder) { // added to trace some request values as they pass through builder.Use(ShowRequests); builder.UseShowExceptions(); Configuration(builder); } // simple user-middleware AppDelegate ShowRequests(AppDelegate app) { // this delegate is called per request return (env, result, fault) => { // use a light wrapper class to access env dictionary as properties var req = new Request(env); // trace out some info req.TraceOutput.WriteLine( "{0} {1}{2} {3}", req.Method, req.PathBase, req.Path, req.QueryString); // and then pass all request along app(env, result, fault); }; }
You can select that configuration by running it as follows. Let’s use the firefly server this time.
C:\...\HelloEverything>ghost --server firefly HelloEverything.Startup.Debug Started at GET / GET /wilson GET /wilson flip=crash
Also be sure to run
Ghost /? to check out the options as well. Currently supported http servers include the default HttpListener (the default), Kayak, and Firefly. Remember to set the url acls for your port if you’re going to be using HttpListener.
C:\Projects\Experiments\HelloEverything\HelloEverything>ghost /? Usage: Ghost [options] [<application>] Runs <application> on an http server Example: Ghost -p8080 HelloWorld.Startup Options: -s, --server=VALUE Load assembly named "Gate.Hosts.TYPE.dll" to determine http server to use. TYPE defaults to HttpListener. -u, --url=VALUE May be used to set --scheme, --host, --port, and --path options with a combined URIPREFIX value. Format is '<scheme>://<host>[:<port>]<path>/'. -S, --scheme=VALUE Determine which socket protocol server should bind with. SCHEME may be 'http' or 'https'. Defaults to 'http'. -h, --host=VALUE Which host name or IP address to listen on. NAME defaults to '+' for all IP addresses. -p, --port=VALUE Which TCP port to listen on. NUMBER defaults to 8080. -P, --path=VALUE Determines the virtual directory to run use as the base path for <application> requests. PATH must start with a '/'. -o, --output=VALUE Writes any errors and trace logging to FILE. Default is stderr. -v, --verbose Increase the output verbosity. -?, --help Show this message and exit. Environment Variables: PORT Changes the default TCP port to listen on when both --port and --url options are not provided. OWIN_SERVER Changes the default server TYPE to use when the --server option is not provided.
September 4th, 2012 at 9:50 pm
I’ve regularly.
June 18th, 2013 at 9:39 pm
October 17th, 2013 at 2:43 am
Hi to every one, it’s truly a fastidious for me to pay a visit
this site, it contains precious Information. | http://whereslou.com/2012/02/20/ghost-exe-a-generic-host-for-owin-applications | CC-MAIN-2013-48 | refinedweb | 869 | 58.48 |
Introduction: Brace.
Step 1: Objective
The creation of a device capable of sending a "S.O.S." message to the nearest security center to assure the safety of the people using a dragonboard 410c development card with a bluetooth connection and the usage of a http protocol.
Step 2: Materials
Dragonboard 410c development card
· Simple Lilypad Arduino
· Lilypad Coin Cell Battery Holder
· RN-42 BlueSMiRF Silver
· Push button
· Buzzer Grove
· Jumpers
· A Bracelet or Fabric and Threat
· 2025 Battery support
Step 3: Wiring Up
1.- Connect the push button, which has a pull-down array, to the arduino lilypad pin 5 as shown in image 1.
2.- Establish the bluetooth connection to the lilypad having in mind that the RX and TX pins of the bluetooth module must be connected in an inverse way to the ones in the lilypad.
3.- A supply of 3.3 V feeds the bluetooth module and the lilypad and this will be connected to the SR-2025 battery support.
4-. The only connected GPIO will be the GPIO A that is connected to pin 23 on the dragonboard.
Step 4: Programming
Used repositories:
Libsoc @ (Follow instructions on github page) Required packages Autoreconf: sudo apt-get install dh-autoreconf To handle GPIO on python: ./configure --enable-board=dragonboard410c --with-board-configs 96boardsGPIO @
/ GPIO names used with the 96boards GPIO library
Io-client-python @ To use Adafruit’s io platafform.
import time from Adafruit_IO import Client from gpio_96boards import GPIO from time import gmtime, strftime
GPIO_A = GPIO.gpio_id('GPIO_A') pins = ( (GPIO_A, 'out'), )
def blink(gpio): gpio.digital_write(GPIO_A,GPIO.HIGH) time.sleep(1) gpio.digital_write(GPIO_A,GPIO.LOW) time.sleep(1)
ADAFRUIT_IO_KEY = 'Your Private Key Goes Here'
aio = Client(ADAFRUIT_IO_KEY)
while True: try: data = aio.receive_next('Status') ts = strftime("%Y-%m-%d %H:%M:%S", gmtime()) print ts, ' ','Alert! {0}'.format(data.value) if data.value == 'Call for help': with GPIO(pins) as gpio: blink(gpio) blink(gpio) except Exception as e: pass
time.sleep(10)
Android App. Created using MIT App Inventor, uses HTTP protocol to post data to adafruit’s io API
Step 5: Conclusion
The proyect created will accomplish the main objetive, to create an environment of safety for the people in the city, making a "smart city". This proyect affects the way poeople live in cities entirely, creating safe zones so that everyone can feel sure at any given time. The proyect was created effectively in the given time.
Be the First to Share
Recommendations
2 years ago
Wow. This Instructable is underrated. | https://www.instructables.com/Brace-Yourself/ | CC-MAIN-2021-10 | refinedweb | 421 | 54.22 |
Classes
Basics
While classes are interwoven into languages like Java or C, you can accomplish almost anything in Python without encountering classes.
We’ll be showing the example before explaining its contents, so don’t worry if you don’t understand what you’re looking at right away.
Anatomy of a Python Class
import math class Pet: domesticated = True def __init__(self, name, age, gender, height, weight, is_fixed=False): self.name = name self.age = age self.gender = gender self.height = height self.weight = weight self.is_fixed = is_fixed def sound(self): return(f'Hello there, my name is {self.name}, and I\'m a pet.') @staticmethod def calculate_bmi(height, weight): return(weight / math.pow(height, 2) * 703) def is_overweight(self): return(self.calculate_bmi(self.height, self.weight) > 24) @classmethod def from_dict(cls, d): print("In from_dict, and cls is:", cls) is_fixed = False if d.get("name"): name = d.get("name") if d.get("age"): age = d.get("age") if d.get("gender"): gender = d.get("gender") if d.get("height"): height = d.get("height") if d.get("weight"): weight = d.get("weight") if d.get("is_fixed"): is_fixed = d.get("is_fixed") return cls(name, age, gender, height, weight, is_fixed) class Dog(Pet): def sound(self): return(f'Ruff ruff!')
Instances and their Attributes
The name of a class immediately follows its declaration — this class is
Pet. We would create an instance/object of
Pet by doing the following:
my_pet = Pet(name="Ziva", age=.25, gender="Female", height=.5, weight=10, is_fixed=True)
The above code works thanks to the
__init__ function in
Pet, which allows instances of Pet to be initialized.
age,
name,
gender,
height,
weight, and
is_fixed are instance attributes. In this case,
is_fixed is the only argument with a default value, meaning you must provide information for the other 5 in order for your code to run. We’ll explain the inner workings of
__init__ shortly.
Attributes of classes can be accessed using
. followed by the attribute, as in the following examples:
print(my_pet.name)
Ziva
print(my_pet.age)
0.25
Class Attributes
Python classes can have their own attributes independent of any declared objects of that class — notice how
domesticated = True is on its own, outside of
__init__. You can determine class attributes by calling on instances or the class itself.
print(my_pet.domesticated) # via instance of the Pet class print(Pet.domesticated) # via the class itself
True True
Class Methods
Methods are, broadly speaking, functions that are defined within a class that serve some purpose that might need repeated. All methods have the
def keyword in front of them, short for define. Methods can be called in the same way as attributes. Unlike other languages, Python requires a
self argument in the method declaration to ensure that an instance refers to or changes its own attributes. It is not necessary to include
self anywhere except the method declaration within the class.
sound and
is_overweight are some of the methods in
Pet.
print(my_pet.sound())
Hello there, my name is Ziva, and I'm a pet.
print(my_pet.is_overweight())
True
Looks like our pet makes some weird sounds and needs some walks.
calculate_bmi is a static method of
Pet. Static methods don’t take
self or the class as arguments, nor modify the attributes of an instance or class.
self is not passed as the first argument for these methods. Generally, a static method may be appropriate if the method is loosely coupled to the object.
print(Pet.calculate_bmi(50, 120))
33.744
print(my_pet.calculate_bmi(50, 120))
33.774
# this will NOT work, calculate_bmi is not passed `self` # you will have to provide your own parameters for static methods print(my_pet.calculate_bmi())
Error in py_call_impl(callable, dots$args, dots$keywords): TypeError: calculate_bmi() missing 2 required positional arguments: 'height' and 'weight' Detailed traceback: File "<string>", line 1, in <module>
from_dict is a class method of
Pet. Instead of
self, class methods accept
cls as the first argument, and are automatically passed
cls as the first argument when called.
cls is simply the class, which is Pet in this case.
This method specifically takes a
dictionary containing information for every instance attribute in
Pet, then creates a
Pet object from that.
d = {"name": "Ziva", "age": .25, "gender": "Female", "height": .5, "weight": 10, "is_fixed": True} # Pet and cls are the same: print(Pet)
<class '__main__.Pet'>
Pet.from_dict(d)
In from_dict, and cls is: <class '__main__.Pet'> <__main__.Pet object at 0x7ff1a2eb3520>
Dunder methods, including
__init__, all start and end with double underscores, and they generally encompass functions that are built-in to the basic object types in Python:
__str__,
__add__,
__format__, and so on.
The idea is that you are able to flesh out your own classes by adapting base Python dunder methods for your own purposes.
Inherited Classes
At the end of our example is
Dog, which is another class that contains
Pet within parentheses. This makes
Pet a parent class for
Dog that hands down its methods and attributes.
Though
Pet appears to be an argument for
Dog, you cannot substitute a
Pet object to initialize
Dog. Child classes effectively clone their parents, overriding certain methods or attributes when necessary. In this case,
sound will have different results depending on if the object is a
Pet or a
Dog.
my_dog = Dog("Ziva", .25, "Female", .5, 10, True) print(my_pet.sound()) print(my_dog.sound())
Hello there, my name is Ziva, and I'm a pet. Ruff ruff!
print(my_dog.is_overweight()) print(my_dog.domesticated)
True True
Resources
A great introduction to classes in Python.
A good resource for the basics.
A nice article explaining classes and object oriented programming.
A great explanation of the differences between the types of methods. | https://the-examples-book.com/programming-languages/python/classes | CC-MAIN-2022-33 | refinedweb | 950 | 68.47 |
The latest versions of this document, the PNG specification, and related information can always be found at the PNG FTP archive site,. The maintainers of the PNG specification can be contacted by e-mail at png-mng-misc @ lists.sourceforge.net.
This document is an extension to the Portable Network Graphics (PNG) specification, version 1.2 [PNG-1.2], and in "Portable Network Graphics (PNG) Specification (Second Edition)" [PNG-ISO]. It describes additional public chunk types and contains additional information for use in PNG images.
This document, together with the PNG specification, contains the entire list of registered "public" PNG chunks. The additional registered chunks appearing in this document are the oFFs, pCAL, sCAL, gIFg, gIFs, sTER, and fRAc chunks, plus the deprecated gIFt chunk. Additional chunk types may be proposed for inclusion in this list by contacting the PNG specification maintainers at png-mng-misc @ lists.sourceforge.net. Chunks described here are expected to be less widely supported than those defined in the basic specification. However, application authors are encouraged to use these chunk types whenever appropriate for their applications.
This document also describes data representations that do not occur in the core PNG format, but are used in one or more special-purpose chunks. New chunks should use these representations whenever applicable, in order to maximize portability and simplify decoders..
   1. Data Representation
      1.1. Integer values
      1.2. Floating-point values
   2. Summary of Special-Purpose Chunks
   3. Chunk Descriptions
      3.1. oFFs Image offset
      3.2. pCAL Calibration of pixel values
      3.3. sCAL Physical scale of image subject
      3.4. gIFg GIF Graphic Control Extension
      3.5. gIFx GIF Application Extension
      3.6. sTER Indicator of Stereo Image
   4. Chunks Not Described Here
      4.1. dSIG Digital Signature
      4.2. fRAc Fractal image parameters
   5. Text Chunk Keywords
   6. Deprecated Chunks
      6.1. gIFt GIF Plain Text Extension
   7. Security Considerations
   8. Appendix: Sample code
      8.1. pCAL
      8.2. Fixed-point gamma correction
   9. Appendix: Rationale
      9.1. pCAL
   10. Appendix: Revision History
   11. References
   12. Credits
Refer to Section 2.1 of the PNG specification for the format and range of integer values.
The core of PNG does not use floating-point numbers anywhere; it uses integers or, where applicable, fixed-point fractional values. However, special-purpose chunks may need to represent values that do not fit comfortably in fixed-point notation. The textual floating-point notation defined here is recommended for use in all such cases. This representation is simple, has no a priori limits on range or precision, and is portable across all machines.
A floating-point value in this notation is represented by an ASCII text string in a standardized decimal floating-point format. The string is variable-length and must be terminated by a null (zero) character unless it is the last item in its chunk. The string consists of an optional sign ("+" or "-"), an integer part, a fraction part beginning with a decimal point ("."), and an exponent part beginning with an "E" or "e" and optional sign. The integer, fraction, and exponent parts each contain one or more digits (ASCII "0" to "9"). Either the integer part or the fraction part, but not both, may be omitted. A decimal point is allowed, but not required, if there is no fraction part. The exponent part may be omitted. No spaces or any other character besides those specified may appear.
Note in particular that C-language "F" and "L" suffixes are not allowed, the string "." is not allowed as a shorthand for 0 as in some other programming languages, and no commas or underscores are allowed. This format ought to be easily readable in all programming environments.
This table summarizes some properties of the chunks described in this document.
   Name    Multiple  Ordering constraints
            OK?
   oFFs    No        Before IDAT
   pCAL    No        Before IDAT
   sCAL    No        Before IDAT
   gIFg    Yes       None
   gIFt    Yes       None (this chunk is deprecated)
   gIFx    Yes       None
   sTER    No        Before IDAT
   dSIG    Yes       In pairs, immediately after IHDR and before IEND
   fRAc    Yes       None
The oFFs chunk gives the position on a printed page at which the image should be output when printed alone. It can also be used to define the image's location with respect to a larger screen or other application-specific coordinate system.
The oFFs chunk contains:
   X position:       4 bytes (signed integer)
   Y position:       4 bytes (signed integer)
   Unit specifier:   1 byte
Both position values are signed. The following values are legal for the unit specifier:
   0: unit is the pixel (true dimensions unspecified)
   1: unit is the micrometer
Conversion note: one inch is equal to exactly 25400 micrometers. A micrometer (also called a micron) is 10^-6 meter.
The X position is measured rightwards from the left edge of the page to the left edge of the image; the Y position is measured downwards from the top edge of the page to the top edge of the image. Note that negative values are permitted, and denote displacement in the opposite directions. Although oFFs can specify an image placement that is partially or wholly outside the page boundaries, the result of such placement is application-dependent.
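As an illustrative sketch (not part of the specification), the nine-byte data field described above can be decoded with Python's struct module; as everywhere in PNG, multi-byte integers are big-endian:

```python
import struct

def parse_oFFs(data):
    """Parse the 9-byte data field of an oFFs chunk."""
    # ">iiB": big-endian; two signed 4-byte integers, one unsigned byte.
    x, y, unit = struct.unpack(">iiB", data)
    if unit not in (0, 1):
        # Only 0 (pixel) and 1 (micrometer) are legal per the text above.
        raise ValueError("invalid oFFs unit specifier: %d" % unit)
    return {"x": x, "y": y, "unit": "pixel" if unit == 0 else "micrometer"}
```

For instance, an image placed exactly one inch from the left page edge would carry an X position of 25400 with unit specifier 1.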
If present, this chunk must precede the first IDAT chunk.
When a PNG file is being used to store physical data other than color values, such as a two-dimensional temperature field, the pCAL chunk can be used to record the relationship (mapping) between stored pixel samples, original samples, and actual physical values. The pCAL data might be used to construct a reference color bar beside the image, or to extract the original physical data values from the file. It is not expected to affect the way the pixels are displayed. Another method should be used if the encoder wants the decoder to modify the sample values for display purposes.
The pCAL chunk contains:
   Calibration name:      1-79 bytes (character string)
   Null separator:        1 byte
   Original zero (x0):    4 bytes (signed integer)
   Original max (x1):     4 bytes (signed integer)
   Equation type:         1 byte
   Number of parameters:  1 byte
   Unit name:             0 or more bytes (character string)
   Null separator:        1 byte
   Parameter 0 (p0):      1 or more bytes (ASCII floating-point)
   Null separator:        1 byte
   Parameter 1 (p1):      1 or more bytes (ASCII floating-point)
   ...etc...
There is no null separator after the final parameter (or after the unit name, if there are zero parameters). The number of parameters field must agree with the actual number of parameters present in the chunk, and must be correct for the specified equation type (see below).
The calibration name can be any convenient name for referring to the mapping, and is subject to the same restrictions as the keyword in a PNG text chunk: it must contain only printable Latin-1 [ISO/IEC-8859-1] characters (33-126 and 161-255) and spaces (32), but no leading, trailing, or consecutive spaces. The calibration name can permit applications or people to choose the appropriate pCAL chunk when more than one is present (this could occur in a multiple-image file, but not in a PNG file). For example, a calibration name of "SI" or "English" could be used to identify the system of units in the pCAL chunk as well as in other chunk types, to permit a decoder to select an appropriate set of chunks based on their names.
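The null-separated layout above can be decoded along the following lines. This is an illustrative sketch, not normative; field validation beyond the parameter count is omitted:

```python
import struct

def parse_pCAL(data):
    """Decode the data field of a pCAL chunk into a dictionary."""
    # Calibration name is terminated by the first null byte.
    name, rest = data.split(b"\x00", 1)
    # Two signed 4-byte integers, then two single bytes (big-endian).
    x0, x1, eqn, nparams = struct.unpack(">iiBB", rest[:10])
    # Unit name and parameters are null-separated; no trailing separator.
    fields = rest[10:].split(b"\x00")
    unit, params = fields[0], fields[1:]
    if len(params) != nparams:
        raise ValueError("parameter count does not match chunk contents")
    return {
        "name": name.decode("latin-1"),
        "x0": x0, "x1": x1,
        "equation_type": eqn,
        "unit": unit.decode("latin-1"),
        "params": [float(p) for p in params],  # ASCII floating-point values
    }
```

Note that Python's float() accepts all strings in the ASCII floating-point format defined earlier, including forms such as "123." and ".5".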
The pCAL chunk defines two mappings:
   * A mapping between the stored samples, which are in the range 0..max,
     where max = 2^bitdepth - 1, and the original samples, which are
     signed integers.  The x0 and x1 fields, together with the bit depth
     for the image, define this mapping.

   * A mapping between the original samples and the physical values,
     defined by x0, x1, the equation type, parameters, and unit name.
The mapping between the stored samples and the original samples is given by the following equations:
original_sample = (stored_sample * (x1-x0) + max/2) / max + x0 stored_sample = ((original_sample - x0) * max + (x1-x0)/2) / (x1-x0) clipped to the range 0..max
In these equations, "/" means integer division that rounds toward negative infinity, so n/d = integer(floor(real(n)/real(d))). Note that this is the same as the "/" operator in the C programming language when n and d are nonnegative, but not necessarily when n or d is negative.
Notice that x0 and x1 are the original samples that correspond to the stored samples 0 and max, respectively. Encoders will usually set x0=0 and x1=max to indicate that the stored samples are equal to the original samples. Note that x0 is not constrained to be less than x1, and neither is constrained to be positive, but they must be different from each other.

This mapping is lossless and reversible when abs(x1-x0) <= max and the original sample is in the range x0..x1. If abs(x1-x0) > max then there can be no lossless reversible mapping, but the functions provide the best integer approximations to floating-point affine transformations.
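The two formulas translate almost directly into code. The sketch below (illustrative, not normative) uses Python's floor-style integer division, which matches the rounding rule defined above:

```python
def original_sample(stored, x0, x1, bitdepth):
    """Map a stored sample (0..max) to the corresponding original sample."""
    max_ = (1 << bitdepth) - 1
    return (stored * (x1 - x0) + max_ // 2) // max_ + x0

def stored_sample(original, x0, x1, bitdepth):
    """Map an original sample to a stored sample, clipped to 0..max."""
    max_ = (1 << bitdepth) - 1
    s = ((original - x0) * max_ + (x1 - x0) // 2) // (x1 - x0)
    return min(max(s, 0), max_)  # clip to the range 0..max
```

With x0=0 and x1=max both functions reduce to the identity, and the round trip original -> stored -> original is exact whenever abs(x1-x0) <= max and the original sample lies in x0..x1, as stated above.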
The mapping between the original samples and the physical values is given by one of several equations, depending on the equation type, which may have the following values:
   0: Linear mapping
   1: Base-e exponential mapping
   2: Arbitrary-base exponential mapping
   3: Hyperbolic mapping
For equation type 0:
physical_value = p0 + p1 * original_sample / (x1-x0)
For equation type 1:
physical_value = p0 + p1 * exp(p2 * original_sample / (x1-x0))
For equation type 2:
physical_value = p0 + p1 * pow(p2, (original_sample / (x1-x0)))
For equation type 3:
physical_value = p0 + p1 * sinh(p2 * (original_sample - p3) / (x1-x0))
For these physical value equations, "/" means floating-point division.
The function exp(x) is e raised to the power of x, where e is the base of the natural logarithms, approximately 2.71828182846. The exponential function exp() is the inverse of the natural logarithm function ln().
The function pow(x,y) is x raised to the power of y.
pow(x,y) = exp(y * ln(x))
The function sinh(x) is the hyperbolic sine of x.
sinh(x) = 0.5 * (exp(x) - exp(-x))
The units for the physical values are given by the unit name, which may contain any number of printable Latin-1 characters, with no limitation on the number and position of blanks. For example, "K", "population density", "MPa". A zero-length string can be used for dimensionless data.
For color types 0 (gray) and 4 (gray-alpha), the mappings apply to the gray sample values (but not to the alpha sample). For color types 2 (RGB), 3 (indexed RGB), and 6 (RGBA), the mappings apply independently to each of the red, green, and blue sample values (but not the alpha sample). In the case of color type 3 (indexed RGB), the mapping refers to the RGB samples and not to the index values.
Linear data can be expressed with equation type 0.
Pure logarithmic data can be expressed with either equation type 1 or 2:
Equation type 1           Equation type 2
x0 = 0                    x0 = 0
x1 = max                  x1 = max
p0 = 0                    p0 = 0
p1 = bottom               p1 = bottom
p2 = ln(top/bottom)       p2 = top/bottom
Equation types 1 and 2 are functionally equivalent; both are defined because authors may find one or the other more convenient.
Using equation type 3, floating-point data can be reduced (with loss) to a set of integer samples such that the resolution of the stored data is roughly proportional to its magnitude. For example, floating-point data ranging from -10^31 to 10^31 (the usual range of 32-bit floating-point numbers) can be represented with:
Equation type 3
x0 = 0
x1 = 65535
p0 = 0.0
p1 = 1.0e-30
p2 = 280.0
p3 = 32767.0
The resolution near zero is about 10^-33, while the resolution near 10^31 or -10^31 is about 10^28. Everywhere the resolution is about 0.4 percent of the magnitude.
Note that those floating-point parameters could be stored in the chunk more compactly as follows:
p0 = 0 p1 = 1e-30 p2 = 280 p3 = 32767
Applications should use double precision arithmetic (or take other precautions) while performing the mappings for equation types 1, 2, and 3, to prevent overflow of intermediate results when p1 is small and the exp(), pow(), or sinh() function is large.
If present, the pCAL chunk must appear before the first IDAT chunk. Only one instance of the pCAL chunk is permitted in a PNG datastream.
While the pHYs chunk is used to record the physical size of the image itself as it was scanned or as it should be printed, certain images (such as maps, photomicrographs, astronomical surveys, floor plans, and others) may benefit from knowing the actual physical dimensions of the image's subject for remote measurement and other purposes. The sCAL chunk serves this need. It contains:
Unit specifier: 1 byte
Pixel width:    1 or more bytes (ASCII floating-point)
Null separator: 1 byte
Pixel height:   1 or more bytes (ASCII floating-point)
The following values are legal for the unit specifier:
1: unit is the meter
2: unit is the radian
Following the unit specifier are two ASCII strings. The first string defines the physical width represented by one image pixel; the second string defines the physical height represented by one pixel. The two strings are separated by a zero byte (null character). As in the text chunks, there is no trailing zero byte for the final string. Each of these strings contains a floating-point constant in the format specified above (Floating-point values, Section 1.2). Both values are required to be greater than zero.
If present, this chunk must precede the first IDAT chunk.
The gIFg chunk is provided for backward compatibility with the GIF89a Graphic Control Extension. It contains:
Disposal Method: 1 byte
User Input Flag: 1 byte
Delay Time:      2 bytes (byte order converted from GIF)
The Disposal Method indicates the way in which the graphic is to be treated after being displayed. The User Input Flag indicates whether user input is required before continuing. The Delay Time specifies the number of hundredths (1/100) of a second to delay before continuing with the processing of the datastream. Note that this field is to be byte-order-converted.
The "Transparent Color Flag" and "Transparent Color Index" fields found in the GIF89a Graphic Control Extension are omitted from gIFg. These fields should be converted using the transparency features of basic PNG.
The GIF specification allows at most one Graphic Control Extension to precede each graphic rendering block. Because each PNG file holds only one image, it is expected that gIFg will appear at most once, before IDAT, but there is no strict requirement.
The gIFx chunk is provided for backward compatibility with the GIF89a Application Extension. The Application Extension contains application-specific information. This chunk contains:
Application Identifier: 8 bytes
Authentication Code:    3 bytes
Application Data:       n bytes
The Application Identifier is a sequence of eight printable ASCII characters used to identify the application creating the Application Extension. The Authentication Code is three additional bytes that the application may use to further validate the Application Extension. The remainder of the chunk is application-specific data whose content is not defined by the GIF specification.
Note that GIF-to-PNG converters should not attempt to perform byte reordering on the contents of the Application Extension. The data is simply transcribed without any processing except for de-blocking GIF sub-blocks.
Applications that formerly used GIF Application Extensions may define special-purpose PNG chunks to replace their application extensions. If a GIF-to-PNG converter recognizes the Application Identifier and is aware of a corresponding PNG chunk, it may choose to convert the Application Extension into that PNG chunk type rather than using gIFx.
When present, the sTER chunk indicates that the datastream contains a stereo pair of subimages within a single PNG image.
The sTER chunk contains:
Mode: 1 byte
   0: cross-fuse layout
   1: diverging-fuse layout
The sTER chunk with mode==0 or mode==1 indicates that the datastream
contains two subimages, encoded within a single PNG image.
They are arranged side-by-side, with one subimage intended
for presentation to the right eye and the other subimage
intended for presentation to the left eye. The left edge of
the right subimage must be on a column that is evenly divisible
by eight, so that if interlacing is employed the two images
will have coordinated interlacing. Padding columns between
the two subimages must be introduced by the encoder if
necessary.
The sTER chunk imposes no requirements on the contents of the
padding pixels. For compatibility with software not supporting
sTER, it does not exempt the padding pixels from existing
requirements; for example, in palette images, the padding pixels
must be valid palette indices.
The two subimages must have the same dimensions
after removal of any padding.
When mode==0, the right-eye image appears at the left and the left-eye image appears at the right, suitable for cross-eyed free viewing. When mode==1, the left-eye image appears at the left and the right-eye image appears at the right, suitable for divergent (wall-eyed) free viewing.
Decoders that are aware of the sTER chunk may display the two images in any suitable manner, with or without the padding. Decoders that are not aware of the sTER chunk, and those that recognize the chunk but choose not to treat stereo pairs differently from regular PNG images, will naturally display them side-by-side in a manner suitable for free viewing.
If present, the sTER chunk must appear before the first IDAT chunk.
Given two subimages with width subimage_width, encoders can calculate the inter-subimage padding and total width W using the following pseudocode:
padding := 7 - ((subimage_width - 1) mod 8)
W := 2 * subimage_width + padding
Given an image with width W, decoders can calculate the subimage width and inter-subimage padding using the following pseudocode:
padding := 15 - ((W - 1) mod 16)
if (padding > 7) then error
subimage_width := (W - padding) / 2
Decoders can assume that the samples in the left and right subimages are cosited, such that the subimages and their centers are coincident at the projection plane. Decoders can also assume that the left and right subimages are intended to be presented directly to the right and left eyes of the user/viewer without independent scaling, rotation or displacement. I.e., the subimages will be presented at the same size in the same relative position and orientation to each eye of the viewer.
Encoders should use the pHYs chunk to indicate the pixel aspect ratio when it is not 1:1.
It is recommended that encoders use the cross-fusing layout (mode==0), especially when the image centers are separated by more than 65 millimeters when displayed on a typical monitor.
The definitions of some public chunks are being maintained by groups other than the core PNG group. In general, these are chunks that are useful to more than one application (and thus are not private chunks), but are considered too specialized to list in the core PNG documentation.
The dSIG chunk provides a digital signature that guarantees that the contents of the portion of the datastream enclosed in a pair of such chunks have not changed since the digital signature was added. This chunk is described in detail in a separate document, [dSIG-spec], which is accompanied by an example provided in [dSIG-example].
The fRAc chunk will describe the parameters used to generate a fractal image. The specification for the contents of the fRAc chunk is being developed by Tim Wegner, twegner @ phoenix.net.
In the future, chunks will be fully specified before they are registered.
It is expected that special-purpose keywords for PNG text chunks will be registered and will appear in this document. However, no such keywords have yet been assigned.
All registered textual keywords in text chunks and all other chunk types are limited to the ASCII characters A-Z, a-z, 0-9, space, and the following 20 symbols:
! " % & ' ( ) * + , - . / : ; < = > ? _
but not the remaining 12 symbols:
# $ @ [ \ ] ^ ` { | } ~
This restricted set is the ISO-646 "invariant" character set [ISO-646]. These characters have the same numeric codes in all ISO character sets, including all national variants of ASCII.
The chunks listed in this section are registered, but deprecated. Encoders are discouraged from using them, and decoders are not encouraged to support them.
The gIFt chunk was originally provided for backward compatibility with the GIF89a Plain Text Extension, but gIFt is now deprecated because it suffers from some fundamental design flaws.
The gIFt chunk contains:
Text Grid Left Position: 4 bytes (signed integer, byte order and size converted)
Text Grid Top Position:  4 bytes (signed integer, byte order and size converted)
Text Grid Width:         4 bytes (unsigned integer, byte order and size converted)
Text Grid Height:        4 bytes (unsigned integer, byte order and size converted)
Character Cell Width:    1 byte
Character Cell Height:   1 byte
Text Foreground Color:   3 bytes (R,G,B samples)
Text Background Color:   3 bytes (R,G,B samples)
Plain Text Data:         n bytes
Text Grid Left Position, Top Position, Width, and Height specify the text area position and size in pixels. The converter must reformat these fields from 2-byte LSB-first unsigned integers to 4-byte MSB-first signed or unsigned integers. Note that GIF defines the position to be relative to the upper left corner of the logical screen. If an oFFs chunk is also present, a decoder should assume that the oFFs chunk defines the offset of the image relative to the GIF logical screen; hence subtracting the oFFs values (converted from micrometers to pixels if necessary) from the Text Grid Left and Top Positions gives the text area position relative to the main PNG image.
Character Cell Width and Height give the dimensions of each character in pixels.
Text Foreground and Background Color give the colors to be used to render text foreground and background. Note that the GIF-to-PNG converter must replace the palette index values found in the GIF Plain Text Extension block with the corresponding palette entry.
The remainder of the chunk is the text to be displayed. Note that this data is not in GIF sub-block format, but is a continuous datastream.
The normal precautions (see the Security considerations section of the PNG specification) should be taken when displaying text contained in the sCAL calibration name, pCAL unit name, or any ASCII floating-point fields.
Applications must take care to avoid underflow and overflow of intermediate results when converting data from one form to another according to the pCAL mappings.
This appendix provides some sample code that can be used in encoding and decoding PNG chunks. It does not form a part of the specification. In the event of a discrepancy between the sample code in this appendix and the chunk definition, the chunk definition prevails.
The latest version of this code, including test routines not shown here, is available at.
#if 0
    pcal.c 0.2.2 (Sat 19 Dec 1998)
    Adam M. Costello <amc @ cs.berkeley.edu>

    This is public domain example code for computing the mappings
    defined for the PNG pCAL chunk.
#endif

#if __STDC__ != 1
#error This code relies on ANSI C conformance.
#endif

#include <limits.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* In this program a type named uintN denotes an unsigned    */
/* type that handles at least all values 0 through (2^N)-1.  */
/* A type named intN denotes a signed type that handles at   */
/* least all values 1-2^(N-1) through 2^(N-1)-1.  It is not  */
/* necessarily the smallest such type; we are more concerned */
/* with speed.                                               */

typedef unsigned int uint16;

#if UINT_MAX >= 0xffffffff
typedef unsigned int uint32;
#else
typedef unsigned long uint32;
#endif

#if INT_MAX >= 0x7fffffff && INT_MIN + 0x7fffffff <= 0
typedef int int32;
#else
typedef long int32;
#endif

/* Testing for 48-bit integers is tricky because we cannot */
/* safely use constants greater than 0xffffffff.  Also,    */
/* shifting by the entire width of a type is undefined, so */
/* for unsigned int, which might be only 16 bits wide, we  */
/* must shift in two steps.                                */

#if (UINT_MAX - 0xffff) >> 8 >> 8 >= 0xffffffff
typedef unsigned int uint48;
#define HAVE_UINT48 1
#elif (ULONG_MAX - 0xffff) >> 16 >= 0xffffffff
typedef unsigned long uint48;
#define HAVE_UINT48 1
#elif defined(ULLONG_MAX)
#if (ULLONG_MAX - 0xffff) >> 16 >= 0xffffffff
typedef unsigned long long uint48;
#define HAVE_UINT48 1
#endif
#else
#define HAVE_UINT48 0
#endif

/*******************/
/* Program failure */

void fail(const char *msg)
{
  fputs(msg,stderr);
  fputc('\n', stderr);
  exit(EXIT_FAILURE);
}

/*************************/
/* Check max, x0, and x1 */

int samp_params_ok(uint16 max, int32 x0, int32 x1)

/* Returns 1 if max, x0, and x1 have */
/* allowed values, 0 otherwise.      */
{
  const int32 xlimit = 0x7fffffff;

  return max > 0 && max <= 0xffff &&
         x0 <= xlimit && x0 >= -xlimit &&
         x1 <= xlimit && x1 >= -xlimit && x0 != x1;
}

/***********************************************/
/* Map from stored samples to original samples */

int32 stored_to_orig(uint16 stored, uint16 max, int32 x0, int32 x1)

#if 0
    Returns the original sample corresponding to the given stored
    sample, which must be <= max.  The parameters max, x0, and x1
    must have been approved by samp_params_ok().

    The pCAL spec says:

      orig = (stored * (x1-x0) + max/2) / max + x0                 [1]

    Equivalently:

      orig = (stored * (x1-x0) + max/2) / max + (x0-x1) - (x0-x1) + x0
      orig = (stored * (x1-x0) + max * (x0-x1) + max/2) / max - (x0-x1) + x0
      orig = ((max - stored) * (x0-x1) + max/2) / max + x1

    So we can check whether x0 < x1 and coerce the formula so that
    the numerators and denominators are always nonnegative:

      orig = (offset * xspan + max/2) / max + xbottom              [2]

    This will come in handy later.  But the multiplication and the
    subtraction can overflow, so we have to be trickier.  For the
    subtraction, we can convert to unsigned integers.  For the
    multiplication, we can use 48-bit integers if we have them,
    otherwise observe that:

      b = (b/c)*c + b%c
      a*b = a*(b/c)*c + a*(b%c)            ; let d = a*(b%c)
      (a*b)/c = a*(b/c) + d/c  remainder d%c                       [3]

    These are true no matter which way the division rounds.  If
    (a*b)/c is in-range, a*(b/c) is guaranteed to be in-range if
    b/c rounds toward zero.  Here is another observation:

      sum{x_i} / c = sum{x_i / c} + sum{x_i % c} / c               [4]

    This one also avoids overflow if the division rounds toward
    zero.  The pCAL spec requires rounding toward -infinity.  ANSI
    C leaves the rounding direction implementation-defined except
    when both the numerator and denominator are nonnegative, in
    which case it rounds downward.  So if we arrange for all
    numerators and denominators to be nonnegative, everything
    works.

    Starting with equation 2 and applying identity 4, then 3, we
    obtain the final formula:

      d = offset * (xspan % max)
      xoffset = offset * (xspan / max) + d/max + (d%max + max/2) / max
      orig = xoffset + xbottom
#endif
{
  uint16 offset;
  uint32 xspan, q, r, d, xoffset;
  int32 xbottom;

  if (stored > max) fail("stored_to_orig: stored > max");

  if (x1 >= x0) {
    xbottom = x0;
    xspan = (uint32)x1 - (uint32)x0;
    offset = stored;
  }
  else {
    xbottom = x1;
    xspan = (uint32)x0 - (uint32)x1;
    offset = max - stored;
  }

  /* We knew xspan would fit in a uint32, but we needed to   */
  /* cast x0 and x1 before subtracting because otherwise the */
  /* subtraction could overflow, and ANSI doesn't say what   */
  /* the result will be in that case.                        */

  /* Let's optimize two common simple cases */
  /* before handling the general case:      */

  if (xspan == max) {
    xoffset = offset;
  }
  else if (xspan <= 0xffff) {
    /* Equation 2 won't overflow and does only one division. */
    xoffset = (offset * xspan + (max>>1)) / max;
  }
  else {
#if HAVE_UINT48
    /* We can use equation 2 and do one uint48     */
    /* division instead of three uint32 divisions. */
    xoffset = (offset * (uint48)xspan + (max>>1)) / max;
#else
    q = xspan / max;
    r = xspan % max;
    /* Hopefully those were compiled into one instruction. */
    d = offset * r;
    xoffset = offset * q + d/max + (d%max + (max>>1)) / max;
#endif
  }

  /* xoffset might not fit in an int32, but we know the sum */
  /* xbottom + xoffset will, so we can do the addition on   */
  /* unsigned integers and then cast.                       */

  return (int32)((uint32)xbottom + xoffset);
}

/***********************************************/
/* Map from original samples to stored samples */

uint16 orig_to_stored(int32 orig, uint16 max, int32 x0, int32 x1)

#if 0
    Returns the stored sample corresponding to the given original
    sample.  The parameters max, x0, and x1 must have been
    approved by samp_params_ok().

    The pCAL spec says:

      stored = ((orig - x0) * max + (x1-x0)/2) / (x1-x0)
               clipped to the range 0..max

    Notice that all three terms are nonnegative, or else all are
    nonpositive.  Just as in stored_to_orig(), we can avoid
    overflow and rounding problems by transforming the equation to
    use unsigned quantities:

      stored = (xoffset * max + xspan/2) / xspan
#endif
{
  uint32 xoffset, xspan;

  if (x0 < x1) {
    if (orig < x0) return 0;
    if (orig > x1) return max;
    xspan = (uint32)x1 - (uint32)x0;
    xoffset = (uint32)orig - (uint32)x0;
  }
  else {
    if (orig < x1) return 0;
    if (orig > x0) return max;
    xspan = (uint32)x0 - (uint32)x1;
    xoffset = (uint32)x0 - (uint32)orig;
  }

  /* For 16-bit xspan the calculation is straightforward: */

  if (xspan <= 0xffff) return (xoffset * max + (xspan>>1)) / xspan;

  /* Otherwise, the numerator is more than 32 bits and the   */
  /* denominator is more than 16 bits.  The tricks we played */
  /* in stored_to_orig() depended on the denominator being   */
  /* 16-bit, so they won't help us here.                     */

#if HAVE_UINT48
  return ((uint48)xoffset * max + (xspan>>1)) / xspan;
#else
  /* Doing the exact integer calculation with 32-bit         */
  /* arithmetic would be very difficult.  But xspan > 0xffff */
  /* implies xspan > max, in which case the pCAL spec says   */
  /* "there can be no lossless reversible mapping, but the   */
  /* functions provide the best integer approximations to    */
  /* floating-point affine transformations."  So why insist  */
  /* on using the integer calculation?  Let's just use       */
  /* floating-point.                                         */
  return ((double)xoffset * max + (xspan>>1)) / xspan;
#endif
}

/*********************************************/
/* Check x0, x1, eqtype, n, and p[0]..p[n-1] */

int phys_params_ok(int32 x0, int32 x1, int eqtype, int n, double *p)

/* Returns 1 if x0, x1, eqtype, n, and p[0]..p[n-1] */
/* have allowed values, 0 otherwise.                */
{
  if (!samp_params_ok(1,x0,x1)) return 0;

  switch (eqtype) {
    case 0: return n == 2;
    case 1: return n == 3;
    case 2: break;
    case 3: return n == 4;
  }

  /* eqtype is 2, check for pow() domain error: */

  if (p[2] > 0) return 1;
  if (p[2] < 0) return 0;
  return (x0 <= x1) ? (x0 > 0 && x1 > 0) : (x0 < 0 && x1 < 0);
}

/************************************************/
/* Map from original samples to physical values */

double orig_to_phys(int32 orig, int32 x0, int32 x1, int eqtype, double *p)

/* Returns the physical value corresponding to the given    */
/* original sample.  The parameters x0, x1, eqtype, and p[] */
/* must have been approved by phys_params_ok().  The array  */
/* p[] must hold enough parameters for the equation type.   */
{
  double xdiff, f;

  xdiff = (double)x1 - x0;

  switch (eqtype) {
    case 0: f = orig / xdiff; break;
    case 1: f = exp(p[2] * orig / xdiff); break;
    case 2: f = pow(p[2], orig / xdiff); break;
    case 3: f = sinh(p[2] * (orig - p[3]) / xdiff); break;
    default: fail("orig_to_phys: unknown equation type");
  }

  return p[0] + p[1] * f;
}
The latest version of this code, including test routines not shown here, is available at.
#if 0
    gamma-lookup.c 0.1.4 (Sat 19 Dec 1998)
    by Adam M. Costello <amc @ cs.berkeley.edu>

    This is public domain example code for computing gamma
    correction lookup tables using integer arithmetic.
#endif

#if __STDC__ != 1
#error This code relies on ANSI C conformance.
#endif

#include <limits.h>
#include <math.h>

/* In this program a type named uintN denotes the  */
/* smallest unsigned type we can find that handles */
/* at least all values 0 through (2^N)-1.          */

typedef unsigned char uint8;

#if UCHAR_MAX >= 0xffff
typedef unsigned char uint16;
#else
typedef unsigned short uint16;
#endif

#if UCHAR_MAX >= 0xffffffff
typedef unsigned char uint32;
#elif USHRT_MAX >= 0xffffffff
typedef unsigned short uint32;
#elif UINT_MAX >= 0xffffffff
typedef unsigned int uint32;
#else
typedef unsigned long uint32;
#endif

/*********************/
/* 16-bit arithmetic */

void precompute16(uint16 L[511])

/* Precomputes the log table (this requires floating point). */
{
  int j;
  double f;

  /* L[j] will hold an integer representation of          */
  /* -log(j / 510.0).  Knowing that L[1] (the largest) is */
  /* 0xfe00 will help avoid overflow later, so we set the */
  /* scale factor accordingly.                            */

  f = 0xfe00 / log(1 / 510.0);

  for (j = 1;  j <= 510;  ++j) L[j] = log(j / 510.0) * f + 0.5;
}

void gamma16(uint16 L[511], uint8 G[256], uint16 g)

/* Makes a 256-entry gamma correction lookup table G[] with */
/* exponent g/pow(2,14), where g must not exceed 0xffff.    */
{
  int i, j;
  uint16 x, y, xhi, ghi, xlo, glo;

  j = 1;
  G[0] = 0;

  for (i = 1;  i <= 255;  ++i) {
    x = L[i << 1];
    xhi = x >> 8;
    ghi = g >> 8;
    y = xhi * ghi;

    if (y > 0x3f80) {
      /* We could have overflowed later. */
      /* But now we know y << 2 > L[1].  */
      G[i] = 0;
      continue;
    }

    xlo = x & 0xff;
    glo = g & 0xff;
    y = (y << 2) + ((xhi * glo) >> 6) + ((xlo * ghi) >> 6);
    while (L[j] > y) ++j;
    G[i] = j >> 1;
  }
}

/*********************/
/* 32-bit arithmetic */

void precompute32(uint32 L[511])

/* Precomputes the log table (this requires floating point). */
{
  int j;
  double f;

  /* L[j] will hold an integer representation of        */
  /* -log(j / 510.0).  Knowing that L[1] (the largest)  */
  /* is 0x3fffffff will help avoid overflow later, so   */
  /* we set the scale factor accordingly.               */

  f = 0x3fffffff / log(1 / 510.0);

  for (j = 1;  j <= 510;  ++j) L[j] = log(j / 510.0) * f + 0.5;
}

void gamma32(uint32 L[511], uint8 G[256], uint16 g)

/* Makes a 256-entry gamma correction lookup table G[] with */
/* exponent g/pow(2,14), where g must not exceed 0xffff.    */
{
  int i, j;
  uint32 x, y;

  j = 1;
  G[0] = 0;

  for (i = 1;  i <= 255;  ++i) {
    x = L[i << 1];
    y = (x >> 14) * g;
    while (L[j] > y) ++j;
    G[i] = j >> 1;
  }
}

/**********************************************/
/* Floating-point arithmetic (for comparison) */

void gamma_fp(uint8 G[256], double g)

/* Makes a 256-entry gamma correction   */
/* lookup table G[i] with exponent g.   */
{
  int i;

  G[0] = 0;
  for (i = 1;  i <= 255;  ++i) G[i] = pow(i/255.0, g) * 255 + 0.5;
}
This appendix gives the reasoning behind some of the design decisions in the PNG extension chunks. It does not form a part of the specification.
This section gives the reasoning behind some of the design decisions in the pCAL chunk. It does not form a part of the specification.
Equation types 1 and 2 seem to be equivalent. Why have both?
For logarithmic data, equation type 2 avoids ln() and exp(), since pow() may provide better accuracy in some floating-point math libraries. We also don't want to force people using base-10 logs to store a sufficiently accurate value of ln(10) in the pCAL chunk.

For data involving powers of e, we don't want to force people to encode a sufficiently accurate value of e in the pCAL chunk, or to use pow() when exp() is sufficient.
x0 and x1 provide a way to recover the original data, losslessly, when the original range is not a power of two. Sometimes the digitized values do not have a range that fills the full depth of a PNG. For example, if the original samples range from 0 (corresponding to black) to 800 (corresponding to white), PNG requires that these samples be scaled to the range 0 to 65535. By recording x0=0 and x1=800 we can recover the original samples, and we indicate the precision of the data.

Similarly, by recording x0=46000 and x1=47000, we can recover the original data samples that fell between 46000 and 47000.
Why define integer division to round toward negative infinity? This is different from many C implementations and from all Fortran implementations, which round toward zero.
We cannot leave the choice unspecified. If we were to specify rounding toward zero, we'd have to account for a discontinuity at zero. A division by positive d would map the 2d-1 values from -(d-1) through d-1 to zero, but would map only d values to any other value; for example, 3d through 4d-1 would be mapped to 3. Achieving lossless mappings in spite of this anomaly would be difficult.
Names of contributors not already listed in the PNG specification are presented in alphabetical order:
GIF is a service mark of CompuServe Incorporated. PostScript is a trademark of Adobe Systems.
This document was built from the file pngext-master-20060914 on 14 September 2006.
The "Appendix: Sample Code" has been placed in the public domain, and the conditions described above do not apply to that appendix. | http://www.libpng.org/pub/png/spec/register/pngext-1.4.0-pdg.html | CC-MAIN-2017-17 | refinedweb | 6,044 | 51.07 |
We often need to display data in a table. However, unless it’s a realtime table, we need to reload the page to view new data each time it’s added. For example, take a table of movies arranged by the year they were released. Each time a movie is added, we would not know of the change until we reload the page, which is not the best experience for a user.
Today, we will solve this problem by creating a realtime table of movie titles, which updates once there is new data.
To follow this tutorial, please ensure that you are familiar with the basics of:
Pusher is a hosted service that makes it super-easy to add realtime data and functionality to web and mobile applications.
Pusher sits as a realtime layer between your servers and your clients. Pusher maintains persistent connections to the clients - over Web-socket if possible and falling back to HTTP-based connectivity - so that as soon as your servers have new data that they want to push to the clients they can do, instantly.
The next thing we need to do is create a new ASP.NET MVC application. To do so, let’s create a new project in Visual Studio, select Visual C#, and then choose ASP.NET Web Application. For this tutorial, I named the project pusher_realtime_table.
Now we are almost ready. The next step is to install the official Pusher library for .NET using the NuGet package manager. To do this, we go to Tools on the top bar, click on NuGet Package Manager, and in the drop-down select Package Manager Console. We will see the Package Manager Console at the bottom of Visual Studio. Next, let’s install the package by running:
Install-Package PusherServer
Now that our environment is set up and ready, let’s dive into writing code.
By default, Visual Studio creates three controllers for us; however, we will use the HomeController for the application logic.
The first thing we want to do is to define a model that stores the list of movies we have in the database.
Under the Models folder, let’s create a file named realtimetable.cs and add the following content:
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.Web;

namespace pusher_realtime_table.Models
{
    public class RealtimeTable
    {
        [Key]
        public int id { get; set; }

        [Required]
        [MaxLength(225)]
        public string title { get; set; }

        [Required]
        public int year { get; set; }
    }
}
In the above block of code, we have declared a model called RealtimeTable with three main properties:

- id: the primary key of the table
- title: the movie title, required and at most 225 characters long
- year: the year the movie was released, also required
Now that we have defined our model, let’s go ahead and reference it in our default database context called ApplicationDbContext. To do this, let’s open up the Models\IdentityModels.cs file, then locate the class called ApplicationDbContext and add the following after the Create function:
public DbSet<RealtimeTable> realtime { get; set; }
In the code block above, the DbSet class represents an entity set that is used for create, read, update, and delete operations. The entity on which we will perform those CRUD operations is the RealtimeTable model we created earlier, and we have given the set the name realtime.
Although our model is set up, we still need to attach a database to our application. To do so, open the Server Explorer on the left-hand side of Visual Studio, right-click on Data Connections, and add a database.
Now that both our model and database are set up, let’s go ahead and create our index route. Open the HomeController and replace its contents with the following code:
using pusher_realtime_table.Models;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using PusherServer;
using System.Net;
using System.Threading.Tasks;

namespace pusher_realtime_table.Controllers
{
    public class HomeController : Controller
    {
        ApplicationDbContext db = new ApplicationDbContext();

        public ActionResult Index()
        {
            return View();
        }

        [HttpPost]
        public async Task<ActionResult> Index(RealtimeTable data)
        {
            RealtimeTable setdata = new RealtimeTable();
            setdata.title = data.title;
            setdata.year = data.year;
            db.realtime.Add(setdata);
            db.SaveChanges();

            var options = new PusherOptions();
            options.Cluster = "XXX_APP_CLUSTER";
            var pusher = new Pusher("XXX_APP_ID", "XXX_APP_KEY", "XXX_APP_SECRET", options);
            ITriggerResult result = await pusher.TriggerAsync("asp_channel", "asp_event", data);

            return RedirectToAction("view", "Home");
        }
    }
}
In the code block above, we have defined our Index function for both GET and POST requests.
Before looking at our GET and POST controller functions, notice that there is an import of our db context into our class with the line that says:
ApplicationDbContext db = new ApplicationDbContext();
This makes it possible to access our database model, which we defined using the DbSet class in our ApplicationDbContext class.
In the GET function, we return the view which we will use to add a new movie to our database.
Notice that the
POST method is set to be asynchronous. This is due to the fact that the Pusher .NET library uses the await operator to wait for the asynchronous response from the data emitted to Pusher.
In this function, we first add our new movie to the database, then we trigger an event. Once the event has been successfully emitted, we then return a redirect to our view function which we will be creating soon.
Now that we have defined our index route, we can add new movies to the database, though we cannot see the details of the movies we have added. To do that, we need to define our view route, which returns a table of all the movies we have in our database.
Let’s open our
HomeController and add the following functions:
public ActionResult seen()
{
    return Json(db.realtime.ToArray(), JsonRequestBehavior.AllowGet);
}

public ActionResult view()
{
    return View();
}
In the seen function, we have exposed a webservice that returns a JSON result of all the movies we have in our database.
In the view function, we return our view which shows us the list of our movies rendered with Vue.
Let’s open up our
Views\Home\Index.cshtml and replace the content with the following:
@model pusher_realtime_table.Models.RealtimeTable
@{ ViewBag.

<h4>realtimetable</h4>
<hr />

@Html.ValidationSummary(true, "", new { @
@Html.LabelFor(model => model.title, htmlAttributes: new { @
@Html.EditorFor(model => model.title, new { htmlAttributes = new { @
@Html.LabelFor(model => model.year, htmlAttributes: new { @
@Html.EditorFor(model => model.year, new { htmlAttributes = new { @

<div class="col-md-offset-2 col-md-10">
    <input type="submit" value="Create" class="btn btn-default" />
</div>
</div>
</div>
}

<div>
    @Html.ActionLink("Back to List", "Index")
</div>
In the above block of code, we have created our form which consists of three main inputs, which are:
Next, let’s also create the view file to show us all the current movies we have in realtime.
Let's create a new file called
view.cshtml in our
Views\Home folder, and add the following content:
@{ ViewBag.</script>
<script src=""></script>
<script src="//js.pusher.com/4.0/pusher.min.js"></script>

<h2>Real-Time Table</h2>

<table class="table" id="app">
    <tr>
        <th> Sn </th>
        <th> Title </th>
        <th> Year </th>
    </tr>
    <tr v-
        <td> {{index+1}} </td>
        <td> {{mov.title}} </td>
        <td> {{mov.year}} </td>
    </tr>
</table>

<script>
    var pusher = new Pusher('XXX_APP_KEY', {
        cluster: 'XXX_APP_CLUSTER'
    });
    var my_channel = pusher.subscribe('asp_channel');

    var app = new Vue({
        el: '#app',
        data: {
            movies: []
        },
        created: function () {
            this.get_movies();
            this.listen();
        },
        methods: {
            get_movies: function () {
                axios.get('@Url.Action("seen", "Home")')
                    .then((response) => {
                        this.movies = response.data;
                    });
            },
            listen: function () {
                my_channel.bind("asp_event", (data) => {
                    this.movies.push(data);
                })
            }
        },
        computed: {
            sorted_movies: function () {
                var movies = this.movies;
                movies = movies.sort(function (a, b) {
                    return parseInt(a.year) - parseInt(b.year);
                });
                return movies;
            }
        }
    });
</script>
In the view file above, notice that we have included three new libraries which are:
vue.min.js: This is the Vue js library which will be used to render our data.
axios.min.js: This is the official Axios library, which we will be using to make HTTP requests to our server.
pusher.min.js: This is the official Pusher JavaScript client, with which we will be receiving our realtime data.
Our markup is pretty simple. It consists of an HTML table which renders all our movies using Vue.
We need to pay attention to the script section of our view file. This is where all the magic happens. Just before we declare our Vue app, we instantiate Pusher by calling the Pusher constructor, passing in our app key and cluster.
Next, we subscribe to the
asp_channel channel.
In the created function, we fire the
get_movies function, which uses Axios to fetch the list of all our movies.
Next, we fire the listen function, which watches for the arrival of new data and pushes it onto the array of all our movies.
Also, notice that we have a computed property called
sorted_movies, which returns a sorted list of our movies based on the year.
Below is a picture of what we have built:
In the course of this tutorial, we have covered how to build a realtime table using .NET and Pusher.
We have gone through the process of setting up the environment, using the
NuGet Package Manager to install the required Pusher library.
This was a shocking email: the people have a Python 2 CGI script. They needed advice on Python 2 to 3 migration.
Here's my advice on a Python 2 CGI script: Throw It Away.
A great deal of the CGI processing is handled by the standard library's wsgiref module, as well as by tools like Jinja and Flask. This means that the ancient Python 2 CGI script has to be disentangled into two parts.
- All the stuff that deals with CGI and HTML. This isn't valuable and must be deleted.
- Whatever additional, useful, interesting processing it does for the various user communities.
The second part -- the useful work -- needs to be preserved. The rest is junk.
The idea here is to look at the project as a rewrite where some of the legacy code may be preserved. It's better to proceed as though this is new development with the legacy code providing examples and test cases. If we look at this as new, we'll start with some diagrams to provide a definition of done.
Step One
Understand the user communities. Create a C4 Context Diagram to show who the users are and what they expect. Ideally, it's small with "users" and "administrators." It may turn out to be big with complex privilege rules to segregate users.
It's hard to get this right. Everyone wants the code "converted". But no one really knows all the things the code does. There's a lot of pressure to ignore this step.
This step creates the definition of done. Without this, there's no way to do anything with the CGI code and make sure that the original features still work.
Step Two
Create a C4.
(This should be very quick to produce. If it's not, go back to step one and make sure you really understand the context.)
Step Three
Create a C4.
You will have several lists. One list has all the things in site-packages. If the
PYTHONPATH environment variable is used, all the things in the directories named in this environment variable. Plus. All the things named in
import statements.
These lists should overlap. Of course someone can install a package that's not used, so the site-packages list should be a superset of the import list.
This is a checklist of things that must be read (and possibly converted) to build the new features.
Step Four?
You'll need two suites of fully automated tests.
- Unit tests for the Python code. This must have 100% code coverage and will not be easy.
- Integration tests for the web application as a whole.
Let's break this into two steps.
Step Four.
If you can find a Python 2 version of coverage, and a Python 2 version of pytest, I suggest using this combination to write a test suite, and make sure you have 100% code coverage.
This is a lot of work, and there's no way around it. Without automated testing, there's no way to prove that you're done and the software can be trusted in production.
You will find bugs. Don't fix them now. Log them by marking the test case with the proper answer different from the answer you're getting.
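One lightweight way to do that logging inside the test suite itself is an expected-failure marker: keep the *correct* answer in the test and mark it as a known failure. A sketch, using the standard library's unittest (pytest's xfail marker works the same way); `legacy_total` and the bug number are invented stand-ins, not anything from the post:

```python
import unittest

def legacy_total(values):
    """Stand-in for a buggy legacy function."""
    return sum(values) + 1      # off-by-one bug we are logging, not fixing

class TestLegacyTotal(unittest.TestCase):
    @unittest.expectedFailure
    def test_total_bug_42(self):
        # Proper answer is 6; the legacy code currently returns 7.
        self.assertEqual(legacy_total([1, 2, 3]), 6)
```

When the bug is eventually fixed, the test starts passing and the marker can be removed.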
Step Five
Python has a built-in CGI server you can use. See for a handler that will provide core CGI features from a Python script allowing you to test without the overhead of Apache httpd or some other server.
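A sketch of what that built-in server looks like with nothing but the standard library (the port is arbitrary, and serving scripts out of a ./cgi-bin/ directory is the handler's default convention, not a detail from the post):

```python
# Serves CGI scripts placed under ./cgi-bin/ without Apache httpd.
from http.server import CGIHTTPRequestHandler, HTTPServer

def serve(port=8000):
    """Run the legacy script locally for integration testing."""
    HTTPServer(("127.0.0.1", port), CGIHTTPRequestHandler).serve_forever()
```

Drop the legacy script into cgi-bin/, call serve(), and the integration tests can hit http://127.0.0.1:8000/cgi-bin/yourscript.py.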
You need an integration test suite for each user story in the context you created in Step One. No exceptions. Each User. Each Story. A test to show that it works.
It is common to find bugs. Don't fix them now. Log them by marking the test case with the proper answer different from the answer you're getting.
Step Six
Refactor. Now that you have automated tests to prove the legacy CGI script really works, you need to disentangle the Python code into three distinct components.
- A Component to parse the request: the methods, cookies, headers, and URL.
- A Component that does useful work. This corresponds to the "model" and "control" part of the MVC design pattern.
- A Component that builds the response: the status, headers, and content.
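A skeletal version of that three-way split might look like this (the function names and the toy greeting logic are mine, purely to show the shape):

```python
def parse_request(environ):
    """Component 1: everything that touches the raw CGI/WSGI request."""
    return {
        "method": environ.get("REQUEST_METHOD", "GET"),
        "path": environ.get("PATH_INFO", "/"),
        "query": environ.get("QUERY_STRING", ""),
    }

def useful_work(request):
    """Component 2: the model/control logic worth preserving."""
    name = request["query"] or "world"
    return {"greeting": "Hello, %s" % name}

def build_response(result):
    """Component 3: everything that builds status, headers, and content."""
    body = result["greeting"].encode("utf-8")
    headers = [("Content-Type", "text/plain; charset=utf-8")]
    return "200 OK", headers, [body]
```

Only the middle function knows anything about the application; the outer two become disposable once Flask and Jinja take over.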
In many CGI scripts, there is often a hopeless jumble of bad code. Because you have tests in Step Four and Step Five, you can refactor and confirm the tests still pass.
If the code is already nicely structured, this step is easy. Don't plan on it being easy.
One goal is to eventually replace HTML page output creation with jinja. Similarly, another goal is to eventually replace parsing the request with flask. All of the remaining CGI-related features get pushed into a wsgi-compatible plug-in to a web server.
The component that does the useful work will have some underlying data model (resources, files, downloads, computations, something) and some control (post, get, different paths, queries.) We'd like to clean this up, too. For now, it can be one module.
After refactoring, you'll have a new working application. You'll have a new top-level CGI script that uses the built-in wsgiref module to do request and response processing. This is temporary, but is required to pass the integration test suite.
You may want to create an intermediate Component diagram to describe the new structure of the code.
Step Seven
Write an OpenAPI specification for the revised application. See for more information. Add the path processing so a request for openapi.json (or openapi.yaml) will produce the specification. This means updating unit and integration tests to add this feature.
Some of the document structures described in the OpenAPI specification will be based on the data model and control components factored out of the legacy code. It's essential to get these details right in the OpenAPI specification and the unit tests.
This may expose problems in the CGI's legacy behavior. Don't fix it now. Instead document the features that don't fit with modern APIs. Don't be afraid to use # TODO comments to show what should be fixed.
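As an illustration of the kind of artifact Step Seven produces, a hand-written skeleton spec served from an openapi.json route could start like this (the /report path is a placeholder, not from the post):

```python
import json

# Placeholder spec; real paths and schemas come from the factored-out
# model and control components.
OPENAPI_SPEC = {
    "openapi": "3.0.3",
    "info": {"title": "Legacy CGI service, modernized", "version": "1.0.0"},
    "paths": {
        "/report": {
            "get": {
                "responses": {
                    "200": {"description": "Result of the useful work"}
                }
            }
        }
    },
}

def openapi_json():
    """Body that a /openapi.json route would return."""
    return json.dumps(OPENAPI_SPEC, indent=2)
```

Unit tests can then assert that every route in the application appears under "paths", which keeps the spec honest as the refactoring proceeds.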
Step Eight
Use the 2to3 tool to convert ONLY the model and control components. Do not convert request parsing and response processing components; they will be discarded. This may involve additional redesign and rewrites depending on how bad the old code was.
Convert the unit tests for ONLY the model and control components.
Do not start writing view functions or HTML templates until underlying model and control module works. This is the foundation of the application. It is not tied to HTTP, but must exist and be tested independently.
Step Nine.
Rewrite the remaining unit tests manually. These unit tests will now use the Flask test client. The goal is to get back to 100% code coverage.
Update the C4 container, component, and code diagrams.
Step Ten.
Do not reuse the Apache httpd and CGI interface. This was terrible.
Step Eleven.
Step Twelve
Fix the bugs you found in Steps Four, Five, and Seven. You will be creating a new release with new, improved features.
tl;dr
This is a lot of work. There's no real alternative. CGI scripts need a lot of rework. | https://slott-softwarearchitect.blogspot.com/2021/08/ | CC-MAIN-2021-39 | refinedweb | 1,205 | 76.72 |
Python & NetworkX for Network Topology Data
I have tried many times to apply the "NetworkX" Python library to analyze the datasets found at this link:
Whenever I execute my Python code on the data found there I get, for example, unrealistic results.
g = nx.read_weighted_edgelist('out.topology')
g.size()
0
0 as a result for this huge dataset is completely wrong!
Could you please help me read this data with the "NetworkX" Python library?
1 answer
- answered 2017-11-15 00:37 rodgdor
As someone mentioned, trying to read the list as-is gets errors. However, if you get rid of the first line (% sym positive) and try the code below to create your graph it should be fine:
import networkx as nx

with open("out.topology", 'rt') as f:
    g = nx.parse_edgelist(f, create_using=nx.DiGraph(),
                          data=[('weight', float), ('timestamp', float)])
The data contains 4 columns: source | target | weight | timestamp of edge.
Just include that info in the arguments as shown in my snippet.
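To convince yourself the four-column parse works before pointing it at the big file, you can feed parse_edgelist a couple of inline lines (the edge values below are made up):

```python
import networkx as nx

# Two fake edges in the dataset's source | target | weight | timestamp format.
lines = [
    "1 2 1 1166184284",
    "2 3 1 1166184285",
]
g = nx.parse_edgelist(
    lines,
    create_using=nx.DiGraph(),
    data=[("weight", float), ("timestamp", float)],
)
print(g.size())               # 2 edges, not 0
print(g["1"]["2"]["weight"])  # 1.0
```

If this prints 2, the same arguments on the real file (minus its comment line) should produce a non-empty graph.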
- Algorithm to multiply edges of a Networkx graph
So my problem is to find the longest path from a node to another node (or the same node) in a graph implemented with Networkx library.
I don't want to add the edges' weights but multiply them and take the biggest result. Obviously, passing only once by each node or not at all.
For example, if I want to go from node 1 to node 4, the best result would be: 2 x 14 x 34 x 58
Thank you for your help !
- What is the most efficient library to work with Multilayer networks in Python or R?
Question:
Which libraries/packages are the most efficient (in terms of memory use and speed) to analyze multilayer networks?
Desired answer:
Ideally, the answer to this question would involve using each of these libraries to create a pre-defined network and calculate a few centrality measures (e.g. page rank), benchmarking the amount of time and RAM memory each library uses. Something similar to this benchmark here.
List of packages
I'm particularly interested in libraries/packages in R or Python. I list below some of the packages I was recommended, but feel free to add others to the list:
Libraries in Python:
- Pymnet - pure python
- NetworkX - pure python
- graph-tool - python and C++
- mammult - python and C
Libraries in R:
- multinet - R and C++
- multiplex - pure R
- multigraph - pure R
- MuxViz - pure R.
Note: I've only tried MuxViz, but I find it could be faster and it still has a limited number of centrality metrics.
- Object is not subscriptable networkx
import itertools
import copy
import networkx as nx
import pandas as pd
import matplotlib.pyplot as plt
#--
edgelist = pd.read_csv(' /e570c38bcc72a8d102422f2af836513b/raw/89c76b2563dbc0e88384719a35cba0dfc04cd522/edgelist_sleeping_giant.csv')
edgelist.head(10)
#--
nodelist = pd.read_csv('')
nodelist.head(5)
#--
g = nx.Graph()
#--
for i, elrow in edgelist.iterrows():
    g.add_edge(elrow[0], elrow[1], attr_dict=elrow[2:].to_dict())
    #--
    #print(elrow[0])
    #print(elrow[1])
    #print(elrow[2:].to_dict())
#--
g.edges(data=True)[0:5]
g.nodes(data=True)[0:10]
#--
print(format(g.number_of_edges()))
print(format(g.number_of_nodes()))
Gets me the following error:
Traceback (most recent call last):
  File "C:/Users/####/Main.py", line 22, in <module>
    g.edges(data=True)[0:5]
TypeError: 'EdgeDataView' object is not subscriptable
I have read a couple of other threads but nada. From my simple understanding the error is caused by [0:5], but I'm most likely wrong.
I'm a fairly basic coder and am trying to follow this tutorial and I get the error above.
import "nsIMsgIncomingServer.idl";
logon succeeded - persist password, if user chooses.
this is really dangerous.
this destroys all pref values; do not call this unless you know what you're doing!
this is also very dangerous.
this will remove the files associated with this server on disk.
these are generic getters/setters, useful for extending mailnews. Note: these attributes persist across sessions
for mail, this configures both the MDN filter and the server-side spam filters, if needed.
If we have set up to filter return receipts into our Sent folder, this utility method creates a filter to do that, and adds it to our filterList if it doesn't exist. If it does, it will enable it.
this is not used by news filters (yet).
If Sent folder pref is changed we need to clear the temporary return receipt filter so that the new return receipt filter can be recreated (by ConfigureTemporaryReturnReceiptsFilter()).
internal pref key - guaranteed to be unique across all servers
pretty name - should be "userid on hostname" if the pref is not set
helper function to construct the pretty name in a server type specific way - e.g., mail for foo@test.com, news on news.mozilla.org
hostname of the server
real hostname of the server (if server name is changed it's stored here)
userid to log into the server
real username of the server (if username is changed it's stored here)
protocol type, i.e. "pop3", "imap", "nntp", "none", etc.; used to construct URLs
the schema for the local mail store, such as "mailbox", "imap", or "news" used to construct URIs
can this server be removed from the account manager? for instance, local mail is not removable, but an imported folder is
If the server supports Fcc/Sent/etc, default prefs can point to the server.
Otherwise, copies and folders prefs should point to Local Folders.
By default this value is set to true via global pref 'allows_specialfolders_usage' (mailnews.js). For Nntp, the value is overridden to be false. If ISPs want to modify this value, they should do that in their rdf file by using this attribute. Please look at mozilla/mailnews/base/ispdata/aol.rdf for usage example.
If the password for the server is available either via authentication in the current session or from password manager stored entries, return false.
Otherwise, return true. If password is obtained from password manager, set the password member variable.
spam settings | http://doxygen.db48x.net/mozilla/html/interfacensIMsgIncomingServer.html | CC-MAIN-2016-50 | refinedweb | 406 | 64.3 |
Type-safe error handling for Python.
Project description
errs
Type-safe error handling for Python.
- Free software: MIT license
- Documentation:.
Installation
pip install errs
Usage
The @errs decorator marks any function or method that raises an Exception. Rather than handling the Exception explicitly, we collect the result of the function and then check whether an error occurred.
This leads to code that is more explicit about error handling as well as resilient to the raising of unforeseen exceptions. This style is similar to error handling in Go.
Additionally, all exceptions wrapped by @errs will be logged to the default Python logger on the error level. This provides a powerful abstraction where runtime behaviors are logged and separated from current application state.
from errs import errs

@errs
def raises():
    # type: () -> int
    raise Exception('this will get logged')
    return 0

def check_error():
    # type: () -> None
    out, err = raises()
    print('Error: {err}'.format(err=err.check()))

if __name__ == '__main__':
    check_error()  # prints Error: True
Credits
This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.
History
0.1.0 (2018-12-30)
- First release on PyPI.
Package Details: ttf-symbola 13.00-8
Dependencies (2)
- fontforge (fontforge-git) (make)
- poppler (poppler-minimal, poppler-lcdfilter, poppler-git, poppler-lcd) (make)
Required by (14)
- discord-canary (optional)
- discord-canary-electron-bin (optional)
- discord-development (optional)
- discord-development-electron-bin (optional)
- dmenu-supermario9590-git
- dwm-supermario9590-git
- dwm-zarcastic-git (optional)
- fonts-meta-base (requires font-symbola)
- gnome-shell-extension-improvedosk-git (optional)
- m17n-im-shortname-unicode-emoji-git
- nctelegram-git (optional)
- nodejs-gitmoji-cli (optional)
- st-supermario9590-git
- unicodemoticon (optional)
Sources (2)
grawlinson commented on 2021-03-02 19:59
Brottweiler commented on 2021-03-02 17:06
The git clone URL on this AUR package points to "font-symbola.git" so it downloads the wrong version. Manually downloading a snapshot downloads the right version of course. Why is it pointing to font-symbola.git?
caleb commented on 2021-01-17 08:12
@Togooroo I just re-downloaded the source and the checksum still matches. This is not out of date, but I suspect you have the same problem many other have had with this package. Because of a cooky issue with the way AUR package namespaces work there is a deleted (marked as deleted but still clonable) repository in the old namespace. This package's clone URL is different than it used to be. Please update by cloning from the URL shown in this package's meta data.
caleb commented on 2021-01-17 08:09
@IMBJR That package file is from a different repository. Please use the "Git Clone URL"s shown on this package and clone from scratch.
IMBJR commented on 2020-11-25 16:46
Oh wow. The PKGBUILD re-downloaded is:
pkgname=ttf-symbola
pkgver=13.00
pkgrel=1
....
That's clearly wrong. I'll have to manually download the file then.
grawlinson commented on 2020-11-25 16:39
IMBJR: You're using an old release. Update your package.
IMBJR commented on 2020-11-25 16:00
[imbjr@pc ttf-symbola]$ makepkg
==> Making package: ttf-symbola 13.00-1 (Wed 25 Nov 2020 15:53:40 GMT)
==> Checking runtime dependencies...
==> Checking buildtime dependencies...
==> Retrieving sources...
  -> Downloading ttf-symbola-13.00.zip...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 3581k  100 3581k    0     0   447k      0  0:00:08  0:00:08 --:--:--  564k
  -> Downloading LICENSE.pdf...
100 27995  100 27995    0     0  29813      0 --:--:-- --:--:-- --:--:-- 29813
==> Validating source files with sha512sums...
    ttf-symbola-13.00.zip ... FAILED
    LICENSE.pdf ... Passed
==> ERROR: One or more files did not pass the validity check!
Has the ZIP file been updated since you upgraded the PKGBUILD file?
caleb commented on 2020-07-27 12:09
@Soptik That checksum you posted in your out of date flag is already the checksum used by this package, it hasn't changed. What has changed is the base name for the package, I suspect you are using the outdated Git repository name. I suggest a fresh clone.
caleb commented on 2020-05-01 11:13
@balticer I have unflagged this as it is not out of date yet. You have the same issue noted in a few previous comments here. The output you posted with your flag shows that you have the wrong git repository checked out. Note the package base name has changed and you seem to be cloning the old repository. Please update and build from this package.
sjugge commented on 2020-04-17 05:59
@caleb, thanks. You're right, the package / repo listed here is fine.
I was working with the ttf-symbola repo which I've had on my system for some time, and shows this AUR page as well. I did not realize I probably missed an update about this. Switched to the font-symbola, no issue anymore.
Pinned Comments
caleb commented on 2020-03-25 05:53
In the latest release of this package I've introduced a workaround to force fresh downloads whenever we bump the package release number, not just the version. This should work around most people's caching issues resulting from upstream's replacing files without changing the filename or versions.
If you hit a checksum error now please flag this package as out of date. | https://aur.tuna.tsinghua.edu.cn/packages/ttf-symbola/ | CC-MAIN-2021-21 | refinedweb | 721 | 56.55 |
Hi peeps.
I just wrote a program because im bored.
Here is the code:
Code:
#include <iostream>
using namespace std;

int main()
{
    int DOB;
    cout << "Please enter your date of birth...\n" << endl;
    cin >> DOB;
    cin.ignore();
    if (DOB < 2006)
    {
        cout << "Erh, you were not born in this year!\n" << endl;
    }
    else if (DOB == 2006)
    {
        cout << "Oioi, howz your first year?\n" << endl;
    }
    else
    {
        cout << "WTF are you on aboot? You arn't even born yet!\n" << endl;
    }
    cin.get();
}

But im just wondering how to compile it, can anyone help me?
I'm using Visual C++ 2005 express edition. | https://cboard.cprogramming.com/cplusplus-programming/81709-code-compile.html | CC-MAIN-2017-09 | refinedweb | 112 | 95.98 |
Hi everyone, Tyler Franke here, and today I wanted to talk about an issue that you might run into when doing client push installs and have thin clients in your environment. When using client push installation to install the System Center Configuration Manager 2007 or System Center 2012 Configuration Manager client, you may run into a situation where the process fails on a thin client with the following error:
The error encountered is error 0x8004100e (i.e. Invalid Namespace)
The error is usually encountered because within WMI on the thin client, the namespace and/or classes that are required do not exist. For the Configuration Manager client to install, we expect and require that the cimv2 namespace exists and is accessible.
If you encounter this on an embedded (thin-client) system, please be aware that the manufacturer (OEM) chooses which Windows features to include and which to exclude. If you encounter this issue you will need to address it with your thin client manufacturer directly.
To verify whether this is the issue you are encountering, you can verify the existence of the cimv2 namespace by performing the following actions:
1. Execute WBEMTEST (e.g. Start-> Run-> WBEMTEST).
2. Once the Windows Management Instrumentation Tester appears click on the ‘Connect…’ button.
3. In the Connect window that appears, type only ‘root’ (without quotes) in the Namespace field and click the ‘Connect’ button.
4. Back on the Windows Management Instrumentation Tester window click the ‘Query…’ button.
5. In the Query window that appears, type select * from __NAMESPACE in the Enter Query field and click Apply.
In the Query Results window, verify whether or not the __NAMESPACE.Name="cimv2" namespace appears. If it does not then this is your issue. Below is an example where __NAMESPACE.Name="cimv2" does exist.
For more information please see the following:
Device Manager 2011 Overview:
Tasks for Managing Configuration Manager Clients on Windows Embedded Devices:
Deploying the Configuration Manager Client to Windows Embedded Device:
Tyler Franke
Keywords: WBEM_E_INVALID_NAMESPACE cm12 cm07 configmgr 2012 configmgr 2007 | https://blogs.technet.microsoft.com/configurationmgr/2012/12/05/configmgr-support-tip-client-push-to-a-thin-client-os-fails-with-error-0x8004100e/ | CC-MAIN-2017-39 | refinedweb | 337 | 53.92 |
A primer on macOS SDKs
Overview
A macOS SDK is an on-disk directory that contains header files and meta information for macOS APIs.
Apple distributes SDKs as part of the Xcode app bundle. Each Xcode version comes with one macOS SDK,
the SDK for the most recent released version of macOS at the time of the Xcode release.
The SDK is located at
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk.
Compiling Firefox for macOS requires a macOS SDK. The build system uses the SDK from Xcode.app by
default, and you can select a different SDK using the
mozconfig option
--with-macos-sdk:
ac_add_options --with-macos-sdk=/Users/username/SDKs/MacOSX10.11.sdk
Supported SDKs
First off, Firefox runs on 10.9 and above. This is called the “minimum deployment target” and is independent of the SDK version.
Our official Firefox builds compiled in CI (continuous integration) currently use the 10.11 SDK. Bug 1475652 tracks updating this SDK.
For local builds, all SDKs from 10.11 to 10.15 are supported. Firefox should compile successfully with all of those SDKs, but minor differences in runtime behavior can occur.
However, since only the 10.11 SDK is used in CI, compiling with different SDKs breaks from time to time. Such breakages should be reported in Bugzilla and fixed quickly.
Aside: Firefox seems to be a bit of a special snowflake with its ability to build with an arbitrary SDK. For example, at the time of this writing (June 2020), building Chrome requires the 10.15 SDK. Some apps even require a certain version of Xcode and only support building with the SDK of that Xcode version.
Why are we using such an old SDK in CI, you ask? It basically comes down to the fact that macOS hardware is expensive, and the fact that the compilers and linkers supplied by Xcode don’t run on Linux.
Obtaining SDKs
Sometimes you need an SDK that’s different from the one in your Xcode.app, for example to check whether your code change breaks building with other SDKs, or to verify the runtime behavior with the SDK used for CI builds.
The easy but slightly questionable way to obtain an SDK is to download it from a public github repo.
Here’s another option:
Have your Apple ID login details ready, and bring enough time and patience for a 5GB download.
Check these tables in the Xcode Wikipedia article and find an Xcode version that contains the SDK you need.
Look up the Xcode version number on xcodereleases.com and click the Download link for it.
Log in with your Apple ID. Then the download should start.
Wait for the 5GB Xcode_*.xip download to finish.
Open the downloaded xip file. This will extract the Xcode.app bundle.
Inside the app bundle, the SDK is at
Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk.
Effects of the SDK version
An SDK only contains declarations of APIs. It does not contain the implementations for these APIs.
The implementation of an API is provided by the OS that the app runs on. It is supplied at runtime,
when your app starts up, by the dynamic linker. For example, the AppKit implementation comes
from
/System/Library/Frameworks/AppKit.framework from the OS that the app is run on, regardless
of what SDK was used when compiling the app.
In other words, building with a macOS SDK of a higher version doesn’t magically make new APIs available when running on older versions of macOS. And, conversely, building with a lower macOS SDK doesn’t limit which APIs you can use if your app is run on a newer version of macOS, assuming you manage to convince the compiler to accept your code.
The SDK used for building an app determines three things:
Whether your code compiles at all,
which range of macOS versions your app can run on (available deployment targets), and
certain aspects of runtime behavior.
The first is straightforward: An SDK contains header files. If you call an API that’s not declared anywhere - neither in a header file nor in your own code - then your compiler will emit an error. (Special case: Calling an unknown Objective-C method usually only emits a warning, not an error.)
The second aspect, available deployment targets, is usually not worth worrying about:
SDKs have large ranges of supported macOS deployment targets.
For example, the 10.15 SDK supports running your app on macOS versions all the way back to 10.6.
This information is written down in the SDK’s
SDKSettings.plist.
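Since SDKSettings.plist is an ordinary property list, it can be inspected from a small script. For example (the key names found inside vary between SDK versions, so this sketch just returns the whole dictionary):

```python
import plistlib

def read_sdk_settings(path):
    """Return the contents of an SDK's SDKSettings.plist as a dict."""
    with open(path, "rb") as f:
        return plistlib.load(f)

# e.g. read_sdk_settings(
#     "/Applications/Xcode.app/Contents/Developer/Platforms/"
#     "MacOSX.platform/Developer/SDKs/MacOSX.sdk/SDKSettings.plist")
```

Printing the returned dictionary shows the SDK's version and deployment-target information directly.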
The third aspect, varying runtime behavior, is perhaps the most insidious and surprising aspect, and is described in the next section.
Runtime differences based on macOS SDK version
When a new version of macOS is released, existing APIs can change their behavior. These changes are usually described in the AppKit release notes:
Sometimes, these differences in behavior have the potential to break existing apps. In those instances, Apple often provides the old (compatible) behavior until the app is re-built with the new SDK, expecting developers to update their apps so that they work with the new behavior, at the same time as they update to the new SDK.
Here’s an example from the 10.13 release notes:
Responsive Scrolling in NSCollectionViews is enabled only for apps linked on or after macOS 10.13.
Here, “linked on or after macOS 10.13” means “linked against the macOS 10.13 SDK or newer”.
Apple’s expectation is that you upgrade to the new macOS version when it is released, download a new Xcode version when it is released, synchronize these updates across the machines of all developers that work on your app, use the SDK in the newest Xcode to compile your app, and make changes to your app to be compatible with any behavior changes whenever you update Xcode. This expectation does not always match reality. It definitely doesn’t match what we’re doing with Firefox.
For Firefox, SDK-dependent compatibility behaviors mean that developers who build Firefox locally can see different runtime behavior than the users of our CI builds, if they use a different SDK than the SDK used in CI. That is, unless we change the Firefox code so that it has the same behavior regardless of SDK version. Often this can be achieved by using APIs in a way that’s more in line with the API’s recommended use.
For example, we’ve had cases of broken placeholder text in search fields, missing or double-drawn focus rings, a startup crash, fully black windows, fully gray windows, broken vibrancy, and broken colors in dark mode.
In most of these cases, the breakage was either very minor, or it was caused by Firefox doing things
that were explicitly discouraged, like creating unexpected NSView hierarchies, or relying on unspecified
implementation details. (With one exception: In 10.14, HiDPI-aware
NSOpenGLContext rendering in
layer-backed windows simply broke.)
And in all of these cases, it was the SDK-dependent compatibility behavior that protected our users from being exposed to the breakage. Our CI builds continued to work because they were built with an older SDK.
We have addressed all known cases of breakage when building Firefox with newer SDKs. I am not aware of any current instances of this problem as of this writing (June 2020).
For more information about how these compatibility tricks work, read the Overriding SDK-dependent runtime behavior section.
Supporting multiple SDKs
As described under Supported SDKs, Firefox can be built with a wide variety of SDK versions.
This ability comes at the cost of some manual labor; it requires some well-placed
#ifdefs and
copying of header definitions.
Every SDK defines the macro
MAC_OS_X_VERSION_MAX_ALLOWED with a value that matches the SDK version,
in the SDK’s
AvailabilityMacros.h header. This header also defines version constants like
MAC_OS_X_VERSION_10_12.
For example, I have a version of the 10.12 SDK which contains the line
#define MAC_OS_X_VERSION_MAX_ALLOWED MAC_OS_X_VERSION_10_12_4
The name
MAC_OS_X_VERSION_MAX_ALLOWED is rather misleading; a better name would be
MAC_OS_X_VERSION_MAX_KNOWN_BY_SDK. Compiling with an old SDK does not prevent apps from running
on newer versions of macOS.
With the help of the
MAC_OS_X_VERSION_MAX_ALLOWED macro, we can make our code adapt to the SDK that’s
being used. Here’s an example where the 10.14 SDK changed the signature of
an
NSApplicationDelegate method:
- (BOOL)application:(NSApplication*)application
    continueUserActivity:(NSUserActivity*)userActivity
#if defined(MAC_OS_X_VERSION_10_14) && MAC_OS_X_VERSION_MAX_ALLOWED >= MAC_OS_X_VERSION_10_14
      restorationHandler:(void (^)(NSArray<id<NSUserActivityRestoring>>*))restorationHandler {
#else
      restorationHandler:(void (^)(NSArray*))restorationHandler {
#endif
  ...
}
We can also use this macro to supply missing API definitions in such a way that they don’t conflict with the definitions from the SDK. This is described in the “Using macOS APIs” document, under Using new APIs with old SDKs.
Overriding SDK-dependent runtime behavior
This section contains some more details on the compatibility tricks that cause different runtime behavior dependent on the SDK, as described in Runtime differences based on macOS SDK version.
How it works
AppKit is the one system framework I know of that employs these tricks. Let’s explore how AppKit makes this work, by going back to the NSCollectionView example from above:
Responsive Scrolling in NSCollectionViews is enabled only for apps linked on or after macOS 10.13.
For each of these SDK-dependent behavior differences, both the old and the new behavior are implemented
in the version of AppKit that ships with the new macOS version.
At runtime, AppKit selects one of the behaviors based on the SDK version, with a call to
_CFExecutableLinkedOnOrAfter(). This call checks the SDK version of the main executable of the
process that’s running AppKit code; in our case that’s the
firefox or
plugin-container executable.
The SDK version is stored in the mach-o headers of the executable by the linker.
One interesting design aspect of AppKit’s compatibility tricks is the fact that most of these behavior differences can be toggled with a “user default” preference. For example, the “responsive scrolling in NSCollectionViews” behavior change can be controlled with a user default with the name “NSCollectionViewPrefetchingEnabled”. The SDK check only happens if “NSCollectionViewPrefetchingEnabled” is not set to either YES or NO.
More precisely, this example works as follows:
1. -[NSCollectionView prepareContentInRect:] is the function that supports both the old and the new behavior.
2. It calls _NSGetBoolAppConfig for the value "NSCollectionViewPrefetchingEnabled", and also supplies a "default value function".
3. If the user default is not set, the default value function is called. This function has the name NSCollectionViewPrefetchingEnabledDefaultValueFunction.
4. NSCollectionViewPrefetchingEnabledDefaultValueFunction calls _CFExecutableLinkedOnOrAfter(13).
You can find many similar toggles if you list the AppKit symbols that end in
DefaultValueFunction,
for example by executing
nm /System/Library/Frameworks/AppKit.framework/AppKit | grep DefaultValueFunction.
Overriding SDK-dependent runtime behavior
You can set these preferences programmatically, in a way that
_NSGetBoolAppConfig() can pick them up,
for example with
registerDefaults
or like this:
[[NSUserDefaults standardUserDefaults] setBool:YES forKey:@"NSViewAllowsRootLayerBacking"];
The AppKit release notes mention this ability but ask for it to only be used for debugging purposes.
It’s interesting that they mention this at all because, as far as I can tell, none of these values are documented. | https://firefox-source-docs.mozilla.org/widget/cocoa/sdks.html | CC-MAIN-2020-50 | refinedweb | 1,873 | 54.93 |
Originally Posted by bekim
I did some modifications:
#include <iostream>
using namespace std;
int main ()
{
int numDoubles = 0;
char text[30];
cout<<"Enter sentence: ";
cin.getline(text,30);
for (int i = 0; i<(text[30]); i++)
if (isalpha(text[i]) == isalpha(text[i+1]))
numDoubles++;
cout << "There is "<< numDoubles << " character double.";
cin.get();cin.get();
return 0;
}
but it still doesn't work; the output is always 0
Now you're just guessing. Instead of writing nonsense, you should correct the mistake I pointed out. I showed you the correct spelling. I don't know how much simpler to make it.
Hi,
I'm wondering how I can retrieve data from a CheckBoxList in C#. Once I have the data, I'd like to store it in a session variable so I can use it on another page.
Thank you in advance.
Hello,
I have my ASP.NET MVC site set up with wildcard mapping. For requests that don't have a matching route, IIS 6.0 shows the default 404 error message even though I have customErrors configured in the web.config.
Any ideas?
Hi friends,
I want to open a web site in code by giving the server URL, but that server page restricts access if the session is null. Does the session pass from localhost to the server? How can I pass a session from localhost to the server?
This is urgent.
Hi guys, I'm new to WCF and I'm experiencing a problem when I pass my DataContract class:
[DataContract]
public class FileAttachment
{
    private byte[] binaryBuffer = null;
    private String FileName = string.Empty;

    [DataMember]
    public byte[] FileBufferByteArray
    {
        get { return binaryBuffer; }
        set { binaryBuffer = value; }
    }

    [DataMember]
    public string FileAttachmentName
    {
        get { return FileAttachmentName; }
        set { FileAttachmentName = value; }
    }
}
into the service. I'm constantly getting a System.StackOverflowException
For a class like the following,
[Serializable]
public class SampleClass
{
public String Name { get; set; }
}
and a class library containing the following void method, which simply assigns a value to the property:
public void AssignValueToClass(SampleClass classInstance)
{
    classInstance.Name = "Name assigned to Class";
}
When I call the above method through a Remoting client like the following:
RemotingClient client = MyRemoteObject.GetClient();
SampleClass instance = new SampleClass();
client.AssignValueToClass(instance);
MessageBox.Show(instance.Name); // No value is assigned to the class; Name remains null and the message box shows empty.
But when I simply add a reference to my class library, and simply create the instance of the cl
Hardware Support Final
Author: jal128
Updated: 2012-05-04
For Windows XP, use the ______ to see which service packs are installed.
a) Computer Properties box
b) Computer manager box
c) System Properties box
d)System Manager box
c) System Properties box
To view and manually install updates, click _____, and then follow the directions on-screen.
a) Start and Windows Updates
b) Start, Windows, and Windows Updates
c) Start, All Programs, Accessories, and Windows Updates
d) Start, All Programs, and Windows Updates
d) Start, All Programs, and Windows Updates
To protect a system against malicious attack, you need to verify that _____ software is configured to scan the system regularly and that it is up-to-date.
a) antispyware
b) firewall
c) prevention
d) antivirus
d) antivirus
Use the _____ command to defrag the drive from a command prompt window.
a) Optimize
b) Analyze
c) Defrag
d) Chkdsk
c) Defrag
To make sure the drive is healthy, you need to search for and repair file system errors using the Windows utility
a) Defrag
b) Chkdsk
c) Analyze
d) Optimize
b) Chkdsk
_____ creates restore points at regular intervals and just before you install software or hardware.
a) System Activation
b) System Protection
c) System Restoration
d) System Prevention
b) System Protection
The user folder for an account contains a group of subfolders called the user _____.
a) namespace
b) directory namespace
c) profile namespace
d) controlled namespace
c) profile namespace
If you enter a command and want to terminate its execution before it is finished, you can press _____ to do so.
a) Ctrl+Break
b) Ctrl+V
c) Ctrl+B
d) Ctrl+X
a) Ctrl+Break
Besides backing up user data or system files, you can also back up the entire hard drive using Windows Vista _____.
a) Automated System Backup
b) Complete PC recovery
c) Automated System Rocovery
d) Complete PC backup
d) Complete PC backup
The _____ command creates a subdirectory under the directory.
a) RD
b) MD
c) CD
d) AD
b) MD
A dynamic disk requires _____ MB of storage for the disk management database.
a) 1
b) 2
c) 3
d) 4
a) 1
You can use _____ to convert two or more basic disks to dynamic disks.
a) Disk Management
b) Windows Manager
c) System Management
d) Computer Management
a) Disk Management
When a hard drive is first sensed by Windows, it is assigned the _____ disk status.
a) Primary
b) Dynamic
c) Automatic
d) Basic
d) Basic
A disk marked as dynamic can be used with other dynamic disks in a spanned or _____ volume.
a) blocked
b) checkered
c) striped
d) lined
c) striped
If you are having problems with a hard drive, volume, or mounted drive, check _____ for events about the drive that might have been recorded there.
a) Event Viewer
b) Windows Event Log
c) Event Log
d) Event Log Manager
a) Event Viewer
Windows Vista Ultimate offers language packs through _____.
a) Microsoft Update
b) Windows Update
c) Language Update
d) System Update
b) Windows Update
The _____ tab of Task Manager lists system services and other processes associated with applications, together with how much CPU time and memory the process uses.
a) Applications
b) Services
c) Processes
d) Performance
c) Processes
If your desktop locks up, you can use _____ to refresh it.
a) Task Manager
b) Process Manager
c) Task List Manager
d) Task and Process Manager
a) Task Manager
The _____ tab in Task Manager lets you monitor network activity and bandwidth used.
a) Application
b) Process
c) Networking
d) Services
c) Networking
The _____ tab in Task Manager shows all users currently logged on the system.
a) Users
b) Processes
c) Networking
d) Applications
a) Users
You can use the _____ to find out what processes are launched at startup and to temporarily disable a process from loading.
a) System Access Utility
b) System Management Utility
c) System Stability Utility
d) System Configuration Utility
d) System Configuration Utility
_____ is a Windows utility that can be used to build your own customized console windows.
a) Microsoft Management Console
b) Microsoft Manager Console
c) Microsoft Management Components
d) Microsoft Manager Consoles
a) Microsoft Management Console
A(n) _____ is a single window that contains one or more administrative tools such as Device Manager or Disk Management.
a) snap-in
b) add-in
c) view
d) console
d) console
In a console, the individual tools are called _____.
a) consoles
b) add-ins
c) snap-ins
d) views
c) snap-ins
A console is saved in a file with an _____ file extension.
a) .mscx
b) .msc
c) .mmc
d) .mmcx
b) .msc
Device Manager reads data from the ______ key to build the information it displays about hardware comfigurations.
a) HKLM\HARDWARE
b) HKLM\SYSTEM
c) HKLM\CONFIG
d) HKU\HARDWARE
a) HKLM\HARDWARE
_____, under Windows Vista, is a summary index designed to measure the overall performance of a system.
a) Windows Rating
b) Windows Reliability Index
c) Windows Experience Index
d) Windows Performance Index
c) Windows Experience Index
You can use the _____ to find information about the installed processor and its speed, how much RAM is installed, and free space on the hard drive.
a) System Management Utility
b) Computer Management Utility
c) System Configuration Utility
d) System Information Utility
d) System Information Utility
The _____ is responsible for maintaining an index of files and folders on a hard drive to speed up Windows searches.
a) Windows performance monitor
b) Windows reliability monitor
c) Windows indexer
d) Windows database
c) Windows indexer
The _____ box can protect your system against user making unauthorized changes and against malware installing itself without your knowledge.
a) UAC
b) AAC
c) UAN
d) UAK
a) UAC
Most programs written for Windows have a(n) _____ routine which can be accessed from the Vista Programs and Features applet in the Control Panel, the XP Add or Remove Programs applet in the Control Panel, or a utility in the All Programs menu.
a) installer
b) packing
c) packaging
d) uninstall
d) uninstall
If a system is giving repeated startup errors or you have just removed several programs, you might want to search through _____ for entries left there by uninstalled or corrupted software that might be causing startup problems.
a) registry keys
b) files
c) folders
d) menus
a) registry keys
Use the Windows Vista _____ tool to deal with an immediate hardware or software problem.
a) Help and Support
b) Help and Problems Solutions
c) Problem Reports and Solutions
d) Problem Solutions
c) Problem Reports and Solutions
If a problem happens in the kernel mode of Windows, a _____ or blue screen error occurs.
a) STOP
b) HALT
c) BREAK
d) FAULT
a) STOP
You can quickly identify a problem with memory or eliminate memory as the source of a problem by using the _____ tool.
a) Windows Memory Diagnostics
b) Vista System Diagnostics
c) Windows System Diagnostics
d) Vista Memory Diagnostics
d) Vista Memory Diagnostics
For hardware problems, _____ is a Windows Vista/XP/2000 utility that runs in the background to put stress on drivers as they are loaded and running.
a) File Signature Verification
b) Driver Verifier
c) Vista Memory Diagnostics
d) Problem Reports and Solutions
b) Driver Verifier
The _____ tool displays information about digitally signed files, including device driver files and application files, and logs information to C:\Windows\Sigverif.txt
a) Driver Verifier
b) Vista Memory Diagnostics
c) File Signature Verification
d) Problem Reports and Solutions
c) File Signature Verification
The _____ tool can be used to direct information about drivers to a file, including information about digital signatures.
a) Driver Query
b) Driver Verifier
c) Problem Reports and Solutions
d) File Signature Verification
a) Driver Query
Vista configuration data is stored in the Vista _____ file.
a) Root Configuration File
b) Boot Initialization File
c) Root Retention File
d) Boot Configuration File
d) Boot Configuration File
The Vista Advanced Boot Options menu appears when a user presses _____ as Vista is loading.
a) F2
b) F4
c) F8
d) F12
c) F8
You can use the Windows RE command prompt window to restor registry files using those saved in the _____ folder.
a) c:\windows\system32\regback
b) c:\windows\regback
c) c:\windows\platform\config\regback
d) c:\windows\system32\config\regback
d) c:\windows\system32\config\regback
The _____ file is a hidden text file stored in the root directory of the active partition that Ntldr reads to see what operating systems are available and how to set up the boot.
a) Boot.ini
b) Boot.ldr
c) Config.sys
d) Boot.int
a) Boot.ini
The _____ is used when Windows 2000/XP does not start properly or hangs during the load.
a) Safe Mode Console
b) Restoration Console
c) Recovery Console
d) Reversion Console
c) Recovery Console
In the Recovery Console, the command _____ deletes a directory.
a) MD
b) RD
c) CD
d) LD
b) RD
To retrieve the last command entered in the Recovery Console, press _____ at the command prompt.
a) F1
b) F2
c) F3
d) F4
c) F3
To retreive the command entered in the Recovery Console one character at a time, press the _____ key.
a) F1
b) F2
c) F3
d) F4
a) F1
Enter the command _____ to see a list of all services currently installed, which includes device drivers.
a) listsvc
b) svclst
c) lstsvc
d) svclist
a) listsvc
To move to the root directory, use the command _____.
a) RD \
b) CD \
c) MD \
d) PD \
b) CD \
A compressed file uses a(n) _____ as the last character in the file extension.
a) underscore
b) dollar sign
c) equals sign
d) dash
a) underscore
Use the _____ command to extract .cab files.
a) Restore
b) Decompress
c) Expand
d) Decode
c) Expand
In the world of computers, the term _____ refers to the computer's physical components, such as the monitor, keyboard, motherboard, and hard drive.
a) software
b) middleware
c) architecture
d) hardware
d) hardware
The term _____ refers to the set of instructions that directs the hardware to accomplish a task.
a) software
b) hardware
c) middleware
d) stack
a) software
To perform a computing task, software uses hardware for four basic functions: _____.
a) input, storage, retrieval, and display
b) input, processing, storage, and output
c) input, storage, alteration, and output
d) output, input, analysis, and viewing
b) input, processing, storage, and output
Most input/output devices communicate with components inside the computer case through a wireless connection or through cables attached to the case at a connection called a(n) _____.
a) port
b) slot
c) interface
d) socket
a) port
The printer produces output on paper, often called _____ copy.
a) real
b) soft
c) hard
d) virtual
c) hard
A device that is not installed directly on the motherboard is called a(n) _____ device.
a) standard
b) peripheral
c) extraneous
d) perimeter
b) peripheral
The _____ is a group of microchips on the motherboard that control the flow of data and instructions to and from the processor.
a) chipset
b) block
c) bridge
d) gate
a) chipset
Primary storage is provided by devices called memory or _____ located on the motherboard and on some adapter cards.
a) ROM
b) BIOS
c) Flash
d) RAM
d) RAM
Most _____ drives consist of a sealed case containing platters or disks that rotate at a high speed.
a) optical
b) hard
c) flash
d) solid state
b) hard
A(n) _____ drive is considered standard equipment on most computer systems today because most software is distributed on CDs or DVDs.
a) floppy
b) flash
c) optical
d) hard
c) optical
USB _____ drives are compact, easy to use, and currently hold up to 64 GB of data.
a) worm
b) optical
c) hard
d) flash
d) flash
A(n) ______ is a set of rules and standards that any two entities use for communication.
a) protocol
b) language
c) interface
d) interaction
a) protocol
Everything in a computer is _____.
a) decimal
b) binary
c) hexadecimal
d) integer
b) binary
The _____ card, also called a graphics card, provides one or more ports for a monitor.
a) video
b) interface
c) modem
d) network
a) video
The _____ card provides a port for a network cable to connect the PC to a network.
a) video
b) interface
c) modem
d) network
d) network
Data and instructions are stored on special ROM (read-only memory) chips on the board and are called the _____.
a) flash
b) microcode
c) BIOS
d) symbols
c) BIOS
Motherboard settings are stored in a small amount of RAM located on the firmware chip and are called _____.
a) CMOS ROM
b) CMOS RAM
c) DMOS ROM
d) HMAC RAM
b) CMOS RAM
A(n) _____ is software that controls a computer.
a) application
b) operating system
c) interface routine
d) ROM routine
b) operating system
In 1986, _____ was introduced and quickly became the most popular OS among IBM computers and IBM-compatible computers using the Intel 8086 processors.
a) MS-DOS
b) Windows
c) OS/2
d) CP/M
a) MS-DOS
A _____ interface is an interface that uses graphics as compared to a command-driven interface.
a) text
b) menu-based
c) common user
d) graphical user
d) graphical user
Vista has a new 3D user interface called the _____ user interface.
a) Glass
b) Aero
c) Air
d) Shield
b) Aero
A _____ makes it possible to boot a computer into one of two OSs.
a) hypervisor
b) virtualization layer
c) dual boot
d) quick boot
c) dual boot
A(n) _____ is a portion of an OS that relates to the user and to applications.
a) hypervisor
b) interpreter
c) GUI
d) shell
d) shell
The _____ interfaces between the subsystems in user mode and the HAL.
a) application
b) executive services
c) primitive
d) non-privileged services
b) executive services
_____ are small programs stored on the hard drive that tell the computer how to communicate with a specific hardware device such as a printer, network card, or modem.
a) Abstraction layer
b) Device drivers
c) BIOS
d) Input routines
b) Device drivers
The _____ is usually on the right side of the task bar and displays open services.
a) status tray
b) display area
c) system tray
d) identification area
c) system tray
A _____ is one or more characters following the last period in a filename, such as .exe, .txt, or .avi.
a) filename
b) file extension
c) file version
d) file type
b) file extension
The _____ dialog box in Windows Vista appears each time a user attempts to perform an action that can be done only with administrative privileges.
a) User Account Confirmation
b) User Account Control
c) User Access Control
d) User Access Confirmation
b) User Account Control
Windows identifies file types primarily by the _____.
a) file contents
b) resource fork
c) file extension
d) companion stream
c) file extension
Windows offers two ways to sync files: _____.
a) Briefcase and Offline Files
b) Network Neighborhood and Offline Files
c) Synchronization Tasks and Online Files
d) Briefcase and Synchronization layer
a) Briefcase and Offline Files
For the _____ window, you can change the read-only, hidden, archive, and indexing attributes of the file.
a) Open
b) Attributes
c) Status
d) Properties
d) Properties
A(n) _____ is a list of items that is used to speed up a search.
a) database
b) index
c) shortcut
d) block
b) index
An applet has a _____ file extension.
a) .ctl
b) .crl
c) .cpl
d) .app
c) .cpl
A _____ is responsible for the PC before trouble occurs.
a) PC hardware technician
b) PC software technician
c) PC support technician
d) technical support technician
c) PC support technician
The most significant certifying organization for PC technicians is the _____.
a) Security+
b) Computing Technology Industry Association
c) Internet Engineering Task Force
d) Computer Technology Information Association
b) Computing Technology Industry Association
_____ has industry reconition, so it should be your first choice for certification as a PC technician.
a) Security+ Certification
b) Network+ Certification
c) Hardware+ Certification
d) A+ Certification
d) A+ Certification
When someone initiates a call for help, the technician starts the process by creating a(n) _____.
a) ticket
b) request
c) rule
d) order
a) ticket
One of the most important ways to achieve customer satisfaction is to do your best by being _____.
a) genial
b) on time
c) prepared
d) courteous
c) prepared
_____ customers are your customers who come to you and your company for service.
a) Internal
b) External
c) Partner
d) Related
b) External
To improve your attitude, you must do it from your _____.
a) heart
b) head
c) intellect
d) language
a) heart
_____ differences happen because we are from different countries and societies or because of physical handicaps.
a) Regional
b) Social
c) Cultural
d) Language
c) Cultural
_____ a problem increases your value in the eyes of your coworkers and boss.
a) Taking ownership of
b) Ceding responsibility for
c) Relinquishing control of
d) passing on
a) Taking ownership of
To provide good service, you need to have a good _____ when servicing customers on the phone or online, on site, or in a shop.
a) instincts
b) plan
c) script
d) phone voice
b) plan
When you arrive at the customer's site, greet them with a _____.
a) positive attitude
b) clean uniform
c) smile
d) handshake
d) handshake
Troubleshooting begins by _____.
a) Assessing the computer
b) interviewing the user
c) reading the work order
d) understanding the procedure
b) interviewing the user
Part of setting _____ is to establish a timeline with your customer for the completion of a project.
a) expectations
b) goals
c) requirements
d) objectives
a) expectations
_____ support requires more interaction with customers than any other type of PC support.
a) Web
b) Phone
c) Face-to-face
d) Remote
b) Phone
_____ is required if the customer must be told each key to press or command button to click.
a) Attitude
b) Arrogance
c) Patience
d) Objectivity
c) Patience
Knowing how to _____ a problem to those higher in the support chain is one of the first things you should learn on a new job.
a) delegate
b) recommend
c) downgrade
d) escalate
d) escalate
Organize your time by making _____ and sticking with them as best you can.
a) to-do list
b) plans
c) ticket lists
d) task lists
a) to-do list
_____ memory temporarily holds data and instructions as the CPU processes them.
a) Random array
b) Read-only
c) Repeatable access
d) Random access
d) Random access
_____ is used for a memory cache and is contained within the processor housing.
a) SRAM
b) DRAM
c) ROM
d) cSRAM
a) SRAM
_____ loses its data rapidly, and the memory controller must refresh it several thousand times a second.
a) Static RAM
b) Read-only memory
c) Dynamic RAM
d) Flash RAM
c) Dynamic RAM
Laptops use a smaller version of a DIMM called a(n) _____.
a) mDIMM
b) SO-DIMM
c) dDIMM
d) l-DIMM
b) SO-DIMM
A(n) _____ gets its name because it has independent pins on opposite sides of the module.
a) DIMM
b) RIMM
c) SIMM
d) TRIMM
a) DIMM
If the number of bits is not an odd number for odd parity or an even number for even parity, a _____ error occurs.
a) checksum
b) flip
c) word
d) parity
d) parity
A Rambus memory module is called a(n) _____.
a) RIMM
b) DIMM
c) SIMM
d) RDIMM
a) RIMM
SIMMs are rated by speed, measured in _____.
a) microseconds
b) milliseconds
c) picoseconds
d) nanoseconds
d) nanoseconds
To use System Information, in the Vista Start Search box or the Windows XP Run box, type _____ and press Enter.
a) msinfo32
b) msconfig
c) wininfo
d) msinfo
a) msinfo32
In the table found in the motherboard manual, a chip on a RIMM module is called a _____.
a) tool
b) gadget
c) widget
d) component
d) component
Higher-quality memory modules have _____ installed to reduce heat and help the module last longer.
a) fans
b) heat rails
c) epoxy
d) heat sinks
d) heat sinks
If the chip's surface is dull or matted, or you can scratch off the markings with a fingernail or knife, suspect that the chip has been _____.
a) re-burned
b) re-marked
c) defaced
d) installed new
b) re-marked
For _____ modules, small clips latch into place on each side of the slot to hold the module in the slot.
a) DRAM
b) RIMM
c) DIMM
d) SIMM
c) DIMM
When installing the RIMM, _____ on the edges of the RIMM module will help you to orient it correctly in the socket.
a) vents
b) colors
c) notches
d) symbols
c) notches
In Windows, memory errors can cause frequent _____ errors.
a) General Fault
b) General Protection Fault
c) Stack Overflow
d) Heap Overflow
b) General Protection Fault
A _____ drive has one, two, or more platters, or disks, that stack together and spin in unison inside a sealed metal housing that contains firmware to control reading and writing data to the drive and to communicate with the motherboard.
a) magnetic optical
b) floppy
c) magnetic hard
d) magnetic image
c) magnetic hard
The top and bottom of each disk of magnetic hard drive have a(n) _____ that moves across the disk surface as all the disks rotate on a spindle.
a) read head
b) read/write head
c) access head
d) platter device
b) read/write head
Each side, or surface, of one hard drive platter is called a _____.
a) bubble
b) wing
c) tail
d) head
d) head
Windows Vista technology that supports a hybrid drive is called _____.
a) ReadyRAM
b) ReadySpin
c) ReadyDrive
d) ReadyDisk
c) ReadyDrive
The total number of _____ on the drive determines the drive capacity.
a) sectors
b) wedges
c) platters
d) controllers
a) sectors
_____ on a circuit board inside the drive housing is responsible for writing and reading data to these tracks and sectors and for keeping track of where everything is stored on the drive.
a) Wetware
b) ROM
c) Firmware
d) Software
c) Firmware
During the _____ formatting process, you specify the size of the partition and what file system it will use.
a) low-level
b) high-level
c) medium-level
d) physical
b) high-level
The ATA interface standards are developed by Technical Committee T13 and published by _____.
a) ISO
b) IEEE
c) IETF
d) ANSI
d) ANSI
_____ is a system BIOS feature that monitors hard drive performance, disk performance, disk spin up time, temperature, distance between the head and the disk, and other machanical activities of the drive in order to predict when the drive is likely to fail.
a) S.M.A.R.T.
b) D.A.R.T.
c) S.T.A.R.T.
d) S.P.I.N.
a) S.M.A.R.T.
_____ transfers data directly from the drive to memory without involving the CPU.
a) RMA
b) DAM
c) SMA
d) DMA
d) DMA
_____ mode involves the CPU and is slower than DMA mode.
a) DMZ
b) PIO
c) RMA
d) PIN
b) PIO
With _____, you can connect and disconnect a drive while the system is running.
a) wet-swapping
b) cold-swapping
c) hot-swapping
d) block-swapping
c) hot-swapping
External _____ is up to six times faster than USB or FireWire.
a) SATA
b) SCSI
c) IDE
d) DMA
a) SATA
_____ is a standard for communication between a subsystem of peripheral devices and the system bus.
a) SAS
b) SATA
c) SCSD
d) SCSI
d) SCSI
A technology that configures two or more hard drives to work together as an array of drives is called _____.
a) FLAG
b) RAID
c) RAIN
d) RAISE
b) RAID
A drive _____ is a duplication of everything written to a hard drive.
a) network
b) array
c) handle
d) image
d) image
With _____, two hard drives are configured as a single volume.
a) binding
b) bonding
c) spanning
d) wrapping
c) spanning
When using an 80-conductor cable-select cable, the drive nearest the motherboard is the _____.
a) master
b) slave
c) subordinate
d) major
a) master
If you are mounting a hard drive into a bay that is too large, a _____ kit can help you securely fit the drive into the bay.
a) reversal bay
b) drive connecting
c) universal bay
d) bay adapter
c) universal bay
Problems with a device can sometimes be solved by updating the _____ or firmware.
a) device patches
b) device drivers
c) device scanners
d) device containers
b) device drivers
Devices and their device drivers are managed using _____.
a) System Manager
b) Computer Manager
c) Device Manager
d) Control Panel
c) Device Manager
_____ systems and peripherals have the U.S. Green Star, indicating that they satisfy certain energy-conserving standards of the U.S. Environmental Protection Agency (EPA).
a) Energy Star
b) Energy Saving
c) Energy Wise
d) Green Energy
a) Energy Star
To protect the data on a USB storage device while removing it, double-click the _____ icon in the notification area before removing the device.
a) Format
b) Safely Eject Hazard
c) Eject
d) Safely Remove Hardware
d) Safely Remove Hardware
FireWire and i.Link are common names for another peripheral bus officially named _____.
a) IEEE 1394
b) IEEE 802.1x
c) IEEE 1399
d) IEEE 1934
a) IEEE 1394
_____ supports speed up to 400 Mbps and is sometimes called FireWire 400.
a) IEEE 1394
b) IEEE 802.9
c) IEEE 1394a
d) IEEE 1394.2
c) IEEE 1394a
_____ can use cables up to 100 meters (328 feet), and uses a 9-pin rectangular connector.
a) 1394a
b) 1394b
c) 1394
d) 1394c
b) 1394b
IEEE 1394 uses _____ data transfer, meaning that data is transferred continuously without breaks.
a) asynchronous
b) isolated
c) symmetric
d) isochronous
d) isochronous
A serial port is provided by the motherboard or might be provided by an adapter card called a(n) _____ card.
a) interrupt
b) storage
c) I/O controller
d) Southbridge
c) I/O controller
_____ CRT monitors draw a screen by making two passes.
a) Multipass
b) Multipath
c) Noninterlaced
d) Interlaced
d) Interlaced
A _____ device is an input device that inputs biological data about a person, which can be used to identify the person by fingerprints, handprints, face, voice, eye, or handwritten signature.
a) biometric
b) biomechanical
c) simulated biology
d) biostatic
a) biometric
Ports on the motherboard can be disabled or enabled in _____ setup.
a) RAM
b) firmware
c) Northbridge
d) BIOS
d) BIOS
The _____ assignment refers to the system resources a parallel port will use to manage a print job.
a) COM
b) LPT
c) SRL
d) PRL
b) LPT
A _____ plug is a tool used to test a serial, parallel, USB, network, or other port.
a) loop-back
b) return
c) wrap
d) lock out
a) loop-back
Chips sometimes loosen because of temperature changes; this condition is called _____.
a) chip slip
b) block creep
c) chip creep
d) solder error
c) chip creep
For laptops, you can adjust the brightness of the display using _____ keys.
a) secondary
b) function
c) control
d) interface
b) function
The goal of _____ technology is to use sights, sounds, and animation to make computer output look as much like real life as possible.
a) realistic
b) transition
c) multimedia
d) reality
c) multimedia
Computers store data digitally and ultimately as a stream of only two numbers: _____.
a) 1 and 2
b) 0 and 1
c) 0 and 9
d) 8 and 1
b) 0 and 1
After the sound is recorded and digitized, many sound cards convert and compress the digitized sound to _____ format.
a) MP3
b) AU
c) WAV
d) AIF
a) MP3
MP3 sound files have a(n) _____ file extension.
a) .wav
b) .avi
c) .aif
d) .mp3
d) .mp3
A video _____ card lets you capture this video input and save it to a file on your hard drive.
a) capture
b) retouch
c) interface
d) rendering
a) capture
_____ are smooth and level areas on an optical disc.
a) Pits
b) Runs
c) Valleys
d) Lands
d) Lands
_____ are recessed areas on the surface of an optical disc.
a) Land
b) Grooves
c) Pits
d) Valleys
c) Pits
In order to read each sector on the spiral of a CD at a constant _____, the disc spins faster when the read-write head is near the center of the disc.
a) rotational velocity
b) linear velocity
c) linear torque
d) rotational momentum
b) linear velocity
A double-sided, dual-layer DVD can hold _____ GB.
a) 4.7
b) 8.5
c) 9.4
d) 17
d) 17
_____ drives are a great method of keeping backups of data stored on your hard drive.
a) External hard
b) External optical
c) Internal optical
d) External floppy
a) External hard
_____ is a common compression standard for storing photos.
a) GIF
b) AIF
c) WAV
d) JPEG
d) JPEG
A _____ provides slots for memory cards and can be an internal or external device.
a) card slot
b) media processor
c) media reader
d) media storage device
c) media reader
For EIDE, there are four choices for drive installations: _____.
a) primary slave, primary major, primary secondary, secondary primary
b) primary master, primary slave, secondary master, and secondary slave
c) primary master, primary secondary, secondary master, and secondary secondary
d) primary minor, primary major, secondary major, and secondary minor.
b) primary master, primary slave, secondary master, and secondary slave
A CD can hold about _____ of data.
a) 600 MB
b) 700 MB
c) 1 GB
d) 1.5 GB
b) 700 MB
If a disc gets stuck in the drive, use the _____ to remove it.
a) emergency screw hole
b) paper clip hole
c) emergency eject hole
d) screwdriver slot
c) emergency eject hole
To find out what to do if you are accidentally exposed to a dangerous solution, look at the instructions printed on the can or check out the material _____ sheet.
a) hazards
b) safety discovery
c) security data
d) safety data
d) safety data
_____, caused by lifting heavy objects, is one of the most common injuries that happen at work.
a) Neck injury
b) Arm injury
c) Hand injury
d) Back injury
d) Back injury
_____ from cigarettes can accumulate on fans, causing them to jam, which in turn will cause the system to overheat.
a) Nicotine
b) Carbon dioxide
c) Tar
d) Carbon monoxide
c) Tar
Proper _____ is essential to keeping a system cool.
a) liquid circulation
b) air circulation
c) heat circulation
d) heat containment
b) air circulation
To keep boot sector viruses at bay, in _____ you can disable the ability to write to the boot sector of the hard drive.
a) Firmware
b) Drive firmware
c) BIOS setup
d) OS setup
c) BIOS setup
Dispose of _____ in the regular trash.
a) Alkaline batteries
b) Button batteries used in digital cameras
c) Laser printer toner cartridges
d) Ink-jet printer cartridges
a) Alkaline batteries
Dispose of _____ by returning them to the original dealer or by taking them to a recycling center.
a) Alkaline batteries
b) Button batteries
c) Laser printer toner cartridges
d) Computer cases and power supplies
b) Button batteries
Return _____ to the manufacturer or dealer to be recycled.
a) Laser printer toner cartridges
b) Storage media
c) computer cases
d) Alkaline batteries
a) Laser printer toner cartridges
Most CRT monitors today are designed to discharge after sitting unplugged for _____ minutes.
a) 15
b) 30
c) 45
d) 60
d) 60
When someone purchases software from a software vendor, that person has only purchased a _____ for the software, which is the right to use it.
a) license
b) manual
c) copy
d) subscription
a) license
The right to copy the work, called a _____, belongs to the creator of the work or others to whom the creator transfers this right.
a) standards mark
b) reference mark
c) copy standard
d) copyright
d) copyright
_____ are intended to legally protect the intellectual property right of organizations or individuals to creative works, which include books, images, and software.
a) Trademarks
b) Copyrights
c) Service marks
d) Copymarks
b) Copyrights
The _____ was designed in part to protect software copyrights by requiring that only legally obtained copies of software be used.
a) International Copyright Act of 1999
b) Digital Millennium Copyright Act
c) Federal Copyright Act of 1976
d) Federal Copyright Act of 1965
c) Federal Copyright Act of 1976
The rule _____ is the most powerful.
a) dissect the problem
b) observe the problem
c) keep good notes
d) divide and conquer
d) divide and conquer
If you want to recover lost data on a hard drive, don't _____ the drive.
a) write anything to
b) read anything from
c) write anything to the boot sector
d) write anything to the allocation tables
a) write anything to
As you solve computer problems, always keep in mind that you don't want to make things worse, so you should use the _____ invasive solution.
a) maximum
b) directly
c) most
d) least
d) least
Good _____ helps you take what you learned into the next troubleshooting situation, train others, develop effective preventive maintenance plans, and satisfy any audits or customer or employer queries about your work.
a) procedures
b) documentation
c) memory
d) communication skills
b) documentation
Upgrading to a better edition of Vista can easily be accomplished by using the _____ feature.
a) Windows Anywhere Upgrade
b) Windows Anytime Upgrade
c) Windows Anyplace Upgrade
d) Window Automatic Upgrade
b) Windows Anytime Upgrade
Use a _____-bit OS if you need increased performance and your system has enough resources to support it.
a) 8
b) 16
c) 32
d) 64
d) 64
Use _____ to find out which devices are installed in your XP system.
a) Computer Manager
b) User Manager
c) Device Manager
d) System Manager
c) Device Manager
A(n) _____ server is used to hold the setup files from a Windows CD or DVD on the network; then, at each PC, you can execute the Setup program from the server.
a) distribution
b) image
c) software
d) file
a) distribution
A(n) _____ installation is performed by storing the answers to installation questions in a text file or script that Windows calls an answer file.
a) scripted
b) attended
c) unattended
d) replacement
c) unattended
The Windows utility _____ is used to remove configuration settings, such as the computer name that uniquely identifies the PC.
a) msconfig.exe
b) msprep.exe
c) w32prep.exe
d) sysprep.exe
d) sysprep.exe
For some brand-name computers, the hard drive contains a _____ partition that can be used to reinstall Windows.
a) hidden resolution
b) hidden recovery
c) visible recovery
d) visible rebuild
b) hidden recovery
A _____ computer is software that simulates the hardware of a physical computer.
a) real
b) logical
c) virtual
d) functional
c) virtual
The _____ partition is the active partition of the hard drive.
a) system
b) data
c) recovery
d) logical
a) system
A Windows _____ is a logical group of computers and users that share resources, where administration, resources, and security on a workstation are controlled by that workstation.
a) domain
b) workgroup
c) task group
d) work unit
b) workgroup
A Windows domain is a type of _____ network, which is a network where resources are managed by a centralized computer.
a) time-sharing
b) virtual
c) server/server
d) server/client
d) server/client
An example of a network operating system is Windows Server 2008, which controls a network using the directory database called _____.
a) Net Directory
b) Win Directory
c) Active Directory
d) Open Directory
c) Active Directory
The _____ command is used to copy the information from the old computer to a server or removable media.
a) getstate
b) loadstate
c) setstate
d) scanstate
d) scanstate
The _____ command is used to copy the information to a new computer.
a) getstate
b) loadstate
c) setstate
d) scanstate
b) loadstate
During the normal Windows XP installation, setup causes the system to reboot _____ times.
a) one
b) two
c) three
d) four
c) three
To convert a FAT32 volume to an NTFS volume, first back up all important data on the drive and then use this command at a command prompt: _____, where D: is the drive to be converted.
a) convntfs D:
b) convert D: /FS:NTFS /OFS:FAT
c) convfs D: /FS:NTSF
d) convert D: /FS:NTFS
d) convert D: /FS:NTFS
For an always-up broadband connection (such as a cable modem or DSL), select _____ when you configure updates.
a) Never
b) Notify me but don't automatically download or install them
c) Automatic
d) Automatically download but do not install them
c) Automatic
When you configure updates, if the PC doesn't have an always-up Internet connection (such as dial-up), you might want to select _____.
a) Never
b) Notify me but don't automatically download or install them
c) Automatic
d) Automatically download but do not install them
d) Automatically download but do not install them
The post Guru Night at Red Hat Summit: Hands-on experience with serverless computing appeared first on Red Hat Developer Blog.
Guru Night takes place Wednesday, May 8 from 5:00 p.m. to 8:00 p.m. at the Boston Convention and Event Center in ML2 East-258AB. (Doubtless there will be a map to show you where or what ML2 East etc. is; we have no idea.) Head to the signup page and fill out your details now.
We felt compelled to point that out. But read on.
Although most of us would attend a serverless workshop sight unseen, here’s more information that may help you decide to sign up. In this three-hour, hands-on workshop, designed specifically for software developers and architects, Burr and team will cover:
Istio brings an array of powerful features to the basic Kubernetes platform, including sidecar proxies and a service mesh for smarter canary deployments and dark launches. (Check out Don Schenck’s series of Istio blog posts for a great overview of the technology, how it works, and what it does.)
Knative is the heart of the serverless infrastructure you’ll use. Built on Istio and Kubernetes, it provides scale-to-zero support for services that aren’t currently in demand. In addition, it can autoscale services in response to events and build container images inside the Kubernetes cluster. (Our friend Kamesh Sampath has an excellent Knative tutorial that’s not to be missed.)
Anxious that your hard-earned Java skills may not be as useful in the world of containers, microservices, and serverless computing? Worry not. Quarkus is a revolutionary technology that optimizes Java for those environments, delivering a performance improvement of 10x to 100x in many cases. It’s supersonic, subatomic Java.
Camel K is built on Camel’s event-driven architecture. It is a lightweight integration framework that runs natively on OpenShift or Kubernetes that is specifically designed for serverless computing and microservices. Check out an introduction to the platform itself and an article on how Camel K works with Knative.
Data pipelines and event streams, powered here by Apache Kafka, are a vital part of many modern applications. The Kafka website describes it as "wicked fast." It's an established technology, and it runs in production in thousands of enterprises today.
Want hands-on experience with these cutting-edge technologies? Yah, you betcha. Sign up now.
As you would expect, you must be a registered Red Hat Summit attendee to attend this session. If you haven’t registered yet, visit Red Hat Summit to sign up. See you in Boston!
From zero to Quarkus and Knative: The easy way

Quarkus is a Kubernetes-native Java framework crafted from the best-of-breed Java libraries and standards. Knative is a Kubernetes-based platform to build, deploy, and manage modern serverless workloads. You can learn more in this article series.
This article does not provide a full deep dive on Knative or Quarkus. Instead, I aim to give you a quick and easy way to start playing with both technologies so you can further explore on your own.
In the following examples, I assume you’ve already installed a Minishift machine. Minishift is a tool that helps you run OKD locally by launching a single-node OKD cluster inside a virtual machine. With Minishift, you can try out OKD or develop with it, day-to-day, on your local machine (Linux, Windows, or Mac).
Please keep in mind that, in this example, I’m using the upstream version of Minishift; of course, you can replicate and run all the stuff on the Container Development Kit (CDK) by Red Hat.
I’ll execute all the following commands as a cluster administrator in the Red Hat OpenShift environment. Thus, you should switch to an admin user before continuing.
To begin, we need to set up Knative on Minishift. To do this, we need to clone the Minishift add-ons for Knative by the OpenShift team:
$ git clone
$ minishift addons install minishift-addons/knative-istio
$ minishift addons install minishift-addons/knative-build
$ minishift addons install minishift-addons/knative-serving
$ minishift addons install minishift-addons/knative-eventing
After that, we can start the installation process for the first add-on: knative-istio.
$ minishift addons apply knative-istio
Once that step is complete, you can install the Knative resources:
$ minishift addons apply knative-build
$ minishift addons apply knative-serving
$ minishift addons apply knative-eventing
When you’ve finished with all this setup, you should find a bunch of new pods running for enabling your Minishift to Knative:
$ oc get pods --all-namespaces
...
knative-build      build-controller-85b9c8d7f-f6jj4                1/1   Running   0   2m
knative-build      build-webhook-66bfc7ffc8-8s9tq                  1/1   Running   0   2m
knative-eventing   controller-manager-0                            1/1   Running   0   1m
knative-eventing   eventing-controller-7d69f6945b-mhrrj            1/1   Running   0   1m
knative-eventing   in-memory-channel-controller-569f959967-qkt96   1/1   Running   0   1m
knative-eventing   in-memory-channel-dispatcher-c54844b75-5l7bv    1/1   Running   0   1m
knative-eventing   webhook-667567bc86-fz4p7                        1/1   Running   0   1m
knative-serving    activator-5c8d4bbc9d-4mt6l                      1/1   Running   0   1m
knative-serving    activator-5c8d4bbc9d-qw4jh                      1/1   Running   0   1m
knative-serving    activator-5c8d4bbc9d-z65gt                      1/1   Running   0   1m
knative-serving    autoscaler-5d6dcf98f8-pcmqb                     1/1   Running   0   1m
knative-serving    controller-98c69fcc-xjwls                       1/1   Running   0   1m
knative-serving    webhook-68dc778cb5-xmgwm                        1/1   Running   0   1m
Before playing with Knative Build, we should set up another prerequisite for this quickstart: a container image registry for our Quarkus Knative Build.
Unfortunately, as we’ll see in few moments, the Quarkus quickstart example will generate (through Maven) Knative Build resources’ files using Kaniko as the Knative Build template. I’ve tried to make Kaniko work with OpenShift internal registry but I had no luck with that. I also opened an issue on GitHub for reporting the behavior.
Unfortunately, Kaniko doesn’t seem to play well with Quay.io Registry either. Another approach could be to move the Knative Build Template from Kaniko to Buildah.
But, we want the easiest and fastest way for getting Knative and Quarkus up & running, for this reason, we’ll use the Dockerhub online registry instead.
To start, log in to or register with Docker Hub, and then you're ready to create your container repository, named quarkus-greetings.
We can now move forward with the Knative Build.
We’re now ready to check out the Quarkus quickstarts repo and start playing with Knative Build.
$ git clone
$ cd quarkus-quickstarts/getting-started-knative
Then we can execute the Maven command to build the Kubernetes resource files. We'll pass the following parameters to the Maven command:
$ mvn -Dcontainer.registry.url='' \
>     -Dcontainer.registry.user='alezzandro' \
>     -Dcontainer.registry.password='XXXXXXXYYYYYYYZZZZZZZZ' \
>     -Dgit.source.revision='master' \
>     -Dgit.source.repo.url='' \
>     -Dapp.container.image='quay.io/alezzandro/quarkus-greetings' \
>     clean process-resources
[INFO] Scanning for projects...
[INFO]
[INFO] ----------------< org.acme:quarkus-quickstart-knative >-----------------
[INFO] Building quarkus-quickstart-knative 1.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ quarkus-quickstart-knative ---
[INFO] Deleting /home/alex/gitprojects/quarkus-quickstarts/getting-started-knative/target
[INFO]
[INFO] --- build-helper-maven-plugin:3.0.0:add-resource (add-resource) @ quarkus-quickstart-knative ---
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ quarkus-quickstart-knative ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/alex/gitprojects/quarkus-quickstarts/getting-started-knative/src/main/resources
[INFO] Copying 6 resources to /home/alex/gitprojects/quarkus-quickstarts/getting-started-knative/target/knative
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.840 s
[INFO] Finished at: 2019-03-29T13:29:55+01:00
[INFO] ------------------------------------------------------------------------
The command creates the resource files in the target/knative directory:

$ ls target/knative/
build-sa.yaml  container-registry-secrets.yaml  deploy-key.yaml  kaniko-pvc.yaml  m2-pvc.yaml  service.yaml
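Among these, service.yaml is the most interesting: it declares the Knative Service that ties the build and the deployment together. Its exact contents depend on the quickstart version; the sketch below is only illustrative of the v1alpha1-era layout, with the image name and ServiceAccount taken from our example and everything else assumed:

```yaml
# Illustrative sketch of a generated service.yaml (v1alpha1-era Knative API);
# the real generated file may differ between quickstart versions.
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: quarkus-quickstart-knative
spec:
  runLatest:
    configuration:
      build:
        serviceAccountName: build-bot   # the ServiceAccount from build-sa.yaml
        template:
          name: kaniko                  # the Kaniko build template
      revisionTemplate:
        spec:
          container:
            image: quay.io/alezzandro/quarkus-greetings
```

The key design point is that the build definition and the serving definition live in one resource, so applying it both triggers the Kaniko build and registers the serverless service.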
By the way, the Maven command can also take Git credentials as input for pulling down a private Git repo. In any case, we just used the public Quarkus quickstart repo, so we don't need the generated deploy-key.yaml file and its reference in the ServiceAccount contained in build-sa.yaml. We need to remove them:
$ rm target/knative/deploy-key.yaml
$ cat target/knative/build-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-bot
secrets:
- name: container-registry-secrets
- name: deploy-key    <- We need to remove this line
We can now create the OpenShift project that will hold all these prepared resources:
$ oc new-project quarkus-greetings
Now using project "quarkus-greetings" on server "".
Before going forward, let’s set special permissions for the just-created namespace, as suggested by the Knative Minishift Addons GitHub repo:
$ oc adm policy add-scc-to-user anyuid -z default -n quarkus-greetings
$ oc adm policy add-scc-to-user privileged -z default -n quarkus-greetings
And finally, we can deploy our Kubernetes resources:
$ oc apply --recursive --filename target/knative/
serviceaccount/build-bot created
secret/container-registry-secrets created
persistentvolumeclaim/kaniko-cache created
persistentvolumeclaim/m2-cache created
service.serving.knative.dev/quarkus-quickstart-knative created
After that, Knative Build Controller will notice the new resource, quarkus-quickstart-knative, and will start the build:
$ oc get pods
NAME                                     READY   STATUS     RESTARTS   AGE
quarkus-quickstart-knative-00000-lrb2b   0/1     Init:0/3   0          4s
This pod is composed of three init-containers that will initialize the credentials, clone the Git repo, build it, and finally push the image to the remote registry.
We can also take a look at the Dockerfile that Kaniko will use for building our image in the "build-step-docker-push" container.
The Dockerfile is a multi-stage one containing three "FROM" instructions, so three stages will be used. This means Kaniko will run the first two stages in sequence to build the Quarkus app's binary and then copy the resulting binary into the last (third) container.
We can finally follow the status of the build with these simple commands:
$ oc get pods
NAME                                     READY   STATUS    RESTARTS   AGE
quarkus-quickstart-knative-00000-t8228   0/1     Running   0          1m

$ oc logs -f -c build-step-docker-push quarkus-quickstart-knative-00000-t8228
...
INFO[0695] EXPOSE 8080
INFO[0695] cmd: EXPOSE
INFO[0695] Adding exposed port: 8080/tcp
INFO[0695] WORKDIR /work/
INFO[0695] cmd: workdir
INFO[0695] Changed working directory to /work
INFO[0695] Taking snapshot of full filesystem...
INFO[0695] Skipping paths under /kaniko, as it is a whitelisted directory
INFO[0695] Skipping paths under /workspace, as it is a whitelisted directory
INFO[0695] Skipping paths under /cache, as it is a whitelisted directory
INFO[0695] Skipping paths under /builder/home, as it is a whitelisted directory
INFO[0695] Skipping paths under /run/secrets, as it is a whitelisted directory
INFO[0695] Skipping paths under /var/run, as it is a whitelisted directory
INFO[0695] Skipping paths under /dev, as it is a whitelisted directory
INFO[0695] Skipping paths under /sys, as it is a whitelisted directory
INFO[0695] Skipping paths under /proc, as it is a whitelisted directory
INFO[0696] No files were changed, appending empty layer to config. No layer added to image.
INFO[0696] ENTRYPOINT ["./application","-Dquarkus.http.host=0.0.0.0"]
2019/03/29 19:16:06 pushed blob sha256:72f1a1307b6f2f9dd158e31e62f06529b09652fffb2630a51c0f3e8fcdcb62ba
2019/03/29 19:16:06 pushed blob sha256:4b3c899486387dd62fe5c4a31eeb37a073dbd9e0ee0065d47bed98ffd8e0889b
2019/03/29 19:16:15 pushed blob sha256:040efd5dc88c66de8192eb1a9f9f764e49d5466381b04b1aaf528caeea156e40
2019/03/29 19:16:16 pushed blob sha256:f0034e1b296e24109590a6436bdfd4ad44500a3b8c76eb21f300861e22c40540
2019/03/29 19:16:18 pushed blob sha256:21d95e340ee05b20c5082eab8847957df806532886d34608fcf6f49e69a21360
2019/03/29 19:16:18 index.docker.io/alezzandro/quarkus-greetings:latest: digest: sha256:fe0ef7d5b8f4d7ac334a9d94d4c8a8ac9f51b884def36e6660d4c46d09ac743c size: 917
Once the build process is complete, we have all the tools in place for getting our serverless service up and running (if requested). I wrote "if requested" because we just built a serverless application that will be spun up ONLY if a request comes to our service.
We can now take a look at the created Knative resources:
$ oc get ksvc
NAME                         DOMAIN                                                     LATESTCREATED                      LATESTREADY                        READY   REASON
quarkus-quickstart-knative   quarkus-quickstart-knative.quarkus-greetings.example.com   quarkus-quickstart-knative-00000   quarkus-quickstart-knative-00000   False   RevisionFailed

$ oc get configuration
NAME                         LATESTCREATED                      LATESTREADY                        READY   REASON
quarkus-quickstart-knative   quarkus-quickstart-knative-00000   quarkus-quickstart-knative-00000   False   RevisionFailed

$ oc get revision
NAME                               SERVICE NAME                               READY   REASON
quarkus-quickstart-knative-00000   quarkus-quickstart-knative-00000-service   False   NoTraffic

$ oc get route.serving.knative.dev
NAME                         DOMAIN                                                     READY   REASON
quarkus-quickstart-knative   quarkus-quickstart-knative.quarkus-greetings.example.com   True
Don’t worry about the various “False” and “RevisionFailed” status messages. They’re just reporting that “NoTraffic” is coming to our service, so the controller and autoscaler placed our application in idle.
Moving forward, we’re ready to launch the first request to our service. We’ll use the
curl binary for making the HTTP request, and we need to contact the Knative Ingress Gateway that we’ll find in the
istio-system namespace:
$ oc get pods -n istio-system | grep gateway
istio-egressgateway-7b46794587-c9mm8      1/1   Running   1   5h
istio-ingressgateway-57f76dc4db-7khgt     1/1   Running   1   5h
knative-ingressgateway-56d46fcb88-kmc4g   1/1   Running   1   2h
Keep in mind that Knative uses the HTTP "Host" header to route requests to its services. For this reason, we'll use some tricks to get the correct IP address and port to contact, and then we'll pass the correct hostname contained in the route.serving.knative.dev resource that we discovered before:
$ INGRESSGATEWAY=knative-ingressgateway
$ IP_ADDRESS="$(minishift ip):$(oc get svc $INGRESSGATEWAY --namespace istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')"
$ curl -H 'Host: quarkus-quickstart-knative.quarkus-greetings.example.com' $IP_ADDRESS/greeting/alex
hello alex
Our service just replied to us! Let’s see what that means in terms of Kubernetes resources:
$ oc get pods
NAME                                                           READY   STATUS      RESTARTS   AGE
quarkus-quickstart-knative-00000-874sq                         0/1     Completed   0          1h
quarkus-quickstart-knative-00000-deployment-688fcd9f4f-wccsf   2/2     Running     0          1m
As you can see, a pod spawned from our previously built image is up and running and serving requests. Let's take a closer look:
$ oc describe pod quarkus-quickstart-knative-00000-deployment-688fcd9f4f-wccsf
Name:       quarkus-quickstart-knative-00000-deployment-688fcd9f4f-wccsf
Namespace:  quarkus-greetings
...
Status:     Running
...
Containers:
  user-container:
    ...
    Image:  index.docker.io/alezzandro/quarkus-greetings@sha256:fe0d37b98347a321769880030951cfd1a767a0cf1f105f4665ab3a70050a6d2c
    ...
  queue-proxy:
    Image:  gcr.io/knative-releases/github.com/knative/serving/cmd/queue@sha256:ce66dd18f0d504e40e050f31b9de4315f8c225f308e9885eb4cbd82b2ba03c1a
    ...
Even though I filtered the output of the previous command, you can see that the running pod is composed of a user-container (the quarkus-greetings service) and a queue-proxy (the sidecar container that bridges our container to the Knative system).
I have tried this example many times on my Minishift appliance and, like any software, it can fail. If something doesn’t work properly in the Serving part of this demo, the best way to start troubleshooting is to search in the “knative-serving” namespace:
$ oc get pods -n knative-serving
NAME                          READY   STATUS    RESTARTS   AGE
activator-6677bbc9d6-2ql94    1/1     Running   0          51m
activator-6677bbc9d6-p6l7z    1/1     Running   0          51m
activator-6677bbc9d6-s84zk    1/1     Running   0          51m
autoscaler-5d87cc6b75-bjntw   1/1     Running   0          58m
controller-f4c59f474-z5x4n    1/1     Running   1          2h
webhook-5d9cbd46f7-q5rc6      1/1     Running   1          2h
Take a look at the logs of activator(s), autoscaler, and controller pods. If you see errors or failures in the logs, try to restart them with a simple command like this:
$ oc delete pod POD_NAME
Don’t worry about the consequences. Kubernetes Deployments resources will spawn a brand new pod once you manually delete one.
That’s all, folks. I hope you’ll try this demo for yourself, and may the kube be with you!
The post From zero to Quarkus and Knative: The easy way appeared first on Red Hat Developer Blog.
The evolution of serverless and FaaS: Knative brings change

Knative's arrival makes this a good time to take a fresh look at serverless and FaaS to see where things stand.

What is serverless computing? That's simple: It's computing without a server. It's magic.
Actually, no; that’s not true. Of course, there is a server (or many servers) involved. One of the tenants of serverless is that the developer need not be concerned with server “stuff.” No need to fuss over RAM usage and scaling and so on. Simply (now that is a loaded word, “simply”) deploy your code and it works.
Here’s the official definition from the Cloud Native Computing Foundation:
"Serverless computing refers to the concept of building and running applications that do not require server management. It describes a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform, and then executed, scaled, and billed in response to the exact demand needed at the moment."
An important point is the last sentence about executing, scaling, and billing on demand. Until recently—that is, until Knative appeared on the scene—a microservice ran 24×7 and was not, by the above definition, serverless.
Since serverless and FaaS have traditionally been used as interchangeable terms, they were considered one and the same. This much was constant: They both did not describe a typical microservice that is available all the time. In fact, discussions (arguments?) surfaced about the use of FaaS and/or microservices, with some even going so far as to claim that all microservices should, in fact, be serverless functions. While that may seem extreme to some, it has almost become a moot point. Why?
At the time when Google announced the availability of Knative, a Kubernetes-based platform, FaaS offerings were typically based on small pieces of source code that ran as functions. Compiling them wasn't often necessary. OpenWhisk—an open source FaaS platform—uses the command wsk action create... to turn, for example, a Node.js file into a function (they're called "Actions" in OpenWhisk parlance). One small file and one command and you have a function. You don't even need to code any HTTP handlers or routes; it's all built into the platform. And there's one and only one route. The function has one entry and exit point; it's not a complex RESTful service with multiple URIs.
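To make that concrete, here is a sketch of what such a single-file Node.js Action might look like. OpenWhisk's requirement is a function named main that receives a params object and returns a JSON-serializable result; the greeting logic and file name below are hypothetical:

```javascript
// greet.js — a minimal OpenWhisk-style action.
// OpenWhisk invokes the function named "main" with a single params
// object and expects a JSON-serializable result; no HTTP handler
// or routing code is needed.
function main(params) {
    const name = params.name || "world";
    return { payload: "Hello, " + name + "!" };
}

// Exporting also lets the same file be unit-tested locally with Node.js.
module.exports = { main };
```

Deploying it is then a single CLI call along the lines of wsk action create greet greet.js, after which the platform handles invocation, scaling, and the one-and-only route.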
Knative disrupted all that by making any service available as a function, in that Knative allows a service to scale to zero after a configured period of time. In other words, the service stops running, which means no CPU cycles, no disk activity, and no billing activity during its idle time. That’s not a small thing. It is the essence of serverless functions, and suddenly a RESTful service that handles, say, four different routes could now be considered a serverless function.
True, scaling to zero has its own challenge: how to minimize restart time. That’s another article altogether.
So, by definition, any service that can scale to zero and respond on demand is serverless (or FaaS or serverless function).
Knative also brings other functionality to the developer: building, eventing, and serving are the three parts of Knative. I briefly discussed serving, but building and eventing are important as well.
Eventing, for instance, allows you to fire off services by using events. Put events into a queue and you have a truly event-driven architecture application. If you’ve ever built apps on a message-based platform (for example, Windows desktop applications) you’re familiar with the idea of events and messages “flying all around,” making a system work. When done right, it’s a beautiful symphony of harmonious code.
(OK, that last part was a bit over the top, but you get the idea.)
Knative leverages advanced and fast technologies, including Istio service mesh and gRPC. Although a developer probably won’t need to be aware of these things, someone does, and it does matter. In short, Knative is more of a platform rather than simply an implementation. It gives you a broad and robust foundation for your own implementation of functions.
There are reasons for both Knative and, say, something like OpenWhisk. Knative is speeding the evolution (and, likely, the adoption) of serverless/FaaS solutions, while existing technologies such as OpenWhisk remain useful. Further, it remains to be seen if and how traditional FaaS platforms embrace Knative.
As with any technology, it’s up to you to determine what mix of the two is best for you. Armed with knowledge and enabled with opportunities to test these technologies for zero cost, you’re in a good position to choose and move forward. As the technology evolves, so will your solutions.
For more information about Knative, visit the GitHub repo and the Knative documentation.
The post Knative: What developers need to know.
The post Init Container Build Pattern: Knative build with plain old Kubernetes deployment.
Before we deploy the Quarkus application on Kubernetes, however, we need to solve the following problems:
Analyzing these problems, we see that they do not apply only to this use case. These are common issues for developers who want to build and deploy Java or other applications on Kubernetes.
So, what is the solution? Let’s address them one by one:
At this point, Kubernetes does not provide any out-of-the-box way to build applications natively on Kubernetes, that is, something like kubectl build myapp. This made us look for something that can do a build on Kubernetes; fortunately, we have Knative Build that does the work for us.
“What? Knative build? I’m not running a serverless application.” That’s a usual reaction from developers when we talk about Knative build. To be clear, Knative build is not just for the serverless world; it can be used with any Kubernetes cluster without the need for any other Knative components and Istio. With just Knative build installed in a Kubernetes cluster, the builds can be run to make Linux container images out of the given source code.
The Kubernetes deployment has to wait for the build to complete before starting to deploy itself. The solution to this problem is to use Init Containers. Adding an init container to the Kubernetes deployment makes the pod’s containers start in a sequence based on the completion state of the previous container in the chain.
If one of the init containers fails, then the whole deployment is deemed failed with the init container being restarted. The following diagram explains how the pattern works inside Kubernetes.
For example, here is a Kubernetes deployment using init containers (see line #31-36):
View the code on Gist.
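The gist itself isn’t reproduced here, but the shape of the pattern is roughly the following. All names, images, and the readiness check are illustrative, not the code from the gist:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      # Init containers run to completion, in declaration order, before
      # the app container starts; if one fails, it is restarted and the
      # deployment stays pending.
      initContainers:
        - name: wait-for-build
          image: registry.example.com/tools/curl:latest
          # Poll until the build reports completion (illustrative check)
          command: ["sh", "-c", "until curl -sf http://build-status/done; do sleep 5; done"]
      containers:
        - name: myapp
          image: registry.example.com/demo/myapp:latest
```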
You can find a complete end-to-end example here and instructions on how to get a Quarkus application built and deployed to your Kubernetes cluster. My colleagues Roland Huß and Bilgin Ibryam have written an excellent book, called Kubernetes Patterns, which explains many patterns that are applicable to everyday cloud-native application development.
If you want to learn more about Knative basics, check out our multi-hour detailed Knative tutorial, which offers demonstrations and explains concepts in a simple and easy way.
The post Quarking Drools: How we turned a 13-year-old Java project into a first-class serverless component.
The post Serverless and Knative: Installation through Deployment, or how binding running services to eventing ecosystems frees developers to work on more interesting coding.
In this webinar, we’ll install Knative and its components and take an in-depth look into:
The post Serverless and Knative: Installation through Deployment appeared first on Red Hat Developer Blog.
Stas Bekman wrote:
> When I've ported mod_perl 2.0 build to AIX, I have resorted to just
> using -berok (which is one of the flags enabled by -G). -G itself didn't
> quite work, I don't remember why. However I was told that it depends on
> which compiler is used, I've heard that this postpone-symbol-resolving
> doesn't always work.
actually I should have said " -Wl,-brtl", as that is what we use with
apr (and any apr apps that pick up apr ldflags)... as you pointed out,
-G is more than just rtl
the only part that I've heard depends on the compiler is this:
libtool doesn't enable run-time linking with gcc in cases where it would
do so for the IBM compiler
Apache handled this issue for apxs starting in 2.0.45 by doing something
we should have done all along: pull in APR's ldflags when linking DSOs
so that we no longer relied on libtool to enable run-time linking
the only part I've heard about where run-time linking doesn't always
work is when two pieces of code implement the same symbol, since there
is a flat namespace... traditional AIX dynamic linking is two-level
namespace, where for each symbol the binary indicates which library will
resolve it
if mod_perl is built with apxs or with apr-config ldflags, then it uses
run-time linking... Jens-Uwe Mager had done a lot of work with mod_perl
and Apache 1.3 on AIX and indicated that run-time linking was the way to
go... that, and anecdotes from other folks with mod_perl on AIX,
influenced the choice to use run-time linking by default with apr and
Apache 2.0 | http://mail-archives.apache.org/mod_mbox/httpd-dev/200306.mbox/%3C3EDBEB6A.2030304@attglobal.net%3E | CC-MAIN-2015-35 | refinedweb | 294 | 63.73 |
Scala: Which implicit conversion is being used?
Last week my colleague Pat created a method which had a parameter which he wanted to make optional so that consumers of the API wouldn’t have to provide it if they didn’t want to.
We ended up making the method take in an implicit value such that the method signature looked a bit like this:
def foo[T](implicit blah: (String => T)) = {
  println(blah("mark"))
  "foo"
}
We can call foo with or without an argument:
scala> foo { x => x + " Needham" }
mark Needham
res16: java.lang.String = foo
scala> foo
mark
res17: java.lang.String = foo
In the second case it seems like the function is defaulting to an identity function of some sorts since the same value we pass to it is getting printed out.
We figured that it was probably using one of the implicit conversions in Predef but weren’t sure which one.
I asked about this on the Scala IRC channel and Heikki Vesalainen suggested running scala with the ‘-print’ flag to work it out.
scala -print
The output is pretty verbose but having defined foo as above this is some of the output we get when calling it:
scala> foo
[[syntax trees at end of cleanup]]// Scala source: <console>
package $line2 {
  final object $read extends java.lang.Object with ScalaObject {
    def this(): object $line2.$read = {
      $read.super.this();
      ()
    }
  };
  final object $read$$iw$$iw extends java.lang.Object with ScalaObject {
    private[this] val res0: java.lang.String = _;
    <stable> <accessor> def res0(): java.lang.String = $read$$iw$$iw.this.res0;
    def this(): object $line2.$read$$iw$$iw = {
      $read$$iw$$iw.super.this();
      $read$$iw$$iw.this.res0 = $line1.$read$$iw$$iw.foo(scala.this.Predef.conforms());
      ()
    }
  };
  final object $read$$iw extends java.lang.Object with ScalaObject {
    def this(): object $line2.$read$$iw = {
      $read$$iw.super.this();
      ()
    }
  }
}
The interesting line is the call to Predef.conforms(), which is the implicit conversion that’s been substituted into ‘foo’.
It’s defined like so:
implicit def conforms[A]: A <:< A = new (A <:< A) { def apply(x: A) = x }
I’m not sure where that would be legitimately used but the comments just above it suggest the following:
An instance of `A <:< B` witnesses that `A` is a subtype of `B`.
This is probably a misuse of implicits and we intend to replace the implicit in our code with a default function value, but it was interesting investigating where the implicit had come from!
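For reference, the default function value we have in mind would look something like this. This is an untested sketch, not the final code; note that with a default argument you need empty parentheses when omitting it:

```scala
// Replace the implicit parameter with an explicit one that defaults
// to the identity function, so behaviour is visible at the call site
def foo[T](blah: String => T = identity[String] _) = {
  println(blah("mark"))
  "foo"
}

foo { x => x + " Needham" } // prints "mark Needham"
foo()                       // prints "mark"
```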
It works! Thanks for the quick replies!
I still don't know what to do?
This is just a small piece of the code, but when I run this I get an error.
private void buildGUI() {
addressView = new JXAddressView();
addressView.setData(this);
JFrame frame = new...
I knew that it had a null value, I just don't know why, because the method first creates a new Room object and then adds it to the ArrayList
garden = new Room("description");
rooms.add(garden);
...
Is this what you need?
java.lang.NullPointerException
at Game.createRooms(Game.java:136)
at Game.<init>(Game.java:33)
I've uploaded the game, can you please check it out?
Oke sorry, I'm still learning
The error message is: java.lang.NullPointerException: null
and the code is
public class Game
{
private Parser parser;
Oh sorry, I meant that I've edited the post. Why can't I see the post anymore? Did I make a mistake?
Opened 5 years ago
Closed 5 years ago
Last modified 5 years ago
#24752 closed Bug (fixed)
Reusing a Case/Where object in a query causes a crash
Description
Reusing a conditional expression (Case / When) that has already been used causes a crash. Here is a simple example:
import django
django.setup()

from django.contrib.auth.models import User
from django.db.models import When, Case, CharField, Value

SOME_CASE = Case(
    When(pk=0, then=Value('0')),
    default=Value('1'),
    output_field=CharField(),
)

print User.objects.annotate(somecase=SOME_CASE)
print User.objects.annotate(somecase=SOME_CASE)
You can safely execute this program in your environment. The second queryset crashes because it reuses the SOME_CASE object.
This is probably related to #24420. The problem exists in both 1.8 and 1.8.1.
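Until this is fixed, one workaround is to build a fresh expression per queryset instead of sharing a module-level instance. Untested sketch:

```python
from django.db.models import When, Case, CharField, Value

def some_case():
    # Build a new Case per use; reusing one instance lets the first
    # query's compilation mutate state the second query then trips over
    return Case(
        When(pk=0, then=Value('0')),
        default=Value('1'),
        output_field=CharField(),
    )

print User.objects.annotate(somecase=some_case())
print User.objects.annotate(somecase=some_case())
```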
Change History (4)
comment:1 Changed 5 years ago by
comment:2 Changed 5 years ago by
comment:3 Changed 5 years ago by
comment:4 Changed 5 years ago by
Shameless self-review. | https://code.djangoproject.com/ticket/24752 | CC-MAIN-2020-24 | refinedweb | 170 | 51.14 |
Basically, it allows you to discover every children, or parents, or the whole tree, with a single query. Very efficient! And good for threaded discussions...
Download the necessary files from my website. For now, it's just a model and a view (with a single function). I thought about adding a template tag but decided to stick with the do-it-yourself flexibility.
In Django 0.96 the code given does not work.
If you hack it to add some functionality, drop me an email (inerte is my gmail.com username). Current version is 0.9, and if it doesn't break anything, as soon as I finish editing the forms that I use to serve as an example, it will be 1.0. And if we depend on any new features, it will probably stay at this version forever :p
If you need more documentation, check old versions from this wiki page (specially the first and second), for a longer explanation of the whole thing, when mptt wasn't in a file for you to download, but just code that I pasted here.
How to install
Drop the mptt folder inside your Django project folder.
Edit the settings.py of your project and add a 'mptt' element into the INSTALLED_APPS array.
Change to the directory of your project (where manage.py is) and type:
python manage.py install mptt
python manage.py sqlindexes mptt
If you have to, copy the output of sqlindexes and apply to your database. This is very important for performance! The object_id and lft columns should be indexed!
Using it
Edit the views.py file from your app to call mptt's views.py and pass its function to a template:
from your_project_name.mptt.views import *

# Example view
def your_view(request, id):
    your_object = get_object_or_404(models, id__exact=id)
    context = {'your_object': your_object,
               'node_tree': node_tree(id),
              }
    return render_to_response('dir/file', context)
Now we have a list called "node_tree" to use on your template. What is it? There's a "stack" attribute on each node now that tells you "how far" each node is from the object. For example, every "root" node (a direct reply to a "post", for example) has a stack of 1. Every node that's an answer to a root node has a stack of 2. The stack respects the order (because of order_by=lft?) in which the nodes were inserted into the database. So we end up with nodes having stacks numbered like this:
1
 2
  3
  3
   4
 2
 2
1
 2
  3
  3
Template
If you want to show the nodes a little farther from the left viewport border, based on their stack numbers, use this on your template:
{% for node in node_tree %}
    {# the field name below is illustrative — use whatever field your node model defines #}
    <div style="margin-left: {{ node.stack }}em">{{ node.name }}</div>
{% endfor %}
Quoting Eric W. Biederman (ebiederm@xmission.com):
> When working on pid namespaces I keep tripping over /proc.
> It's hard coded inode numbers and the amount of cruft
> accumulated over the years makes it hard to deal with.
>
> So to put /proc out of my misery here is a series of patches that
> removes the worst of the warts.
>
> The first patch which introduces task_refs is used later to address
> one of the worst faults how much low kernel memory it allows

Glad to see the task_refs patches in particular resubmitted.

This is a long set including some big patches, so it's hard to just
sit down and audit for errors, but looking at before- and after- they
look nice.

Resulting kernel passes ltp stresstests and zseries.

-serge
#include <WorldState.h>
List of all members.
This is to support WorldStatePool functionality, so if a behavior in Main is blocking, it doesn't prevent Motion threads from getting updated sensors
Definition at line 105 of file WorldState.h.
[inline]
constructor
Definition at line 107 of file WorldState.h.
[private]
don't call this
smart pointer to the underlying class
Definition at line 108 of file WorldState.h.
Definition at line 109 of file WorldState.h.
pretend we're a simple pointer
Definition at line 110 of file WorldState.h.
assign from a pointer as well
Definition at line 111 of file WorldState.h.
[protected]
This holds a separate WorldState pointer for each process.
Note that under a multi-process model, each process is only ever going to reference one of these, (so we could get away with a single global pointer), but under a uni-process model, we wind up using the various entries to differentiate the thread groups
Definition at line 118 of file WorldState.h.
Referenced by operator *(), operator WorldState *&(), operator->(), operator=(), and WorldStateLookup(). | http://www.tekkotsu.org/dox/classWorldStateLookup.html | crawl-001 | refinedweb | 177 | 54.83 |
Find the closest pair from two sorted arrays in Python
In this tutorial, we will learn how to find the closest pair from two sorted arrays in Python with an example.
Explaination
In this problem we are given two sorted arrays and a number m and we have to find the pair whose sum is closest to x and the pair has an element from each array.We are given two arrays array1[0…g-1] and array2[0..h-1] and a number m, we need to find the pair array1[i] + array2[j] such that absolute value of (array1[i] + array2[j] – x) is minimum.
A Simple Solution is to run two loops. The outer loop considers every element of first array and inner loop checks for the pair in second array. We keep track of the minimum difference between array1[i] + array2[j] and m. Merge given two arrays into an auxiliary array of size m+n using merge sort. While merging keep another boolean array of size g+h to indicate whether the current element in merged array is from array1[] or array2[]. Consider the merged array and use the find the pair with sum closest to x. One extra thing we need to consider only those pairs which have one element from array1[] and other from array2[], we use the boolean array for this purpose.
Below is our required Python code that shows how to find the closest pair from two sorted arrays:
import sys

def printClosest(array1, array2, g, h, m):
    # Track the best (closest) pair seen so far
    difference = sys.maxsize
    res_l = 0
    res_v = 0
    l = 0
    v = h - 1
    while l < g and v >= 0:
        if abs(array1[l] + array2[v] - m) < difference:
            res_l = l
            res_v = v
            difference = abs(array1[l] + array2[v] - m)
        # Sum too big: take a smaller element from array2;
        # otherwise take a bigger element from array1
        if array1[l] + array2[v] > m:
            v = v - 1
        else:
            l = l + 1
    print("The closest pair is [", array1[res_l], ",", array2[res_v], "]")

array1 = [6, 40, 50, 70]
array2 = [15, 22, 35, 45]
g = len(array1)
h = len(array2)
m = 38
printClosest(array1, array2, g, h, m)
Below is the output of our program:
The closest pair is [ 6 , 35 ]
Deaf Teens in Southern California
Deaf teens in Southern California share their story through interviews on Deafhood, family, communication, education, wishes, goals, and advice. Produced by the California School for the Deaf in Riverside and the California Deaf Education Resource Center - Southern California - Fall 2013
California School for the Deaf - Riverside Special Edition Issue, Fall 2013 Deaf Teens Jens Rechenberg Brianna - Re-learned ASL two years ago “Part of my life is being Deaf. I want to learn about that world.” Most parents eagerly anticipate hearing their child‟s first words, “Mommy” and “Daddy”, and for that child to grow up hearing them speak their words of love and encouragement. Brianna, who was born profoundly deaf, has made her parents‟ wishes come true through hard work, cochlear implant surgery, one-on-one tutoring, and home schooling. With persistence and parental sacrifice, Brianna can hear and talk through spoken English, and credits her parents for her success. The first language Brianna learned was sign language, before she underwent inner ear surgery. When she was an infant, her mother did not realize her colicky baby was deaf and in Page 2 need of visual communication. Once they discovered her hearing level, they used sign language for two years, giving Brianna a large sign language vocabulary base for a toddler. At the age of three, she got a cochlear implant. Her Oral/Aural preschool teachers advised her mother to stop using sign language and to expect Brianna to focus on auditory input to learn English. With the implant, she spent her childhood in extensive training, working on her listening and speaking skills and learning to filter out background noise. Brianna‟s mother also reflects how much work it was to raise her deaf daughter: therapy and communication expectations were timeconsuming and emotionally draining. It involved particularly a lot of driving, to and from various California School for the Deaf—Riverside, Fall 2013 Erika Thompson “When I listen, it‟s hard work, but when I see, it‟s easy.” - Brianna with her only deaf Erika Thompson classmate therapists all over southern California, with multiple appointments within the week every week for many years. 
The hardest part, her mother said, was the switch from sign language to a limited spoken English communication. “I would pick her up from school and she would beg through sign for food, and I was not allowed to respond to her signed requests. I had to wait until she asked through speech. It was absolutely the hardest, most heart-wrenching thing I have ever done!” Brianna had eventually learned to focus on speech and listening, working with her speech teacher of seven years who was always her constant guide. A little sign language was still advantageous during bath time, bedtime, or when Brianna was out at the beach, times she could not wear her hearing device. Brianna is completely deaf if not wearing her cochlear implant device. With her implant, she can hear English “within the speech banana on the audiogram” while in a quiet environment which is free of distraction. She is limited to two others in a group conversation in order to hear them. She hears over the phone, but with a lot of hard work and struggle. At restaurants where it is noisy, she feels left out as she smiles at her dining companions without understanding the conversations. In the classroom, hearing through her cochlear implant device alone is a problem in the midst of so many noisy hearing peers and teachers. Brianna took classes without an interpreter at Deaf Teens a school where teachers used spoken English only. When Brianna was in the lower elementary grades, she was still too young to understand that she was different. In spite of all the auditory amplification and training, Brianna felt that she learned mostly from looking around, using her eyes. For years, she could not understand her teachers through hearing alone. What truly helped her all these years was the tremendous support she got from her friends from church. They shared their class notes and even the questions and answers with her, and helped her with her feelings. 
For her education, she relied on work outside the class, especially with the increasingly advanced content in high school. Too much stimulation occurred in the classroom; the information was too much to take in, as Brianna tried to figure out what the teacher was saying or who in class was talking. Brianna understood none of the movies, unless they came with captioning or subtitles. She would have been able to handle any information if it had been visible. Lastly, Brianna did not know enough sign language to be able to use an interpreter. As she has never had this service up to this point in her education, it would be a huge undertaking for her family now to assert her rights for an interpreter. Before Brianna went into the third grade, her family had been concerned about her possibly becoming lost within the increased class size and increased auditory distractions. Therefore, they Page 3 decided on home schooling and language she first learned as a one-on-one tutoring. This was baby, and the natural language helpful because Brianna could of Deaf people. She is now focus on the tutor‟s spoken words learning ASL in her second year in a quiet, low-key setting. Also of study, along with her deaf helpful was Brianna‟s intentional peer, Kaiden. Their parents hired placement with a deaf boy, a hearing expert with a credential Kaiden, who had the same kind of and teaching experience in the hearing and experiences that Deaf Education field, including at Brianna had. They took the same the California School for the Deaf classes and tutoring together in Riverside. Stacey Winsberg through a Christian homehad immersed herself into the schooling system, with partial “Deaf world” with best friends mainstreaming for one or two who have also grown up orally. classes through Biola Star They now communicate through Academics. Without this deaf peer ASL and spoken English. 
Stacey alongside her, Brianna would not was taking some years off as a have been who she is today, she stay-at-home mother. She admits. The one class she took in a “ Brianna‟s writing and agreed to teach these two deaf large classroom setting was still students ASL in a classroom her word order have setting with hearing peers for their difficult for her, because the information was not visually World Language course improved a lot this year, accessible. “I am a visual person,” requirement, only because she felt Brianna said. She gradually she was meant to reach out to after she started learning reached a passing grade due to all them. On the first day of ASL class, the one-on-one tutoring. Ms. Winsberg told Brianna and ASL.” - Mom As an extracurricular activity, Kaiden in front of the entire class, “I Brianna has been involved in ballet since she am humbly honored to teach you your native was three years old. She hears the vibrations, but language*”. Upon hearing this, Brianna‟s eyes not the music melody or the lyrics. She admits brimmed with tears, as she felt emotionally dance class was frustrating because she could empowered and ready to make connections to her not hear or lipread her teacher while they were in own identity. motion. They were not very patient with her. This Brianna is happy with the progress she has made. “I could finally understand 100% of the ballerina has learned to overcome, however, and conversation after I learned sign language!” is now dancing at her church, gifting others with Brianna exclaimed. The conversation through the gift of her natural talent. spoken English had always been too fast for her to Brianna is relatively satisfied with her early follow, but she is now becoming increasingly upbringing, but she yearns for something more. comfortable talking through sign language. “When I She is in close communication with her family listen, it‟s hard work, but when I see, it‟s easy,” Brianna explained. 
She and her ASL teacher have and her few friends from home school and developed a close relationship over the last two church. She has grown into a polite and sweet years of ASL class and tutoring at her house, and teenager, with hearing and speaking skills, but Brianna is thriving. At Ms. Winsberg‟s house she feels alone and isolated. This strikingly Brianna‟s mother reflected, “Brianna‟s writing and beautiful, yet hesitant and shy teenager wants to her word order have improved a lot this year, after explore her Deaf identity. “Part of my life is being she started learning ASL.” Brianna admits that if Deaf. I want to learn about that world.” She is she were communicating with somebody who knew both English and sign language, she would be curious and growingly fascinated with the Courtesy of Brianna Page 4 California School for the Deaf—Riverside, Fall 2013 Courtesy of Brianna Courtesy of Brianna Mom wants to love and protect her daughter as she always has, but also let her fly away to develop her own identity. more comfortable conversing in sign language, as she does with Stacey. That signed conversation is a normal, pleasurable, and free interaction for Brianna, without mental stress. Because of her struggles in the regular classroom with the nonvisual information overload, Ms. Winsberg who is now her personal advocate suggested using an interpreter. Brianna is still considering it. Among its advantages, she could benefit from an interpreter's gist of the auditory messages occurring in the classroom. For example, if told about a question from a student in the back of the room, out of her vision, she could be saved the embarrassment of asking the same question again moments later. Deaf Teens Briannaâ€&#x;s mother is now also learning ASL along with her daughter, wanting to improve her skills and be prepared for the fact that ASL might be a bigger part of Brianna's life someday. Most Deaf people join the community at some point in their lives. 
Should Brianna choose this language, or have friends who communicate through ASL, she wants to be a part of their lives. Like many parents, her mother is at the point where she wants to love and protect her daughter as she always has, but also let her fly away to develop her own identity. Page 5 Courtesy of Brianna Erika Thompson & Laurie Lewis Alexa Two Soun ds, One Love “I could easily communicate with everyone and I bonded instantly with my deaf friends.” Page 6 became deaf as a baby. When they discovered Alexa‟s hearing level, her parents took sign language classes so they could communicate effectively with their child. Alexa eventually learned to speak and use sign language simultaneously when talking to her family, who did the same with her. For some years during her childhood, Alexa was enrolled in a speech class at the University of Redlands. Her mother, through experience in this program, decided to become a speech therapist. Alexa is most comfortable using both sign and speech with her family and other hearing people. She is also at ease signing only with her deaf peers and deaf adults. Alexa has worn hearing aids from around age three to age 12. When she became a student at CSDR in the sixth grade, she wanted to fit in, and chose to stop wearing her aids for a few years. However, she missed listening to music and signing out the lyrics that she heard well with her hearing aids. She donned her aids again last year after having gradually become more confident in herself, and less worried about what others thought. Her grandmother had wanted a cochlear implant for her, but Alexa felt it was too imposing. With the residual hearing she still had, she believes the operation would have been pointless. “I do not want something stuck in my head forever,” Alexa confesses. She became California School for the Deaf—Riverside, Fall 2013 Two Sounds, Alexa One Love Lew Laurie is so tired of the constant inquiries about getting the implant that she now flatly says “No”. 
Hearing aids are sufficient to meet her needs. From age three to the 5th grade, Alexa learned in a small deaf program in a selfcontained classroom at a public school. Because she was still physically tiny by the end of fifth grade, she repeated that grade to stay in Elementary school. During that repeated year, she tried for the first time to mainstream into a hearing classroom with an interpreter. She got along well with the teacher and students, who helped her during the days the interpreter was absent. The teacher, who fortunately was familiar with the Deaf community, was also very helpful to Alexa in the public classroom. Alexa first found out about CSDR through the annual Silent Sleigh parade that the school hosts, where she met a Deaf Santa Claus and Mrs. Claus. Upon originally learning of the “deaf school”, she had thought it was a college for deaf students, and could not wait to attend the school. At that time, she realized the school served younger students, including middle and high school students, and she immediately requested to enroll there. Alexa feels that the best part about being at a school for the deaf is the ease of communication Deaf Teens with everyone, and she has instantly bonded with her deaf friends. This kind of access is what she wants to keep during her college years at Gallaudet University, where she now attends as a freshman. At CSDR, Alexa took advanced courses with her G B deaf peers. She communicated directly AS with teachers and other students in ASL, written English, and spoken English when applicable. She was also involved in many school activities. Alexa served on the Associated Student Body Government as Treasurer, handling membership dues, reports, membership drives, monthly student government news, and fundraising. She played on the Girls Softball team, which won the national title last year among all schools for the deaf. 
Two years in a row, CSDR and other deaf schools across the nation recognized Alexa as "Best Actress" in the school's student-produced movie as part of the annual Movie Night competition. During the past school year, the high school student body elected Alexa as their Homecoming Queen, and she rode in the Silent Sleigh parade. Who would have thought that Alexa, who first saw CSDR at this parade as a little girl, would in her final year at CSDR be reigning over the parade as its queen?

Garrett - He did not receive services because previous professionals determined he was "hearing enough" to not qualify for assistance.

Garrett was born deaf to hearing parents. He wears a hearing aid in his left ear, and has had a cochlear implant behind his right ear since he was ten years old. His mother learned a little ASL while he was young, but for the most part, his parents use spoken English with him. He also speaks clearly and fluently using his voice. If he is swimming with his hearing family, he cannot wear his hearing devices. At those times, he signs only with his mother and tries to decipher what his friends and relatives say through lip-reading. Otherwise, for most of his life, Garrett relies on his listening ability to communicate with hearing people. When people speak fast or converse in noisy areas, he is unable to follow the conversation. He is comfortable with using both spoken English and American Sign Language, switching back and forth as needed.

Garrett transferred to University High School in Irvine, the high school site of the Orange County Department of Education Deaf and Hard of Hearing Program. This program enrolls about 125 deaf and hard-of-hearing students who have the option of taking classes with teachers who use sign language, or classes with general education teachers and a sign language interpreter.
Throughout his school years, Garrett attended public schools, including one semester at a public high school. During those years he did not receive any services because previous professionals determined he was "hearing enough" to not qualify for assistance. He did not have a sign language interpreter and had to rely solely on lipreading for pieces of information, teacher notes, and extensive studying. This was challenging, especially in classes like Biology. He had to wait until after class to approach the teacher to ask for lecture notes because he missed some information during class. His hard work outside of class did pay off with good class grades.

"I wish I had this extra support the whole time."

Interacting with the critical mass of deaf peers on campus, Garrett quickly picked up more sign language. For the first time, Garrett had access to an ASL interpreter in the classroom, watching both the teacher and then the interpreter when he missed certain information or needed clarification. "I wish I had this extra support the whole time (starting from Kindergarten)", he admits. Garrett continues to do well in school, getting much more out of his education at this program setting.

He is not involved in extracurricular activities and wishes for a club that focuses on Art, his primary interest. However, he enjoys his Advanced Art class. He wants to attend a four-year college to study Digital Art, hopefully at the Rochester Institute of Technology, the top college on his list. RIT houses the National Technical Institute for the Deaf, which encompasses one thousand three hundred deaf and hard of hearing students on a campus of fifteen thousand hearing students. Garrett credits his success and happiness to his parents and stepparents who have been supportive of him, to ASL for extra clarification, and to his social interactions and friendships.
Most of his friends are deaf peers with whom he is the most comfortable. At his former school, where he was the only deaf student, Garrett had no friends, in spite of having clear speech and lipreading abilities. He was isolated. "Professionals have not understood that having deaf students around, deaf peers with whom he could communicate and interact, was important for his social-emotional environment and growth", explained principal Jon Levy of the Deaf and Hard of Hearing Program. Now Garrett likes mingling among hearing peers and interacting closely with his deaf friends during class breaks and lunchtime. He feels accepted here. He also attends school dances and football games, enjoying the high school scene, which is new to him. Garrett's mother is thrilled for him. "Upon arrival (at these events), my son eagerly took off to see his friends . . . That had never happened before."

Karina - Daughter of an oral deaf father who later learned ASL

"My dad remembers always trying to figure out everything, feeling left out, and he did not want the same for me."

Karina has attended and excelled in almost every type of school setting for the Deaf: a small deaf program, full-time mainstreaming, and a school for the Deaf. Karina did well academically as a freshman at a large public high school, using a sign language interpreter. She transferred to the California School for the Deaf in Riverside for the rest of her high school years to pursue personal, academic, and social growth. At CSDR Karina competed on the national Deaf Academic Bowl and was selected as an all-star player in the Western Regional. In school sports she participates in volleyball, softball, cheerleading, and wrestling. She is enrolled in Honors and AP classes and excels in the school-wide Math Olympiad that CSDR hosts every spring. She is also active in the Jr. National Association of the Deaf as vice president, and just attended the Youth Leadership Camp for deaf students last summer.

Karina was born deaf to a deaf father who himself was raised in an oral program and later used ASL as an adult. Her mother is not in the picture and her father has raised Karina since an early age. She first enrolled at Tripod in Burbank, a deaf program from Kindergarten through first grade. "It served as my foundation for ASL, which would not have happened if I were in a hearing program at first," Karina explained. Until the 9th grade she was always mainstreamed. At the elementary level she had hearing friends, which was possible because the school was small. There she was popular among her peers and happy for a while. Other students were motivated to learn sign language and communicate with her. However, she was sometimes left out during interactions that involved more than two people. Overall, her experience was wonderful, but fleeting.

When she entered middle school, the student population was larger, which made interaction tough for her. Some friends entered into cliques. "I felt alone. Students had to work harder with gestures to communicate with me. Most students did not bother to learn sign language," Karina shared. She had friends who could fingerspell well enough to communicate, but they did not sign fluently. "I was somewhat happy, but still felt left out, especially at times when others joined in the group with my friends. These friends would interact through voice only and I could not be a part of it."

Karina mostly signed with her family members. Her deaf parents were the only deaf members of their families, thus Karina had hearing grandparents. Her grandmother in particular was supportive, patient, and never got frustrated with her. She was able to deal with her temper, which is typical among deaf children if communication is a struggle.
"When I was young, I felt nobody understood me (who had rapid and urgent vocal expressions). I would throw tantrums, but Grandma rarely got mad at me." Her grandparents worked hard in learning sign language for their granddaughter. Even though their son (Karina's father) was deaf, he was oral enough to communicate through speech only. Now with Karina, the grandparents finally learned ASL a generation later. It was worthwhile for Karina's grandmother because she and Karina developed a close relationship for a good, precious while, before she passed away with cancer when Karina was in the fourth grade. "I really miss Grandma; she was the one who taught me to talk." Karina described how they would pretend to talk to each other on the phone. Once, she went to a friend's house across the street and phoned her Grandma to say hello. She heard her Grandma's response, "Hello, Karina", with her hearing aids. "I was so thrilled that I ran back home to hug Grandma!"

Karina also went to a speech program in Los Angeles County. The director advised her to wear her hearing aids for speech therapy, but she disliked the prolonged use of her aids, and stopped using speech. "I hated my hearing aids. They did not feel natural and were in fact very annoying. I have long accepted that I am who I am. The hearing aids did not help. I learned to rely on my eyes."

After experiencing limited access to communication in the public classroom, Karina eventually got fed up with mainstreaming, wanting full access at a school for the Deaf. When her family first requested a transfer to CSDR, her school district resisted even after many meetings throughout the summer and early Fall. "I was upset that it had been so hard for me," Karina protested. The battle had deeply affected her because she did not win the transfer until two months after the school year had begun.
She came as a freshman, and found that students had already bonded with each other for the beginning of their high school experience. Having missed the deadlines for tryouts and applications, she also was not able to participate in any organizations that year, such as volleyball, the formation of the school song troupe, and Associated Student Body Government. The academic demand was pressing, too. "I was behind in homework for that quarter, having to catch up a whole lot," Karina explained in frustration.

Starting as a new student mid-semester at CSDR was awkward at first for Karina who, like many newcomers, required time to adjust. She was used to the hearing school system, so she experienced culture shock at this new school for deaf students. The class size was smaller, for one thing. However, she held steadfast to why she wanted to come here in the first place: she could finally be herself with direct communication. In a flamingly rapid string of statements, Karina emphasized, "I am naturally a very straightforward person. I can be blunt and I hate to be restricted to the interpreter for sending my verbal expressions. Sometimes I like to speak up to the teacher directly, but the interpreter would screen out or modify my words or tone, so I could not really be myself. Here, at CSDR, I can finally start a life where I could communicate with everybody."

At school, Karina has many close friends, including a best friend who is deaf and very similar to her in personality and life goals. She is still taking time to acclimate to the variety of different personalities on campus. She no longer needs to befriend just a few deaf friends, but has a whole range of options in friendships.
After a few months, Karina felt that she could fit in, and she believes that it definitely is a lot easier to do that at the school for the deaf, as compared to at a public school. Karina thanks her dad for his support, for always asking her questions, and for fighting so hard to help her enroll at this school. "My dad remembers always trying to figure out everything, feeling left out, and he did not want the same for me."

Karina's father described, through a personal interview, how Karina expresses herself easily and rapidly, with a sharp mind, always thinking quickly. "She is definitely not like me." The father remembers how he used to read to his daughter every night during bedtime, signing aloud all the stories. At age five, Karina insisted upon reading the books herself. The father credits Karina's cognitive growth to reading in ASL at such an early age. The father added that Karina has had a difficult route in her life, not being with her mother and losing her grandmother and uncle. But through this all, she still does her very best. "If not for my dad, I would have been alone," Karina said.

Karina has several ideas about what she wants to do after graduation. She enjoys English and looks up to her CSDR high school AP English teacher, Gloria Daniels, who is hearing with deaf parents. Ms. Daniels knows how to handle such an intelligent and articulate student, and to teach to her highest potential. Among Karina's dreams are to attend Gallaudet University for her four-year college experience, and to study for her master's degree elsewhere with specialization in English and Special Education.
Karina concluded, "I want to travel and establish schools for deaf children world-wide, so they can get equal opportunities as I did."

(Above) Karina triumphant at the summer Youth Leadership Camp in Silver Falls, where she hiked all the way up with other deaf students from across the country.

Clarisia - Speaks three languages: Spanish, English, and ASL

FAMILY AND LANGUAGE

Clarisia is trilingual – fluent in Spanish, English, and American Sign Language. She grew up oral in Sylmar, California, the seventh child of nine children and the first in her family born in the United States. Her entire family communicates in Spanish, while some of the children also speak English. Three of them were born deaf: the oldest only speaks Spanish as a native Mexican. Clarisia and her older sister are the only two later blessed with fluency in a third language, American Sign Language. Later in their childhood, they communicated with each other in ASL or in voiced Spanish with ASL on the hands simultaneously. Some of the hearing siblings learned ASL later to communicate better with the Deaf sisters.

Clarisia learned her three languages in this order: Spanish, English, and ASL. She could hear a lot with her hearing aids, hearing her first language spoken by her parents and hearing siblings. She learned English at three years of age, and remembers finding this language tough and unclear. She eventually got the hang of English because of school. She unfortunately had no chance to improve her Spanish syntax; she had only used it informally at home. In a Spanish course at CSUN, with the support of a trilingual ASL interpreter in the classroom, she is learning formal Spanish to strengthen her first language.
Now, as an adult with hearing aids that still work well for her, she is most comfortable in these languages in the reversed order: ASL, English, and Spanish. Clarisia uses ASL with natural ease, reads and writes in English, and if she speaks with other signers who also know Spanish, she likes using ASL with spoken Spanish. Overall, Clarisia prefers to sign. "Sometimes, I would rebel and take out my hearing aids. English makes me feel restrained, while ASL helps me express exactly what I want, and get right to the point."

SCHOOL

"I struggled to pay attention. I focused too much on listening to the words, and not to the whole message."

While her older deaf sister attended a deaf program in San Fernando Valley in Los Angeles, Clarisia was mainstreamed, accustomed to that environment and preferring to stay close to home. She had used a special hearing aid with an amplifier to help block out background noise, intended to help her focus on sounds from her teacher. She admits she had struggled to pay attention. She focused too much on listening to the words and not to the overall message. She started using an interpreter as a teenager at Granada Hills High School and began learning ASL. Finally, she no longer felt behind in her classes and could participate with the others. Moreover, she understood the best in the mainstreamed class that had a deaf teacher. He taught the students directly using ASL and written English. Clarisia wishes now she had attended a school for the deaf.

IDENTITY

(Above) At the National Deaf Archives Library (CSUN), Clarisia builds the Deaf identity she has sought for so long.

Clarisia treasures both her Hispanic heritage and her Deaf culture, although she wishes she had found Deaf culture earlier in life. Clarisia had always been searching for her Deaf identity, emphasizing that we all have a need for it. "I had been acting like someone I was not," she said, and only recently has she finally been able to fit in "with my people. I am not hearing; I am different," Clarisia emphasized. She advises that a deaf child needs to be comfortable to accept who he or she is. She felt that she would have been accepted and understood more if she had already been rooted with a solid identity.

As for her Hispanic culture, she suggests that educators make more effort to reach out to Latino parents of deaf children. She believes many of these families possibly feel intimidated in American society and with the Deaf community. "My parents didn't have the resources, feeling overwhelmed and intimidated because they did not speak English," Clarisia explained. She wishes her parents had been empowered to teach her more about her Deaf identity, rather than just "letting the school take care of it."

COLLEGE

Clarisia enjoys her college life at California State University, Northridge. She works in the National Deaf Archives Library at NCOD three times a week in between her classes. The library houses a vast collection of publications and media on Deaf history, culture, art, folklore, famous people, schools, and Deaf sub-cultures including Deaf of Color and Diversity. The contents' timeline includes Deaf people from the time of Aristotle, through the early nineteenth century when Thomas Hopkins Gallaudet and Deaf Frenchman Laurent Clerc first brought Deaf education to America, to the current "Deaf World" events across the globe. Clarisia immerses herself in these rich Deaf culture materials, now building her Deaf identity.

At CSUN, she is also involved in the Deaf sorority and serves on several committees. They include organizing for the Mr. and Miss Deaf CSUN biennial pageant and for the annual spring banquet of CSUN's National Center on Deafness (NCOD). She invites NCOD's deaf and hard-of-hearing students, Deaf Studies majors (most of whom are hearing students), and friends in the community. She has great opportunities there. "If it wasn't for the Deaf community, I would not have been as involved, and I would have felt alone," Clarisia reflects. With her major in Sociology and Criminology, Clarisia aims to work in the field of rehabilitation for Deaf and Latino juveniles, to help steer lost teenagers onto the right path.

Tyler Berdy

Tyler turns the car radio up, blasting rock music on his way to meet his teammates for golf practice. He is confident that his team will win the next match, as their current league record is 14-0, undefeated. Afterwards at home, he does his homework for pre-calculus with country music playing in the background to help him focus. His mother, an executive assistant for a film production, is home and prepares him dinner. He hears her calling him, and they chat with ease about how his day went. His father will soon return home from his engineering firm in Orange County, where he manages the firm's Auto-CAD department, a microcomputer design software program. It is only the three of them at home, as Tyler approaches his last year of high school. His older brother works as an actor in L.A. The Berdy family appears to be the typical American family, with one exception: everybody is Deaf. They communicate using American Sign Language with one another, using written and spoken English as needed with hearing people.

(Above) Tyler (bottom right) in the film.

When Tyler was a child, he starred in "The Legend of Mountain Man" by Mark Wood's ASL Films, acting along with his family. He played an adorable little boy who first saw 'Bigfoot', and was the only character who personally interacted with it.
In a twisted turn of events, 'Bigfoot' carries the boy's unconscious body across the mountain. Throughout their journey, the boy and the monster develop a heartwarming, playful banter and understanding with each other, which finally help his family accept the existence and good intentions of the creature at the end. Looking back on this movie production, Tyler feels honored to have worked with such Deaf acting legends as Howie Seago, Freda Norman, film director Mark Wood, and the late De'VIA artist, Chuck Baird.

HEARING / LANGUAGE

"I can speak and lipread, but if I'm in a group of hearing people, or the information is important, I expect to have an interpreter."

Tyler was born profoundly deaf. He has worn his hearing aids since he was six months old. He can hear people signaling him. When he lipreads without aids, he finds it requires too much effort. His hearing aids help him match the sounds to the lips he watches. He hears the lyrics in songs that, as a lover of music, he craves every day. Tyler calls music his therapy, and adds, "I'm still thinking about last night's production of 'Grease.' I truly enjoyed such an amazing show and the great music!"

Tyler uses spoken English for casual conversations. He is not fond of writing back and forth, especially if he is able to speak and lipread well. His good oral skills are due to his Deaf parents' insistence on private speech therapy that began before his first birthday, starting out as simple babbling exercises. Therapy continued at school in two brief sessions a week on campus, as requested by his parents at his annual IEP meetings. Tyler was at first reluctant about taking up speech, but now is thankful to his mom for preparing him for "the hearing world". His mother explained that she gave them opportunities to talk and practice in playful, relaxed settings. "If I saw that the child did not do well, I would not force speech upon him. It just turned out that my boys took to it well when learning speech," Tyler's mother said. Tyler recalls that it helped to see his Deaf father using his voice and lipreading with the other engineers, on the occasional days he took the boys to his workplace. "I watched how Dad talked with hundreds of employees. That stuck with me as I realized the importance of speech with others."

Tyler chose to stop speech therapy during his early teen years, because he had acquired enough skills for his daily use. If he ever needs further reinforcement, or should his skills ever decline someday, he would not mind resuming sessions as needed. However, if instruction or conversation ever occurs in a whole group of hearing people, or if the information is vital, as for academics, Tyler expects to have an ASL interpreter.

SCHOOL

From age three to the fourth grade, Tyler attended the California School for the Deaf in Riverside, where he learned through sign language and written English. He had full access to academic information and social interactions with peers while he continued his twice-weekly speech sessions. Tyler was doing well, so the family experimented the next year with his attendance in the mornings at a local public elementary school a few blocks away. Later in the day he attended the school for the Deaf, where he could play after-school sports as a student. Though Tyler's family was deaf, exposure to hearing peers and the spoken English environment was not new to him because his parents created intentional opportunities for interaction in the hearing world. For instance, his father had always expected Tyler to order food for himself at restaurants. Tyler was, nevertheless, new to the large class size, the many hearing friends, and being so close to home. This transitional experience helped Tyler make his own decision to attend the local public school full-time the following year as a sixth grader.

After solid development in his native language of ASL, Tyler's English grammar and writing became his second, but equally fluent, language in his public English classroom, where he also had access to a full-time ASL interpreter. During his late middle school and early high school years, Tyler and his brother moved to Indiana, where he once again attended the state's school for the Deaf. He has fond memories of his school years there with his Deaf friends and his hearing friends who had Deaf parents, many of whom were equally fluent in ASL and English. Tyler served in a variety of extracurricular functions as Treasurer for his student class and as chair for fundraisers and organizations, where he developed life-long leadership skills.

For his junior year of high school, Tyler moved back to California, where his brother began working in L.A. Tyler attended a public high school that had a good golf program, where he met up with some of the same classmates whom he had known from elementary school. Tyler had been happily consumed with golf, playing on the high school golf team. During golf instruction, Tyler used his voice and lipreading skills in one-on-one conversations. This fall semester, Tyler has recently begun training in the afternoons with the prestigious Hank Haney International Junior Golf Academy. He continues his studies in the mornings as a transfer student at the Heritage Academy. To help pay for golf tuition, the Deaf community rallied together at several game fundraiser events in northern and southern California to support Tyler's dream.

After experiencing both education settings, mainstreaming and deaf schools, Tyler has grown into a well-balanced, competent bilingual young man with ASL verbal skills, spoken English functionality for casual and social settings, and written English skills for academics.

GOALS

Tyler wants to be an engineer like his father, in the biomedical field. He has enjoyed his biology and earth space science courses.
He became fascinated with surgery and technology after meeting a deaf biomedical engineer who commuted daily on the train with his father. He is interested in coming up with more efficient technology to alleviate the trauma and risks of surgery. Tyler has his heart set on attending Arizona State University for its golf program, services for deaf students, and strong focus on the science and technology major, including an Engineering student housing unit with specialized tutoring. "Research lists Biomedical Engineering as the second best job on earth," says the eager, future Dr. Berdy.

ADVICE

"Be involved in both the hearing and Deaf worlds; that will lead to more success."

Tyler advises other deaf children to be involved in both the hearing and Deaf worlds: "That will lead to more success. They will have better job opportunities and know how to deal with hearing people and deaf people. If I never had the experience with both hearing and Deaf people, I never could have thought of pursuing my dream in biomedical engineering." His mother added that she made an effort to make sure they did well in school. When asked if Tyler had any wishful thoughts about his childhood, he grinned, "I wish I had played golf when I was younger, so I could be an even better player."

Dakota - Daughter of active hearing parents

Dakota is a happy deaf thirteen-year-old girl who tests above grade level in everything, takes honors courses, and enjoys her art studies in theatre, drawing, and circus performing.

BILINGUALISM

Dakota's parents dreamed of having a bilingual child. During her mother's pregnancy, they, as residents of southern California, debated whether to teach their daughter Spanish or German. German, the language her mother and grandmother spoke fluently in addition to English, won over as the language spoken to Dakota for the first months of her life. Dakota ironically never heard any of it, as her parents eventually discovered she was deaf. Her surprised mother realized, "She's not a German baby… she's a signing baby!" Dakota grew up as a happy bilingual, and reaped the rewards of knowing more than one language: English and ASL. This cognitive advantage has helped Dakota succeed in her Honors English class and get high test scores at her public school in San Diego.

In the father's view as an engineer at the time he learned of Dakota's hearing level, important decisions on how to raise a deaf daughter must come with a safety net, a contingency. "Don't put all your eggs in one basket," the father clarifies. "Dakota's vision tested at 20/20 while her hearing is not 20/20, so of course (her information and language) must be visual." He added, "After the birth of our daughter, we as new parents viewed the sign or speech questions as being analogous to being in a restaurant, asked if we wanted 'soup or salad?' with our entrée. Our answer was simple: 'Yes, we want both'."

"Our household is primarily bilingual in ASL and English."

"For children who are Deaf or Hard of Hearing to arrive being 'Kindergarten Ready'… they must have early language development. Lay the foundation for those children to be raised happy and successful."

FAMILY AND COMMUNICATION

Dakota's family use both ASL and English. This includes her older brother (age 16), who signs fairly well. Her parents also express themselves through ASL with her. With them, she sometimes uses simultaneous communication (ASL and voice), as she does with those who know some sign language but not fluently. Her mother, a math teacher, is almost done with her certificate courses in becoming a full-time ASL teacher. The mother found teaching ASL so much more fun than math, and looks forward to this career transition. Dakota is pleased with her mother's progress in ASL. She notes how her mother shows advancing, varied sign vocabulary choices as they converse with each other. Dakota, who is at ease using either speech or sign language, admits she enjoys using ASL the most. "Through ASL, you can express yourself more," Dakota points out, although she is happy that she can also talk.

The Ronco family has stayed in touch with the Deaf community. They have attended family summer camps through the American Society for Deaf Children since Dakota was three years old. This has become a biennial family tradition, attending this camp together with other deaf children and their families. The mother has a master's degree in Deaf Education and is teaching ASL with connections to deaf educators. The father serves on countless projects and committees, including as a member of Hands and Voices, and travels a lot to give presentations on the importance of parents communicating with their child.

FRIENDS

Dakota has some hearing friends who can sign, but her close deaf best friends have always been in her life - one since the age of four and the other since she was eight, through her parents who connected with other like-minded parents with deaf children. She and her friends stuck together intentionally from the previous school to the next. "I don't think I could bear to be without them. I would have hated being the only deaf student, and be picked on. I'm glad I have my friends," Dakota asserted.

SCHOOL

Dakota, along with thirty other deaf students, is enrolled at the Creative Performance Media Arts program for up to the eighth grade. The curriculum includes creative, fun classes, computer arts, dance, as well as academics. She has ASL interpreters for her public classes, while some of her friends take classes geared for deaf students with a teacher who signs. This person is also available to Dakota for assistance as needed. At school, Dakota teaches her hearing peers how to sign the alphabet and chats with them, as she also has full conversations with her deaf friends. Now in the eighth grade, Dakota and her family are uncertain where to place her for her high school years. While the California School for the Deaf in Riverside is an attractive option, the parents wish the school was situated closer to home so they could see Dakota every day during the week.

ACTIVITIES

Dakota advanced to the green belt in Aikido to protect herself. She's also a performer in a community circus group where access to communication is visually facilitated. "This circus group is theatrical and can use miming skills without depending on their voices. The teamwork and activities help me build my confidence." Dakota loves writing stories. A year ago, she published a book about a girl who converts into a mermaid, a story she had gifted to a friend. Dakota is also an artist. She painted her mother's van in colors and patterns of the hippy seventies. A wall mural she once did for the city won her an iPad.

GOALS

Dakota, at thirteen, is interested in acting or other active events that involve traveling and fun, underwater biology for instance. She does not see herself sitting in an office dealing with phone calls and computer work. For college, Gallaudet University and the Rochester Institute of Technology are feasible options.
He appreciates having learned to talk and interact with hearing people, but admits that he is more comfortable with his deaf friends who are just like him. When he goes out with his deaf friends, he can relax and sign with them without wor r y ing to and about noi s y backgrounds or working hard decipher teachers auditory in Deaf information. His deaf friends education and Deaf culture studies have all helped him to learn more about his Deaf Tyler Erika Thompson & Laurie Lewis Tyler is a mild mannered, softly spoken 18 year old who has graduated with good grades last spring from University High School. He now attends California State University, Northridge to study mathematics. Tyler was enrolled in high school AP Calculus in a public classroom with hearing peers. Deaf since birth, Tyler hears through cochlear implants in both of his ears and is able to discern speech from teachers and peers, as long as the conversation takes place in person in a quiet setting. His own speech is well understood by a majority of people, especially when others have had had time to get to know him personally. Why did a successful kid like Tyler, who could hear and speak, eventually learn sign language? Could he have thrived through English alone? Tyler answered, "No, I needed both." With his hearing Page 22 identity, and to feel confident in himself as a person. Tyler's parents have also learned a little sign language, and are able to communicate with him visually when restaurants get noisy or when Tyler is at the beach without his hearing devices. Tyler acknowledges that his mother has always been encouraging and accepting of who he is as a deaf person. She has continually guided him in learning right from wrong, and always encouraged him to achieve his personal best. Sign language has played a surprisingly large role in his education and personal growth. 
At intervals in the classroom, Tyler cannot hear the teacher clearly, so he quickly looks at the sign language interpreter for the missed information or additional clarifications. When the class watches educational films that are not subtitled, the interpreter is the only way Tyler can California School for the Deaf—Riverside, Fall 2013 “I never knew how much I had missed before sign language. For the first time, I am understanding everything.â€? access of m At the audio because excessive e d i a with other won deaf the students, Tyler Courtesy of Tyler noise through transmission. school Courtesy of Tyler enjoyed competing for the Deaf Academic Bowl. They recently Western Regional attends college with a large number of other deaf students. Jon Levy, principal of the Deaf and Hard of Hearing Program of the Orange County Department of Education, pointed out, "When Tyler first came here in the seventh grade with very little to no sign language, he was greatly delayed in literacy. After learning sign language, he has grown to his present reading level in English, jumping eight levels within only six years of being here." "ASL filled in the gaps that I had. I learned a lot more about English structure from explanations through sign language." Tyler admitted. After his immersion in an educational setting of bilingualism and biliteracy through a visually accessible mode, Tyler realizes, "I never knew how much I had missed before sign language. For the first time, I am understanding everything." Page 23 championships against other Deaf schools and regional programs. His team also placed second on the national level at the Gallaudet University campus in Washington, D.C., the only Deaf Liberal Arts college in the world. Tyler likes school, learning a lot, making friends, and having great memories. He served as a Deaf/ Hard of Hearing representative on the high school Associated Student Body and as a member of the Junior National Association of the Deaf with his deaf peers. 
He says that learning ASL contributed to his language development and English literacy. He continues learning American Sign Language, through which he communicates with some struggle, as he also does through speech. He knows he will continue refining his ASL skills throughout his college years, especially if he Deaf Teens Alana “... a true scholar-athlete who excels in academics, leadership, and sports. This ideal student sets an example for others.” Ernesto Rodriguez’14 California School for the Deaf—Riverside, Fall 2013 Alana is the only deaf member of a family of musicians. Her mother is a singer, her father a drummer, and her older sister an oboist, who blows through her reed to give the sound a vibrating, penetrating voice. Alana wants to be a part of her family and make them proud by learning to dance ballet and to play the piano. As she routinely fingers the keys, she struggles to identify the difference between low and high-pitched sounds. Her parents analyze the quality of music they hear at concerts, but the sounds are generally all the same to Alana who is profoundly deaf and wears hearing aids. While she attempts to relate to her family‟s interest in music, Alana‟s spirit and passion are in sports and theatre. Whether on the softball or track field, or the basketball or volleyball court, she is a deadly threat to opponents who play against her. Alana's power plays have earned her honors as Most Valuable Player, and have helped her team win games in the CIF league playoffs. Alana also performed on stage as the lead actor in a community musical theatre production of “Nobody‟s Perfect,” playing a deaf character who used sign language and spoken English. She truly enjoyed this experience, especially when she taught the other actors and crew about Deaf culture, inspiring them so much that some of them wanted to major in Deaf Studies in college. 
Alana also stunned the audience as - Has attended CSDR since she was two Page 24 Erika Thompson & Laurie Lewis a competent and dazzling mistress of ceremonies at the national Deaf pageant, hosted by CSDR‟s Jr. NAD student organization. She glows as an ASL singer with the school‟s spirit song group and in her high school Drama productions. Many would agree that Alana‟s vibrant voice flows through in her body language, her facial expressions, and through her hands, visibly just as rich as the sounds from her sister‟s oboe. In addition to sports and performance activities, Alana is a gifted and earnest (CSDR) is like learner at the California School for the Deaf in Riverside, where she has While Alana‟s parents sign to their my second attended since she was two years old. As daughter, she responds back through if her additional activities as a class speech because she wants them to family.” officer, a member of Jr.NAD, and part of understand her. They struggle in “reading” the student body government were not sign language, as they have not had enough, she excels in academics with a 4.0 GPA in enough practice in doing so. This arrangement is her honors and AP courses. She passed the high “do-able”, but sometimes Alana feels uncomfortable school exit exam on her first try. Alana reads a lot when she gets sick, has a sore throat, or is not in the and admits that her studies in speech, in addition to mood to speak. She regrets that she cannot talk to ASL, have had some influence on her literacy skills. her own parents with ease. “It gets harder as my Her specialty in math has also enhanced her team‟s parents get older,” Alana admits. “My mom feels bad, success in the Deaf Academic Bowl, where they but I tell her she‟s a good mom,” who has tried to do competed against the finalists from other schools what she thought was best. Like the title of Alana‟s and programs for deaf students in the nation. She play, “Nobody‟s Perfect“. 
also enjoyed her 8th grade trip to Rochester Institute of Technology, which houses the National Technical Institute for the Deaf, where her CSDR team competed in the national “Math Counts” tournament. Her love of mathematics and technology has her thinking of attending this university in New York, or Gallaudet University, to major in engineering or business. Alana says, “I want to make an impact on the world. I know I am meant to do something important.” CSDR high school principal Mr. Timothy Hile described Alana as he presented the Top HS Pupil award to her onstage in front of 425 students as, “A true scholar-athlete who excels in academics, leadership, and sports. This ideal student sets an example for others.” Alana credits her success to her school and to everyone she has known here. “Everyone gave me opportunities to go out and see the world more, and to open my - The whole family learned sign eyes to both cultures of the hearing and the Deaf. They taught me how to cope with the language to communicate with hearing world, and to learn from other people‟s mistakes. CSDR is like my second Alana (right) family.” “This Although Alana‟s immediate family uses sign language with her, she wishes her relatives would also learn her language. She would love them to be a greater part of her life and be able to see what kind of lively and successful personality Alana truly is when she is expressing herself through ASL. It has been somewhat hard on Alana that her older sister has moved out to attend college. Her sister has been her best and most communicative link to her at family events. "If everybody school relatives signed, we would be made into one big, even happier family," Alana said wistfully. 
Deaf Teens Page 25 Courtesy of Alana Dominique - Deafened at age 8 in Liberia CSDR Valedictorian „13 Out of 180 high school deaf students at California School for the Deaf in Riverside, Dominique received the award as this year‟s top student in the Career Technology Education program. Principal Ms. Shelly Gravatt explained onstage to the entire school of 425 students that in all the courses Dominique took, she excelled with high grades during her four years at CSDR. These courses included Computer Applications, Digital Imagining, Television and Film Production, Career Preparation, Leadership, Yearbook, Health, and Work Experience. Dominique has a positive attitude in learning, is attentive to her teachers and is kind to her peers. Dominique’s Personal Narrative: “At nearly 18 years of age, my life experiences are uniquely different than most of my friends and peers of the same age. I was born hearing, with my twin sister, on the twenty-seventh day of May in Liberia, West Africa. As my first language during childhood I spoke English, which was one of the official languages in Liberia, while my parents spoke both English and Bassa. At eight years old I became ill with meningitis, which made me deaf. I remained in the hospital, recovering from meningitis, for two long and excruciating years. Not knowing that sign language existed, my family had me attend a public school in Liberia. Consequently, I could not hear what the instructors were teaching or what other classmates were saying. Years passed where I did not know what went on inside classrooms, all because there was not any known communication for me. My parents knew that I deserved a better education elsewhere, outside of Africa, and had saved money for our move to help give me a better life. In May of 2006, my aunt, who was already living in America, acquired guardianship of my twin sister and me. We travelled to southern California to live with my aunt, intending to gain a better education. 
We enrolled into the 5th grade at a local public school. During 5th and 6th grades I California School for the Deaf—Riverside, Fall 2013 Page 26 Erika Thompson With her hearing twin sister, Dominique (left) fled her Erika Thompson homeland in Liberia for a better life here. continued to fight to understand, since there were no interpreters and I did not know any sign ASL is now my most comfortable, native language. I felt I did not belong where I was and I language, while I also still speak English, using felt lost. Then my middle school counselor finally voice and writing. I do wish that my family could mentioned to my aunt about the School for the also learn ASL so I could feel free to Deaf located in Riverside. communicate with them about what is happening When my family and I first visited the CSDR at school and in my life. My twin sister and I have campus, I remember catching my first glimpse of a unique, uncanny ability to understand each American Sign Language and other through speech and signs, in a feeling captivated with way that only twins can do. With the amazement and disbelief. I took school‟s cultural and educational ASL classes at CSDR when I “My feelings of approach with bilingualism, I have enrolled into CSDR‟s middle become proud and ultimately happier. school. Soon after, my feelings insecurity, I have graduated from CSDR with of insecurity, confusion, and honors as Valedictorian for the class exclusion dissipated. Finally, I confusion, and of 2013, and I have been accepted to could communicate with others, Gallaudet University in Washington understand my teachers, and serve in student activities. Within exclusion dissipated. D.C. 
You see, I was just a black deaf woman from a faraway country a few short years of being without education, fighting for a better exposed to and educated in Finally, I could life, and to this day, I made it to the ASL, I felt confident and ASL next chapter of my life but not without seemed to be a natural part of my opinion, my voice, my family, my my life. I had not accepted my communicate with teachers, the staff, and California Deaf identity until I came here to School for the Deaf, Riverside.” CSDR. Here I finally fit in. others . . .” Deaf Teens Page 27 Elena Mayer Center on National Deafness, C SUN - Miss Deaf California State University Northridge, 2013 Page 28 California School for the Deaf—Riverside, Fall 2013 Terri Vincent family recipe: a mix of everything with her oral deaf parents who also use sign language, as State University in Northridge confidently does her younger deaf brother. With her family, switches back and forth between using spoken communication choices were up to Elena and her English with her hearing classmates and personal mood. Sometimes she spoke to her professors, and American Sign Language with deaf parents entirely through speech, other times other students and staff who are also deaf or wholly in ASL, and sometimes using both. For example, if her hands were full while carrying know sign language. Elena wears a cochlear something, she spoke and if her implant, likes speaking and mouth was full with food or if listening, and shows pride in her exchanging information through Deaf identity with respect for a glass window, she signed. “My cochlear American Sign Language. She It was Elena who asked her loves who she is, and has parents for a cochlear implant implant was just a confidence in both worlds of the when she was five years old. hearing and the Deaf. She has all She specifically remembers tool; I already had options and access to signing to them, “I want to hear communication. 
Elena enjoys more.” She had seen how her Deaf culture and bilingualism and communicates younger brother‟s hearing aids with ease. A few days after her language at home.” eventually stopped working for interview for this article, Elena won him, so he got the implant. She the biennial pageant for the title of explained that she had perceived “Miss Deaf CSUN”. the cochlear implant as advanced hearing that did not change who she was. “My cochlear FAMILY AND COMMUNICATION implant was just a tool; I already had Deaf culture Back home in Missouri, Elena uses either ASL, and language at home.” voice, listening, speechreading, or a unique Elena, a Psychology major student at California Deaf Teens Page 29 Terri Vincent SCHOOL / COMMUNICATION PHILOSOPHIES Because Elena learned ASL and English at the same time, her language and academic skills are high. Her parents guided and supported Elena in making demands for her educational needs. Elena‟s mother runs her own business as a life coach. Her father supported her behind the scenes while he also served as an alumni director for the National Technical Institute for the Deaf in New York. Elena first began preschool at a school for the deaf in Rochester, NY, prior to EDUCATIONAL SERVICES getting her implant. “This was where I got my When asked about the support services she language foundation with signing and talking,” received in school, Elena‟s answer was unique. Elena said. She initially asked for no interpreting services at The family later moved to St. Louis, a city school, preferring real-time captioning for videos considered an “Oral Mecca” with three local oral and reading text. She wanted to develop her own schools, the state school for the deaf located strategies, different from those of her deaf several hours out of the city. Before the move, parents. Elena emphasized the Elena had been happy, loving both importance of the IEP* meeting. sign and speech. 
When she Her parents, teachers, counselor, enrolled in an oral program, the “I was disappointed and she discussed in depth what educators asked her to hide her she needed. She insisted that all hands under her thighs. She ended educational videos be captioned. up signing less at home with her and shocked. Why If not possible, the teacher would own deaf family. Elena reflects, “I excuse her from participating was disappointed and shocked. had the education without affecting her grade. Her Why had the education system teacher would make every effort been split with sign only or speech to look at her and not at the system been split only? Why not use both?” She and board while speaking, and to her family did not follow the confirm understanding with her school‟s advice; they took with sign only or frequently. She asked one to two advantage of both at home. The friends in each class to help as oral school later reprimanded the speech only? Why needed. Elena met with the parents, asking that they stop teacher one-on-one during signing at home. Elena‟s mother, in breaks and Study Hall when not use both?” a sharp response, explained to the possible. Without this support school, “You do your job with she does not think she would speech, and let us do our job with have done so well, especially in high school with language.” its large class size. At CSUN, she now is using With their limited school options, Elena‟s parents interpreting services. She found it helpful that the offered her any school of her choice, even interpreter informed her of homework MSSD, the national school for the Deaf in assignments that she might have overlooked at Washington DC. Elena chose to attend a local the end of class while taking notes. She does not mainstreaming program in middle and high consider this as “enabling, just supportive.” She school and succeeded there, always making the has a right to access environmental information. Honor Roll. 
Her Advanced Placement courses Elena was already accustomed to the interpreter, prepared her well for college. She was lucky to requested by her parents at family functions such have teachers willing to go out of the way to as temple services and graduation ceremonies. accommodate her. Her parents always However, she was new to accessing the encouraged achievement, making her work hard interpreter in the classroom setting. At first, she and giving her so much that she wants to give Page 30 back to them and to the community. Her parents impressed upon Elena that college was not optional, that she must attend the college of her choice. Elena chose CSUN because of its mixed hearing and deaf student population. “I learned a lot from my mainstreaming program setting, because I already had my „Deaf Fantasyland‟ at home. Since I had both environments, I am satisfied with my educational upbringing,” Elena asserted. California School for the Deaf—Riverside, Fall 2013 did not trust the interpreter as her medium for class information. Eventually she grew relaxed and comfortable after she compared the information she hears with what the interpreter signed. She found that ASL-interpreting made her courses much easier to understand, so much so that she felt she could handle other courses that are more difficult. ADVOCACY TOOLS AND DEAF IDENTITY Life has worked out for Elena in large part because her parents taught her to advocate for her own rights. She regularly makes requests and asserts for her needs. She felt that if her parents had been hearing without knowledge of the Deaf world or the right tools, they might have felt compelled to take over for her. “If so, I might have felt disconnected from them, perhaps felt lost,” Elena pondered. Elena has personally witnessed this tendency among her cochlear-implanted deaf friends and their families back in St. Louis. She saw how some parents took over for their children. 
Some of her friends, who are now in college, got ASL-interpreting services, but others still do not sign. Elena tells, “They realize later in life that the cochlear implant did not change who they were and are now learning about their own identities.” Elena has grown up as an independent, self-reliant young Deaf woman with confidence and tools for advocacy that will take her very far in life. ADVICE Elena offers some advice to parents of deaf children: “Give your child all options. As my mother (a life coach) would say, maybe your child would not want to sign or to use voice. That is okay as long as you start your “Start your child‟s child‟s life immersed in education with life immersed in language and everything! Let education with your child lead you.” language and everything! Let your child lead you.” - Elena‟s mother *IEP = Individualized Education Plan, an annual meeting for a student in Special Education, parents, and school district to discuss student needs, progress, and educational placement Courtesy of Elena Mayer Deaf Teens Page 31 CALIFORNIA SCHOOL FOR THE DEAF—RIVERSIDE “Where language and learning thrive!” Serving as a school and a state resource center for Deaf and Hard-of-Hearing students in Southern California ASL/Literacy Instruction Common Core Standards Career/Technology Education Full Accreditation Speech/Audiology Services Family Sign Language Classes Parent Infant Program 3044 Horace Street, Riverside, CA 92506 info@csdr-cde.ca.gov 951-248-7700 / 951-782-4817 español Erika Thompson AP/Honors Classes Academic Bowl After-School Activities Athletics Program Close-Up Program International Club Student Government Transition Partnership Resource/Service Referrals Steven Gonzales (‘13) California Deaf Education Resource Center The California Department of Education, along with the California Schools for the Deaf, agrees that one of its most important goals is to ensure a quality education for Deaf and Hard of Hearing children and adolescents. 
Together, we recognize that the more consistently deaf and hard of hearing children in California receive resources and services, the more these children can benefit from a quality education. Following the initiative of Scott Kerby, Director of the State Special Schools and Services Division, these entities are working to establish the California Deaf Education Resource Center (CDERC). In accordance with California Education Codes, the CDERC aims to provide support to all educators, professionals and caregivers who work with Deaf and Hard of Hearing children. These services will include training and guidance on early intervention, parent education, curricula California Special Edition Deaf Teen Issue September 2013 Author, Editor, and Designer: Erika Thompson Community Resource Specialist ethompson@csdr-cde.ca.gov Copy Editors: Brandi Davies and Lynn Gold Photo/Layout Assistance: Terri Vincent, Laurie Lewis, Wes Rinella, and Rene Visco Special Thanks: Julie Rems-Smario, Jon Levy, Stacey Winsberg, and Niel Thompson School Superintendent: Mal Grossinger and assessment, and community education, as well as assistance to Local Education Agencies. Under the leadership of the Schools for the Deaf, CDERC will have the advantage of a large, state-wide community of professionals from which to draw resources and information to develop trainings and services. The CDERC invites everyone to work together toward a shared vision of language, educational opportunities, school readiness, and prosperity among all Deaf and Hard of Hearing children in California. To access services or to ask questions in Southern California, contact Dr. M. Natasha Kordus, Ph.D. To access services or to ask questions in Northern California, contact Ms. Roberta Daniels at the California School for the Deaf in Fremont. - M. Natasha Kordus, Ph.D., CDERC Supervisor at CSD Riverside Resource Center Contact: Southern California— M. Natasha Kordus, Ph.D. 
951.248.7700 x6542 951.824.8105 VP nkordus@csdr-cde.ca.gov | http://issuu.com/csdrinfo/docs/csdr_special_edition_2013_-_deaf_te_2d9b068c896b20 | CC-MAIN-2014-52 | refinedweb | 12,461 | 60.45 |
cpp is not run when building .c or .cpp files
SummarySummary
C pre-processor is not run and
-D or
-optP flags are not respected when building
.c or
.cpp files.
Steps to reproduceSteps to reproduce
$ cat test.c #ifndef TEST #define TEST 0 #endif #include <stdio.h> int main(void) { printf("%d\n", TEST); return 0; } $ ghc -DTEST=9 test.c -no-hs-main && ./a.out 0
or
$ mkdir -p x && touch x/x.h $ echo '#include "x.h"' > y.c $ ghc -Ix -c y.c $ ghc -optP=-Ix -c y.c y.c:1:10: error: fatal error: x.h: No such file or directory #include "x.h" ^~~~~ | 1 | #include "x.h" | ^ compilation terminated. `gcc' failed in phase `C Compiler'. (Exit code: 1)
Expected behaviorExpected behavior
Output
9 for the first example.
Both
ghc -Ix -c y.c and
ghc -optP=-Ix -c y.c work in the second example.
This is because we don't run cpp for
Cc,
Ccxx,
Cobjc and
Cobjcxx phase and we probably should do so.
For
HCc phase, we usually run cpp in previous phase and probably don't want to run it again.
To upload designs, you'll need to enable LFS and have an admin enable hashed storage. More information | https://gitlab.haskell.org/ghc/ghc/-/issues/16737 | CC-MAIN-2022-27 | refinedweb | 210 | 79.67 |
Hi, im trying to get this to read the registry by reading the contents of a folder, turning it in to an array then displaying the contents. When i compile it works fine with no errors, however when i run it once compiled, it give me the errorI've tried putting throw IOException in to, which compiles fine but still gives me the same error message.I've tried putting throw IOException in to, which compiles fine but still gives me the same error message.Exception in thread "main" java.lang.NullPointerException
at USBDevices.main(USBDevices.java:19)
Can anyone help me with this?
Thanks
import java.io.*; import java.util.*; public class USBDevices { public static void main(String[] args) { // This retrieves the files from the registry and inserting it in to an array called file java.io.File listroot = new java.io.File("\"HKLM\\SYSTEM\\CurrentControlSet\\Enum\\USBSTOR\""); java.io.File[] files = listroot.listFiles(); System.out.println("USB Registry Entries:"); // This opens the array called and displays the results for (java.io.File file : files) { if (file.isDirectory()) continue; System.out.println(file.getPath()); } } } | http://www.javaprogrammingforums.com/whats-wrong-my-code/13841-can-anyone-help-my-code-please-reading-registry.html | CC-MAIN-2015-18 | refinedweb | 185 | 59.8 |
In this example, we are going to create a process that calculates the square of numbers and prints the results to the console.
from multiprocessing import Process

def square(numbers):
    for x in numbers:
        print('%s squared is %s' % (x, x**2))

if __name__ == '__main__':
    numbers = [43, 50, 5, 98, 34, 35]
    p = Process(target=square, args=(numbers,))
    p.start()
    p.join()
    print('Done')

#result
43 squared is 1849
50 squared is 2500
5 squared is 25
98 squared is 9604
34 squared is 1156
35 squared is 1225
Done
You can also create more than one process at the same time, as shown in the example below, in which process p1 prints the squares of the numbers while a second process p2 checks whether the given numbers are even.
from multiprocessing import Process

def square(numbers):
    for x in numbers:
        print('%s squared is %s' % (x, x**2))

def is_even(numbers):
    for x in numbers:
        if x % 2 == 0:
            print('%s is an even number' % (x))

if __name__ == '__main__':
    numbers = [43, 50, 5, 98, 34, 35]
    p1 = Process(target=square, args=(numbers,))
    p2 = Process(target=is_even, args=(numbers,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    print('Done')

#result
43 squared is 1849
50 squared is 2500
5 squared is 25
98 squared is 9604
34 squared is 1156
35 squared is 1225
50 is an even number
98 is an even number
34 is an even number
Done

Note that because the two processes run concurrently, the interleaving of their output may vary between runs.
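When you want to run the same function over many inputs, managing one Process per task by hand becomes unwieldy. The multiprocessing module also provides a Pool class for exactly this; the sketch below is an illustrative addition (not part of the examples above) and assumes Python 3:

```python
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == '__main__':
    numbers = [43, 50, 5, 98, 34, 35]
    # A pool of 4 worker processes; map() splits the list across the
    # workers and collects the results in the original order.
    with Pool(processes=4) as pool:
        results = pool.map(square, numbers)
    print(results)  # [1849, 2500, 25, 9604, 1156, 1225]
```

Unlike the manual Process approach, Pool.map returns the results to the parent directly, so no explicit queue is needed.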
Communication Between Processes
Multiprocessing supports two types of communication channels between processes:
- Pipes
- Queues
Queues
Queue objects are used to pass data between processes. They can store any pickle-able Python object, and you can use them as shown in the example below:
import multiprocessing

def is_even(numbers, q):
    for n in numbers:
        if n % 2 == 0:
            q.put(n)

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=is_even, args=(range(20), q))
    p.start()
    p.join()

    while not q.empty():
        print(q.get())
In the above example, we first create a function that checks whether a number is even and, if so, puts it at the end of the queue. We then instantiate a queue object and a process object and start the process.
Finally, we check if the queue is empty, and if not, we get the values from the front of the queue and print them to the console.
We have shown how to share data between two processes using a queue, and the result is as shown below.
# result
0
2
4
6
8
10
12
14
16
18
It's also important to note that Python's standard library has a separate queue module (called Queue in Python 2) that is used to share data between threads within a single process, unlike multiprocessing.Queue, which is built on top of pipes and locks so that it can pass data between separate processes.
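For comparison, here is a minimal sketch of the thread-side equivalent, using the standard queue module together with threading (module names are the Python 3 spellings):

```python
import threading
import queue

def is_even_worker(numbers, q):
    for n in numbers:
        if n % 2 == 0:
            q.put(n)

# Threads share the parent's memory, so the queue is shared by reference
q = queue.Queue()
t = threading.Thread(target=is_even_worker, args=(range(20), q))
t.start()
t.join()

evens = []
while not q.empty():
    evens.append(q.get())
print(evens)  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

No `if __name__ == '__main__'` guard is needed here, because threads do not re-import the module the way spawned processes can.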
Pipes
Pipes in multiprocessing are primarily used for communication between processes. Usage is as simple as:
from multiprocessing import Process, Pipe def f(conn): conn.send(['hello world']) conn.close() if __name__ == '__main__': parent_conn, child_conn = Pipe() p = Process(target=f, args=(child_conn,)) p.start() print parent_conn.recv() p.join()
Pipe() returns two connection objects which represent the two ends of the pipe. Each connection object has
send() and
recv() methods. Here we create a process that prints the string
hello world and then shares the data across.
Result
# result ['hello world']
Locks
Locks work by ensuring that only one process is executed at a time, hence blocking other processes from executing similar code. This allows the process to be completed, and only then can the lock be released.
The example below shows a pretty straightforward usage of the Lock method.
from multiprocessing import Process, Lock def greeting(l, i): l.acquire() print 'hello', i l.release() if __name__ == '__main__': lock = Lock() names = ['Alex', 'sam', 'Bernard', 'Patrick', 'Jude', 'Williams'] for name in names: Process(target=greeting, args=(lock, name)).start() #result hello Alex hello sam hello Bernard hello Patrick hello Jude hello Williams
In this code, we first import the Lock method, acquire it, execute the print function, and then release it.
Logging
The multiprocessing module also provides support for logging, although the logging package doesn't use locks so messages between processes might end up being mixed up during execution.
Usage of logging is as simple as:
import multiprocessing, logging logger = multiprocessing.log_to_stderr() logger.setLevel(logging.INFO) logger.warning('Error has occurred')
Here we first import the logging and multiprocessing modules, and we then define the
multiprocessing.log_to_stderr() method, which performs a call to
get_logger() as well as adding a handler which sends output to
sys.stderr. Finally, we set the logger level and the message we want to convey.
Conclusion
This tutorial has covered what is necessary to get started with multiprocessing in Python. Multiprocessing overcomes the problem of GIL (Global Interpreter Lock) since it leverages the use of subprocesses instead of threads.
There is much more in the Python documentation that isn’t covered in this tutorial, so feel free to visit the Python multiprocessing docs and utilize the full power of this module.
Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
| https://code.tutsplus.com/tutorials/introduction-to-multiprocessing-in-python--cms-30281 | CC-MAIN-2018-22 | refinedweb | 862 | 59.74 |
Jupyter = Julia + Python + R
If you are working as a data scientist, you are likely recording your complete analysis process daily, much in the same way other scientists use a lab notebook to record tests, progress, results, and conclusions. What tools are you using to do this? I am using Jupyter Notebook every day-let me introduce it briefly to you.
- What is a Jupyter Notebook?
- Why is it useful for data analysis?
- What are the features of Jupyter Notebook?
- Perform simple data analysis in Machine Learning.
Introduction to Jupyter Notebooks
What is a Jupyter Notebook?
Jupyter Project¹ is a spin-off project from the I-Python project, which initially provided an interface only for the Python language and continues to make available the canonical Python kernel for Jupyter. The name Jupyter itself is derived from the combination of Julia, Python, and R.
This is a sample opening page in Jupyter.
Why is it useful?
Project Jupyter exists to develop an open-source platform, open standards, and services for interactive computing across many programming languages such as Python, R, and MATLAB.
Jupyter is available as a web application on a cloud ecosystem from a number of places, such as Saturn Cloud². It can also be used locally over a wide variety of installations which contain live code, equations, figures, interactive apps, and Markdown text.
Features of Jupyter Notebooks
A Jupyter Notebook is fundamentally a JSON file with a number of annotations. There are three main parts of the Notebook:
- Metadata: a data dictionary of definitions used to set-up and display the notebook.
- Notebook format: version numbers of the software used to create the notebook. The version number is used for backward compatibility.
- List of cells: there are three different types of cells — markdown (display), code (to excite), and output.
If you open the IPYNB file in a text editor, you will see the basic contents of a Jupyter node.
How will we work with Jupyter notebooks?
There are four steps:
- First step: Create a new notebook for data analysis.
- Second step: Add your analysis steps, coding, and output.
- Third step: Surround your analysis with organizational and presentational markdown to communicate an entire story.
- Last step: Interactive notebooks will then be used by others to modify parameters and data to note the effects of their changes.
Getting Jupyter Notebooks with Saturn Cloud
One of the quickest ways to get a Jupyter Notebook is to register an account on Saturn Cloud. It allows you to quickly spin up Jupyter notebooks in the cloud and scale them according to your needs.
- It deploys in your cloud so there’s no need to migrate your data. Use the whole Python ecosystem via Jupyter.
- Easily build environments and import packages (Pandas, NumPy, SciPy, etc).
- You can publish notebooks and easily collaborate on cloud-hosted Jupyter.
- Scalable Dask from laptop to server to cluster.
Above are steps to create a Jupyter Notebook on Saturn Cloud.
See further:
Can we convert a Jupyter Notebook to a Python script?
Yes, you can convert a Jupyter Notebook to a Python script. This is equivalent to copying and pasting the contents of each code block (cell) into a single .py file. The markdown sections are also included as comments.
The conversion can be done in the command line:
jupyter nbconvert --to=python notebook-name.ipynb
An example of a conversion from notebook to script.
An example of using Jupyter Notebooks for ML
Let assume that you are a doctor evaluating data for ten people and predicting if somebody could get coronavirus.
We will go step by step to evaluate our algorithm by calculating metrics such as TP, TN, FP, FN, TPR, TNR, PPV, NPV, FPR and ACC. Let us assume that you are familiar with those metrics (if not, read further here⁴).
Let’s start!
First of all, we create a new Jupyter Notebook file.
— “coronavirus.ipynb”
You predict six people will get coronavirus.
y_pred = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
By the end of the season, you find only five people had coronavirus.
y_true = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
We create a confusion matrix and display it.
from sklearn.metrics import confusion_matrix tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()print('Here is the confusion matrix') print(confusion_matrix(y_true, y_pred, labels=[1, 0]))
And here is the confusion matrix: TP = 5, TN = 4, FP = 1, FN = 0.
We calculate the percentage of sick people who are correctly identified as having the condition (also called sensitivity).
sensitivity = tp / (tp+fn)print('The percentage of sick people who are correctly identified as having the condition') print('Sensitivity : %7.3f %%' % (sensitivity*100),'\n')
The result shows that you can 100% correctly predict the people who will get coronavirus. That means sensitivity is 100%.
We also calculate the percentage of healthy people who are correctly identified as not having the condition (also called specificity).
specificity = tn / (tn+fp)print(‘The percentage of healthy people who are correctly identified as not having the condition’) print(‘Specificity : %7.3f %%’ % (specificity*100))
The result shows that you can correctly predict at a rate of 80% that people will not get coronavirus, that is, specificity is 80%.
Next, we calculate the precision of this algorithm.
from sklearn.metrics import precision_score print('The ratio of properly predicted positive clarifications to the total predicted positive clarifications.') print(precision_score(y_true, y_pred, average=None))
The algorithm can 100% predict ‘no-coronavirus’ but only 80% ‘coronavirus’ cases correctly.
We calculate the probability that records with a negative predicted result truly should be negative (called NPV metric).
npv = tn / (tn+fn) print(‘The probability that records with a negative predicted result truly should be negative: %7.3f %%’ % (npv*100))
It shows that NPV = 100%, which is very good.
We calculate the proportion of positives that yield negative prediction outcomes with the specific model (also called miss rate or FNR).
fnr = fp / (fn+tp) print(‘The proportion of positives that yield negative prediction outcomes with the specific model: %7.3f %%’ % (fnr*100))
It shows that 20% of the predicted positives are negative, meaning 1 in 5 predicted negative outputs are positive. This is not good, as you will miss 1 in 5 patients.
Then, we also calculate the false positive rate (also called FPR).
fdr = fp / (fp+tp) print(‘False discovery rate: %7.3f %%’ % (fdr*100))
It shows that nearly 17% of the predicted negative is positive, meaning 17 in 100 predicted positive outcomes are negative.
Finally, we calculate statistical biases, as these cause a difference between a result and a “true” value.
acc = (tp + tn) / (tp + tn + fp + fn) print(‘Accuracy: %7.3f %%’ % (acc*100))
This will be reported as 90% accuracy. This is a good outcome for our coronavirus model.
Conclusion
We learned how to get Jupyter Notebook on the cloud with Saturn Cloud. We also were exposed to the notebook structure, and saw the typical workflow used when developing a notebook. Lastly, we did some simple data analysis in ML.
Guest Post: Trung Anh Dang
Stay up to date with Saturn Cloud on LinkedIn and Twitter.
You may also be interested in: Best Practices for Jupyter Notebooks
References
- Jupyter homepage:
- The Jupyter notebook file:
- Metrics to Test the Accuracy of Machine Learning Algorithms | https://www.saturncloud.io/s/jupyternotebookmachinelearning/ | CC-MAIN-2020-24 | refinedweb | 1,212 | 56.45 |
A Python implementation of the AAA algorithm for rational approximation
Project description
The AAA algorithm for rational approximation
This is a Python implementation of the AAA algorithm for rational approximation described in the paper "The AAA Algorithm for Rational Approximation" by Yuji Nakatsukasa, Olivier Sète, and Lloyd N. Trefethen, SIAM Journal on Scientific Computing 2018 40:3, A1494-A1522. (doi)
A MATLAB implementation of this algorithm is contained in Chebfun. The present Python version is a more or less direct port of the MATLAB version.
The "cleanup" feature for spurious poles and zeros is not currently implemented.
Installation
The implementation is in pure Python and requires only numpy and scipy as dependencies. Install it using pip:
pip install aaa-approx
Usage
Here's an example of how to approximate a function in the interval [0,1]:
import numpy as np from aaa import aaa Z = np.linspace(0.0, 1.0, 1000) F = np.exp(Z) * np.sin(2*np.pi*Z) r = aaa(F, Z, mmax=10)
Instead of the maximum number of terms
mmax, it's also possible to specify
the error tolerance
tol. Both arguments work exactly as in the MATLAB
version.
The returned object
r is an instance of the class
aaa.BarycentricRational and can
be called like a function. For instance, you can compute the error on
Z like this:
err = F - r(Z) print(np.linalg.norm(err, np.inf))
If you are interested in the poles and residues of the computed rational function, you can query them like
pol,res = r.polres()
and the zeroes using
zer = r.zeros()
Finally, the nodes, values and weights used for interpolation (called
zj,
fj
and
wj in the original implementation) can be accessed as properties:
r.nodes r.values r.weights
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/aaa-approx/ | CC-MAIN-2022-33 | refinedweb | 324 | 55.03 |
Summary of changes from v2.5.58 to v2.5.59
============================================
<jsimmons@maxwell.earthlink.net>
I810 fbdev updates. Cursor fix for ATI Mach64 cards on big endian machines. Buffer overflow fix for the fbcon putcs function. C99 initializers for the STI console drivers. Voodoo 1/2 and NVIDIA driver updates.
<jsimmons@maxwell.earthlink.net>
Added resize support for the framebuffer console. Now you can change the console size via stty. Color palette changing on VC switch is also supported.
<jsimmons@maxwell.earthlink.net>
[RIVA FBDEV] Driver now uses its own fb_open and fb_release functions again. It has no ill effects. The driver uses strictly hardware acceleration, so we don't need cfb_fillrect and cfb_copyarea.
Cleaned up font.h. Geert's original patch broke it up into a font.h in video and one in linux. Now I put them back together again in include/linux. The m68k platform has been updated for this change.
<jsimmons@maxwell.earthlink.net>
Updates from Helge Deller for the console/fbdev drivers for the PARISC platform. Small fix for clearing the screen and a string typo for the Voodoo 1/2 driver.
<jsimmons@maxwell.earthlink.net>
[MONITOR support] GTF support for VESA complaint monitors. Here we calculate the general timings needed so we don't over step the bounds for a monitor.
[fbmem.c cleanup] Name change to make teh code easier to read.
<jsimmons@kozmo.(none)>
[ATY] Somehow a merge mistake happened. We removed fb_set_var.
<jsimmons@maxwell.earthlink.net>
Remove fb_set_var. Somehow it was missed in a merge conflict.
<jsimmons@maxwell.earthlink.net>
Final updates to the GTF code. Now the code can generate GTF timings regardless of the validity of info->monspecs.
[ATYFB] Updates to the aty driver.
<jsimmons@maxwell.earthlink.net>
[TRIDENT FBDEV] Driver ported to the new api.
<bjorn_helgaas@hp.com>
[AGP] factor device command updates
<bjorn_helgaas@hp.com>
[AGP] fix old pci_find_capability merge botch
<bjorn_helgaas@hp.com>
[AGP] Remove unused var
<bjorn_helgaas@hp.com>
[AGP] Print AGP version & mode when programming devices.
<bjorn_helgaas@hp.com>
[AGP] factor device capability collection
<bjorn_helgaas@hp.com>
[AGP] use PCI_AGP_* constants
<bjorn_helgaas@hp.com>
[AGP] use pci_find_capability in sworks-agp.c
<falk.hueffner@student.uni-tuebingen.de>
[AGP] missing includes on Alpha
<patmans@us.ibm.com>
[PATCH] USB storage sysfs fix.
<greg@kroah.com>
[PATCH] USB: put the usb storage's SCSI device in the proper place in sysfs.
Also makes usb_ifnum_to_if() a public function
<davej@codemonkey.org.uk>
[WATCHDOG] clean up includes.
<davej@codemonkey.org.uk>
[WATCHDOG] Final 2.4 bits for advantechwdt
<davej@codemonkey.org.uk>
[WATCHDOG] Final 2.4 bits for eurotechwdt
<davej@codemonkey.org.uk>
[WATCHDOG] Final 2.4 bits for ib700wdt
<anton@samba.org>
ppc64: move BUG into asm/bug.h
<anton@samba.org>
ppc64: update comment, we now zero extend all 6 arguments in the 32bit syscall path, from Milton Miller
<davem@nuts.ninka.net>
[SUNSAB]: Fix uart_get_baud_rate args.
<anton@samba.org>
ppc64: 2.5 module support, from Rusty
<anton@samba.org>
ppc64: fix build when CONFIG_MODULES=n
<jmorris@intercode.com.au>
[CRYPTO]: Add support for SHA-384 and SHA-512
- Merged SHA-384 and SHA-512 code from Kyle McMartin
<kyle@gondolin.debian.net>
- Added test vectors.
- Documentation and credits updates.
<jmorris@intercode.com.au>
[CRYPTO] remove superfluous goto from des module init exception path
<jmorris@intercode.com.au>
[CRYPTO] Add AES and MD4 to tcrypto crypto_alg_available() test.
<davem@nuts.ninka.net>
[SPARC64]: Define PAGE_BUG in asm/bug.h
<davem@nuts.ninka.net>
[SPARC64]: Add UltraSPARC-III cpu frequency driver.
<mochel@osdl.org>
driver model: update documentation.
From Art Haas.
<mochel@osdl.org>
kobject: export kset_find_obj.
From Louis Zhang.
<mochel@osdl.org>
sysfs: fixup some remaining s390 files.
From Arnd Bergmann.
<mochel@osdl.org>
sysfs: fixup NUMA file that was missed.
- Remove count and off parameters from show() method.
<davej@codemonkey.org.uk>
[WATCHDOG] Final 2.4 bits for softdog.c
<sandeen@sgi.com>
[XFS] Make sure we don't walk off the end of the err_level array
SGI Modid: 2.5.x-xfs:slinx:135827a
<sandeen@sgi.com>
[XFS] Fix dyslexic definition of XFS_MAX_ERR_LEVEL...
SGI Modid: 2.5.x-xfs:slinx:135847a
<davej@codemonkey.org.uk>
[WATCHDOG] Final 2.4 changes for w83877f_wdt.c
<sandeen@sgi.com>
[XFS] Merge max file offset fix - use standard Linux macros
SGI Modid: 2.5.x-xfs:slinx:135881a
<nathans@sgi.com>
[XFS] Fix up some comments, tidy up some macros - no functional changes.
SGI Modid: 2.5.x-xfs:slinx:135917a
<sandeen@sgi.com>
[XFS] Handle mode 0 inodes that find their way onto the unlinked list
These shouldn't be there, probably the result of corruption.
However, if we find one, handle it specially so that we don't
deadlock during unlinked list processing in recovery. Without
xfs_iput_new, we'd be waiting on an inode lock we already hold.
SGI Modid: 2.5.x-xfs:slinx:136535a
<davej@codemonkey.org.uk>
[WATCHDOG] final 2.4 fixes for wdt.c
<hch@sgi.com>
[XFS] remove superfluous MAXNAMELEN checks
SGI Modid: 2.5.x-xfs:slinx:135168a
<hch@sgi.com>
[XFS] some more rename cleanups
SGI Modid: 2.5.x-xfs:slinx:135919a
<hch@sgi.com>
[XFS] xfs_getattr should be static
SGI Modid: 2.5.x-xfs:slinx:135920a
<cattelan@sgi.com>
[XFS] Fix the cmn_err stuff to mask the error level before it checks for max value
SGI Modid: 2.5.x-xfs:slinx:135869a
<davej@codemonkey.org.uk>
[WATCHDOG] Final 2.4 changes for wdt285.c
<cattelan@sgi.com>
[XFS] make *cmn_err interrupt safe
SGI Modid: 2.5.x-xfs:slinx:136126a
<cattelan@sgi.com>
[XFS] Revisit the remount read only code again.
Apparently the root file system is not being synced correctly during system shutdown.
SGI Modid: 2.5.x-xfs:slinx:136269a
<hch@sgi.com>
[XFS] remount r/o fixes
SGI Modid: 2.5.x-xfs:slinx:136795a
<davej@codemonkey.org.uk>
[WATCHDOG] Final 2.4 changes for wdt_pci.c
<hch@sgi.com>
[XFS] update xattr.h copyright date
<hch@sgi.com>
[XFS] add dmapi miscdevice minor number
This doesn't mean dmapi is scheduled for inclusion, just adding
the reserved minor number to miscdevice.h for documentation purposes.
<hch@sgi.com>
[XFS] fix namespace pollution
SGI Modid: 2.5.x-xfs:slinx:136827a
<mochel@osdl.org>
sysfs: minor documentation update.
<willy@debian.org>
[PATCH] acpi_bus_register_driver patch
The current ACPI code searches for a _HID of PNP0A03. This is wrong,
it needs to check _CID too. But we already have generic code for doing
that, so this patch converts the ACPI pcihp code to do this.
<kai@tp1.ruhr-uni-bochum.de>
Module Sanity Check
This patch, based on Rusty's implementation, adds a special section
to vmlinux and all modules, which contain the kernel version
string, values of some particularly important config options
(SMP,preempt,proc family) and the gcc version.
When inserting a module, the version string is checked against the
kernel version string and loading is rejected if they don't match.
The version string is actually added to the modules during the
final .ko generation, so that a changed version string only causes
relinking, not recompilation, which is a major performance
improvement over the old 2.4 way of doing things.
<mochel@osdl.org>
sysfs: fixup SCSI debug driver files.
<jsimmons@maxwell.earthlink.net>
[GENERIC IMAGEBLIT ACCEL]
a) Fix for cfb_imagblit so it can handle monochrome bitmaps with widths not multiples of 8.
b) Further optiminzation of fast_imageblit by removing unnecessary steps from its main loop.
c) fast_imageblit show now work for bitmaps widths which are least divisible by 4. 4x6 and 12x22 shoudl use fast_imageblit now.
d) Use hadrware syncing method of hardware if present.
e) trivial: wrap text at 80 columns.
[RIVA and 3DFX] imageblit functions busted for large images. Use generic functions for now.
Several syncing issues fixed between the accel engine and access to the framebuffer is several files.
<mochel@osdl.org>
deadline iosched: make sure queue is valid before unregistering it.
- Fixes oops on boot when freeing initrd in 2.5.58.
<rmk@flint.arm.linux.org.uk>
[ARM] Add new system call entries
Add entries for sendfile64, futex, async io, etc system calls to
both unistd.h and the system call handler table.
<rmk@flint.arm.linux.org.uk>
[ARM] Remove redundant definitions from ide.h
Remove ide_release_lock and ide_get_lock definitions from asm-arm/ide.h;
they're defined in include/linux/ide.h.
<rmk@flint.arm.linux.org.uk>
[ARM] Fix CPUFREQ initialisation oops
The CPUFREQ initialisation now registers an interface with the device
model, and thus needs to initialise after postcore. We use the
arch level for this. This does, however, impose the restriction
that cpufreq may not be available to other architecture
initialisation code.
<akpm@digeo.com>
[PATCH] ext3 ino_t removal
Patch from Andreas Dilger <adilger@clusterfs.com>
This patch against 2.5.53 removes my erroneous use of ino_t in a couple of
places in the ext3 code. This has been replaced with unsigned long (the same
as is used for inode->i_ino). This patch matches the fix submitted to 2.4
for fixing 64-bit compiler warnings, and also replaces a couple of %ld with
%lu to forestall output weirdness with filesystems with a few billion inodes.
<akpm@digeo.com>
[PATCH] factor free memory into max_sane_readahead()
max_sane_readahead() permits the user to readahead up to half of the
inactive list's worth of pages, which is totally wrong if most of
memory is free.
So make the limit be
(nr_inactive + nr_free) / 2
<akpm@digeo.com>
[PATCH] fix ext3 memory leak
This is the leak which Con found. Long story...
- If a dirty page is fed into ext3_writepage() during truncate,
block_write_full_page() will return -EIO (it's outside i_size) and will
leave the buffers dirty. In the expectation that discard_buffer() will
clean them.
- ext3_writepage() then adds the still-dirty buffers to the journal's
"async data list". These are buffers which are known to have had IO
started. All we need to do is to wait on them in commit.
- meanwhile, truncate will chop the pages off the address_space. But
truncate cannot invalidate the buffers (in journal_unmap_buffer()) because
the buffers are attached to the committing transaction. (hm. This
behaviour in journal_unmap_buffer() is bogus. We just never need to write
these buffers.)
- ext3 commit will "wait on writeout" of these writepage buffers (even
though it was never started) and will then release them from the
journalling system.
So we end up with pages which are attached to no mapping, which are clean and
which have dirty buffers. These are unreclaimable.
Aside:
ext3-ordered has two buffer lists: the "sync data list" and the "async
data list".
The sync list consists of probably-dirty buffers which were dirtied in
commit_write(). Transaction commit must write all these out and wait on
them.
The async list supposedly consists of clean buffers which were attached
to the journal in ->writepage. These have had IO started (by writepage) so
commit merely needs to wait on them.
This is all designed for the 2.4 VM really. In 2.5, tons of writeback
goes via writepage (instead of the buffer lru) and these buffers end up
madly hopping between the async and sync lists.
Plus it's arguably incorrect to just wait on the writes in commit - if
the buffers were set dirty again (say, by zap_pte_range()) then perhaps we
should write them again before committing.
So what the patch does is to remove the async list. All ordered-data buffers
are now attached to the single "sync data list". So when we come to commit,
those buffers which are dirty will have IO started and all buffers are waited
upon.
This means that the dirty buffers against a clean page which came about from
block_write_full_page()'s -EIO will be written to disk in commit - this
cleans them, and the page is now reclaimable. No leak.
It seems bogus to write these buffers in commit, and indeed it is. But ext3
will not allow those blocks to be reused until the commit has ended so there
is no corruption risk. And the amount of data involved is low - it only
comes about as a race between truncate and writepage().
<akpm@digeo.com>
[PATCH] hugetlbfs: don't implement read/write file_ops
From Rohit Seth.
We're currently disabling read() and write() against hugetlbfs files via the
address_space ops. But that's a bit awkward, and results in reads and writes
instantiating useless, non-uptodate 4k pagecache against the inodes.
The patch removes the read, write and sendfile file_operations entries. So
the caller will get their -EINVAL without us ever having to go to the
pagecache layer.
<akpm@digeo.com>
[PATCH] Use for_each_task_pid() in do_SAK()
Patch from Bill Irwin.
__do_SAK() simply wants to kill off processes using a given tty. This
converts it to use for_each_task_pid().
<akpm@digeo.com>
[PATCH] Create a per-cpu process counter for /proc reporting
proc_fill_super() simply wants a count of processes, not threads.
This creates a per-cpu counter for it to use to determine that.
<akpm@digeo.com>
[PATCH] remove has_stopped_jobs()
patch from Bill Irwin
has_stopped_jobs() is completely unused. This patch removes
has_stopped_jobs() and renames __has_stopped_jobs() to has_stopped_jobs().
<greg@kroah.com>
TTY: add module reference counting for tty drivers.
Note: there are still races with unloading modules; this patch
does not fix that...
<drepper@redhat.com>
[PATCH] new CPUID bit
Northwood P4's have one more bit in the CPUID processor info set: bit
31. Intel calls the feature PBE (Pending Break Enable).
The attached patch adds the necessary entry.
<alex@ssi.bg>
[PATCH] missing break in amd 486 cpu case
An old AMD 486DX4-SE (as reported by the BIOS) here crashes on boot in
amd_init, doing rdmsr in the K6 case. The Elan fixes added case 4 there
but without a break. This break allows it to boot.
<cloos@jhcloos.com>
[PATCH] i8k driver update to i8k-1.13
In addition to the diff from the 1.7 i8kutils release to the 1.13
release, I made the new globals static as per Rusty's namespace
pollution patch yesterday, and removed the reference to an include no
longer in 2.5.
<jsimmons@maxwell.earthlink.net>
[STI] Updates to latest PARISC changes. Use the latest PCI ids.
<rl@hellgate.ch>
[PATCH] export skb_pad symbol
Actually exporting the symbol introduced in 2.5.57 makes module users
happy.
<rl@hellgate.ch>
[PATCH] Fix via-rhine using skb_padto
This patch has already made it into 2.4.21pre3-ac4. Please apply.
<greg@kroah.com>
[PATCH] USB: add dev attribute for usb-serial devices in sysfs
<gerg@snapgear.com>
[PATCH] bug.h for m68knommu arch
This adds the new bug.h file for the m68knommu arch.
It is basically a copy of asm-cris/bug.h.
<gerg@snapgear.com>
[PATCH] remove BUG from m68knommu arch page.h
This removes the BUG and PAGE_BUG macros from asm-m68knommu/page.h.
All this is now moved into asm-m68knommu/bug.h.
<gerg@snapgear.com>
[PATCH] remove obsolete himem.ld from m68knommu sub-arch
This removes the last remaining obsolete m68knommu sub-architecture
linker script. No longer needed with the new merge script.
<gerg@snapgear.com>
[PATCH] clean up linker symbols in 68EZ328 ucsimm target
This cleans up the linker symbols used in the 68EZ328 ucsimm
target assembler head file. Removed some unused (and not defined)
names. Also changes a couple of names to be consistent with all
other m68knommu targets.
<gerg@snapgear.com>
[PATCH] clean up linker symbols in 68EZ328 ucdimm target
This cleans up the linker symbols used in the 68EZ328 ucdimm
target assembler head file. Removed some unused (and not defined)
names. Also changes a couple of names to be consistent with all
other m68knommu targets.
<gerg@snapgear.com>
[PATCH] move common macros into m68knommu entry.h
This overhauls asm-m68knommu/entry.h. It contains much cruft,
and is out of date relative to the underlying entry.S code.
More specifically this brings the SAVE_ALL and RESTORE_ALL macros
from the various m68knommu entry.S files into this header.
<gerg@snapgear.com>
[PATCH] remove common code from m68knommu/5307 entry.S
This converts the current m68knommu/5307 entry.S file to use the new
entry.h, and also removes all the common entry.S code that is now in the
common m68knommu/kernel/entry.S.
<gerg@snapgear.com>
[PATCH] remove common code from m68knommu/68328 entry.S
This converts the current m68knommu/68328 entry.S file to use the new
entry.h, and also removes all the common entry.S code that is now in the
common m68knommu/kernel/entry.S.
<gerg@snapgear.com>
[PATCH] remove common code from m68knommu/68360 entry.S
This converts the current m68knommu/68360 entry.S file to use the new
entry.h, and also removes all the common entry.S code that is now in the
common m68knommu/kernel/entry.S.
<gerg@snapgear.com>
[PATCH] build common m68knommu entry.S
This adds the new common m68knommu entry.S into the build list.
<anton@samba.org>
ppc64: move BUG_ILLEGAL_INSTR into asm/bug.h, noted by Milton Miller
<anton@samba.org>
ppc64: remove old strace hack
<davej@codemonkey.org.uk>
[AGPGART] warning fixes from Bjorn's last patches.
<kai@tp1.ruhr-uni-bochum.de>
ISDN/HiSax: Fix typo in drivers/isdn/hisax/config.c
<kai@tp1.ruhr-uni-bochum.de>
ISDN/HiSax: Fix PnP merge
Now it actually even compiles.
<kai@tp1.ruhr-uni-bochum.de>
ISDN: Fix the janitor fix
Adding a check for allocation failure was a good idea; it just needs
to check the right variable...
<hch@lst.de>
[PATCH] more procfs bits for !CONFIG_MMU
New version with all ifdef CONFIG_MMU gone from procfs.
Instead, the conditional code is in either task_mmu.c/task_nommu.c, and
the Makefile will select the proper file for inclusion depending on
CONFIG_MMU.
<davej@codemonkey.org.uk>
[AGPGART] implement module locking that works.
<davej@codemonkey.org.uk>
[AGPGART] Remove ancient unused bits from headers.
<paubert@iram.es>
[PATCH] Cleanup of the lcall7/lcall27 entry path.
I have more carefully tested the proposed removal of the NT flag
clearing on lcall entry.
The question I wanted to answer is: is it necessary to clear NT in the
sysenter entry path as implemented for lcall7/lcall27 or is it possible
to remove the flag manipulation from do_lcall?
Doing it only for one and not the other looks wrong since several return
paths are shared, especially the ones which end up in iret, the only
instruction which is affected by the NT flag.
The conclusion is that 2.5 is NT safe (had to dig out an old P5-133 which
I could crash without fear of data loss, so I have only tested on 1
machine). The reason this cleanup works is that now (since Jan 5th) flags
are saved and restored in switch_to() to keep IOPL private to a process
even when using sysenter/sysexit.
The side effect of that patch is that NT also becomes process-private
instead of infecting all processes and triggering a killfest of all user
mode processes, including init (AFAICT kernel threads survived, but I
did not have any debug tools enabled in the kernel).
The only addition to the preceding version is that interrupts are
reenabled in the iret fixup path because it seems that do_exit() might
otherwise spend quite some time with interrupts disabled.
<kai@tp1.ruhr-uni-bochum.de>
Consolidate read-only sections in arch/*/vmlinux.lds.S
It's annoying having to touch 20+ arch vmlinux.lds.S files for every
new section introduced, just because they all duplicate the same
statements. Since we preprocess vmlinux.lds.S anyway, let's
#include <asm-generic/vmlinux.lds.h> and share the common statements.
This is a first step in consolidating most of the read-only sections.
<trond.myklebust@fys.uio.no>
[PATCH] Fix RPC client warning in 2.5.58...
The warning
/lockd/clntXxXxXxXx RPC: Couldn't create pipefs entry
is due to the lockd process starting RPC clients as an unprivileged
user, causing path_walk() to fail. The following patch fixes it.
<trond.myklebust@fys.uio.no>
[PATCH] Fix NFS root mount handling
<ink@jurassic.park.msu.ru>
[PATCH] alpha ksyms
From Jeff.Wiedemeier@hp.com:
Export proper functions when debugging is enabled.
<ink@jurassic.park.msu.ru>
[PATCH] alpha bootp target
From Jeff.Wiedemeier@hp.com:
Fix alpha Makefiles for bootpfile target.
<ink@jurassic.park.msu.ru>
[PATCH] alpha ipi timeout
From Jeff.Wiedemeier@hp.com:
Two stage timeout in alpha call_function_on_cpu. If the
primary timeout expires with no response, log a message and
start secondary timeout. If a response is received, log how
far into secondary timeout. If no response is received,
crash.
<ink@jurassic.park.msu.ru>
[PATCH] alpha HARDIRQ_BITS
From Jeff.Wiedemeier@hp.com:
Adjust Alpha HARDIRQ_BITS check to make sure there is enough
room for each IPL, not each interrupt (Marvel can have
too many unique device interrupts for that, and it really
only needs to cover potential nesting of interrupts, which covering
the IPLs does)
<ink@jurassic.park.msu.ru>
[PATCH] alpha kernel layout
From Jeff.Wiedemeier@hp.com:
Adjust kernel layout format to match other architectures and
prevent reordering of the first entry in a section with the
section start label.
<ink@jurassic.park.msu.ru>
[PATCH] alpha osf_shmat lock
From Jeff.Wiedemeier@hp.com:
Remove redundant lock in osf_shmat (sys_shmat locks already);
redundant lock has been seen to cause livelock in some workloads.
<ink@jurassic.park.msu.ru>
[PATCH] alpha ev6/ev7 virt_to_phys
From Jeff.Wiedemeier@hp.com:
Adjust virt_to_phys / phys_to_virt functions to follow
EV6/7 PA sign extension to properly convert between 43-bit
superpage I/O addresses and physical addresses.
This change is backwards compatible with all previous Alphas
as they implemented fewer than 41 bits of physical address.
<rth@are.twiddle.net>
[ALPHA] Expose shifts in virt_to_phys to the compiler.
<ink@jurassic.park.msu.ru>
[PATCH] alpha console callbacks
From Jeff.Wiedemeier@hp.com:
Add open_console / close_console callback definitions.
<ink@jurassic.park.msu.ru>
[PATCH] alpha ide hwifs
From Jeff.Wiedemeier@hp.com:
Make the max IDE HWIFS configurable on alpha (default to
previous hardwired value of 4).
<ink@jurassic.park.msu.ru>
[PATCH] alpha mem_size_limit
From Jeff.Wiedemeier@hp.com:
This adds the 32GB limit to setup.c. (It actually hits the first
2 nodes on Marvel, but that's ok, where we really run into a big
problem is if we go past 4, then we hit a much larger hole.)
<ink@jurassic.park.msu.ru>
[PATCH] alpha numa iommu
From Jeff.Wiedemeier@hp.com:
On NUMA alpha systems, attempt to allocate scatter-gather
tables local to IO processor. If that doesn't work, then
allocate anywhere in the system.
<hch@lst.de>
[PATCH] remove more junk from i2c headers
<hch@lst.de>
[PATCH] remove some junk from fs/devfs/Makefile
<hch@lst.de>
[PATCH] remove obsolete kern_umount alias for mntput
<ink@jurassic.park.msu.ru>
[PATCH] alpha numa update
From Jeff.Wiedemeier@hp.com:
numa mm update including moving alpha numa support into
machine vector so a generic numa kernel can be used.
<mbligh@aracnet.com>
[PATCH] Fix interrupt dest mode / delivery mode confusion
Patch from James Cleverdon & John Stultz
Currently the naming for the IO-APIC fields is very confused, we assign
dest_LowestPrio to delivery_mode, and INT_DELIVERY_MODE to dest_mode.
The values are correct, but the naming is wrong - this patch corrects
that confusion. It also moves the definitions of those settings into
subarch, where they belong (we have to use fixed delivery mode for Summit
due to what seems to be an Intel IO-APIC bug with P4 clustered mode).
<ink@jurassic.park.msu.ru>
[PATCH] alpha smp fixes
From Jeff.Wiedemeier@hp.com:
Misc alpha smp updates for 2.5 tree.
<mbligh@aracnet.com>
[PATCH] Add ACPI hook, rename raw_phys_apicid to bios_cpu_apicid
Patch from James Cleverdon & John Stultz
This adds a machine type detection hook to the acpi code, and renames
raw_phys_apicid to bios_cpu_apicid (it's an array of apicids to boot,
indexed by the bios' cpu numbering); other large machines will need
to use it later ... not necessarily using physical interrupts.
<mbligh@aracnet.com>
[PATCH] Make IRQ balancing work with clustered APICs
Patch from James Cleverdon & John Stultz
The IRQ balancing code currently assumes that the logical apicid is
always '1 << cpu', which is not true for the larger platforms.
We express this as an abstracted macro instead, and move the
cpu_to_logical_apicid definition to subarch, so we can make it exactly
"1 << cpu" for normal machines - maximum speed, minimum change risk.
A couple of things are abstracted from the smp_boot_cpus loop in order
to enable us to use the bios_cpu_apicid array to boot cpus from without
disturbing the code path of current machines.
<mbligh@aracnet.com>
[PATCH] Fix APIC header defines for Summit
Patch from James Cleverdon & John Stultz
Changes IO_APIC_MAX_ID to depend on the APIC type we're using.
The Summit machines have to use a larger set of bits in the apic registers,
we enlarge under ifdef for Summit only. We enlarge MAX_APICS for summit
as well as NUMA-Q (it would be nice to move this to subarch, but it creates
circular dependency problems ... I'll fix this up later).
Adds a check for the newer Summit boxes with a different name.
<mbligh@aracnet.com>
[PATCH] Enable Summit in makefile, update summit subarch code
Adds the summit subarch hook to the config file, and updates various things
all inside the summit subarch directories (ie this can't possibly break
anyone else ;-)). The Summit's subarch had got out of sync in a few places.
<ink@jurassic.park.msu.ru>
[PATCH] alpha kernel start address
From Jeff.Wiedemeier@hp.com:
Bump non-legacy start addr to 16mb to accommodate new larger
SRM console footprint.
<linux@brodo.de>
[PATCH] cpufreq: fix compilation, name of gx-suspmod driver
- fix cpufreq drivers compilation on not-bleeding-edge-gcc's (Adrian Bunk)
- gx-suspmod.c hasn't had a name yet
<hch@lst.de>
[PATCH] fix intermezzo compilation
Have I already mentioned that the intermezzo code isn't exactly nicely
readable? ..
<hch@lst.de>
[PATCH] don't include coda_fs_i.h in fs.h
It's simply not needed anymore in 2.5
<henning@meier-geinitz.de>
[PATCH] Change maintainership of USB scanner driver
<hch@lst.de>
[PATCH] umode_t changes from Adam's mini-devfs
The use of umode_t instead of devfs-specific char vs block #defines
in Adam's mini-devfs patch makes sense independent of whether his patch
should get merged. While reviewing his changes I also noticed that
most of the number allocation functionality in devfs has no business
being exported. In addition I cleaned up devfs_alloc_devnum/
devfs_dealloc_devnum a bit.
<hch@sgi.com>
[PATCH] stale bdev reference in quotactl
sys_quotactl tries to do a get_super on a struct block_device * to which
it doesn't hold a reference (nor does it actually have to be non-NULL).
As looking up a bdev by name is a rather common operation I split out a new
helper, lookup_bdev() that does this out of open_bdev_excl and switched
quota.c to use it. lookup_bdev() holds a proper reference that needs
to be dropped by bdput(), and it's well documented.
<david-b@pacbell.net>
[PATCH] maintain hcd_dev queue in FIFO order
Current uses of the urb_list have all been to make
sure we have some list of pending urbs, so we can
clean them all up after HCs die, and avoid trying
to unlink something that's not actually linked.
So order hasn't mattered.
This makes the order be FIFO, which is more useful
for other purposes. Like being the HCD's internal
schedule, or dumping for debug.
<baldrick@wanadoo.fr>
[PATCH] USB: kill speedtouch tasklet when shutdown
speedtouch: kill receive queue tasklet on shutdown (race pointed
out by Oliver Neukum).
<baldrick@wanadoo.fr>
[PATCH] USB: make more speedtouch functions static
speedtouch: make more functions static.
<baldrick@wanadoo.fr>
[PATCH] USB: SpeedTouch not Speed Touch
speedtouch: use SpeedTouch everywhere (was sometimes Speed Touch).
<greg@kroah.com>
USB: added .owner for USB drivers that have a struct tty_driver
<Nick.Holloway@pyrites.org.uk>
[PATCH] cpia driver update
Here are some minor fixes and cleanups to the cpia (Creative WebCam II et
al) driver. These have been extracted from the sourceforge CVS archive,
and I'd like to get these in before a larger change to the parallel port
code to support more transfer modes.
This patch contains:
* cpia: use the <linux/list.h> list implementation, instead of cpia specific
version.
* cpia_pp: don't clear camera list after cameras have been registered (as
this prevents them being deregistered, and oops after module
unload).
* hold cpia_pp list spinlock while walking list, not just during the
element removal.
<ya@slamail.org>
[PATCH] fix cardbus/hotplugging
The pci_enable_device() function will fail at least on i386 (see
arch/i386/pci/i386.c: pcibios_enable_resource (line 260)) if the
resources have not been assigned previously. Hence the ostensible
resource collisions.
I added a small comment (and modified another) so future janitors won't
move pci_enable above pci_assign_resource again.
<torvalds@penguin.transmeta.com>
Fix backslash at end of file
<mochel@osdl.org>
driver model: fix bogus driver binding error reporting and handling.
Some error checking was added ca. 2.5.58 that would remove a device from
its bus's list of devices if device_attach() returned an error. This
included errors returned from drv->probe(), and the -ENODEV error returned
if the device wasn't bound to any driver.
This was BAD since it was perfectly fine for a device not to bind to a
driver immediately, and for drivers to return an error on probe() if the
device doesn't exactly qualify as one it supports.
This changes device_attach() and driver_attach() to both return void,
instead of an error, since they really can never fail hard enough to cause
the device or driver to be removed from the bus.
<torvalds@penguin.transmeta.com>
Fix page_address() to not re-evaluate its arguments
multiple times under certain circumstances.
This fixes svc_tcp_recvfrom().
Found by Ted Phelps <phelps@dstc.edu.au>
<trini@kernel.crashing.org>
PPC32: Change the MontaVista copyright / GPL boilerplate to
a condensed version.
<torvalds@home.transmeta.com>
We need to assign resources to cardbus cards _regardless_ of whether
probing tells us they already have a range. The old information is
stale.
<anton@samba.org>
ppc64: remove old signal code, unused on 64bit userspace
<rth@dorothy.sfbay.redhat.com>
[ALPHA] Corrections to recent vmlinux.lds.S changes.
Fix merge conflicts with asm-generic/vmlinux.lds.h change.
Fix ordering of large alignment data sections.
<anton@samba.org>
ppc64: Remove code which zero/sign extends arguments 5 and 6, it's done unconditionally now
<paulus@samba.org>
PPC32: Add support for PPC 4xx on-chip devices using the generic
device model.
<davem@nuts.ninka.net>
[SPARC64]: Move topology_init to setup.c, it is not SMP specific.
<paulus@samba.org>
PPC32: Page-align the data section of the boot wrapper.
This is needed for Open Firmware on older powermacs to be able to
load the wrapper. Without this OF gives a "CLAIM failed" error.
<paulus@samba.org>
PPC32: Better support for PPC 4xx debug facilities.
This provides for separate global and per-thread debug control
register value(s), which are switched as appropriate. This allows
us to use both an external JTAG debugger for debugging the kernel
as well as using gdb to debug user programs.
<paulus@samba.org>
PPC32: Use a per-cpu variable for prof_counter and prof_multiplier.
<anton@samba.org>
ppc64: fix exception handling in socket multiplexer
<davem@nuts.ninka.net>
[SPARC64]: Use init/exit facility of cpufreq infrastructure.
<randy.dunlap@verizon.net>
[PATCH] update LOG BUF SIZE config.
The current LOG_BUF size is a bit confusing the first
time that "make oldconfig" is used. It's difficult to
select anything other than the default value.
Also, you (Linus) expressed a desire to have this
configurable only if DEBUG_KERNEL or "kernel hacking"
was enabled, so I've changed it to accomplish that.
This patch also uses Kconfig in a way that Roman intended
since a patch in 2.5.52 which enables default values if
a prompt is not enabled, but lets values be chosen when
the prompt is enabled. You also asked for this in setting
this config option.
<davem@nuts.ninka.net>
[SPARC64]: Update defconfig.
<valko@linux.karinthy.hu>
[SPARC64]: Handle SO_TIMESTAMP properly in compat recvmsg.
<roland@topspin.com>
[NET]: Fix up RTM_SETLINK handling.
<anton@samba.org>
ppc64: Temporary workaround for oops during coredump.
<rmk@flint.arm.linux.org.uk>
[ARM] Update sa1100fb
Add cfbfillrect / cfbcopyarea / cfbimgblt objects for SA1100fb.
Remove redundant "pm" member.
<rmk@flint.arm.linux.org.uk>
[ARM] Update acornfb for new fbcon layer.
<rmk@flint.arm.linux.org.uk>
[ARM] Use new asm/bug.h for arch/arm/kernel/bios32.c
<rmk@flint.arm.linux.org.uk>
[ARM] Prevent "scheduling while atomic" in cpu_idle()
<rmk@flint.arm.linux.org.uk>
[ARM] Update mach-types; add 8 new machine types, fix karo entry.
<rmk@flint.arm.linux.org.uk>
[ARM] Fix failure paths in fd1772.c initialisation
Ensure that we clean up properly after initialisation error,
releasing all claimed resources in an orderly manner and
returning the correct error code.
<rmk@flint.arm.linux.org.uk>
[ARM/IDE] Fix BLK_DEV_IDEDMA setting on non-Acorn ARM systems
Only default BLK_DEV_IDEDMA on BLK_DEV_IDEDMA_ICS if ARCH_ACORN is
set, not if ARM is set. There are PCI ARM systems out there!
<rmk@flint.arm.linux.org.uk>
[ARM] Fix Integrator __virt_to_bus/__bus_to_virt
__virt_to_bus/__bus_to_virt depended on INTEGRATOR_HDR0_SDRAM_BASE
Unfortunately, this is defined in arch-integrator/platform.h, and
we really don't want to include it in memory.h.
We instead use BUS_OFFSET, which will eventually depend on the CPU
number in the system.
<mzyngier@freesurf.fr>
sysfs EISA support
Base patch adding sysfs support for the EISA bus
<mzyngier@freesurf.fr>
EISA naming database
Please note that the naming DB is now completely optional. If there is
no eisa.ids in the drivers/eisa/ directory, build will behave as if
CONFIG_EISA_NAMES is disabled. So this patch can be left out if there
is any objection.
<mzyngier@freesurf.fr>
EISA sysfs updates to 3c509 and 3c59x drivers
<mzyngier@freesurf.fr>
EISA sysfs AIP update
Without it, unloading a module
leads to some unpleasant oops...
<sfr@canb.auug.org.au>
[PATCH] compat_{old_}sigset_t generic part
This creates compat_sigset_t and compat_old_sigset_t i.e. just the
types. This is just the generic part, the architecture specific parts
will be sent to respective maintainers.
<sfr@canb.auug.org.au>
[PATCH] compat_{old_}sigset_t s390x part
With Martin's continuing approval, here is the s390x part of the patch.
<sfr@canb.auug.org.au>
[PATCH] compat_sys_sigpending and compat_sys_sigprocmask
This creates compat_sys_sigpending and compat_sys_sigprocmask and
patches sent to maintainers remove all the arch specific versions.
<sfr@canb.auug.org.au>
[PATCH] compat_sys_sigpending and compat_sys_sigprocmask
Here is the s390x patch to use the new generic compatibility functions.
<kai@tp1.ruhr-uni-bochum.de>
kbuild: fix broken kallsyms on non-x86 archs
From: James Bottomley <James.Bottomley@steeleye.com>
kallsyms is broken in parisc on 2.5.56 again because of assembler syntax
subtleties. This is the offending line:
printf("\t.byte 0x%02x ; .asciz\t\"%s\"\n"
Note the `;' separating the two statements. On some platforms `;' is a
comment in assembly code, and thus the following .asciz is ignored.
<kai@tp1.ruhr-uni-bochum.de>
kbuild/modules: Save space on symbol list
The current code reserves 60 bytes for the symbol string of every
exported symbol, unnecessarily wasting kernel memory since most symbols
are much shorter. We revert to the 2.4 solution where the actual strings
are saved out of line and only the pointers are kept.
The latest module-init-tools already handle this case, people who are
using older versions need to update to make sure depmod works
properly.
Saves 80 KB in vmlinux with my .config.
<kai@tp1.ruhr-uni-bochum.de>
kbuild: Make asm-generic/vmlinux.lds.h usable for IA-64
Allow for different LMA vs VMA (logical/virtual memory address).
IA-64 uses the LMA to tell the bootloader the physical location
of the image, whereas the VMA as always represents the address the
image gets mapped to.
The default (used for non IA-64) is LMA == VMA, which is what
the linker previously assumed anyway.
Also:
o remove duplicate .rodata1 section
o __vermagic doesn't need its own section in vmlinux, it can
just go into .rodata
o .kstrtab hasn't been used since the introduction of the new
module loader, so it should be deleted from the linker scripts
as well (except for arch/um, which does not seem up to date
w.r.t the new module loader yet)
o The kallsyms mechanism has changed to not need its own section,
so again the references in the linker scripts can go away
<mbligh@aracnet.com>
[PATCH] make vm_enough_memory more efficient
vm_enough_memory seems to call si_meminfo just to get the total
RAM, which seems far too expensive. This replaces the comment
saying "this is crap" with some code that's less crap.
<kai@tp1.ruhr-uni-bochum.de>
kbuild: kallsyms cleanup
There's no need to alias the kallsyms-related symbols to a dummy
variable, we can as well just do the sanity check against NULL.
<geert@linux-m68k.org>
[PATCH] Amiga keyboard fix
Amiga keyboard: the release bit indicates a key release, not a key press.
<geert@linux-m68k.org>
[PATCH] Q40/Q60 IRQ updates from 2.4.x
Q40/Q60 IRQ updates from 2.4.x
<geert@linux-m68k.org>
[PATCH] M68k exception table updates
M68k exception table updates to compensate for changes in 2.5.55
<geert@linux-m68k.org>
[PATCH] Sun-3: Add missing deactivate_mm()
Sun-3: Add missing deactivate_mm() (yes, there should be two of them in
include/asm-m68k/mmu_context.h: one for Motorola MMUs and one for Sun-3 MMUs)
<geert@linux-m68k.org>
[PATCH] M68k generic RTC driver updates
M68k generic RTC driver updates:
- Revive help text for CONFIG_GEN_RTC
- Re-add lost config option for CONFIG_GEN_RTC_X
- Re-add lost mach_get_ss()
- Export mach_[gs]et_rtc_pll()
- Add implementation of mach_get_ss() and mach_[gs]et_rtc_pll() for Q40/Q60
- Add missing include
- Add implementation of get_rtc_ss()
<geert@linux-m68k.org>
[PATCH] Atari ST-RAM swap update
Jeff removed the swap_device member from struct swap_info_struct
but it is still used in the m68k arch for the ST-RAM. The below
should remove it.
Frankly, I didn't try compiling... My original intent was to move the
swap_list definition from swap.h to mm/swapfile.c, but m68k still
uses it here :( so perhaps this isn't possible. And I just happened to
stumble upon this.
(from Marcus Alanen <maalanen@ra.abo.fi> through Rusty Trivial Russell)
<geert@linux-m68k.org>
[PATCH] Q40/Q60 keyboard fixes
Q40/Q60 keyboard fixes:
- IRQ definitions were prepended with Q40_
- <asm/keyboard.h> no longer exists
- Let q40kbd_init() fail if not running on a Q40/Q60
- q40kbd_init() must return an error code
- Make q40kbd_{init,exit}() static
<geert@linux-m68k.org>
[PATCH] Generic RTC driver documentation
Generic RTC driver: fix spelling in documentation (from Geoffrey Lee
<glee@gnupilgrims.org>)
<geert@linux-m68k.org>
[PATCH] Mac/m68k NCR5380 SCSI updates
Mac/m68k NCR5380 SCSI updates (forward port of Ray Knight's changes in 2.4.x):
- Forward port of pseudo-DMA from 2.2.20
- Move SCSI host template definition from mac_scsi.h to mac_scsi.c
<cloos@jhcloos.com>
[PATCH] i8k driver cleanups
The input system in 2.5 is able to see the volume keys on inspiron
notebooks w/o help from i8k.c. This patch therefore removes the
new code from i8kutils-1.17 for feeding those keypresses to the
keyboard driver.
This leaves only MODULE_PARM(restricted, "i") as the useful addition
to what was in 2.5.58's i8k.c. This module parm restricts control of
the system fans to processes with CAP_SYS_ADMIN set.
<cloos@jhcloos.com>
[PATCH] alsa before oss in Kconfig
Move ALSA before OSS
<hch@sgi.com>
[PATCH] fix signed/unsigned issue in SGI partitioning code
The Linux code for SGI partitions uses an int instead of an unsigned int
in the ondisk structure in two places, which breaks > TB partitions.
While porting the code over from an internal 2.4-based tree I've also
switched it to use the explicit uXX/sXX types everywhere and moved the
struct definitions above sgi_partition().
<hch@sgi.com>
[PATCH] remove GET_USE_COUNT
This is a left-over from the old modules code, Rusty stubbed it out
to always return 0. Three SCSI PCMCIA drivers check it for being non-NULL,
trying to work around their unload races. I've added #warnings there
and stubbed out the GET_USE_COUNT so we can remove it from the core.
<ink@jurassic.park.msu.ru>
[PATCH] alpha PCI setup update
Until now, we were configuring all PCI resources from scratch.
This patch allows to use unchanged PCI setup on platforms
where the firmware does it reasonably well (titan and marvel).
[The patch to setup-bus.c that removes "FIXME" from here (ie makes
pci_assign_unassigned_resources to match its name) exists at least
for two months, but I've yet to convince Linus that it does the
right thing...]
Ivan.
<ink@jurassic.park.msu.ru>
[PATCH] alpha_remap_area_pages
From Jeff.Wiedemeier@hp.com:
Add arch/alpha/mm/remap.c (__alpha_remap_area_pages).
<ink@jurassic.park.msu.ru>
[PATCH] alpha titan update
From Jeff.Wiedemeier@hp.com:
Update titan system support include AlphaServer DS25, AGP,
enhanced machine check handling.
<rth@kanga.twiddle.net>
[ALPHA] Use direct calls to titan_ioremap/unmap when building
a titan specific kernel.
<ink@jurassic.park.msu.ru>
[PATCH] alpha irq proc update
From Jeff.Wiedemeier@hp.com:
- Only create smp_affinity /proc nodes if a set_affinity handler
is provided.
- Limit the number of irq nodes that will be created in /proc
to avoid overfilling the /proc inode space.
<ink@jurassic.park.msu.ru>
[PATCH] alpha smp callin
From Jeff.Wiedemeier@hp.com:
Add platform-specific callin for SMP.
<ak@muc.de>
[PATCH] x86_64 update
x86-64 updates for 2.5.58. Changes only x86-64 specific files.
- Rewrote module allocation. Lots of bugs fixed. Module loading
should work now again.
- Kconfig help fixes from Randy Dunlap
- Makefile cleanups from Pavel Machek and Sam Ravnborg
- Assembly cleanups from Pavel
- defconfig update
- Better strlen_user/strnlen_user
- Merge with i386: new ptrace commands, 32bit vsyscall signal trampolines
new deactivate_mm, add asm/bug.h
- Make sure initramfs is freed after booting (thanks to Kai for the hint)
- Use per-cpu data for profile counters (Ravikiran Thirumalai)
- 32bit compat_* updates from Stephen Rothwell
- Fix race in context switch. The exception handler for bogus segment
loads in __switch_to needs to keep interrupts disabled, otherwise an
interrupt can deadlock on scheduler locks. Also make sure they don't
printk or set oops_in_progress during printk because printk does a
wake_up too.
- Disable 64bit GS base changes for processes. I cannot get it to work
reliably.
- Clear IOPL on kernel entry
<hch@sgi.com>
[PATCH] remove MOD_IN_USE
Another left-over from ancient module code, it was supposed to return
non-zero if the module has a use count, but currently it always
evaluates to 0.
There are a few users of different types:
(1) ioctl that perform a while(MOD_IN_USE) MOD_DEC_USE_COUNT loop.
Just rip them out, we now have forced module unloading.
(2) printk's that moan if the use-count in not zero in the exitfunc.
Just rip them out, this can't happen.
(3) if(MOD_IN_USE) MOD_DEC_USE_COUNT constructs in ->close of a few
serial drivers. Just remove the conditional, we did a
MOD_INC_USE_COUNT in ->open.
(4) This one is interesting: drivers/sbus/char/display7seg.c uses
the module use count to track openers. Replace this with an
atomic_t.
In addition remove tons of stale comments in network drivers that aren't
understandable for anyone who doesn't know ancient Linux module semantics.
<mbligh@aracnet.com>
[PATCH] (1/3) Minimal NUMA scheduler
Patch from Martin J. Bligh
This adds a small hook to the find_busiest_queue routine to allow us to
specify a mask of which CPUs to search over. In the NUMA case, it will
only balance inside the node (much cheaper to search, and stops tasks
from bouncing across nodes, which is very costly). The cpus_to_balance
routine is conditionally defined to ensure no impact to non-NUMA machines.
This is a tiny NUMA scheduler, but it needs the assistance of the second
and third patches in order to spread tasks across nodes.
<mbligh@aracnet.com>
[PATCH] (2/3) Initial load balancing
Patch from Michael Hohnbaum
This adds a hook, sched_balance_exec(), to the exec code, to make it
place the exec'ed task on the least loaded queue. We have less state
to move at exec time than fork time, so this is the cheapest point
to cross-node migrate. Experience in Dynix/PTX and testing on Linux
has confirmed that this is the cheapest time to move tasks between nodes.
It also macro-wraps changes to nr_running, to allow us to keep track of
per-node nr_running as well. Again, no impact on non-NUMA machines.
<mbligh@aracnet.com>
[PATCH] (3/3) NUMA rebalancer
Patch from Erich Focht
This adds a hook to rebalance globally across nodes every NODE_BALANCE_RATE
iterations of the rebalancer. This allows us to easily tune on an architecture
specific basis how often we wish to rebalance - machines with higher NUMA
ratios (more expensive off-node access) will want to do this less often.
It's currently set to 100 for NUMA-Q and 10 for other machines. If the
imbalance between nodes is > 125%, we'll rebalance them. The hook for this
is added to the NUMA definition of cpus_to_balance, so again, no impact
on non-NUMA machines.
<rth@kanga.twiddle.net>
[ALPHA] AGP infrastructure for AGP implemented in Alpha corelogic
(Titan / Marvel), Kconfig and headers.
From Jeff Wiedemeier.
<rth@kanga.twiddle.net>
[ALPHA] Marvel (AlphaServer ES47, ES80, GS1280) support.
From Jeff.Wiedemeier@hp.com.
<rth@kanga.twiddle.net>
[ALPHA] Fixups to Marvel and Titan for incomplete merging
of AGP and SRMCONS patches.
<rth@kanga.twiddle.net>
[ALPHA] Formatting cleanup, warning removal, move declarations
to header files where they belong.
<rth@kanga.twiddle.net>
[ALPHA] Correct io.h exports and inlining for marvel and titan.
<Jeff.Wiedemeier@hp.com>
[PATCH] Fix marvel irq count computation.
Found a buglet in the marvel code -- it doesn't change the number of IRQs,
just the logic to get there. This applies on top of the other marvel
code.
/jeff
<torvalds@penguin.transmeta.com>
Linux v2.5.59
Linux is a registered trademark of Linus Torvalds | http://lwn.net/Articles/20315/ | crawl-003 | refinedweb | 7,766 | 59.3 |
At 00:01 +0200 2004/07/21, Laurence Finston wrote:
>> 4) The acronym POD stands for "plain old data."
>
>Thanks again. You certainly know your standard.

Nope, just some parts of it. This is one thing one eventually encounters.

>> The idea of C++ is to take those running time expenses in favor of
>> programming convenience.

>I don't agree that this is the idea of C++. I believe C++ is designed to
>support a variety of different programming styles.

I believe that the design criterion that BS expressed was that C++ should
admit efficient implementations, but by no means as optimized as in C. See
the book DEC++ ("Design and Evolution of C++").

> One of the important
>goals in the design of C++ was also not to lose the ability to program in a
>low-level style efficiently.

Right: C++ is a multiparadigm language. But the high level cannot be as
efficient as optimized low-level programming.

> I don't quite understand what you mean by
>automatic cleanup, since C++ doesn't do garbage collection. I assume you mean
>something having to do with destructors performing deallocation when a count
>reaches 0, or something similar.

Right: If one uses a C++ stack as semantic stack, when the Bison parser
unwinds it, the C++ code will invoke class destructors, which can be used
for cleanup then. Under C, one does not have that possibility: It must
somehow be made by hand.

The reference count is just a way to provide a primitive GC: Saving some
memory and recopying. If time and space are not programming concerns, one
does not need to use it. But I have found that it is very convenient in a
parser, as it typically builds some objects, which then are conveniently
fit together.

>> .
>
>I was wrong to say it wouldn't work. I believe it would work, I just
>don't think it would be good design for my application.

Right.

> For example, many of
>the types in 3DLDF are derived from the abstract base class `Shape', e.g.,
>`Point', `Path', `Circle', etc.
>Others are not, e.g., `Transform', `Pen', and
>`Color'. If it would make sense to define virtual functions that might be
>called on objects of any of these types in the parser actions, then I think it
>would make sense to define an abstract base class for all of them. Since this
>is not the case, I don't see any advantage in doing so over using a `void*'.

Actually, I have a similar situation in my theorem prover: I have a class
object, which maintains a reference-counted pointer to a base class
object_root. All polymorphic objects are derived from object_root. For
example:

  class formula_root;
  class proposition_root;
  class variable_root;

But as it becomes difficult to keep track of the semantics while
programming, I have added a few more object classes:

  class proposition;
  class formula;
  class variable;

These need not have any derivation order, but are only there to simplify my
programming. It is easy to convert data of any of them to class object
data. Thus,

  class semantic_type {
  public:
    long number_;
    std::string text_;
    my::object object_;
    semantic_type() : number_(0) {}
  };

  #define YYSTYPE semantic_type

will suffice as semantic type. One converts data using either
dynamic_cast<T*>, in which case a null pointer must be checked, or
dynamic_cast<T&>, in which case null pointers cause an exception to be
thrown.

>Overhead is also a significant factor. Currently, 3DLDF produces only
>MetaPost code, but ultimately I'd like it to perform its own scan converting
>and rendering and produce output in graphics formats such as PNG and MNG.
>Someday, I'd also like it to be able to produce graphics in real-time. So I
>do have to try to program efficiently in a way that's not necessary for a lot of
>applications in these days of fast processors and cheap memory. I'm actually
>glad of the excuse to program in a somewhat low-level style. Sadly, it's not
>what most employers seem to be looking for these days.

I can't tell you what is right for you here.
:-) Here is my reference-count setup:

  class object_root;
  class object;

  class object_root {
    mutable unsigned count_;
  public:
    object_root() : count_(1) {}
    virtual ~object_root() {}

    object_root* copy() const { ++count_; return const_cast<object_root*>(this); }
    void shed() { if (--count_ == 0) delete this; }

    virtual object_root* clone() const = 0;
    virtual void write(std::ostream&, write_style) const = 0;
  };

  class object {
  protected:
    object_root* data_;
  public:
    object_root* copy() const { return (data_ == 0) ? 0 : data_->copy(); }
    void shed() { if (data_ != 0) data_->shed(); }

    object() : data_(0) {}
    ~object() { shed(); }
    object(const object& x) : data_(x.copy()) {}
    object& operator=(const object& x) {
      if (data_ != x.data_) { shed(); data_ = x.copy(); }
      return *this;
    }

    object(object_root* rp) : data_(rp) {}

    object_root* data() { return data_; }
    const object_root* data() const { return data_; }

    template<class T>
    struct cast {
      T item;
      cast(object& x) : item(dynamic_cast<T>(x.data())) {}
      operator T() { return item; }
    };

    void write(std::ostream& os, write_style ws) const {
      if (data_ == 0) { if (trace_null) os << "ø"; }
      else
        data_->write(os, ws);
    }
  };

The object::cast struct is just an experimental attempt to automate
dynamic_casts. It is probably simpler to do it by hand.

  Hans Aberg
Write a recursive C function to print reverse of a given string.
Program:
#include <stdio.h>

/* Function to print reverse of the passed string */
void reverse(char *str)
{
    if (*str)
    {
        reverse(str + 1);
        printf("%c", *str);
    }
}

/* Driver program to test above function */
int main()
{
    char a[] = "Geeks for Geeks";
    reverse(a);
    getchar();
    return 0;
}
Explanation: The recursive function (reverse) takes a string pointer (str) as input and calls itself with the next location past the passed pointer (str+1). Recursion continues this way until the pointer reaches ‘\0′; then the calls accumulated on the stack each print the character at their passed location (str) and return one by one.
Time Complexity: O(n) | http://www.geeksforgeeks.org/reverse-a-string-using-recursion/ | CC-MAIN-2015-40 | refinedweb | 103 | 60.85 |
In this C++ program, we will find the missing characters needed to make a string a pangram. A pangram is a sentence that contains every letter of the alphabet from A to Z.

It's not easy to form a sentence that contains every letter of the alphabet. However, let's look at a sentence that qualifies as a pangram.
“The quick brown fox jumps over the lazy dog”.
The above sentence contains every letter from ‘a’ to ‘z’.
Now let’s have a look at the C program which will help you find which character you need to add to make a given sentence a panagram
Missing Characters to Make a String Pangram in C
#include <bits/stdc++.h>
using namespace std;

const int CHAR_SIZE = 26;

string missingChars(string s)
{
    bool present[CHAR_SIZE] = {false};
    for (int m = 0; m < s.length(); m++)
    {
        if (s[m] >= 'a' && s[m] <= 'z')
            present[s[m] - 'a'] = true;
        else if (s[m] >= 'A' && s[m] <= 'Z')
            present[s[m] - 'A'] = true;
    }
    string res = "";
    for (int n = 0; n < CHAR_SIZE; n++)
        if (present[n] == false)
            res.push_back((char)(n + 'a'));
    return res;
}

int main()
{
    string s = "The quick brown fox j"
               "over the lazy dog";
    cout << missingChars(s);
    return 0;
}
Output:
mps | https://www.codeatglance.com/missing-characters-to-make-a-string-pangram-in-c/ | CC-MAIN-2020-40 | refinedweb | 209 | 78.89 |
Draft
This page is not complete.
This is written from the perspective of a Firefox extension, but most items apply to extensions for other Mozilla-based applications such as Thunderbird or SeaMonkey.
Web content handling
In general, the best way to ensure that the browser is not compromised when you load content is to make sure the content does not have chrome privileges. A more detailed explanation of this process is in Displaying web content in an extension without security issues.
The privileges that a document gets also depend on where it comes from. For example, if you load a chrome URL, this means the content has been registered with Firefox and has full access. Content from a domain (such as) can only access that same domain. Files
loaded using the file protocol can access files on the user's disk and other local devices. There are ways to get around the content/chrome security barrier, if for example, you want a web page to send a notification to the add-on (or vice versa). One way to do this is to use custom DOM events; see Interaction between privileged and non-privileged pages.
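As a rough sketch of that custom-DOM-event pattern — the event name and payload field here are made up, and a bare EventTarget stands in for the content document, just to show the shape of the exchange:

```javascript
// In a real add-on the listener lives in privileged (chrome) code and
// the dispatch happens in the unprivileged page; this stand-in only
// illustrates the mechanism.
const contentDocument = new EventTarget();   // stand-in for the page DOM
const received = [];

// Chrome side: subscribe to a custom event name and treat whatever
// arrives as untrusted input.
contentDocument.addEventListener("MyExtensionRequest", function (event) {
  received.push(String(event.detailText));   // hypothetical payload field
});

// Page side: fire the event to notify the extension.
const evt = new Event("MyExtensionRequest");
evt.detailText = "hello from the page";
contentDocument.dispatchEvent(evt);
// received is now ["hello from the page"]
```

The key point is that only data crosses the boundary; the privileged listener must still validate it before acting on it.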
Regardless of where the document comes from, you can further restrict what it can do by applying properties to the document holder, known as the docshell.
frame.docShell.allowImages = false;
frame.docShell.allowJavascript = false;
frame.docShell.allowPlugins = false;
There are more examples listed in the document listed above in this section. In certain circumstances you may want to run code in your extension, but you would like to give it restricted privileges. One of the best ways to do this is to use
Components.utils.evalInSandbox(). Note that objects passed into the sandbox are restricted, but objects that come back out from it are not. Refer to the document to find out how to avoid such pitfalls. For more information, refer to the evalInSandbox section.
The sidebar: a use case
The sidebar in Firefox is designed to hold both chrome (privileged) content and Web (nonprivileged) content, the latter being in the form of Web pages. These Web pages can come from a server, or come from local HTML files bundled with the extension. For pages coming from the server, you need to take steps to ensure that the content can not call back into the Web browser and run malicious code. The main way to do this is by creating an iframe or browser element in the sidebar, and loading your content there. Give the element a
type="content" attribute, which essentially sandboxes the code there and blocks callback rights into chrome.
Using eval() in an extension
Using the built-in JavaScript
eval function is frowned upon in the context of extensions. While there are some legitimate use-cases, most of the time there are safer alternatives. This blog post offers some excellent reasons not to use
eval().
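One common, legitimate-looking use of eval() — turning a JSON string from a server into an object — has a safe replacement in JSON.parse(). A small sketch (the payload here is hypothetical):

```javascript
// A JSON string as it might arrive from a web service.
const payload = '{"user": "alice", "unread": 3}';

// Unsafe legacy pattern (never do this with untrusted input):
//   var data = eval("(" + payload + ")");

// Safe alternative: parses data only, never executes code, and throws
// on anything that is not valid JSON.
const data = JSON.parse(payload);
console.log(data.user, data.unread);
```

Because JSON.parse() rejects anything that is not plain data, a malicious payload cannot run code in your extension's context.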
Sandboxed HTTP connections
The main purpose of sandboxed HTTP connections is to interact with a web service, without interfering with the cookies set in the browser by that service/site. For example, if you are loading pictures or other data from a photo sharing site, you can sandbox your connections to that site so that the normal browsing of that site by the user in the main Firefox browser is not affected. For a real world use case, take a look at this blog post.
Handling of logins and passwords
There are various ways of storing data in Firefox, but for user logins and passwords, you should use the Login Manager. This is the same store that holds the logins from web pages, and passwords can only be retrieved using a site/username pairing known to the author. The convention for extensions is to use a chrome url for the site identifier, to avoid clashes with logins for sites. It could be the case that your extension provides a different tool or tools for services on your site.
This approach is preferable because it provides users with a familiar interface for interacting with logins, via the Firefox Password Manager. When users clear logins using the "Clear Recent History" option, it will include your extension's data.
APIs and other data handling
Web content is more than just pages, and more and more add-ons are interacting with web services via an Application Programming Interface (API). Many of the items talked about so far in this document apply in this sphere, but here are some additional tips:
- API providers should use the https protocol, which provides better protection for data passed over the network.
- JSON has become a popular data format for return formats for Web services. Be sure to read Using native JSON to find out the correct way to handle it.
- APIs can't be used with self-signed certificates.
Remote JavaScript and Content
There are a number of ways remote scripts can be used in extensions: they can be included in content, or downloaded at intervals.
Non-chrome urls in chrome XUL or HTML such as the following example are not allowed:
<script type="text/javascript" src="" />
In general, scripts that are from remote sources that run in the chrome context are not acceptable, as many times the source of the script can never be 100% guaranteed, and they are vulnerable to man-in-the-middle attacks. The only legitimate environment for remote scripts is to run in a sandbox. For more information, refer to the
evalInSandbox() section.
evalInSandbox
The evalInSandbox document explains very well how it works, so there will be no repetition here. The usefulness of it and power of how it works is best illustrated by the popular Greasemonkey extension, which works on the premise of scripts being downloaded and stored locally, to be injected into the web content context via the sandbox. Many extensions use the Greasemonkey compiler to bundle it in their extension for convenience. If you choose to do so, beware when making edits to the bundled files so as not to break the well thought out security architecture.
Third-party JavaScript
In the context of Web sites, using JavaScript written by others is very common. It is not unheard of in add-ons as well, and can provide a useful way to avoid code duplication and accelerate development. This article is about websites, but gives some insights into the general risks. When you do include other scripts, there are a number of things you can do to ensure their integrity and safety for users. The first is to always get it from a trusted source. Another thing you should do is namespace it just in case other add-ons include it. For example, if you're using jQuery, there's
jQuery.noConflict().
Conclusion
Security can't be taken for granted, and for every release of your add-on, a new security audit should take place. A good place to keep up with Mozilla security announcements and security discussion is at the Mozilla Security Blog. | https://developer.mozilla.org/en-US/docs/Archive/Add-ons/Security_best_practices_in_extensions | CC-MAIN-2019-09 | refinedweb | 1,164 | 60.55 |
Setting up your Python 3.9 development environment in a Linux container is quick and easy. This article shows you how to install Python 3.9, set up your environment, and use it to create and run a Python web service on Red Hat Enterprise Linux (RHEL) 8. The whole process should take about 15 minutes.
The amazing thing about building and using a Linux container with Python is that you don't actually need Python on your machine to do it. Creating a Python containerized application on a machine without Python support might not be ideal, but it is possible.
Step 1: Install Python 3.9 on RHEL 8
Use the following commands to install Python 3.9 on your RHEL 8 machine:
sudo yum module install python39/build
Now you can start using Python via the
python3.9 command, as shown in Figure 1.
Notice that we installed a module, the
yum module. Modules were introduced with RHEL 8 as part of the new Application Streams concept. A module is a streamlined way to get all the components you would typically need for a particular deployment. For example, the
Python3.9 module includes development tools like
numpy,
pip,
setuptools,
scipy, and many more. You can see a complete list by running the
yum module info python39 command.
Step 2: Don't install Docker (you don't need to)
That's right, there's no need to install Docker on RHEL 8 because Podman is included automatically. Podman is the open source alternative to Docker that does not run as root, which improves security. You can run
podman --version to verify that it exists.
You can use the Docker "Hello World" example to see a containerized application running on your local system. Enter the following command to see it run:
podman run hello-world
You'll see output like the screenshot in Figure 2.
Note: If you really feel the need to run
docker commands, you can always use
alias docker='podman'. Also, every
podman instance in this article can be replaced with
docker; they're command-for-command compatible.
Step 3: Create a Python web service
Now it's time to create a simple Python HTTP server that will act as our very basic web service. It will run on port 8000 and return a "Hello world"-type message.
There are three parts to this service:
- The HTML file that will be served.
- The Python code to run as the HTTP server.
- The Dockerfile build instructions to build the container image.
Note: I'm borrowing the code for this article from my colleague Mike Guerette. See his tutorial Build your first application using Python 3.5 on RHEL 7 with containers and Red Hat Software Collections if you need instructions for building a Python application on RHEL 7.
Let's get started with our Python web service.
Set up a directory for the Python project
First, create a directory and move into it with the following commands:
mkdir firstpython && cd firstpython
Create the HTML file
Typically, a web service will return data as a JSON document, but for this demonstration, we'll be returning HTML. That means it'll display nicely in a browser.
Create a file named
index.html with the following contents:
<html>Hello, Red Hat Developers World from Python!</html>
This content will be returned by our web service.
Write the Python code to run as the HTTP server
Create a file named
web.py with the following contents:
#
# A very simple Python HTTP server
#
import http.server
import socketserver

PORT = 8000

Handler = http.server.SimpleHTTPRequestHandler

httpd = socketserver.TCPServer(("", PORT), Handler)
print("serving at port", PORT)
httpd.serve_forever()
This is a very simple HTTP server, running on port 8000. That's good enough for our demonstration.
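If you do want the JSON-returning behavior mentioned earlier rather than static HTML, a sketch might look like the following. This handler is our illustration, not part of the article's demo:

```python
# Hypothetical variant of web.py that returns JSON instead of HTML.
import http.server
import json

class JSONHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"message": "Hello from Python!"}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# To serve on port 8000, as in the article's web.py:
#   import socketserver
#   with socketserver.TCPServer(("", 8000), JSONHandler) as httpd:
#       httpd.serve_forever()
```

Nothing else in the article changes: the Dockerfile and podman commands stay the same, since the container still listens on port 8000.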
Step 4: Test the Python application locally
You can test your Python application before building an image and running it in a container. Use the following command to start the web server, running at
localhost:8000:
python3.9 -u web.py
Then, either use the
curl command or open your browser to the address. You'll see results similar to the screenshot in Figure 3.
Step 5: Build a container image
Now that we have the Python web service, and we've tested it, we'll build a container image for it.
We will use a Dockerfile containing build instructions to build the container image. Create a file named Dockerfile with the following contents:
FROM registry.access.redhat.com/ubi8/python-39
EXPOSE 8000
COPY . /opt/app-root/src
CMD /bin/bash -c 'python3 -u web.py'
Use the following command to build the image:
podman build -t pythonweb .
As the image is being built, you will see the underlying image (
ubi8/python-39) being pulled from the Red Hat registry. This image will be stored on your local machine. If you use this underlying image in a future build, it will not be pulled again.
Note: UBI is the acronym for Universal Base Images. A UBI is a Red Hat image that allows you to use RHEL in your container and make sure it runs anywhere. UBI is specifically designed for cloud-native and containerized applications.
Finally, the commands in your Dockerfile build instructions are carried out, and the resulting image ID is displayed. Figure 4 shows the build on my machine.
You can see the images on your local machine by running the command
podman images, as shown in Figure 5.
Step 6: Run, run, run ... run it in a container
Now that we've built the image, we can run it in a container. Use the following command:
podman run --detach --publish 8000:8000 --name=helloweb localhost/pythonweb
When you enter this command, the container runtime engine runs the image in the background—that's what the
--detach flag does—and returns the container ID. The
--publish flag publishes the port to the host. In this case, the container's port 8000 is made available to the host (your local machine), which, in turn, is mapping it to it's own port 8000. Note that these port numbers do not need to match. Figure 6 shows an example of the command output on my machine.
Just to recap: The image ID is created when you build the image. The container ID is assigned to the container in which the image is being run. You can see the container running by entering the command
podman ps. Figure 7 shows the running container.
Results? We got 'em
That's it, we've created a Python web service and it's running in a container. Now let's view the results. As before, open your browser or use the
curl command with the address. You'll get something like the screenshot in Figure 8.
What's in a name?
Did you notice the mess I've made with naming? The directory is named
firstpython. The image is named
pythonweb. The name I assigned to the container is
helloweb.
I did this on purpose to demonstrate that, if you really want to, you can make a colossal mess with naming. A best practice would be to have the directory name, the image name, and the container name match.
Additionally, the name that I assigned to the image,
pythonweb, was not fully qualified by me, so the system assigned it to the
localhost namespace. The tag assigned, by default, is
:latest. So, putting this together, the name is
localhost/pythonweb:latest.
In real life, you would use an image registry as part of your namespace, and perhaps assign a tag. For example, if I were to build this image for my own (personal) image registry—where I will later send it using the
podman push command—I would use the following command to name and build it:
podman build -t quay.io/donschenck/pythonweb:v1 .
It is not uncommon to use only two tags for image naming:
:latest and
:next. When you wish to update to the next version, you build the code for the
:next image, but tag it as
:latest.
"But what about rolling back?"
You don't. You never roll back; you roll forward, always. This idea is not without controversy, but it does force you to keep your microservices small and simple, and easy to update.
Keep all of this in mind, especially when you create your own free Kubernetes cluster in the Developer Sandbox for Red Hat OpenShift and run your Python application there.
Tips for running your application in a container
To stop the container from running, use the following command:
podman stop helloweb
You can view the logs of the container with the following command:
podman logs helloweb
You can restart the container if you wish—I'll let you do a web search for that command.
Finally, you can delete the container with the following command:
podman rm helloweb
After you remove the container, the logs are gone, which makes sense. But the image (
localhost/pythonweb) is still on your local machine. In fact, if you want to see something interesting, run the following command:
podman inspect localhost/pythonweb
Now see what happens if you run the
podman inspect command but, instead, reference the Red Hat Universal Base Images 8 image that was pulled down during the build process.
Where do we go from here?
This article has been a quick introduction to creating and running a Python web service in a RHEL 8 Linux container. If you are wondering about next steps, here are a few suggestions:
- Download your free copy of RHEL 8 and run it in a virtual machine (I'm using Windows 10 and Hyper-V).
- Are you a Windows developer and not super skilled with Linux? No worries: Download Burr Sutter's Linux Commands Cheat Sheet.
- Build an application on your RHEL 8 machine.
- Create an image from the application and push it to your own image registry.
- Get a free Kubernetes cluster and start experimenting in the Developer Sandbox for Red Hat OpenShift.
- Join Red Hat Developer for more resources like this one. | https://developers.redhat.com/articles/2021/08/26/build-your-first-python-application-linux-container | CC-MAIN-2022-05 | refinedweb | 1,686 | 65.32 |
[Guido]
>.

I think that's over!  I'm very tired, though (couldn't get to sleep
until 11, and woke up at 2 with the last, umm, episode <wink>).

This is a Big Project if done right.  I volunteered time for it a few
years ago, but there wasn't enough interest then to keep it going.  I'll
attach the last publicly-distributed module I had then, solely devoted
to combinations.  It was meant to be the first in a series, all
following some basic design decisions:

+ A Basic class that doesn't compromise on speed, typically by working
  on canonical representatives in Python list-of-int form.

+ A more general class that deals with arbitrary sequences, perhaps at
  great loss of efficiency.

+ Multiple iterators are important: lex order is needed sometimes; Gray
  code order is an enormous help sometimes; random generation is vital
  sometimes.

+ State-of-the-art algorithms.  That's a burden for anything that goes
  into the core -- if it's a toy algorithm, users can do just as well
  on their own, and then people submit patch after patch that the
  original author isn't really qualified to judge (else they would have
  done a state-of-the-art thing to begin with).

+ The ability to override the random number generator.  Python's
  default WH generator is showing its age as machines get faster; it's
  simply not adequate anymore for long-running programs making heavy
  use of it on a fast box.  Combinatorial algorithms in particular do
  tend to make heavy use of it.  (Speaking of which, "someone" should
  look into grabbing one of the Mersenne Twister extensions for Python
  -- that's the current state of *that* art.)

Ideas not worth taking:

+ Leave the chi-square algorithm out of it.  A better implementation
  would be nice to have in a statistics package, but it doesn't belong
  here regardless.
me-i'm-going-back-to-sleep-ly y'rs  - tim

Attachment: combgen.py

# Module combgen version 0.9.1
# Released to the public domain 18-Dec-1999,
# by Tim Peters (tim_one@email.msn.com).

# Provided as-is; use at your own risk; no warranty; no promises; enjoy!

"""\
CombGen(s, k) supplies methods for generating k-combinations from s.
CombGenBasic(n, k) acts like CombGen(range(n), k) but is more efficient.

s is of any sequence type such that s supports catenation (s1 + s2) and
slicing (s[i:j]).  For example, s can be a list, tuple or string.

k is an integer such that 0 <= k <= len(s).

A k-combination of s is a subsequence C of s where len(C) = k, and for
some k integers i_0, i_1, ..., i_km1 (km1 = k-1) with
0 <= i_0 < i_1 < ... < i_km1 < len(s),
    C[0] is s[i_0]
    C[1] is s[i_1]
    ...
    C[k-1] is s[i_km1]

Note that each k-combination is a sequence of the same type as s.

Different methods generate k-combinations in lexicographic index order,
a particular "Gray code" order, or at random.  The .reset() method can
be used to start over.  The .set_start(ivector) method can be used to
force generation to begin at a particular combination.

Module function comb(n, k) returns the number of combinations of n
things taken k at a time; n >= k >= 0 required.

CAUTIONS

+ The CombGen constructor saves a reference to (not a copy of) s, so
  don't mutate s after calling CombGen.

+ For efficiency, CombGenBasic getlex and getgray return the *same*
  list each time, mutating it in place.  You must not mutate this
  list; and, if you want to save a combination's value across calls,
  copy the list.  For example,

  >>> g = CombGenBasic(2, 1)
  >>> x = g.getlex(); y = g.getlex()
  >>> x is y  # the same!
  1
  >>> x, y  # so these print the same thing
  ([1], [1])
  >>> g.reset()
  >>> x = g.getlex()[:]; y = g.getlex()[:]
  >>> x, y  # copies work as expected
  ([0], [1])
  >>>

  In contrast, CombGen methods return a new sequence each time -- but
  they're slower.

GETLEX -- LEXICOGRAPHIC GENERATION

Each invocation of .getlex() returns a new k-combination of s.  The
combinations are generated in lexicographic index order (for
CombGenBasic, the k-combinations themselves are in lexicographic
order).  That is, the first k-combination consists of
    s[0], s[1], ..., s[k-1]
in that order; the next of
    s[0], s[1], ..., s[k]
and so on until reaching
    s[len(s)-k], s[len(s)-k+1], ..., s[len(s)-1]
After all k-combinations have been generated, .getlex() returns None.

Examples:

>>> g = CombGen("abc", 0).getlex
>>> g(), g()
('', None)

>>> g = CombGen("abc", 1).getlex
>>> g(), g(), g(), g()
('a', 'b', 'c', None)

>>> g = CombGenBasic(3, 2).getlex
>>> print g(), g(), g(), g()
[0, 1] [0, 2] [1, 2] None

>>> g = CombGen((0, 1, 2), 3).getlex
>>> print g(), g(), g()
(0, 1, 2) None None

>>> p = CombGenBasic(4, 2)
>>> g = p.getlex
>>> print g(), g(), g(), g(), g(), g(), g(), g()
[0, 1] [0, 2] [0, 3] [1, 2] [1, 3] [2, 3] None None
>>> p.reset()
>>> print g(), g(), g(), g(), g(), g(), g(), g()
[0, 1] [0, 2] [0, 3] [1, 2] [1, 3] [2, 3] None None
>>>

GETGRAY -- GRAY CODE GENERATION

Each invocation of .getgray() returns a triple
    C, tossed, added
where
    C is the next k-combination of s
    tossed is the element of s removed from the last k-combination
    added is the element of s added to the last k-combination
tossed and added are None for the first call.

Consecutive combinations returned by .getgray() differ by two elements
(one removed, one added).  If you invoke getgray() more than comb(n,k)
times, it "wraps around" and generates the same sequence again.  Note
that the last combination in the return sequence also differs by two
elements from the first combination in the return sequence.
Gray code ordering can be very useful when you're computing an
expensive function on each combination:  that exactly one element is
added and exactly one removed can often be exploited to save
recomputation for the k-2 common elements.

>>> o = CombGen("abcd", 2)
>>> for i in range(7):  # note that this wraps around
...     print o.getgray()
('ab', None, None)
('bd', 'a', 'd')
('bc', 'd', 'c')
('cd', 'b', 'd')
('ad', 'c', 'a')
('ac', 'd', 'c')
('ab', 'c', 'b')
>>>

GETRAND -- RANDOM GENERATION

Each invocation of .getrand() returns a random k-combination.

>>> o = CombGenBasic(1000, 6)
>>> import random
>>> random.seed(87654)
>>> o.getrand()
[69, 223, 437, 573, 722, 778]
>>> o.getrand()
[409, 542, 666, 703, 732, 847]
>>> CombGenBasic(1000000, 4).getrand()
[199449, 439831, 606885, 874530]
>>>
"""

# 0,0,1 09-Dec-1999
#    initial version
# 0,0,2 10-Dec-1999
#    Sped CombGenBasic.{getlex, getgray} substantially by no longer
#    making copies of the indices; getgray is truly O(1) now.
#    A bad aspect is that they return the same list object each time
#    now, which can be confusing; e.g., had to change some examples.
#    Use CombGen instead if this bothers you -- CombGenBasic's
#    purpose in life is to be lean & mean.
#    Removed the restriction on mixing calls to CombGenBasic's
#    getlex and getgray; not sure it's useful, but it was irksome.
#    Changed __findj to return a simpler result.  This is less useful
#    for getgray, but now getlex can exploit it too (there are no
#    longer any Python-level loops in CombGenBasic's getlex; there's
#    an implied C-level loop (via "range"), and it's in the nature of
#    lex order that this can't be removed).
#    Added some exhaustive tests for getlex, and finger verification.
# 0,9,1 18-Dec-1999
#    Changed _testrand to compute and print chi-square statistics,
#    and probabilities, because one of _testrand's outputs didn't
#    "look random" to me.  Indeed, it's got a poor chi-square value!
#    But sometimes that *should* happen, and it does not appear to
#    be happening more often than expected.

__version__ = 0, 9, 1


def _chop(n):
    """n -> int if it fits, else long."""
    try:
        return int(n)
    except OverflowError:
        return n

def comb(n, k):
    """n, k -> number of combinations of n items, k at a time.

    n >= k >= 0 required.

    >>> for i in range(7):
    ...     print "comb(6, %d) ==" % i, comb(6, i)
    comb(6, 0) == 1
    comb(6, 1) == 6
    comb(6, 2) == 15
    comb(6, 3) == 20
    comb(6, 4) == 15
    comb(6, 5) == 6
    comb(6, 6) == 1

    >>> comb(52, 5)   # number of poker hands
    2598960
    >>> comb(52, 13)  # number of bridge hands
    635013559600L
    """

    if not n >= k >= 0:
        raise ValueError("n >= k >= 0 required: " + `n, k`)
    if k > (n >> 1):
        k = n-k
    if k == 0:
        return 1
    result = long(n)
    i = 2
    n, k = n-1, k-1
    while k:
        # assert (result * n) % i == 0
        result = result * n / i
        i = i+1
        k = k-1
        n = n-1
    return _chop(result)

import random

class CombGenBasic:

    def __init__(self, n, k):
        self.n, self.k = n, k
        if not n >= k >= 0:
            raise ValueError("n >= k >= 0 required:" + `n, k`)
        self.reset()

    def reset(self):
        """Restore state to that immediately after construction."""
        # The first result is the same for either lexicographic or
        # Gray code generation.
        self.set_start(range(self.k))

    # __findj is used only to initialize self.j for getlex and
    # getgray.  It returns the largest j such that slot j has
    # "breathing room"; that is, such that slot j isn't at its largest
    # possible value (n-k+j).  j is -1 if no such index exists.
    # After initialization, getlex and getgray incrementally update
    # this more efficiently.

    def __findj(self, v):
        n, k = self.n, self.k
        assert len(v) == k
        j = k-1
        while j >= 0 and v[j] == n-k+j:
            # v[j] is at its largest possible value
            j = j-1
        return j

    def getlex(self):
        """Return next (in lexicographic order) k-combination.

        Return None if all possibilities have been generated.

        Caution: getlex returns the *same* list each time, mutated in
        place.
        Don't mutate it yourself, or save a reference to it (the next
        call will mutate its contents; make a copy if you need to save
        the value across calls).
        """

        indices, n, k, j = self.indices, self.n, self.k, self.j
        if self.firstcall:
            self.firstcall = 0
            return indices
        if j < 0:
            return None
        new = indices[j] = indices[j] + 1
        if j+1 == k:
            if new + 1 == n:
                j = j-1
        else:
            if new + 1 < indices[j+1]:
                indices[j:] = range(new, new + k - j)
                j = k-1
            else:
                j = j-1
        self.j = j
        # assert j == self.__findj(indices)
        return indices

    def getgray(self):
        """Return next (c, tossed, added) triple.

        c is the next k-combination in a particular Gray code order.
        tossed is the element of range(n) removed from the last
        combination.
        added is the element of range(n) added to the last
        combination.

        tossed and added are None if this is the first call, or on
        every call if there is only one k-combination.  Else
        tossed != added, and neither is None.

        Caution: getgray wraps around if you invoke it more than
        comb(n, k) times.

        Caution: getgray returns the *same* list each time, mutated in
        place.  Don't mutate it yourself, or save a reference to it
        (the next call will mutate its contents; make a copy if you
        need to save the value across calls).
        """

        # The popular routine in Nijenhuis & Wilf's "Combinatorial
        # Algorithms" is exceedingly complicated (although trivial
        # to program with recursive generators!).
        #
        # Instead I'm using a variation of Algorithm A3 from the paper
        # "Loopless Gray Code Algorithms", by T.A. Jenkyns (Brock
        # University, Ontario).  The code is much simpler, and,
        # because it's loop-free, takes O(1) time on each call (not
        # just amortized over the whole sequence).
        #
        # Because the paper doesn't yet seem to be well known, here's
        # the idea:  Modify the definition of lexicographic ordering
        # in a funky way:  in the element comparisons, replace "<" by
        # ">" in every other element position starting at the 2nd.
        # IOW, and skipping end cases, sequence s is "less than"
        # sequence t iff their elements are equal up until index
        # position i, and then s[i] < t[i] if i is even, or s[i] >
        # t[i] if i is odd.  Jenkyns calls this "alternating
        # lexicographic" order.  It's clear that this defines a total
        # ordering.  What isn't obvious is that it's also a Gray code
        # ordering!  Very pretty.
        #
        # Modifications made here to A3 are minor, and include
        # switching from 1-based to 0-based; allowing for trivial
        # sequences; allowing for wrap-around; returning the "tossed"
        # and "added" elements; starting the generation at an
        # arbitrary k-combination; and sharing a finger (self.j) with
        # the getlex method.

        indices, n, k, j = self.indices, self.n, self.k, self.j
        if self.firstcall:
            self.firstcall = 0
            return indices, None, None

        # Slide over to first slot that *may* be able to move down.
        # Note that this leaves odd j alone (including -1!), and may
        # make j equal to k.
        j = j | 1

        if j == k:
            # k is odd and so indices[-1] "wants to move up", and
            # indices[-1] < n-1 so it *can* move up.
            tossed = indices[-1]
            added = indices[-1] = tossed + 1
            j = j-1
            if added == n-1:
                j = j-1
        elif j < 0:
            # indices has the last value in alt-lex order, e.g.
            # [4, 5, 6, 7]; wrap around to the first value, e.g.
            # [0, 5, 6, 7].
            assert indices == range(n-k, n)
            if k and indices[0]:
                tossed = indices[0]
                added = indices[0] = 0
                j = 0
            else:
                # comb(n, k) is 1 -- this is a trivial sequence.
                tossed = added = None
        else:
            # 0 < j < k (note that 0 < j because j is odd).
            # Want to move this slot down (again because j is odd).
            atj = indices[j]
            if indices[j-1] + 1 == atj:
                # can't move it down; move preceding up
                tossed = atj - 1    # the value in indices[j-1]
                indices[j-1] = atj
                added = indices[j] = n-k+j
                j = j-1
                if atj + 1 == added:
                    j = j-1
            else:
                # can move it down
                tossed = atj
                added = indices[j] = atj - 1
                if j+1 < k:
                    tossed = indices[j+1]
                    indices[j+1] = atj
                    j = j+1
        self.j = j
        # assert j == self.__findj(indices)
        return indices, tossed, added

    def set_start(self, start):
        """Force .getlex() or .getgray() to start at given value.

        start is a vector of k unique integers in range(n), where k
        and n were passed to the CombGenBasic constructor.  The vector
        is sorted in increasing order, and is used as the next
        k-combination to be returned by .getlex() or .getgray().

        >>> gen = CombGenBasic(3, 2)
        >>> for i in range(4):
        ...     print gen.getgray()
        ([0, 1], None, None)
        ([1, 2], 0, 2)
        ([0, 2], 1, 0)
        ([0, 1], 2, 1)
        >>> gen.set_start([0, 2])
        >>> for i in range(4):
        ...     print gen.getgray()
        ([0, 2], None, None)
        ([0, 1], 2, 1)
        ([1, 2], 0, 2)
        ([0, 2], 1, 0)
        """

        if len(start) != self.k:
            raise ValueError("start vector not of length " + `self.k`)
        indices = start[:]
        indices.sort()
        seen = {}
        # Verify the vector makes sense.
        for i in indices:
            if not 0 <= i < self.n:
                raise ValueError("start vector contains element "
                                 "not in 0.." + `self.n-1` + ": " + `i`)
            if seen.has_key(i):
                raise ValueError("start vector contains duplicate "
                                 "element: " + `i`)
            seen[i] = 1
        self.indices = indices
        self.j = self.__findj(indices)
        self.firstcall = 1

    def getrand(self, random=random.random):
        """Return a k-combination at random.

        Optional arg random specifies a no-argument function that
        returns a random float in [0., 1.).  By default, random.random
        is used.
        """

        # The trap to avoid is doing O(n) work when k is much less
        # than n.  Letting m = min(k, n-k), we actually do Python work
        # of O(m), and C-level work of O(m log m) for a sort.  In
        # addition, O(k) work is required to build the final result,
        # but at worst O(m) of that work is done at Python speed.
n, k = self.n, self.k complement = 0 if k > n/2: # Generate the values *not* in the combination. complement = 1 k = n-k # Generate k distinct random values. result = {} for i in xrange(k): # The expected # of times thru the next loop is n/(n-i). # Since i < k <= n/2, n-i > n/2, so n/(n-i) < 2 and is # usually closer to 1: on average, this succeeds very # quickly! while 1: candidate = int(random() * n) if not result.has_key(candidate): result[candidate] = 1 break result = result.keys() result.sort() if complement: # We want everything in range(n) that's *not* in result. avoid = result avoid.append(n) result = [] start = 0 for limit in avoid: result.extend(range(start, limit)) start = limit + 1 return result class CombGen: def __init__(self, seq, k): n = len(seq) if not 0 <= k <= n: raise ValueError("k must be in 0.." + `n` + ": " + `k`) self.seq = seq self.base = CombGenBasic(n, k) def reset(self): """Restore state to that immediately after construction.""" self.base.reset() def getlex(self): """Return next (in lexicographic index order) k-combination. Return None if all possibilities have been generated. """ indices = self.base.getlex() if indices is None: return None else: return self.__indices2seq(indices) def getgray(self): """Return next (c, tossed, added) triple. c is the next k-combination in a particular Gray code order. tossed is the element of s removed from the last combination. added is the element of s added to the last combination. Caution: getgray wraps around if you invoke it more than comb(len(s), k) times. """ indices, tossed, added = self.base.getgray() if tossed is None: return (self.__indices2seq(indices), None, None) else: return (self.__indices2seq(indices), self.seq[tossed], self.seq[added]) def set_start(self, start): """Force .getlex() or .getgray() to start at given value. start is a vector of k unique integers in range(len(s)), where k and s were passed to the CombGen constructor. 
The vector is sorted in increasing order, and is used as a vector of indices (into s) for the next k-combination to be returned by .getlex() or .getgray(). >>> gen = CombGen("abc", 2) >>> for i in range(4): ... print gen.getgray() ('ab', None, None) ('bc', 'a', 'c') ('ac', 'b', 'a') ('ab', 'c', 'b') >>> gen.set_start([0, 2]) # start with "ac" >>> for i in range(4): ... print gen.getgray() ('ac', None, None) ('ab', 'c', 'b') ('bc', 'a', 'c') ('ac', 'b', 'a') >>> gen.set_start([0, 2]) # ditto >>> print gen.getlex(), gen.getlex(), gen.getlex() ac bc None """ self.base.set_start(start) def getrand(self, random=random.random): """Return a k-combination at random. Optional arg random specifies a no-argument function that returns a random float in [0., 1.). By default, random.random is used. """ return self.__indices2seq(self.base.getrand(random)) def __indices2seq(self, ivec): assert len(ivec) == self.base.k, "else internal error" seq = self.seq result = seq[0:0] # an empty sequence of the proper type for i in ivec: result = result + seq[i:i+1] return result del random ##################################################################### # Testing. ##################################################################### def _verifycomb(n, k, comb, inbase, baseobj=None): if len(comb) != k: print "OUCH!", this, "should have length", k # verify it's an increasing sequence of baseseq elements lastelt = None for elt in comb: if not inbase(elt): print "OUCH!", elt, "not in base seqeuence", n, k, comb if not lastelt < elt: print "OUCH!", elt, ">=", lastelt, n, k, comb lastelt = elt if baseobj: # verify search finger is correct cachedj = baseobj.j truej = baseobj._CombGenBasic__findj(baseobj.indices) if cachedj != truej: print "OUCH! 
cached j", cachedj, "!= true j", truej, \ n, k, comb def _testnk_gray(n, k):>> _testgray() testing getgray 0 testing getgray 1 testing getgray 2 testing getgray 3 testing getgray 4 testing getgray 5 testing getgray 6 testing getgray 7 testing getgray 8 testing getgray 9 testing getgray 10 testing getgray 11 testing getgray 12 """ for n in range(13): print "testing getgray", n for k in range(n+1): _testnk_gray(n, k) # getlex is easier. def _testnk_lex(n, k):>> _testlex() testing getlex 0 testing getlex 1 testing getlex 2 testing getlex 3 testing getlex 4 testing getlex 5 testing getlex 6 testing getlex 7 testing getlex 8 """ for n in range(9): print "testing getlex", n for k in range(n+1): _testnk_lex(n, k) import math _math = math del math # This is a half-assed implementation, prone to overflow and/or # underflow given "large" x or v. If they're both <= a few hundred, # though, it's quite accurate. The main advantage is that it's # self-contained. def _chi_square_distrib(x, v): """x, v -> return probability that chi-square statistic <= x. v is the number of degrees of freedom, an integer >= 1. x is a non-negative float or int. """ if x < 0: raise ValueError("x must be >= 0: " + `x`) if v < 1: raise ValueError("v must be >= 1: " + `v`) if v != int(v): raise TypeError("v must be an integer: " + `v`) if x == 0: return 0.0 # (x/2)**(v/2) / gamma((v+2)/2) * exp(-x/2) * # (1 + sum(i=1 to inf, x**i/prod(j=1 to i, v+2*j))) # Alas, for even moderately large x or v, this is numerically # intractable. But the mean of the distribution is v, so in # practice v will likely be "close to" x. Rewrite the first # line as # (x/2/e)**(v/2) / gamma((v+2)/2) * exp(v/2-x/2) # Now exp is much less likely to over or underflow. The power is # still a problem, though, so we compute # (x/2/e)**(v/2) / gamma((v+2)/2) # via repeated multiplication. 
x = float(x) a = x / 2 / _math.exp(1) v = float(v) v2 = v/2 if int(v2) * 2 == v: # v is even base = 1.0 i = 1.0 else: # v is odd, so the gamma bottoms out at gamma(.5) = sqrt(pi), # and we need to get a sqrt(a) factor into the numerator # (since v2 "ends with" .5). base = 1.0 / _math.sqrt(a * _math.pi) i = 0.5 while i <= v2: base = base * (a / i) i = i + 1.0 base = base * _math.exp(v2 - x/2) # Now do the infinite sum. oldsum = None sum = base while oldsum != sum: oldsum = sum v = v + 2.0 base = base * (x / v) sum = sum + base return sum def _chisq(observed, expected): n = len(observed) assert n == len(expected) sum = 0.0 for i in range(n): e = float(expected[i]) sum = sum + (observed[i] - e)**2 / e return sum, _chi_square_distrib(sum, n-1) def _testrand(): """ >>> _testrand() random 0 combs of abcde 100 random 1 combs of abcde a 99 b 106 c 98 d 99 e 98 probability[chisq <= 0.46] = 0.0227 random 2 combs of abcde ab 100 ac 115 ad 111 ae 98 bc 98 bd 103 be 95 cd 84 ce 100 de 96 probability[chisq <= 6.6] = 0.321 random 3 combs of abcde abc 83 abd 119 abe 86 acd 88 ace 103 ade 94 bcd 107 bce 101 bde 112 cde 107 probability[chisq <= 12.78] = 0.827 random 4 combs of abcde abcd 86 abce 99 abde 113 acde 101 bcde 101 probability[chisq <= 3.68] = 0.549 random 5 combs of abcde abcde 100 """ def drive(s, k): print "random", k, "combs of", s o = CombGen(s, k) g = o.getrand n = len(s) def inbase(elt, s=s): return elt in s count = {} c = comb(len(s), k) for i in xrange(100 * c): x = g() _verifycomb(n, k, x, inbase) count[x] = count.get(x, 0) + 1 items = count.items() items.sort() for x, i in items: print x, i if c > 1: observed = count.values() if len(observed) < c: observed.extend([0] * (c - len(observed))) x, p = _chisq(observed, [100]*c) print "probability[chisq <= %g] = %.3g" % (x, p) for k in range(6): drive("abcde", k) __test__ = {"_testgray": _testgray, "_testlex": _testlex, "_testrand": _testrand} def _test(): import doctest, combgen doctest.testmod(combgen) if __name__ == 
"__main__": _test() --Boundary_(ID_K+WtxWU4+owRK5lyzEc5lw)-- | https://mail.python.org/pipermail/python-dev/2002-August/028399.html | CC-MAIN-2018-26 | refinedweb | 4,046 | 74.49 |
Bluetooth.connect( adv.mac ) hangs
What am I doing wrong here? This code hangs when it comes to "bt.connect( adv.mac )"
Does anyone have working code to read services data values?
Thanks, in advance?
from network import Bluetooth import time bt = Bluetooth() bt.init() while( True ): if( not bt.isscanning() ): bt.start_scan(-1) print("Scan") adv = bt.get_adv() if( adv ): bt.stop_scan() print( adv.mac ) try: conn = bt.connect( adv.mac ) if( conn ): if( conn.isconnected()): print(len(conn.services)) conn.disconnect() else: print("Not Connected") else: print("Conn not available") except Exception as e: print( e.message ) time.sleep( 1 )
Hmm ok I've flagged this internally and will investigate! Thanks for your patience!
Honestly, I don;t know what fixed. I had already upgrade the firmware and verified the version number and the code I had would not work.
The next day. I re-flashed the firmware again to be sure. And tried some other code that essentially did the same thing and it works.
I really don;t know what was the magic solution but I am able to get Bt.Connect() to work now. Although actually reading values is not reliable for me. But, that could be the Beacon or the LOPY and I have not figured out which yet.
Good Luck with yours.
thrag
os.uname()
(sysname='LoPy', nodename='LoPy', release='1.7.1.b2', version='v1.8.6-642-g84447452 on 2017-06-03', machine='LoPy with E
SP32', lorawan='1.0.0')
I'm trying to connect to this device for the first time:
However I was connected with the LoPy to my phone using the 'nrf connect' android app with success 2 days ago and it is not working anymore either.
I have the same issue here, it was working but not anymore and it hangs everytime i try to connect with my LoPy. Then I have to power cycle the board.
Did you do anything ?
Actually, I have to correct myself. This is actually working now.. at least better than before. It does not seem to find all advertisements that are available but that could be an issue with the beacon.
Not sure why it works tonight when it definitely did not yesterday. But anyway, Thanks. It work.
Thanks for the reply.
Yes, the very latest. '1.7.2.b1'
Garth
- jmarcelino last edited by
Are you using the latest firmware?
os.uname()should tell you | https://forum.pycom.io/topic/1331/bluetooth-connect-adv-mac-hangs | CC-MAIN-2020-40 | refinedweb | 406 | 79.36 |
Navigating snap.
As soon has you have more than a couple pages of code in a project, you'll want to use the Visual Studio navigation features to move intelligently through your code. In this tip I'll show you some the Visual Studio features that make moving through your code a snap.
C# curly brace matching
Use Ctrl- ] (Square Bracket) to move between matching pairs of braces or parenthesis.
Go to Definition
If you've been using Visual Studio for any length of time you probably are familiar with this shortcut. Place your cursor on a variable, type name or other code item. Press F12 to navigate the definition of that item. You can also use the context menu. Simply right-click and choose "Go to Definition.
Navigating backward and forward
In the previous section I talked about using F12 navigate to the code definition. Visual Studio provides an easy way to return to your original code location. Click the View/Navigate Backward menu item or choose the Navigate Back tool buttons.
Find All References
It can be useful to find every place in a project where a method is called or some other symbol is used. You can find where a method or property is called by right-clicking its definition and selecting Find All References from the drop-down menu. This is also available by selecting the definition in your code and pressing Shift+F12. This feature searches the entire solution for any reference to a chosen item. Note that it finds true references. This better than using search as you might have several items declared with the same name in different areas or namespaces.
View Call Hierarchy
This fabulous new feature enables you to navigate through your C#/C++ code by displaying all calls to and from a selected type member. Unlike Find All References, the Call Hierarchy feature provides and more detailed information about calls. This enables you to better understand your code flow and to navigate to calling code.
Invoke the tool by right-clicking on a method or property name in the code editor and choosing View Call Hierarchy from the context menu.
The Call Hierarchy window is displayed, it's usually docked the lower edge of the screen.
Bookmarks
Bookmarks are a way to add placeholders within your code. Basically you add a bookmark to a code line by pressing Ctrl-K,Ctrl-K or using the Bookmark toolbar.
Once you create a bookmark, Visual studio adds a symbol to the right margin of your code window.
Now you can move between the bookmarks. There are number ways of doing this. You can use the bookmark toolbar or bookmarks menu. There is also a bookmark window that shows you every bookmark in your project. You'll find it in the view menu.
A handy feature of the bookmark window is that you can rename each bookmark.
As you can see, there are many ways to navigate your code besides using the traditional up/down arrows and mouse scrollbars. Why don't you try a few of these new techniques on your next project?
Start the conversation | https://searchwindevelopment.techtarget.com/tip/Navigating-your-code-in-Visual-Studio-2010 | CC-MAIN-2019-30 | refinedweb | 524 | 64.81 |
Kid is a template languages that provides inline code capability within a markup document similar to PHP or ASP, except the language used here is (of course) Python. Code can exist in a separate block like:
<?python x = 5 y = 7 ?>
But although Kid understands Python code in the template, placing lots of code there is not, when building with TurboGears, the objective. In this integrated approach, most code is kept separate in the controller classes (MVC paradigm) and inline Python is used mainly to deliver and place data coming from the controller. This is done with substitution strings and with attributes to specific markup elements. use anywhere in your template. So if the dictionary contained an item {‘food’:’apple’}, your template would replace any occurrences of ${food} with the text apple.
When variables are dropped into your template, Kid will automatically escape them, e.g. you need not worry about values that contain <&%? etc.
The time when you do need to care is when you actually have XHTML itself to drop into place. When you do, wrap your substitution value in XML(). For example, say we had an XHTML fragment called header. You could write ${XML(header)}, and the header would be dropped in without being escaped.
This approach adds Python logic as an attribute of an element, effecting specifically that element:
<h2 py: Title </h2>
The if statement above has the effect of controlling the appearance of the h2 element. If x is not less than seven, then this heading element, neither the content Title nor the <h2> tag, will appear in the rendering of this page.
In any of the py attributes, unlike substitutions, just use the variable from the dictionary as you would a local variable in Python, i.e. without the ${}.
Primary source references on the template language syntax can be found in:
The following is a minimal overview.
One of the great things about Kid is that everything you know about Python applies here. For example, py:for="fruit in fruits" behaves just like for fruit in fruits: in Python. In this case, fruits is expected to be some kind of iterable object passed in via the dictionary. Whatever element the py:for attribute is attached to will be rendered, along with all of its contained sub-elements, again for every item in the loop.
For example, if fruits contains [‘pears’, ‘apples’, ‘oranges’], and the py attribute above is added to an <li> element:
<ul> <li py:<b>${fruit}</b></li> </ul>
will be rendered as:
<ul> <li><b>pears</b></li> <li><b>apples</b></li> <li><b>oranges</b></li> </ul>
These two attribute functions are almost the same. They replace the content and all sub-elements of the element they tag. The distinction between the two is:
Replaces all content within the element, including any sub-elements:
<h1 py:<i>Good Morning</i></h1>
is rendered as:
<h1>Hello</h1>
Also replaces the element tag itself:
<h1 py:<i>Good Morning</i></h1>
is rendered as simply:
Hello
Nearly opposite of py:if. If evaluated True, removes the element, but unlike py:if, leaves sub-elements intact.
Behaves somewhat like a macro substitution. Elements tagged with the py:match attribute are not output when they are encountered. Instead, the entire document is compared to the match condition and when a match is found, the element that was matched, along with all of its decendents, is replaced by the match element and all of its decendants.
Kid templates can be any XML document with namespaced attributes that tells Kid how to process the template. In practice, your templates will be XHTML documents that will be processed and transformed into valid HTML documents.
Don’t forget to define the py XML namespace in the <html> tag of your template. This is key to having your template understood as valid XML. Two common examples:
TurboGears 1.0 chose Kid as the default template language, but now there is a successor project called Genshi. The Genshi template language is very similar, but contrary to Kid, Genshi does not compile templates to Python. Nevertheless, it has better performance.
Therefore, Genshi will replace Kid as the TurboGears default template language, beginning with TG 1.1. But Genshi can be already be used now in TG 1.0 as described below. Everything else described within this document is the same for both Genshi and Kid.
Note however, that TG 1.0 and TG 1.1 widgets were written for and only work in Kid. This has been improved in TG 1.5 where Genshi is also the default templating language for TG widgets. Anyway, Kid can still be used in TG 1.5 for both page and widget templates.
Primary source references on the Genshi template language syntax can be found in:
To add Genshi support to TG 1.0:
easy_install Genshi
Then either specify Genshi templates on a per case basis in your expose statements:
@expose(template="genshi:example.templates.foobar")
or, set up your project to use Genshi by default by using the Genshi quickstart-template [1]:
easy_install gsquickstart tg-admin quickstart -t tggenshi
You can still serve pages with Kid by adding a prefix in the expose statement:
@expose(template="kid:example.templates.foobar")
Previous: Using Your Model : Next: Template Variables You Get for Free | http://www.turbogears.org/1.0/docs/GettingStarted/Kid.html | CC-MAIN-2014-10 | refinedweb | 890 | 63.8 |
Stop sending SIGPIPE to debuggerd. SIGPIPE is a pretty normal way for command-line apps to die, but because we catch it and report it via debuggerd, we get a lot of bogus bugs. We could catch SIGPIPE in our tools, but that's not really legit and slightly misleading. "But", you say, "catching SIGPIPE is useful for app bugs!". Except a trawl through buganizer suggests it's misleading there too. Not least because it's usually an innocent victim that dies --- the problem is usually on the other end of the pipe (which you learn nothing about because that process already died, which is what closed the pipe). We also don't catch SIGALRM, which is another signal that will terminate your process if you don't catch it, but that one actually represents a logic error in the crashing process, so there's a stronger argument for catching that. (Except it too is not a real source of bugs.) Bug: Change-Id: I79820b36573ddaa9a7bad0561a52f23e7a8d15ac
diff --git a/linker/debugger.cpp b/linker/debugger.cpp index 46c97af..3731c99 100644 --- a/linker/debugger.cpp +++ b/linker/debugger.cpp
@@ -135,9 +135,6 @@ signal_name = "SIGILL"; has_address = true; break; - case SIGPIPE: - signal_name = "SIGPIPE"; - break; case SIGSEGV: signal_name = "SIGSEGV"; has_address = true; @@ -273,7 +270,7 @@ signal(signal_number, SIG_DFL); // These signals are not re-thrown when we resume. This means that - // crashing due to (say) SIGPIPE doesn't work the way you'd expect it + // crashing due to (say) SIGABRT doesn't work the way you'd expect it // to. We work around this by throwing them manually. We don't want // to do this for *all* signals because it'll screw up the si_addr for // faults like SIGSEGV. It does screw up the si_code, which is why we @@ -281,7 +278,6 @@ switch (signal_number) { case SIGABRT: case SIGFPE: - case SIGPIPE: #if defined(SIGSTKFLT) case SIGSTKFLT: #endif @@ -307,7 +303,6 @@ sigaction(SIGBUS, &action, nullptr); sigaction(SIGFPE, &action, nullptr); sigaction(SIGILL, &action, nullptr); - sigaction(SIGPIPE, &action, nullptr); sigaction(SIGSEGV, &action, nullptr); #if defined(SIGSTKFLT) sigaction(SIGSTKFLT, &action, nullptr); | https://android.googlesource.com/platform/bionic/+/9f03ed1%5E%21/ | CC-MAIN-2019-18 | refinedweb | 343 | 61.06 |
A consistent API to AWS
Project description
Acky Library
The Acky library provides a consistent interface to AWS. Based on botocore, it abstracts some of the API work involved and allows the user to interact with AWS APIs in a consistent way with minimal overhead.
Acky takes a different approach to the API from libraries like the venerable Boto <>. Rather than model AWS objects as Python objects, Acky simply wraps the API to provide a more consistent interface. Most objects in AWS are represented as collections in Acky, with get(), create(), and destroy() methods. The get() method always accepts a filter map, no matter if the underlying API method does.
In cases where the API’s multitude of parameters would make for awkward method calls (as is the case with EC2’s RunInstances), Acky provides a utility class whose attributes can be set before executing the API call.
Using Acky
Acky uses a botocore-style AWS credential configuration, the same as the official AWS CLI. Before you use Acky, you’ll need to set up your config <>.
Once your credentials are set up, using acky is as simple as creating an instance of the AWS object:
from acky.aws import AWS aws = AWS(region, profile) instances = aws.ec2.Instances.get(filters={'tag:Name': 'web-*'}) print('Found {} web servers'.format(len(instances))) for instance in instances: print(' {}'.format(instance['PublicDnsName'])
Module Structure
The expected module structure for Acky follows. Many APIs are not yet implemented, but those that are can be considered stable.
- AWS
- username (property)
- userinfo (property)
- account_id (property)
- environment (property)
- ec2
- regions
- zones
- ACEs
- ACLs
- ElasticIPs
- Instances
- IpPermissions
- KeyPairs
- PlacementGroups
- SecurityGroups
- Snapshots
- Subnets
- VPCs
- Volumes
- iam
- Users
- Groups
- Keys
- rds
- engine_versions
- Instances
- Snapshots
- EventSubscriptions
- SecurityGroups
- SecurityGroupRules
- sqs
- Queues
- Messages
- sts
- GetFederationToken
- GetSessionToken
Other services will be added in future versions.
Installing acky
acky is available in PyPI and is installable via pip:
pip install acky
You may also install acky from source, perhaps from the GitHub repo:
git clone cd acky python setup.py install
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/acky/ | CC-MAIN-2018-47 | refinedweb | 359 | 53.41 |
table of contents
NAME¶
getuid, geteuid - get user identity
SYNOPSIS¶
#include <unistd.h>
uid_t getuid(void); uid_t geteuid(void);
DESCRIPTION¶
getuid() returns the real user ID of the calling process.
geteuid() returns the effective user ID of the calling process.
ERRORS¶
These functions are always successful and never modify errno.
CONFORMING TO¶
POSIX.1-2001, POSIX.1-2008, 4.3BSD.
NOTES¶
History¶.
SEE ALSO¶
getresuid(2), setreuid(2), setuid(2), credentials(7)
COLOPHON¶
This page is part of release 5.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | https://manpages.debian.org/unstable/manpages-dev/getuid32.2.en.html | CC-MAIN-2022-33 | refinedweb | 107 | 51.24 |
to input an unsigned integer and reverse the first and last nibble of the number
I_m_rude
I_m_rude
I_m_rude
I_m_rude
I_m_rude
I_m_rude
I_m_rude commented: :) +0
I_m_rude
deceptikon 1,790
I_m_rude
to input an unsigned integer and reverse the first and last nibble of the number
SO ? what ?
Post the code, that you have written.
i am not getting a solution for this question , Any help is appreciated. thanks.
hii decptikon if ur der please give me a solution
for this this above task
hey, please don't ask directly for codes. I know nobody will give codes here. WE can help you in your doubts , nut can't do these type of homeowrks.
can i know the logic how its work
I am not geeting one thing that "why are you not trying yourself first?" why are you so lazy for this ? thanks
oh. WaltP is here. @waltP If this is a joke, then i must laugh for this. haha.ok. And if it is the answer of this question, then I am going from this post as you again will say rude to me which i never want again :-| And if you again doing this comment for insulting me or anyone, then THANKS(because it is in your nature). take care and thanks. GOD BLESS YOU actually, WaltP don't know the idiom in proper manner. ;) if he wants yo refer, then . :p and I am going from here as waltP, the great is here :D And now this thread will not be solved because
1.Either he will give the rules and regulations now
2. OR he will talk rudely and say some insultive
3. OR if his pity showers on us, then he will answer in only one line, may be reason is that his each line cost many dollars.
4. OR he is not going to answer now(this one has most chances to occur ) ;) Let's see what will be the case now :)
don't know the idiom in proper manner
Then why are you complaining and thinking it's an insult? Touchy, aren't you?
As for the rest of your rant, there was no question to be answered. So, as you guessed in #4, what could I answer?
What the heck does #3 mean? Have you gone crazy?
why are you so lazy for this ?
Why are you being rude to shashikumar? Then call me rude? You may need to understand this concept (पाखंडी)
lol ;) I was quite correct in my guessings. ;)
Case 2 is there as in first line only, you again talked rudely.
Case 3 is there as you again answered in one line only. (2nd line) :-D
REQUEST: @waltP I know as per knowledge , I am nothing in front of you. But sir, this is not good what u always do on posts of beginners like me or this one. I know narue(though sometimes got ridiculous but is not rude), Deceptikon(the best one on this forum), ancient dragon and many others who have may be more knowledge than you dont have any ego or rude behaviour. Will feel high if you never do insult or show rude behaviour with anyone in future. heartly thanks. ;)
You're darn lucky Narue isn't here. Your eyes would burn! You're welcome.
okies. I think you are above 60 ;) that's why you talsk like this. If you are not above 60 and still you talk like this, them you need Mr.das kumar pankaji. ;) lolxx.
hii decptikon if ur der please give me a solution
I require that you make an attempt first. The problem statement is also ambiguous. Are you reversing the bits in each nibble or swapping the nibbles? Anyway, without actually solving the problem for you, I can help you visualize the bits of a value with a simple test program:
#include "stdio.h" #include "limits.h" /* @description: Prints the bits from value in the range of [first,last) to stdout. */ void show_bits(unsigned long value, unsigned first, unsigned last) { if (first > sizeof value * CHAR_BIT || last > sizeof value * CHAR_BIT) { return; } while (last-- > first) { putchar((value & (1U << last)) ? '1' : '0'); /* Break at the byte boundary for better presentation */ if (last % CHAR_BIT == 0) putchar(' '); /* Break at the nibble boundary too. This places a double break at the byte boundary. */ if (last % (CHAR_BIT / 2) == 0) putchar(' '); } putchar('\n'); } int main() { unsigned long value = 0x1B; /* Display all bits in the value */ show_bits(value, 0, sizeof(unsigned long) * CHAR_BIT); /* Display the first 7 bits */ show_bits(value, 0, 7); return 0; }
Using this you can actually display the bits directly. Such a function is very useful when experimenting with bit manipulation.
I apologize for my rude behaviour to WAltP, but it is also a request to you that if you don't know answer or in case, you don't want to reply then it's a humble request to you that please don't make joke or mock at me. that's what i want. thanks for your help. ;) | https://www.daniweb.com/programming/software-development/threads/431742/unsigned-integer | CC-MAIN-2019-04 | refinedweb | 838 | 81.43 |
DEBSOURCES
Skip Quicknav
sources / sapphire / 0.15.8-9
sapphire (0.15.8-9.1) unstable; urgency=medium
* Non-maintainer upload.
* Remove the obsolete dh_desktop call. (Closes: #817306)
-- Adrian Bunk <bunk@debian.org> Fri, 13 Jan 2017 14:31:12 +0200
sapphire (0.15.8-9) unstable; urgency=low
* Acknowledge NMU; thanks Moritz.
* Depend on xfonts-100dpi|xfonts-75dpi (the default font is lucida).
(closes: #510264)
* Fix debian/watch. (closes: #450076, #529138)
* Don't strip in Makefile, leave it to conditional dh_strip.
(closes: #437953)
* Don't use absolute path for update-alternatives. (closes: #510938)
* Bump Standards-Version to 3.8.1.0.
* Fix menu section.
* Add Homepage field.
* Bump debhelper version to 7.
* Catch errors from make clean.
* Use dh_prep, not dh_clean -k.
-- Chris Boyle <cmb@debian.org> Mon, 25 May 2009 05:05:43 +0100
sapphire (0.15.8-8.1) unstable; urgency=low
* Non-maintainer upload.
* Add missing buid-deps on x11proto-xext-dev and libxext-dev, fixes
FTBFS (Closes: #487003)
-- Moritz Muehlenhoff <jmm@debian.org> Fri, 27 Jun 2008 23:12:18 +0200
sapphire (0.15.8-8) unstable; urgency=low
* Add outputencoding to menu-method, thanks Bill Alombert.
(closes: #393082)
* Fix interpreter in menu-method.
* Bump Standards-Version to 3.7.2.2.
-- Chris Boyle <cmb@debian.org> Sat, 28 Oct 2006 14:12:45 +0100
sapphire (0.15.8-7) unstable; urgency=low
* Update Build-Depends for Xorg. (closes: #347057)
-- Chris Boyle <cmb@debian.org> Fri, 13 Jan 2006 00:48:53 +0000
sapphire (0.15.8-6) unstable; urgency=low
* Add .desktop file. (closes: #330061)
* Bump Standards-Version to 3.6.2.
* Update GPL declaration in debian/copyright.
-- Chris Boyle <cmb@debian.org> Sun, 16 Oct 2005 16:34:36 +0100
sapphire (0.15.8-5) unstable; urgency=low
* Fix description tyops, thanks Jens Nachtigall
<nachtigall@web.de>. (closes: #272212)
-- Chris Boyle <cmb@debian.org> Sat, 27 Nov 2004 22:34:05 +0000
sapphire (0.15.8-4) unstable; urgency=low
* Bumped standards-version to 3.6.1.0.
* Fixed debian/watch file (sourceforge ftp layout).
-- Chris Boyle <cmb@debian.org> Sun, 28 Mar 2004 19:18:05 +0100
sapphire (0.15.8-3) unstable; urgency=low
* Finished upstream's incomplete support for switching window
managers, added a new "wmexec" menu item type, changed menu-method
to use it instead of my half-assed kludge using "skill sapphire".
(closes: #182430)
* Corrected version string in windowmanager.cc:30.
-- Chris Boyle <cmb@debian.org> Wed, 20 Aug 2003 13:45:35 +0100
sapphire (0.15.8-2) unstable; urgency=low
* Changed maintainer address, now here I *really* haven't been paying
attention. (note: hmm, no new upstream release in all this time, I
guess he's working on aewm++ instead)
* Added support for DEB_BUILD_OPTIONS.
* Bumped standards version to 3.6.0.
* Building using pbuilder.
-- Chris Boyle <cmb@debian.org> Fri, 25 Jul 2003 13:57:49 +0100
sapphire (0.15.8-1) unstable; urgency=medium
* New upstream release.
- Incorporates all source changes I've made so far (the bugfixes
from previous versions).
* Fixed build failure on g++ 2.96 (#include <stdlib.h> was missing in
linkedlist.cc), hence the urgency. (closes: #128371)
* Added support for switching to a different wm and bumped alternatives
priority to 50 accordingly.
* Added support for "text" menu items (calling x-terminal-emulator).
* Cleaned up some "dh_make'isms" (comments from the example debian
files that were still lying around).
-- Chris Boyle <cmb@bluelinux.co.uk> Wed, 9 Jan 2002 19:38:10 +0000
sapphire (0.15.7-2) unstable; urgency=low
* Fixed build failure with g++ 3.0 by changing use of "or" as a variable
name in image.cc (patch from LaMont Jones <lamont@smallone.fc.hp.com>).
(closes: #126830)
* Other changes for g++ 3.0 from the same patch ("using namespace std;"
and use of "friend class" instead of "friend" a few times).
* Added the manpage to the alternatives system as a slave link.
* Fixed "postrm: unknown argument" problems. That message will appear
(harmlessly) on upgrading from the previous version, but hopefully
for the last time.
-- Chris Boyle <cmb@bluelinux.co.uk> Mon, 31 Dec 2001 10:34:46 +0100).
-- Chris Boyle <cmb@bluelinux.co.uk> Mon, 17 Dec 2001 09:48:50 +0000 | https://sources.debian.org/src/sapphire/0.15.8-9.1/debian/changelog/ | CC-MAIN-2020-40 | refinedweb | 704 | 64.17 |
Hello!
I'm pleased to announce version 3.9.0, the first release of branch 3.9 of SQLObject.
What's new in SQLObject =======================
Contributors for this release are:
+ Michael S. Root, Ameya Bapat - ``JSONCol``;
+ Jerry Nance - reported a bug with ``DateTime`` from ``Zope``.
Features --------
* Add ``JSONCol``: a universal json column that converts simple Python objects (None, bool, int, float, long, dict, list, str/unicode to/from JSON using json.dumps/loads. A subclass of StringCol. Requires ``VARCHAR``/``TEXT`` columns at backends, doesn't work with ``JSON`` columns.
* Extend/fix support for ``DateTime`` from ``Zope``.
* Drop support for very old version of ``mxDateTime`` without ``mx.`` namespace.
Drivers -------
* Support `mariadb`_.
CI --
* Run tests with Python 3.9 at Travis and AppVeyor.
For a more complete list, please see the news:
What is SQLObject =================
SQLObject is an object-relational mapper. Your database tables are described as classes, and rows are instances of those classes. SQLObject is meant to be easy to use and quick to get started with.
It currently supports MySQL, PostgreSQL and SQLite; connections to other backends - Firebird, Sybase, MSSQL and MaxDB (also known as SAPDB) - are lesser debugged).()
Queries::
p3 = Person.selectBy(lname="Doe")[0] p3
<Person 1
pc = Person.select(Person.q.lname=="Doe").count() pc
1
Oleg.
python-announce-list@python.org | https://mail.python.org/archives/list/python-announce-list@python.org/thread/JXQMYANE255B5MOLDO7SX6D2GVB3ARPX/ | CC-MAIN-2022-27 | refinedweb | 216 | 61.93 |
Below are some differences between struct and class in C#
- classes are reference types where as struct are value types.
- Null value cannot be assigned to the Struct because it is a non-nullable value type . Whereas the object of a class can be assigned a null value.
- Classes support inheritance but the struct doesn’t.
- struct cannot have destructor but a class can have.
- struct is stored on stack where as objects instantiated for a class are stored in heap.
- struct cannot have field initializers where as class can have.
- You should use the new keyword to initiate the class but for struct it is an option.
- The access modifiers of a struct cannot be protected or protected internal.
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace ConsoleApplication1 { public class Employee { public string Name { get; set; } } struct Student { // Field initializers are not allowed in strcut //int i = 1; public string Name { get; set; } } class Program { static void Main(string[] args) { Employee emp = new Employee(); emp.Name = "Abundant Code"; Student stu = new Student(); stu.Name = "Abundant Code"; // Cannot assign null value to stu //stu = null; emp = null; } } } | https://abundantcode.com/what-is-the-difference-between-struct-and-class-in-c/ | CC-MAIN-2021-10 | refinedweb | 196 | 59.7 |
The console-based interface approach of the preceding section works, and it may be sufficient for some users assuming that they are comfortable with typing commands in a console window. With just a little extra work, though, we can add a GUI that is more modern, easier to use and less error prone, and arguably sexier.
As we'll see later in this book, a variety of GUI toolkits and builders are available for Python programmers: Tkinter, wxPython, PyQt, PythonCard, Dabo, and more. Of these, Tkinter ships with Python, and it is something of a de facto standard.
Tkinter is a lightweight toolkit and so meshes well with a scripting language such as Python; it's easy to do basic things with Tkinter, and it's straightforward to do more advanced things with extensions and OOP-based code. As an added bonus, Tkinter GUIs are portable across Windows, Linux/Unix, and Macintosh; simply copy the source code to the machine on which you wish to use your GUI.
Because Tkinter is designed for scripting, coding GUIs with it is straightforward. We'll study all of its concepts and tools later in this book. But as a first example, the first program in Tkinter is just a few lines of code, as shown in Example 2-23.
Example 2-23. PP3E\Preview\tkinter001.py
from Tkinter import * Label(text='Spam').pack( ) mainloop( )
This isn't the most useful GUI ever coded, but it demonstrates Tkinter basics and it builds the fully functional window shown in Figure 2-1 in just three simple lines ...
No credit card required | https://www.safaribooksonline.com/library/view/programming-python-3rd/0596009259/ch02s07.html | CC-MAIN-2018-17 | refinedweb | 266 | 58.82 |
Finding VASP calculations in a directory tree
Posted March 20, 2014 at 08:09 PM | categories: vasp, python | tags: | View Comments
The goal in this post is to work out a way to find all the directories in some root directory that contain VASP calculations. This is a precursor to doing something with those directories, e.g. creating a summary file, adding entries to a database, doing some analysis, etc… For fun, we will just calculate the total elapsed time in the calculations.
What is challenging about this problem is that the calculations are often nested in a variety of different subdirectories, and we do not always know the structure. We need to recursively descend into those directories to check if they contain VASP calculations.
We will use a function that returns True or False to tell us if a particular directory is a VASP calculation or not. We can tell that because a completed VASP calculation has specific files in it, and specific content in those files. Notably, there is an OUTCAR file that contains the text "General timing and accounting informations for this job:" near the end of the file.
We will also use os.walk as the way to recursively descend into the root directory.
import os from jasp import * def vasp_p(directory): 'returns True if a finished OUTCAR file exists in the current directory, else False' outcar = os.path.join(directory, 'OUTCAR') if os.path.exists(outcar): with open(outcar, 'r') as f: contents = f.read() if 'General timing and accounting informations for this job:' in contents: return True return False total_time = 0 for root, dirs, files in os.walk('/home-research/jkitchin/research/rutile-atat'): for d in dirs: # compute absolute path to each directory in the current root absd = os.path.join(root, d) if vasp_p(absd): # we found a vasp directory, so we can do something in it. # here we get the elapsed time from the calculation with jasp(absd) as calc: total_time += calc.get_elapsed_time() print 'Total computational time on this project is {0:1.0f} minutes.'.format(total_time / 60)
Total computational time on this project is 231 minutes.
Copyright (C) 2014 by John Kitchin. See the License for information about copying.
Org-mode version = 8.2.5h | http://kitchingroup.cheme.cmu.edu/blog/2014/03/20/Finding-VASP-calculations-in-a-directory-tree/ | CC-MAIN-2018-30 | refinedweb | 375 | 55.64 |
I am writing an app, which takes a STL file as input. I want to get volume of the stl object without saving the stl file and use the volume to calculate a quote and post it back to browser. Right now I am using
numpy-stl package, but I am stuck on how to create a mesh object for numpy-stl from the file I get with
request.files['file'].read(). Any help is appreciated.
Answer
You can try the following code:
import io filedata = request.files['file'].read() data = io.BytesIO(filedata) tmp_mesh = mesh.Mesh.from_file("tmp.stl", fh=data)
You can use tmp_mesh object to do you interesting operation
suggestion to add error handle on something not expected
- if request.files not contain ‘file’ keys | https://www.tutorialguruji.com/python/how-to-use-numpy-stl-with-file-uploaded-with-flask-request/ | CC-MAIN-2021-39 | refinedweb | 128 | 67.35 |
Answer:Answer:
LCD takes only ASCII values as input and displays the digit. For Displaying float numbers on LCD, first we need to convert the floating point number to array of ASCII values. For example, if we want to display 3.1416, then we need to find ASCII values for each digit. To do this, we can make use of "ftoa()" function. For the execution of this function stdlib.h must be included
This function takes two arguments, where the first argument is the floating point number which needs to be converted. The second argument is the address of the variable in which the error code is to be saved. It returns the pointer to the array containing the ASCII data. A code snippet for the same is provided below:
#include "stdlib.h"
#include "string.h"
{
float MyFloat;
int Status;
char* buf;
MyFloat = 1.2345;
buf = ftoa(MyFloat, &Status);
LCD_Position(0,0);
LCD_PrString(buf);
}
If the number is less than 0.0000001, the error code is 1. If the number is greater than 2147483520, the error code is 2. In both these cases, number will not be converted successfully.
If the number is within the range 0.0000001< |input| < 2147483520, then the number will be successfully converted and the error code will be 0. The converted ASCII values will be saved in the array pointed by the returned pointer. Using this pointer, the array of ASCII values can be accessed and sent to LCD byte by byte. Or, LCD_PrString() function can be used instead which takes pointer to string as argument. | https://community.cypress.com/docs/DOC-11151 | CC-MAIN-2017-51 | refinedweb | 261 | 76.32 |
JS++ 0.9.2 is now available for download and features
final variables and fields. Additionally, due to Apple’s decision to stop supporting 32-bit applications beginning with macOS Catalina, we have changed the default binary to the 64-bit compiler for Mac.
final variables can now be used:
import System; final int x = 1; Console.log(x);
final can also be applied to fields:
import System; class Foo { public static final int bar = 1; public int baz = 2; } Console.log(Foo.bar); Console.log((new Foo).baz);
The
final keyword, when applied to a class or method, had already been implemented in previous versions.
macOS Catalina (released Oct 7, 2019) has ended support for 32-bit applications. Previously, JS++ distributed a 32-bit build of the compiler as the default for universality. Going forward, we will be distributing the 64-bit build as the default for macOS. If you still need the 32-bit binary, it is included with all releases going forward as the
js++-x32 binary. All guides and tutorials have been updated. | https://www.onux.com/jspp/blog/category/jspp/ | CC-MAIN-2020-34 | refinedweb | 177 | 55.34 |
Andrei Oprescu9,547 Points
Why am I getting an error?
I have this code which is just the same as Kenneth's:
import random def game(): secret_num = random.randint(1, 10) guesses = [] while len(guesses) < 5: try: guess = int(input("Guess a number between 1 and 10: ")) except ValueError: print("{} isn't a number".format(guess)) else: if guess == secret_num: print("You got it! My number was {}".format(secret_num)) break elif guess > secret_num: print("Too high!") elif guess < secret_num: print("Too low!") guesses.append(guess) else: print("You didn't get it! My number was {}".format(secret_num)) play_again = input("Do you want to play again? Y/n ") if play_again != 'n': game() else: print("Bye!") game()
And I was testing this code, I stumbled across an error:
Guess a number between 1 and 10: a
File "number_game.py", line 8, in game
guess = int(input("Guess a number between 1 and 10: "))
ValueError: invalid literal for int() with base 10: 'a'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "number_game.py", line 30, in <module>
game()
File "number_game.py", line 10, in game
print("{} isn't a number".format(guess))
UnboundLocalError: local variable 'guess' referenced before assignment
As you can see, I tried to see if the except 'ValueError' worked. And it didn't. Can you help me understand why this is happening?
Thank you
1 Answer
Jennifer NordellTreehouse Staff
Hi there! This bug is intentional. Check out the Teacher's Notes where Kenneth asks if you spotted the bug. Kudos on that by the way. Not everyone spots it.
Here's what's happening: inside your try block you prompt the user for a number and then try to turn it into an integer and assign the result to the variable
guess. But if that fails then the assignment to guess also fails.
In the
except block you then try to print a string to the user formatting it with the value stored in
guess, but because the assignment failed,
guess is undefined.
There are a couple of ways to fix this.
You could simply change the error message to something like:
print("That was not a number!")
Or if you're set on displaying what the user input you're going to need to put the prompt assigning the value to guess outside the
try block and only include the conversion to an integer in the
try. Take a look:
while len(guesses) < 5: guess = input("Guess a number between 1 and 10: ") # assign whatever they type to guess try: guess = int(guess) # try and convert their input to an integer except ValueError: print("{} isn't a number".format(guess)) #if the value cannot be converted to an integer print out what they typed
Hope this helps!
Andrei Oprescu9,547 Points
Andrei Oprescu9,547 Points
Thank you very much! It helped a lot! | https://teamtreehouse.com/community/why-am-i-getting-an-error-12 | CC-MAIN-2020-10 | refinedweb | 481 | 75.4 |
Code. Collaborate. Organize.
No Limits. Try it Today.
The code snippet shown in this article is used to delay load a DLL, i.e., DLL is implicitly linked but not actually loaded until your code attempts to reference a symbol contained within the DLL. If your application uses several DLLs, its initialization time might be slow because the loader maps all of the required DLLs into the process' address space and there is every possibility that even a single function from one of these DLLs is not called, so a better way for a loading a DLL which is rarely used is to delay load, it i.e., load it when required instead of loading initially. This improves the start up time. Sounds great. Now, we will actually try to delay load a DLL, and analyze its advantages and disadvantages, so all set...
First, you create a DLL just as you normally would. You also create an executable as you normally would but you do have to change a couple of linker switches and re-link the executable. Here are the two linker switches you need to add:
/Lib:DelayImp.lib
/DelayLoad:MyDll.dll
#pragma comment(lib, "DelayImp.lib")
#pragma comment(linker, "/DelayLoad:Dll.Dll")
#pragma comment(linker, "/Delay:unload")
So simple, isn't it? Just add the above three lines in your EXE's code and your DLL is delay-loaded. Now, let's have a look at how it is achieved.
The Lib, "DelayImp.lib" switch tells the linker to embed a special function, __delayLoadHelper, into your executable.
__delayLoadHelper
The second switch tells the linker the following things:
Remove Dll.dll from the executable module's import section so that the operating system loader does not implicitly load the DLL when the process initializes. You can watch it from the dependency walker utility that ships with VC 6.0. Below is the image of the dependency walker with and without delay loading. As seen, in the case of delay loading, it has no entry in the EXE's dependencies.
As seen in the first case without delay-loading, Dll.Dll is shown in the EXE's dependencies but not in the second image of the EXE which is using delay loaded DLL.
... It embeds a new Delay Import section (called .didata) in the executable indicating which functions are being imported from Dll.dll.
Resolve calls to the delay-loaded functions by having calls jump to the __delayLoadHelper function.
When the application runs, a call to a delay-loaded function actually calls the __delayLoadHelper function instead. This function references the special Delay Import section and knows to call LoadLibrary followed by GetProcAddress. Once the address of the delay-loaded function is obtained, __delayLoadHelper fixes up calls to that function so future calls go directly to the delay-loaded function. Note that other functions in the same DLL still have to be fixed up the first time you call them. Also note that you can specify the /DelayLoad linker switch multiple times—once for every DLL that you want to delay-load.
LoadLibrary
GetProcAddress
When the operating system loader loads your executable, it tries to load all the required DLLs. If a DLL can't be loaded, the loader displays an error message. But for delay-loaded DLLs, the existence of the DLL is not checked at initialization time. If the DLL can't be found when a delay-loaded function is called, the __delayLoadHelper function raises a software exception. You can trap this exception using structured exception handling (SEH) and keep your application running. If you don't trap the exception, your process is terminated.
Another problem can occur when _delayLoadHelper does find your DLL but the function you're trying to call isn't in the DLL. This can happen if the loader finds an old version of the DLL, for example. In this case, _delayLoadHelper also raises a software exception and the same rules apply.
_delayLoadHelper
If both the DLL and the function are found, the loader will try to load the DLL when a function or a symbol from that DLL is referenced. Since it is loaded now and not at application start up, it will take some time, but things get even worse if while building your DLL you haven't rebased its address to an appropriate one so at the time of function call, the loader will first try to load the DLL and it has to rebase the DLL which has a lot of overhead (for rebasing details, please go through the article Need for Rebasing a DLL). So also, if possible, rebase it properly to load the DLL quickly.)
#pragma comment(linker, "/DelayLoad:Dll.Dll")
#pragma comment(linker, "/Delay:unload")
#include <windows.h>
#pragma comment(lib, "delayimp")
#pragma comment(lib, "user32")
int main() {
// user32.dll will load at this point
MessageBox(NULL, "Hello", "Hello", MB_OK);
}
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
C# 6: First reactions | http://www.codeproject.com/Articles/9428/Delay-Loading-a-DLL?fid=147603&df=90&mpp=25&noise=5&prof=True&sort=Position&view=None&spc=None | CC-MAIN-2014-15 | refinedweb | 846 | 63.59 |
Please remind me, what happens if you do not put .staticmethod("garply"), and then try : >>> Foo().garply(0,0) >>> Foo.garply("") ? > class_<Foo>("Foo") > .def("garply", (string (*)(string)) &Foo::garply) > .def("garply", (string (Foo::*)(int, int)) &Foo::garply) > .staticmethod("garply") Bilokon, Paul wrote: > Hi, > > In C++ it's possible to have the following situation: > > class Foo > { > public: > > string garply(int x, int y) > { > return "I am the non-static Foo::garply(int, int)"; > } > > static string garply(string z) > { > return "I am the static Foo::garly(string)"; > } > }; > > Now I want to expose both the static and non-static garply to Python, > and here I'm stuck: > > using namespace boost::python; > > BOOST_PYTHON_MODULE(overloading) > { > class_<Foo>("Foo") > .def("garply", (string (*)(string)) &Foo::garply) > .def("garply", (string (Foo::*)(int, int)) &Foo::garply) > .staticmethod("garply") > // ??? Which garply is that ??? > ; > } > > staticmethod takes a single string as its parameter, the function name. > But I have both static and non-static variants. How do I specify which > of them is static? [Technically it's possible. We already differentiate > between overloaded functions on the Python side, so we just need to > check if the first argument is "self", e.g. by looking at its type -- > hardly foolproof though, e.g. consider a "copy-constructor" > signature...] > > Of course, one may ask why don't I give the Python functions different > names. But what if I'm trying to replicate the C++ API as faithfully as > I can? I would like to avoid name changes. > > What's the best thing to do? > > Many thanks! > > Regards, > Paul > > > - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - > >. | http://mail.python.org/pipermail/cplusplus-sig/2008-August/013630.html | CC-MAIN-2013-20 | refinedweb | 255 | 68.47 |
Hello there. Have lack of information about proper using SSR. Only some common things from official docs. Could anybody give me some advice how to use SSR (or work code) especially in VUEX.
First question, is how to use cookies in Vuex actions.
For example, this code is not working (in Vuex action).
import { LocalStorage, Cookies } from 'quasar' import { cookie } from 'boot/axios' import { i18n } from 'boot/i18n' export function configAction ({ commit, ssrContext }, locale) { ... const cookies = process.env.SERVER ? Cookies.parseSSR(ssrContext) : Cookies // otherwise we're on client cookie ? cookies.set('locale', locale, { expires: 365 }) : LocalStorage.set('locale', locale, { expires: 365 }) ...
I got error “cookies.set is not a function”. As I understood, you couldn’t use Cookies either LocalStorage. But how use it in SSR context?
Or similar question, how to use cookie in Vuex state? Smth. like that:
import { Cookies, LocalStorage } from 'quasar' import { cookie } from 'boot/axios' export default function (data) { console.log(data) const cookies = process.env.SERVER ? Cookies.parseSSR(ssrContext) : Cookies // otherwise we're on client return { user: null, users: [], token: cookie ? cookies.get('token') : LocalStorage.getItem('token'), cookie } }
If anybody could give me any hints I’ll be grateful. Thanks.
@dbehterev This is a great article:
It’s a lot to understand, but it does go over the pitfalls of cookies and SSR and how to avoid them. | https://forum.quasar-framework.org/topic/7323/need-help-with-ssr/? | CC-MAIN-2021-17 | refinedweb | 224 | 52.97 |
Post your Comment
Java Technology on Linux
Java Technology on Linux
Java technology supports Java Platform, Java Standard Edition, Java... for the Linux platform. Java technology also makes
available tested servers
Linux in Internet
inbuilt technology. Linux is a free open source software having
various...Linux in Internet
.... One Linux has two sorts
of category text based and graphics based
What is Android Technology?
technology thanks to its status as for providing an open source platform has already... technology platforms backdated in the coming time.
One of the most viable instances of the onslaught of Android technology over its competitors can be best observed
technology
technology can spread virus by phone number?...is it possible adding one mobile phone number to attack that mobile phone
Linux
Linux hi all,
i want to invoke shell program from java. i m using the following code.
package script;
import java.io.IOException;
public class RunShell {
public static void main(String[] args) throws IOException
Introduction to Java Technology
for
the users such as Windows, Linux, Solaris, Mac OS etc. Java program can be run
on all...:
What Java Programming Technology Can do?
Java is general-purpose, high-level... Protocol
Technology (Java RMI-IIOP Technology) which can be used
Buy Fedora Core 3 Linux CD's in India.
Fedora Core 3 Linux
Now Available Fedora Core 3 CD's
We are providing the free downloadable version of Fedora Core 3 Linux CDs, which is distributed... of world's most widely used Linux operating systems! The Fedora Project is a Red-hat
Linux Books
Linux Books
... other compilers.
The Linux Administrator's...;
Linux
Security for Beginners
There is a saying
VoIP Linux
VoIP Linux
...
.
Linux
VoIP
Many businesses are turning to Voice..., and CounterPath's X-Lite to the
test.
Linux LiveCD VoIP Server
Linux installation
Linux installation I want to install linux for generally purpose functions,which linux[fedora , ubunto,centos etc ]should i install.In my pc there is already windows,How much the linux will effect my windows & data in C
We are providing Fedora Cord 2 Linux CD's .
Fedora Core 2 Linux
Now Available Fedora Cord2 CD's
We are providing... Linux operating systems! The Fedora Project is a Red-hat-sponsored and community... technology that may eventually make its way into Red Hat products
We are providing Downloadable Version of Mandrake 10.1 Power Pack Linux CD's.
Mandrake 10.1 Power Pack Linux
Now Available Mandrake 10.1 Power Pack CD's
Power Pack is a Linux system that will appeal to all advanced users... Pack is a new-generation Linux operating system for servers and desktop
Linux Programming
Linux Programming 1.Write a program that reads an integer by the user and displays it in octal (base 8):
Enter an integer number between 0 and 32767: 2012
In octal, your number is: 3734
(Use the same input and output format
Java technology
Java technology how does java technology implement security
Technology Articles/FAQs
Technology Articles/FAQs
What is Technology?
The Oxford Dictionary defines technology as the application of scientific
knowledge..., and methods to find a solutions of a particular problem.
The word technology
Buy Fedora Core 3 Test 1 Linux CD's in India.
Fedora Core 3 Test 1 Linux
Fedora Core 3 Is Available
Now Available... of Fedora Core 3 Test 1 Linux CDs, which is distributed under GNU public license.
ABOUT FEDORA
Fedora is the latest version of world's most widely used Linux
Linux Open Source
approach to Linux-based operating systems. Sun provides Java technology...
Linux Open Source
Building a Linux Network Appliance
You... border security. But you may not know that a Linux-based iptables firewall...-top boxes in Java.
Thanks
web technology
Post your Comment | http://www.roseindia.net/discussion/24414-Java-Technology-on-Linux.html | CC-MAIN-2014-52 | refinedweb | 623 | 50.12 |
Chapter 9: Text Processing and More about Wrapper Classes
Starting Out with Java: From Control Structures through Objects
Fifth Edition
by Tony Gaddis
Chapter 9 discusses the following main topics:
Introduction to Wrapper Classes
Character Testing and Conversion with the Character Class
More String Methods
The StringBuilder Class
The StringTokenizer Class
Wrapper Classes for the Numeric Data Types
Focus on Problem Solving: The TestScoreReader Class
Java provides 8 primitive data types.
They are called “primitive” because they are not created from classes.
Java provides wrapper classes for all of the primitive data types.
A wrapper class is a class that is “wrapped around” a primitive data type.
The wrapper classes are part of java.lang so to use them, there is no import statement required.
Wrapper classes allow you to create objects to represent a primitive.
Wrapper classes are immutable, which means that once you create an object, you cannot change the object’s value.
To get the value stored in an object you must call a method.
Wrapper classes provide static methods that are very useful.
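For instance, the Integer wrapper can be used like this (a minimal sketch, not one of the textbook's example programs):

```java
public class WrapperDemo {
    public static void main(String[] args) {
        // Wrap a primitive int in an Integer object.
        Integer wrapped = Integer.valueOf(25);

        // To get the primitive value back, a method must be called.
        int primitive = wrapped.intValue();
        System.out.println("Primitive value: " + primitive);

        // Useful static methods -- no object is needed to call them.
        int parsed = Integer.parseInt("1234");     // String to int
        String binary = Integer.toBinaryString(6); // int to binary digits
        System.out.println("Parsed: " + parsed);
        System.out.println("6 in binary: " + binary);
    }
}
```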
The Character class allows a char data type to be wrapped in an object.
The Character class provides methods that allow easy testing, processing, and conversion of character data.
Example:
CharacterTest.java
CustomerNumber.java
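As a sketch of the kind of character testing these methods support (the Character methods below are real; the two-letters-then-three-digits format is invented for illustration):

```java
public class CharTest {
    // Returns true if the code is exactly two letters followed by
    // three digits, e.g. "AB123" -- a made-up format for illustration.
    public static boolean isValidCode(String code) {
        if (code.length() != 5)
            return false;
        for (int i = 0; i < 2; i++)
            if (!Character.isLetter(code.charAt(i)))
                return false;
        for (int i = 2; i < 5; i++)
            if (!Character.isDigit(code.charAt(i)))
                return false;
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isValidCode("AB123")); // true
        System.out.println(isValidCode("A1234")); // false
    }
}
```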
The Character class provides two methods that will change the case of a character.
See example: CircleArea.java
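For example (a minimal sketch): each method returns a new char, and non-letter characters are returned unchanged.

```java
public class CaseDemo {
    public static void main(String[] args) {
        char upper = Character.toUpperCase('a'); // 'A'
        char lower = Character.toLowerCase('Q'); // 'q'
        System.out.println(upper);
        System.out.println(lower);

        // Non-letter characters come back unchanged.
        System.out.println(Character.toUpperCase('7')); // '7'
    }
}
```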
The String class provides several methods that search for a string inside of a string.
A substring is a string that is part of another string.
Some of the substring searching methods provided by the String class:
boolean startsWith(String str)
boolean endsWith(String str)
boolean regionMatches(int start, String str, int start2, int n)
boolean regionMatches(boolean ignoreCase, int start, String str, int start2, int n)
The startsWith method determines whether a string begins with a specified substring.
String str = "Four score and seven years ago";
if (str.startsWith("Four"))
System.out.println("The string starts with Four.");
else
System.out.println("The string does not start with Four.");
str.startsWith("Four") returns true because str does begin with “Four”.
startsWith is a case sensitive comparison.
The endsWith method determines whether a string ends with a specified substring.
String str = "Four score and seven years ago";
if (str.endsWith("ago"))
System.out.println("The string ends with ago.");
else
System.out.println("The string does not end with ago.");
The endsWith method also performs a case sensitive comparison.
Example: PersonSearch.java
The String class provides methods that will if specified regions of two strings match.
regionMatches(int start, String str, int start2, int n)
returns true if the specified regions match or false if they don’t
Case sensitive comparison
regionMatches(boolean ignoreCase, int start, String str, int start2, int n)
If ignoreCase is true, it performs case insensitive comparison
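A sketch showing both overloads (the strings are invented for illustration):

```java
public class RegionDemo {
    public static void main(String[] args) {
        String str1 = "Four score and seven years ago";
        String str2 = "Seven years have passed";

        // Compare 5 characters of str1 starting at index 15
        // with 5 characters of str2 starting at index 0.
        boolean caseSensitive = str1.regionMatches(15, str2, 0, 5);
        System.out.println(caseSensitive); // false: "seven" vs "Seven"

        // Same comparison, but ignoring case.
        boolean ignoreCase = str1.regionMatches(true, 15, str2, 0, 5);
        System.out.println(ignoreCase);    // true
    }
}
```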
The String class also provides methods that will locate the position of a substring.
indexOf
returns the first location of a substring or character in the calling String Object.
lastIndexOf
returns the last location of a substring or character in the calling String Object.
String str = "Four score and seven years ago";
int first, last;
first = str.indexOf('r');
last = str.lastIndexOf('r');
System.out.println("The letter r first appears at "
+ "position " + first);
System.out.println("The letter r last appears at "
+ "position " + last);
String str = "and a one and a two and a three";
int position;
System.out.println("The word and appears at the "
+ "following locations.");
position = str.indexOf("and");
while (position != -1)
{
System.out.println(position);
position = str.indexOf("and", position + 1);
}
See Table 9-4 on page 574.
The String class provides methods to extract substrings in a String object.
The substring method returns a substring beginning at a start location and an optional ending location.
String fullName = "Cynthia Susan Smith";
String lastName = fullName.substring(14);
System.out.println("The full name is " + fullName);
System.out.println("The last name is " + lastName);
After this code executes, the fullName variable holds the address of (a reference to) the String object "Cynthia Susan Smith", and the lastName variable holds the address of a new String object containing "Smith".
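The two-argument version of substring can be sketched like this; note that the character at the ending index is not included in the returned substring:

```java
public class SubstringDemo {
    public static void main(String[] args) {
        String fullName = "Cynthia Susan Smith";

        // Characters from index 8 up to (but not including) index 13.
        String middleName = fullName.substring(8, 13);
        System.out.println("The middle name is " + middleName); // Susan

        // One-argument version: from index 14 to the end.
        String lastName = fullName.substring(14);
        System.out.println("The last name is " + lastName);     // Smith
    }
}
```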
The String class provides methods to extract substrings in a String object and store them in char arrays.
getChars
Stores a substring in a char array
toCharArray
Returns the String object’s contents in an array of char values.
Example: StringAnalyzer.java
The String class provides methods to return modified String objects.
concat
Returns a String object that is the concatenation of two String objects.
replace
Returns a String object with all occurrences of one character being replaced by another character.
trim
Returns a String object with all leading and trailing whitespace characters removed.
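Because String objects are immutable, each of these methods returns a new String rather than changing the original; a minimal sketch:

```java
public class ModifyDemo {
    public static void main(String[] args) {
        String str = "   Hello ";

        System.out.println(str.trim().concat(" world")); // Hello world
        System.out.println(str.replace('l', 'L'));       // "   HeLLo "

        // The original object is unchanged.
        System.out.println("[" + str + "]");             // [   Hello ]
    }
}
```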
The String class provides several overloaded valueOf methods.
They return a String object representation of
a primitive value or
a character array.
String.valueOf(true) will return "true".
String.valueOf(5.0) will return "5.0".
String.valueOf(‘C’) will return "C".
boolean b = true;
char [] letters = { 'a', 'b', 'c', 'd', 'e' };
double d = 2.4981567;
int i = 7;
System.out.println(String.valueOf(b));
System.out.println(String.valueOf(letters));
System.out.println(String.valueOf(letters, 1, 3));
System.out.println(String.valueOf(d));
System.out.println(String.valueOf(i));
Produces the following output:
true
abcde
bcd
2.4981567
7
The StringBuilder class is similar to the String class.
However, you may change the contents of StringBuilder objects.
You can change specific characters,
insert characters,
delete characters, and
perform other operations.
A StringBuilder object will grow or shrink in size, as needed, to accommodate the changes.
StringBuilder()
This constructor gives the object enough storage space to hold 16 characters.
StringBuilder(int length)
This constructor gives the object enough storage space to hold lengthcharacters.
StringBuilder(String str)
This constructor initializes the object with the string in str.
The object will have at least enough storage space to hold the string in str.
The String and StringBuilder also have common methods:
char charAt(int position)
void getChars(int start, int end,
char[] array, int arrayStart)
int indexOf(String str)
int indexOf(String str, int start)
int lastIndexOf(String str)
int lastIndexOf(String str, int start)
int length()
String substring(int start)
String substring(int start, int end)
The StringBuilder class has several overloaded versions of a method named append.
They append a string representation of their argument to the calling object’s current contents.
The general form of the append method is:
object.append(item);
where object is an instance of the StringBuilder class and item is:
a primitive literal or variable.
a char array, or
a String literal or object.
After the append method is called, a string representation of itemwill be appended to object’s contents.
StringBuilder str = new StringBuilder();
str.append("We sold ");
str.append(12);
str.append(" doughnuts for $");
str.append(15.95);
System.out.println(str);
This code will produce the following output:
We sold 12 doughnuts for $15.95
The StringBuilder class also has several overloaded versions of a method named insert
These methods accept two arguments:
an int that specifies the position to begin insertion, and
the value to be inserted.
The value to be inserted may be
a primitive literal or variable.
a char array, or
a String literal or object.
The general form of a typical call to the insert method.
object.insert(start, item);
where object is an instance of the StringBuilder class, start is the insertion location, and item is:
a primitive literal or variable.
a char array, or
a String literal or object.
Example: Telephone.javaTelephoneTester.java
The StringBuilder class has a replace method that replaces a specified substring with a string.
The general form of a call to the method:
object.replace(start, end, str);
startis an int that specifies the starting position of a substring in the calling object, and
endis an int that specifies the ending position of the substring. (The starting position is included in the substring, but the ending position is not.)
The str parameter is a String object.
After the method executes, the substring will be replaced with str.
The replace method in this code replaces the word “Chicago” with “New York”.
StringBuilder str = new StringBuilder(
"We moved from Chicago to Atlanta.");
str.replace(14, 21, "New York");
System.out.println(str);
The code will produce the following output:
We moved from New York to Atlanta.
The StringBuilder class also provides methods to set and delete characters in an object.
StringBuilder str = new StringBuilder(
"I ate 100 blueberries!");
// Display the StringBuilder object.
System.out.println(str);
// Delete the '0'.
str.deleteCharAt(8);
// Delete "blue".
str.delete(9, 13);
// Display the StringBuilder object.
System.out.println(str);
// Change the '1' to '5'
str.setCharAt(6, '5');
// Display the StringBuilder object.
System.out.println(str);
StringBuilder strb = new StringBuilder("This is a test.");
String str = strb.toString();
The StringTokenizer class breaks a string down into its components, which are called tokens.
Tokens are a series of words or other items of data separated by spaces or other characters.
"peach raspberry strawberry vanilla"
This string contains the following four tokens: peach, raspberry, strawberry, and vanilla.
The character that separates tokens is a delimiter.
"17;92;81;12;46;5"
This string contains the following tokens: 17, 92, 81, 12, 46, and 5 that are delimited by semi-colons.
Some programming problems require you to process a string that contains a list of items.
For example,
The process of breaking a string into tokens is known as tokenizing.
The Java API provides the StringTokenizer class that allows you to tokenize a string.
The following import statement must be used in any class that uses it:
import java.util.StringTokenizer;
To create a StringTokenizer object with the default delimiters (whitespace characters):
StringTokenizer strTokenizer =
new StringTokenizer("2 4 6 8");
To create a StringTokenizer object with the hyphen character as a delimiter:
StringTokenizer strTokenizer =
new StringTokenizer("8-14-2004", "-");
To create a StringTokenizer object with the hyphen character as a delimiter, returning hyphen characters as tokens as well:
StringTokenizer strTokenizer =
new StringTokenizer("8-14-2004", "-", true);
The StringTokenizer class provides:
countTokens
Count the remaining tokens in the string.
hasMoreTokens
Are there any more tokens to extract?
nextToken
Returns the next token in the string.
Throws a NoSuchElementException if there are no more tokens in the string.
Loops are often used to extract tokens from a string.
StringTokenizer strTokenizer =
new StringTokenizer("One Two Three");
while (strTokenizer.hasMoreTokens())
{
System.out.println(strTokenizer.nextToken());
}
This code will produce the following output:
One
Two
Three
Examples: DateComponent.java, DateTester.java
The default delimiters for the StringTokenizer class are the whitespace characters.
\n\r\t\b\f
Other multiple characters can be used as delimiters in the same string.
This string uses two delimiters: @ and .
If non-default delimiters are used
The String class trim method should be used on user input strings to avoid having whitespace become part of the last token.
To extract the tokens from this string we must specify both characters as delimiters to the constructor.
StringTokenizer strTokenizer =
new StringTokenizer("[email protected]", "@.");
while (strTokenizer.hasMoreTokens())
{
System.out.println(strTokenizer.nextToken());
}
This code will produce the following output:
joe
gaddisbooks
com
Tokenizes a String object and returns an array of String objects
Each array element is one token.
// Create a String to tokenize.
String str = "one two three four";
// Get the tokens from the string.
String[] tokens = str.split(" ");
// Display each token.
for (String s : tokens)
System.out.println(s);
This code will produce the following output:
one
two
three
four
Java provides wrapper classes for all of the primitive data types.
The numeric primitive wrapper classes are:
To create objects from these wrapper classes, you can pass a value to the constructor:
Integer number = new Integer(7);
You can also assign a primitive value to a wrapper class object:
Integer number;
number = 7;
Recall from Chapter 2, we converted String input (from JOptionPane) into numbers. Any String containing a number, such as “127.89”, can be converted to a numeric data type.
Each of the numeric wrapper classes has a static method that converts a string to a number.
The Integer class has a method that converts a String to an int,
The Double class has a method that converts a String to a double,
etc.
These methods are known as parse methods because their names begin with the word “parse.”
// Store 1 in bVar.
byte bVar = Byte.parseByte("1");
// Store 2599 in iVar.
int iVar = Integer.parseInt("2599");
// Store 10 in sVar.
short sVar = Short.parseShort("10");
// Store 15908 in lVar.
long lVar = Long.parseLong("15908");
// Store 12.3 in fVar.
float fVar = Float.parseFloat("12.3");
// Store 7945.6 in dVar.
double dVar = Double.parseDouble("7945.6");
The parse methods all throw a NumberFormatException if the String object does not represent a numeric value.
Each of the numeric wrapper classes has a static toString method that converts a number to a string.
The method accepts the number as its argument and returns a string representation of that number.
int i = 12;
double d = 14.95;
String str1 = Integer.toString(i);
String str2 = Double.toString(d);
The Integer and Long classes have three additional methods:
toBinaryString, toHexString, and toOctalString
int number = 14;
System.out.println(Integer.toBinaryString(number));
System.out.println(Integer.toHexString(number));
System.out.println(Integer.toOctalString(number));
This code will produce the following output:
1110
e
16
The numeric wrapper classes each have a set of static final variables
MIN_VALUE and
MAX_VALUE.
These variables hold the minimum and maximum values for a particular data type.
System.out.println("The minimum value for an "
+ "int is "
+ Integer.MIN_VALUE);
System.out.println("The maximum value for an "
+ "int is "
+ Integer.MAX_VALUE);
You can declare a wrapper class variable and assign a value:
Integer number;
number = 7;
You nay think this is an error, but because number is a wrapper class variable, autoboxing occurs.
Unboxing does the opposite with wrapper class variables:
Integer myInt = 5; // Autoboxes the value 5
int primitiveNumber;
primitiveNumber = myInt; // unboxing
You rarely need to declare numeric wrapper class objects, but they can be useful when you need to work with primitives in a context where primitives are not permitted
Recall the ArrayList class, which works only with objects.
ArrayList<int> list =
new ArrayList<int>(); // Error!
ArrayList<Integer> list =
new ArrayList<Integer>(); // OK!
Autoboxing and unboxing allow you to conveniently use ArrayLists with primitives.
Dr. Harrison keeps student scores in an Excel file. This can be exported as a comma separated text file. Each student’s data will be on one line. We want to write a Java program that will find the average for each student. (The number of students changes each year.)
Solution: TestScoreReader.java, TestAverages.java | http://www.slideserve.com/kim/chapter-9-text-processing-and-more-about-wrapper-classes | CC-MAIN-2017-04 | refinedweb | 2,490 | 50.02 |
I felt the need to rant, so why not? :)
In Perl you can't do anything obvious, like create a hash from a list of strings, trivially; you have to spend your entire time worrying about references. Why? I don't know.
In Perl you're never sure what the operator this person is using is. Perl has operators and variables named after almost every punctuation key on the keyboard. (Pop quiz: which punctuation keys on a standard qwerty keyboard aren't valid Perl variable names?)
True, but you don't need to know or use these variables. Just because you can, doesn't mean you should :) $! is the string message for the last error, $@ is the error from the last eval, @$x/$$x/%$x dereference references to those data types, $#array is the last index of an array, $^O is magic for which OS you're on, etc etc. But not knowing this doesn't make Perl any harder to use. Most punctuation also does something in C -- you're just more used to it.
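(To make this concrete, here is a small sketch of a few of these punctuation variables in action; the exact semantics are documented in perlvar, and the file path is a deliberately nonexistent example:)

```perl
#!/usr/bin/perl
use strict;
use warnings;

# $! holds the system error message after a failed call.
open(my $fh, '<', '/no/such/file') or print "open failed: $!\n";

# $@ holds the error raised inside the last eval block.
eval { die "something broke\n" };
print "caught: $@" if $@;

# $#array is the last index of an array, not its length.
my @list = ('a', 'b', 'c');
print "last index: $#list, length: ", scalar(@list), "\n";
```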
Isn't this based on the assumption that "use" is only writing code? I would argue that since you (and others) will end up having to read, and interpret the code at some point, complex and obtuse constructs like this do make the language harder to use. Since they are there, they will be used, so you have to know them to be able to understand the code. -- Stephen Lewis
Also, just to be pedantic, the node is WhyIHatePerl, not WhyCIsBetterThanPerl ;) --Stephen Lewis
This was meant as a random rant; someone asked me what I disliked about Perl, so I thought I'd jot a few things down. It's not meant to be a complete criticism, and most of it is probably due to my not knowing the language properly.
The thing is, in Perl you can write nice, easy-to-read, maintainable programs; however, people don't. Sure, I can avoid these things, but it doesn't help me when I'm trying to debug somebody else's script. C uses a lot less punctuation than Perl, which is easily shown by the fact that Perl uses mostly the same punctuation as C and then adds its own :) I don't think C and Perl really should be compared. They are completely different problem domains. I've got a rant growing somewhere about C too :) Compared to Python, Perl looks like executable line noise.
And Python looks like executable whitespace. These variables are actually hardly a design decision on Perl's side - 99% of them existed in AWK. Much of the linenoise comes from the AWK and SED heritage actually. The majority of it is loathed by many Perl hackers as well. Too much of I/O is controlled by global special variables; Larry himself has said this situation needs fixing. Of course not much can be done for Perl5. Unlike the Python and PHP folks, we don't go breaking people's old scripts willy nilly :) --
AristotlePagaltzis
Perl didn't get its line noise from AWK. All AWK has is the $ operator for getting at a field, e.g. $3 will get the third field, $x will get the field for the number held in x. sh has $ on its variables and shares some with Perl, but many are unique to Perl, e.g. $# is the output format for printed numbers in Perl and the number of command line arguments in sh. --JonPurvis
Of course the punctuation variables don't all come from the same source. When you mix four different languages with their own spirit each you won't be replicating any of them exactly. --
AristotlePagaltzis
Someone once said that the definition of a low level language is one that requires you to spend your time worrying about the irrelevant. Perl must indeed be a low level language. Trying to figure out how to pass a list to a function, create a local variable in a function, or nest types are all things that are difficult to get "right". You have to know a magical incantation to do them. Why don't they do the right thing?
I don't understand the question... To pass a list, you just pass it. Or is the problem that all arguments get flattened into one list? That's a fair gripe. Then again, how do you pass a list in C? Pass a pointer? Then you have to pass a list count as well.
Lists get flattened, variables by default are global, nested types aren't really nested (they're just references). The point of a scripting language is to get on and do what you want, not spend time thinking about how you are going to achieve it. I'm sure there are other examples of perl doing counter intuitive things by default and requiring you to work around them. These are mostly historic since perl grew out of a smaller language into a bigger one. But I think these are good examples of why you shouldn't use perl for a large project. Perl is good at parsing text files and outputting a nice summary, or munging it into a different format. Perl is not something that you should be writing large projects in. Yet people still do. (People still write large user space projects in C too for some unknown reason).
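(The flattening gripe is easy to sketch; the subroutine names below are invented for illustration:)

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @evens = (2, 4);
my @odds  = (1, 3);

# Passing two arrays flattens them into one argument list;
# the callee can't tell where one array ends and the next begins.
sub count_args { return scalar @_ }
print count_args(@evens, @odds), "\n";    # 4, not 2

# To keep them separate you must pass references explicitly...
print count_args(\@evens, \@odds), "\n";  # 2

# ...and dereference inside the callee to reach the elements.
sub first_of { my ($aref) = @_; return $aref->[0] }
print first_of(\@odds), "\n";             # 1
```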
To work effectively on a large project you've got to manage complexity, and with Perl, unless you are very strict, the complexity gets away from you. The thing is, most programs start 'small' (I'll just write something that parses subscribe/unsubscribe requests and updates a sendmail alias) but end up growing over time into a monster (majordomo...). Compared to almost any other language, Perl makes you spend a lot of time working with the irrelevant.
Perl lets you know about the reference as well as the referent. Other languages hide the reference from you without working fundamentally differently. How is that better? --
AristotlePagaltzis
Because if things are designed well enough (!) you don't need that low level of access, even to do unintended things. Higher order functions, complex and nested types (lists, dictionaries/hashes, strings, in any combination), and a strong but lightweight object system (i.e. minimal syntax and typing) let you hack things up for the unusual case without making the usual case burdensome. C hides the ability to check or change the flag bits when arithmetic operations occur, but assembly lets you do this any time you want, or to have a different function call method (e.g. for tail calls). We should all code in assembly all the time. How is that better? --JaredUpdike
But that is no "low level of access." Conceptually, there's no difference between any of the languages, as "nested types" are always implemented in terms of the basic data types and references. What differs is just the syntactic sugar on top. Perl requires the programmer to be more explicit about references because arrays in list context are implicitly flattened, and you need to be explicit to avoid that. But OTOH, concatenating lists, passing flattened arrays to functions and the like requires a lot more effort in languages that are not Perl. Perl is more list-ish by default, and simply optimizes the syntactic sugar in a different direction as far as this issue is concerned. This doesn't have anything to do with level of access. Now you may certainly prefer a different flavour of sugar than I do -- and I really rather prefer the list-y functional feel of Perl -- but that doesn't make your preference more valid than mine or vice versa. --
AristotlePagaltzis
Perl people say this is a good thing, but the problem with there's more than one way to do it is that no one ever does it the same way twice. To properly read a program you have to understand the 'idioms' that that program is written around. In Perl it's difficult to realise "ah, that's a switch statement, I can see what that's going to do", because in Perl there is no switch statement. You can roll your own (a nice idea), but you can get half way through and discover someone's modified their idea of a switch statement slightly, and now it has a side effect you didn't realise until you read the code very, very carefully. In most languages you can learn a couple of constructs (for, while, do..until, if...then...else, functions) and you've learnt a good chunk of the language and can read most of it. (Even in C++, which has a huge number of things (virtual private classes? who on earth would use such a thing?), you learn the various flow control structures and how to define/call functions/methods, and you can follow a good chunk of C++.) However in Perl, since there are so many ways of doing anything, you have to learn all the idioms before you can read a sizable chunk of Perl code, since they are all used, randomly.
You're hanging your entire argument on switch statements here. Perl does not have one because all switch statements found in other languages suck to varying degrees, and Larry decided we don't want one we'd have to work around rather than with. I haven't been particularly irked by that decision in practice. --
AristotlePagaltzis
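(For reference, the usual Perl stand-in for a switch is a dispatch table, i.e. a hash of code references. A minimal sketch, with made-up command names:)

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Each key maps a command name to a code reference.
my %dispatch = (
    start => sub { return "starting" },
    stop  => sub { return "stopping" },
);

sub handle {
    my ($command) = @_;
    my $action = $dispatch{$command};
    # Fall through to a default when no branch matches.
    return $action ? $action->() : "unknown command";
}

print handle("start"), "\n";    # starting
print handle("reboot"), "\n";   # unknown command
```

The upside over an if/elsif chain is that the mapping is plain data, so it can be inspected or extended at runtime; the downside is exactly the one complained about above - every programmer rolls it slightly differently.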
(GlynWebster's two cents.)
When Larry Wall and Perlers say "there's more than one way to do it" they seem to be talking about syntax rather than semantics. They really mean "there's more than one way to say it". I'm not sure that Perl gives anyone more ways to actually do things than any other well-stuffed scripting language. It gives people lots of ways to write down individual statements, which gives them a sensation of freedom while coding, but only seems to lead to confusion and trouble later on. The languages that really give you a lot of ways to do things, like CommonLisp, are far less popular.
I think this emphasis comes from Larry Wall's background in linguistics. With a natural language, having lots of ways to say things is a good thing, but semantics are something that linguists try not to think too deeply about. The semantics of natural languages are so poorly understood that they're a bottomless pit best stayed out of. But computer languages are not like natural languages. They are "notations" more akin to the various notations used in mathematics. Larry Wall doesn't believe this. He seems to be paying more careful attention to semantics in his "Apocalypse" series of design documents, but he doesn't seem to have learnt his lesson about syntax -- Perl 6 looks like it will be even hairier than Perl 5. He's more than smart enough to design a computer language; I know I couldn't do what he's done. I just don't feel like following him because I don't think he has taken the right approach, and has the wrong temperament for his task -- a belief that elegance is possible and a mathematician's urge to reduce things to their essentials seem to be called for.
LarryWall believes that humans are linguistic animals, not that computer languages are like natural languages. Mathematical language is precise, but not the way we think. Larry believes usable languages must follow the way we think, not make us think the way the language works, and I absolutely agree. Reducing is not the answer. If animals, which are a category, were all named similarly, say "ani" plus a syllable, and a cow was anico and a cock was anica, you might send someone to milk the cock by accident. Things have to be sufficiently different, even if they're similar. You'll also notice that Perl6 is all about reducing similar mechanisms to identical foundations. --
AristotlePagaltzis
Perl's statement modifiers like "unless" mean you can spend a long time reading a block of code before realising that the entire block never gets executed, because there is an "unless" at the end of it. Cause should precede effect, not come half a program later.
<Matthias> hmmm... and unless is nice in some cases... it can really help
making code readable... unless you abuse it
<Isomer> I think that sentance just proved my point
Again, just because you can code something in one way, doesn't mean you should. I personally don't use the 'unless' keyword because it doesn't flow the same way a C program would. But then again, it is not uncommon in C to see do { block } while (0); This isn't exactly crystal clear either unless you are a seasoned C programmer.
The problem is that most of the code I interact with, at least, is written by other people, or at least is modified by other people. If a program is small, then it's irrelevant; people will just tend to rewrite it instead of modifying it. I find very few Perl programs that I honestly think are 'readable' and 'clearly thought out'. This is perhaps because I'm not very experienced in the language, but compared to when I was teaching myself C, I find Perl to be much worse.
The thing that made me add this to the list was trying to debug some program that appeared to go off and do something totally unrelated, until I found an unless over a screen away which made the entire block moot. Probably it evolved from a single line with an obvious 'unless' at the end, which grew, as the case got more and more complicated, into the monster I was forced to do battle with; but it's symptomatic of the problems you end up with in Perl programs. This is probably a special case of TMTOWTDI, but it's always something I found very jarring.
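(The shape being complained about looks roughly like this contrived sketch, where the postfix unless decides everything but is read last:)

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $dry_run = 1;
my @actions;

# Statement-modifier form: a reader scanning top-down only
# learns at the very end that none of this runs.
push @actions, "deleted files"
    unless $dry_run;

# Equivalent block form: the condition is visible up front.
if (!$dry_run) {
    push @actions, "deleted files";
}

print scalar(@actions), " actions performed\n";   # 0 actions performed
```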
Perl's $_ variable saves a lot of typing, but it gives your program a large hidden dependency. Like the "more magic" switch of lore, changing something unrelated that does absolutely nothing obvious can cause your program to crash. Is it safe to insert some code in the middle of this block? Who knows! It could disrupt the fabric of space-time that this Perl program exists in.
Um, that's what local is for. --
AristotlePagaltzis
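(A sketch of what local buys here: it saves the caller's $_ on entry and restores it on exit, so a subroutine can use $_ freely without clobbering its caller.)

```perl
#!/usr/bin/perl
use strict;
use warnings;

sub shouty {
    # Without local, assigning to the global $_ here would
    # overwrite whatever the caller had in it.
    local $_ = shift;
    tr/a-z/A-Z/;    # tr operates on $_ by default
    return $_;
}

$_ = "important caller state";
my $result = shouty("hello");
print "$result\n";   # HELLO
print "$_\n";        # important caller state - still intact
```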
I dislike ...Perl (too simple, too slow and doesn't scale)
... I don't like Perl either :) --
PerryLorier
... I also think Perl sucks perhaps a couple of LoveLace, no matter what you compare it to! :) Try Python, it's great! -- zcat(1)
Perl combines the worst aspects of C, BASIC and LineNoise -- Anon
I don't like Perl either. I got about ten pages into the manual (somewhere around the "you are not supposed to understand this" part) before I started to think I was having my leg pulled. Then I discovered Python. ... --GlynWebster
Heathens and philistines, all of you! :) A language is only bad if it is not expressive enough. If it is, and Perl is plenty expressive, then it's the programmer who makes it what it is. "If it was possible to write programs in English, we would discover that programmers can't write English" --LarryWall Perl is easy to abuse because it's so expressive you can do things in any number of ways. Unfortunately, it is so easy to successfully say something the computer will understand that few people bother to think about how to say it well. Python is entirely on the other extreme of the spectrum, and I find that extraordinarily obnoxious. --
AristotlePagaltzis
But Python is an attempt at saying something well. There's elegance and forethought in the design of Python that's missing from Perl. "Saying something well" in software involves making the entire program well-structured and comprehensible to others. Perl's great number of syntactic options at the expression level don't help there. --GlynWebster
That you can change the sequence of things in a sentence - that is what my point is about. Python does not let you choose between "I want to say the same thing differently depending on the situation" vs "Depending on the situation, I want to say the same thing differently" or even "Differently do I want to say the same depending on the situation". Nor is the verbosity up to the programmer to choose in Python. I find it very frustrating how Python doesn't let me get to the point - and no, that doesn't mean my Perl code is compact and incomprehensible, au contraire. I recently
argued such a point with RandalSchwartz (who originated the term "Perl hacker"). Perl gives you the freedom to do whatever you want. The problem, as the LarryWall quote I cited hints at, is more due to the fact that, well, the majority of programmers are mediocre at best, so Perl is a dangerous tool in their hands. It doesn't constrain bad habits at all so they end up writing horrid Perl code, where Python would have forced them to follow a decent style. Why's that a problem? Because an advanced programmer who (should) no longer be in need of those tight confines can't escape them. Perl is not a language to learn programming with; it's a language to get your job done. Just like Pascal is a neat language for a beginner to start out with, but not for an expert to get things done in. --
AristotlePagaltzis
The thing with Pascal is that it was impossible to do most things. Standard Pascal didn't treat files like most newer OSes did (a stream of bytes? wazzat?), but its big failing was that the size of an array is part of its type, and there was no way to write a generic function to handle arrays of varying sizes. Since strings were a kind of array, you couldn't write a function to take a generic string. Sure, languages like TurboPascal resolved most of these issues in incompatible ways. Delphi shows that Pascal can be a nice language when "touched up".
Perl may be expressive, but the problem is that you have to maintain other people's Perl programs, and since other people's Perl programs are difficult at best to modify, you have a problem. While this may not be the language's fault, it is a problem with the language IMHO. --
PerryLorier
Exactly. What I hear sounds like contradictory logic to me:
I'm exaggerating, but as a newcomer (constantly warned about Perl but forced into it against my will, because other people who do understand it used it for some scripts they no longer maintain, probably poorly written ones at that) I want to try to vent my frustration without trolling too much.
I will be gracious and try to take the long view: Perl is very hard and doesn't make sense to newbies, but once you get it all, it makes very much sense in its own sort of way. Good Perl hackers (and there really are such people! even I would admit) and advocates are rightly defensive about their language, but may have forgotten how long and hard they worked to get there (and it feels good to stand up there looking down on those who don't know what you know). If you don't want to learn a language that takes a lot of time and energy to understand well (enough to do arguably simple things), then don't learn Perl.
The problem is, other people use Perl, so I have to whether or not I want to. That's where my frustration is coming from: things just don't work right the first time, and I have learned and used many new languages in the last 6 years that did work right the first time: Lisp, Python, Scheme, OCaml, Haskell, etc.
P.S. If LarryWall was trying to reproduce the chaos, power and ambiguity of natural languages, he hit the $nail on the $head. Kudos to Perl for being that one big loveable hateable monster. Just like the English language: it's easy if you grow up speaking it natively. --JaredUpdike
Well, I remember very well how I took my first steps with Perl. Coming from Pascal, Assembler and C, the only real clincher was understanding lists as first-class citizens. That was the one qualitative step I needed to understand the language. Ever since, learning has been purely gradual.
And yes, there are very obscure aspects of the language, most of which stem from its awk/sed heritage (like nearly all of the strange punctuation variables). I learned awk, sed and shell long after Perl, and would frequently go "oh, so that's where Perl got that from." They are very useful in oneliners, but good Perl hackers avoid them in lasting code, and they only really matter if you're maintaining a script written by a Perl hack (as opposed to Perl hacker).
The big reasons I can think of that people have trouble grokking Perl as such (rather than any particular codebase) are a) references, b) context sensitivity, and c) functional constructs.
Then again, since you've basically just said "Perl sucks, I can't read it," I can't say much more than this either.
Perl has weaknesses, but it follows the principle of Mutt: all ProgrammingLanguages suck, this one just sucks less.
If you read this page from a Perl hacker's view, you will continuously stumble over three things:
The first I have to say is that Perl is hard to learn. It is optimized for ease of use rather than ease of learning - a concept that seems to be foreign to modern computing (cf GUIs). It is not a language I would give a programming beginner to learn with, either, for exactly that reason. You don't put a greenhorn who just got his driver's license in a 200 HP sports car either, nor on the wheel of a 50 ton Mack truck. In that sense, Python fills an important gap - many people only write a few lines of code occasionally, and they're better off with a tool that guides them. If you only want to write a few lines of code occasionally, don't learn Perl. You'll only end up hurting yourself. A lot of people have, and the result is literally hundreds of thousands of awful scripts floating around on the web.
Secondly, although its C-like syntax and C heritage may have you thinking otherwise, Perl is a list oriented language. It understands arrays as a type natively - in contrast, arrays in C are syntactically glorified pointers that, as far as the language is concerned, point to a single value. All variables in C represent only a single value; it has no notion of a list. Perl is much closer in spirit to LISP than C, and indeed there's a lot of crossbreeding between LISP and Perl hackers. (I love LISP as well, myself.)
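(That list orientation shows in how naturally whole-list operations chain - a sketch:)

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @numbers = (1 .. 10);

# grep filters a list, map transforms one; both consume and
# produce whole lists, with no index bookkeeping.
my @squares_of_evens = map { $_ * $_ } grep { $_ % 2 == 0 } @numbers;

print "@squares_of_evens\n";   # 4 16 36 64 100
```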
Lastly, there's the issue of references I mentioned before. Any high level language builds complex nested structures out of simpler ones. Perl is no exception; it just makes this process explicit and in that way also lets you distinguish between the referent and the reference, where other languages conceal the latter. That said, in simple cases Perl lets you pretend the reference isn't there, and my experience is that with a certain fluency in data structure layout for Perl code, the simple cases account for 60-95% of all uses, depending on what you are doing. The dereferencing syntax for more complex cases is rather ugly and not something even Perl hackers cherish, though. (Perl6 promises to relieve this.) All that said, it isn't very difficult. With a bit of thought you should be able to achieve any kind of manipulation you wish, something languages which hide the reference from you can't always offer. If you want to learn Perl, take your time to understand (de)referencing; it is the sole key and mechanism of complexity management in Perl - it might be inconvenient to learn, but all advanced mechanisms in the language hinge on references. Once you master them, all of the language is at your command.
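(A sketch of the explicit references being described: [] and {} create anonymous array and hash references, and the arrow dereferences them.)

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A hash of arrays: each value is a reference to an anonymous array.
my %scores = (
    alice => [90, 85],
    bob   => [70],
);

# The arrow dereferences; between adjacent subscripts it may be omitted.
print $scores{alice}->[0], "\n";   # 90
print $scores{alice}[1],   "\n";   # 85

# Pushing through a nonexistent key autovivifies the inner array.
push @{ $scores{carol} }, 99;
print scalar @{ $scores{carol} }, "\n";   # 1
```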
From command line switches to the infamous $_ variable to file operations, you can let Perl do a lot of your work when you can't be bothered to spell it all out. This does not mean you always should, but makes development a lot easier. I rarely know beforehand exactly how I am going to implement a program; being able to take shortcuts while I work on it until the final form crystalizes is very convenient. These shortcuts also improve maintainability if used in appropriate places, as well - the reader of a piece of code does not have to parse a lot of explicit red tape to come to the conclusion that it's just red tape and can be ignored in the greater scheme of things. The actually meaningful bits make up more of the code and so get more attention. O.c., abusing abbrvs mks anything < rd'able; I often go back once I have settled on a certain structure and remove or add shortcuts in order to make the coder clearer.
Writing good Perl means finding a proper balance. But you can. Other languages keep you perpetually slanted.
There's no direction in Python. No really. Guido and his crew don't (didn't?) even have any idea how the new language features in the Python 2.2 to 2.3 transition should be addressed. And they're going to break old code by changing the way established data types work. (The PHP folks have been doing this forever of course and noone seems to care. Namespaces? Wazzat? (The core functions are still getting lumped into the main global namespace with name prefixes.))
Perl has a long history of organic growth as well as any complex system must, but there's always a long phase of design and rethinking involved for major changes, and they lead to a new version of the entire language rather than just its implementation. Meanwhile the Perl5 compiler will swallow most Perl3 code unchanged without complaint (the first version that reached any significance). Perl6 is breaking this tradition on the grounds that the language needs an overhaul, and that there will be both a true Perl5 compiler that targets Perl6's VirtualMachine as well as a lesser P5 compatibility mode in Perl6 itself.
to get anyone to use Perl. But nearly all of the criticism on this page hinges on the fact that people don't know Perl well enough (and I'll be so indignant to assume that the rest is only accidentally directed at real weak points because they're unavoidable). I don't see why a Python or Java or whatever advocate would write about why he hates Perl when they haven't bothered to learn it, where it would be much more logical to write about what they love about their favourite language.
2? I think a similar, pleasant feeling of busyness while coding explains some of the popularity of C. "I'm doing lots of work, I must be getting a lot done. Right?"
3? I claim that arbitrarily-nested compound data structures (lists, dictionaries/hashes and all possible nestings) are simple things. Any book on Scheme will cover them in the earliest of chapters.
4? When I say "work right the first time" I usually mean syntactically or semantically, not algorithmically or without bugs. But there have been those times when even a complex algorithm I wrote worked the first time: it happens sometimes in Python and frighteningly often in Haskell (or OCaml). And while I'm talking about Haskell, the best thing is that it DOES change the way I think, giving me newer and better and higher ways to things (I didn't even really know were possible to do!) that you really CAN'T do in Perl, despite the big-time claim that Perl let's you do anything you want, even unintended things.
Disagree. Also, I have this exact experience with Perl; I think of something, and the first time I write it down, it works. Syntactically, always; in terms of logic, about 80% of the time. Maybe that's why I am so fond of the language and why you are not. --
AristotlePagaltzis
I think I understand your point now. It's all in how you're used to looking at things and how you like to look at things. Perl is just very different, with a higher bar to entry (which many do not get over, including me!) and very different from how I like to do things.
My last jab is that I meant by "things work right the first time" that "things work right the first time the first week I've been programming in that language at all!" Truthfully, even Python did not do this for me, but OCaml (and Haskell) did: I coded Random Search Trees the first week I had ever touched OCaml (despite the fact that OCaml is very different from anything I had seen before that time) and once I got it to compile, it worked right the first time. If you coded something as subtle as Random Search Trees in Perl the first week of touching Perl and it worked literally the first time you ran the program, then I bow to you and Perl. Cheers! --JaredUpdike?.
I hate Perl. My is redundant, if I have to put my before everything, what is the point. Just let me declare something globally if I want to, rather than having to declare everything else as my. Also any language that needs to explictly import a module just to handle the arguments passed to a constructor has some problems. my $obj = Some::Thing->new(-field1 => .., -field2 => ..). Ridiculous. What is wrong with type checking on function parameters. It makes things easier really because you know exactly what is going to be passed to the function. If you want to be able to pass extra things, pass a list or a hash containing these extra things. Foreach, how am i supposed to know if i'm on the last iteration. The only good thing is the regex facilities.
Another good reason to hate perl is the arrogance and stupidity of the people writing the texts. "Real perl programmers don't use indexes, they use push and pop." F**k off, sometimes it's useful to use an index. Sometimes I need a for(my $i;$i <= $#array;$i++). Does this mean i'm not a real perl programmer.
One page links to WhyIHatePerl:
lib/main.php:944: Notice: PageInfo: Cannot find action page | http://wiki.wlug.org.nz/WhyIHatePerl?action=PageInfo | CC-MAIN-2015-18 | refinedweb | 5,165 | 69.41 |
Pt-C-1
Dave says:The simplest solution is to add
<%= Time.now %>
somewhere in the sidebar div. However, it’s better form to set an instance variable to the time in the controller’s action, and then to use that instance variable in the view.
Q/ Why is it better to set a variable in the controller’s action over a simple
<%= Time.now %>?
A/ Because it would become easier to edit when it comes time to deal with localization.
Q/ But this is so ugly why not use
<%= Time.now.strftime("%I:%M %p") %>?
A/ By coding this into the view, you’ve made a policy (business) decision in the view (because you’ve decided what it means to be the current time). Then the company says “All servers must run on GMT” and all the time display is messed up. Or maybe you want to display the time in the user’s timezone. Whatever the potential issue, experience shows that it’s generally better to derive the data to display outside the view.
Ken says:
To elaborate on Dave’s ‘better form’ (and knowing that it hasn’t been introduced yet but something to spark some investigation):
before_filter :prepare_time_for_display def prepare_time_for_display @current_time = Time.now end
Something similar to this should go in your ApplicationController? (application.rb) and then use @current_time in the view. This makes it so that its available for all layouts. If its not needed in other layouts then pull this down into the specific controller needed.
Donovan says:
Thanks for the guidance Ken. The ‘before_filter’ approach is greek at this stage for me. So I used the following. Does it violate any of Dave’s guidelines?In store_controller.rb:
class StoreController < ApplicationController def index @products = Product.find_products_for_sale @current_time = Time.now end end
In store layout file (store.rhtml):
<div id="columns"> <div id="side"> <%= @current_time.strftime("%B %d %Y,") %> <%= @current_time.strftime("%I:%M %p") %><br /><br /> <a href="....">Home</a><br /> <a href="">Questions</a><br /> <a href="">News</a><br /> <a href="">Contact</a><br /> </div> ...
Pete says:
After putting the data and time in the sidebar it was barely readable. So I added color: #fff; to the end of #side in depot.css. Now any plain text in the side bar will be white. Here’s the whole style:
#side { float: left; padding-top: 1em; padding-left: 1em; padding-bottom: 1em; width: 14em; background: #141; color: #fff; }
Nick says:
Instead of putting the formatting of the time into the view, you can place the formatting right into the controller. Unless it was your intent to have a specific date/time format in the sidebar.For example in store_controller.rb:
def index @products = Product.find_products_for_sale @current_time = Time.now.strftime("%Y-%m-%d %H:%M:%S") end
And in /store/index.rhtml
<div id="side"> <%= @current_time %><br /><br /> <a href="....">Home</a><br /> <a href="">Questions</a><br /> <a href="">News</a><br /> <a href="">Contact</a><br /> </div>
GarryFre? says:
The above file name and location is incorrect, the code Nick refers to is actually at /depot/app/views/layouts/store.rhtml. I did not want to directly edit Nick’s comments because for one thing, I’m a newbie at ruby, and I felt it his right to correct it not me. I would also like to add that the time code here, is NOT in the final code printed in the second edition of the book, as demonstrated by screen shot on page 104 and its absence in the final code. I would like to see some documentation about good ways to do many of the playtime stuff. So far there is some vagueness about how to best do an assignment and I know from experience, that just guessing around is only a way to create bad programming habits through ignorance.
Billy says:
Donovan, your code is flawed. It puts the definition of what time is into the view like Dave said (and the conventions say) not to. :)
Patrick says:
I did it the following way:
In store_controller.rb:
class StoreController < ApplicationController def index @products = Product.find_products_for_sale @time = Time.now end end
In store layout file (store.rhtml):
<div id="columns"> <div id="side"> <a href="....">Home</a> <br /> <a href="">Questions</a> <br /> <a href="">News</a> <br /> <a href="">Contact</a> <br /> <a href="....">Reload for current time: <%= @time %></a> <br /> </div> </div>
That way I felt the user could click the time as a link that would just reload the page and display a new time and she would know what it was the time represented a bit more. (Plus I wasn’t sure how else to get it in white! :) )
Does anyone know a more direct way of getting the page to refresh with a link like that? Could we just use link_to somehow? I tried putting the @time in the link_to directly but that really doesn’t want to work:
<%= link_to 'Reload for current time: <%= @time %>', :action => 'index' %>
That really doesn’t work but I feel there must be a way to do that. Any thoughts?
Martin says:
Try this:
<%= link_to "Reload for current time: #{@time}", :action => 'index' %>
Note:
I use double quotes (“) instead of single quotes (‘) and enclosed the variable within ”#{…}”. The “ruby-escape-tags” (‘<=’ and ‘>‘) have no special meaning within a string. Remember: these tags enclose ruby-code within html (→ rhtml). so you are already in ruby code. since this is in a general template for the controller ‘store’ it is probably not a good idea to place a static argument :action (‘index’). it should point to the current action, but i don’t know (yet) what variable, method or function holds/returns that value.
Moritz says:
Ken’s solution with the ‘before_filter’ thing definitely works best and, even if I’m a total Rails newbie, feels like it’s the cleanest to me, because the view is free from logic or ‘business decision’ stuff but still I can get a nicely formatted date and time in all my controllers. And I asked myself already how to implement stuff into a Rails application which is available in more then one controller. Now I have an idea how to do this. Thanks Ken!
Steve says:
Following the most common answer (creating the time variable in the controller, and then calling @time in the view) works for the store page…
however, following into the next section of the book, when we are tasked with adding items to a cart (/store/add_to_cart/) the left hand portion of the screen doesnt retain the call to @time… any suggestions?
Tom says:
Take a look at Ken’s solution – he puts the @current_time = Time.now in the application controller, rather than in the store_controller so that it is accessible by all views, not just by index.rhtml
Steve says:
Thanks Tom. Unfortuantely, I’m just not getting the subtle logic implied in Ken’s example. I understand the addition to the application.rb file (and its impact on all subsequent pages), but the line directly above (before_filter :prepare_time_for_display)… well, i’m just missing the point, and how it is actually implemented.
Raul says:
In answer to Steves’s question (that was also mine) about Ken’s solution: the book has a good explanation although it requires to jump to p.447 with section “Before and After Filters”. In summary:
When you have one or more methods that you want to be executed prior any action of some controllers, you usually write them in the ApplicationController? (for example, a method ‘authenticate’). Then in the relevant Controllers (eg, in portions of the site where you want to authenticate users) you specify which methods you want to execute before any action (eg, before_filter :authenticate) of the controller. There is a lot more about Filters; a good place to look is the code itself; in your gems path (eg /usr/local/lib/ruby/gems/1.8/gem), see file actionpack-1.13.3/lib/action_controller/filters.rb. The introductory comment is very interesting (eg, showing how filters can cascade through an inheritance chain, with an example of ‘BankController?’ and ‘VaultController?’). To know ‘how this is implemented’, one needs to read the code, although it requires to know Ruby in some depth (lambdas, code blocks, and some metaprogramming); not inmediate for anyone coming from compiled languages. The basic idea of Filters is however clear: add pre-processing and post-processing layers transparently to the rest of the application (using the metaprogramming tricks dear to Ruby).
Ken says:
I’m glad my comments sparked some people to do a little further digging.
I saw a couple of questions that I can probably answer (and thanks to some of those who already threw out answers).
As to why I would use a Filter: That allows the @current_time variable, which is used in the layout presumably, to be set no matter what controller you are working in. Basically it makes it so that you don’t have to put @current_time = Time.now into every single method of every controller you create. That would be a huge waste of time and a nightmare to maintain. By doing it the way I suggested its what we call DRY.
As to Donovan’s question about is his implementation wrong or right. Your answer is perfectly valid. However once you started to expand into other controllers I’m sure you would see that it become very cumbersome and be looking for a more elegant solution. Perfect time to refactor. See below for the progression of refactoring.
As to what exactly is a filter, Raul gave a good pile of information. It is a way to perform some work before and after a method in a controller is invoked. In my case my code was saying “Before you run this method in any controller, please set a variable called @current_time to the current time.” That variable is then available in my view, just as if I had manually done it like Donovan suggested.
Progression of Refactoring:
First cut – Code @current_time = Time.now into the StoreController? index method. Second cut – Realize you need that information througout the StoreController?, not just on the index method. Put the code into a helper method and call it with before_filter :prepare_time_for_display Third cut – Realize that you are using a global layout (not just one for StoreController?) and that layout also needs the @current_time variable set. Move the those calls to the ApplicationController? and be done with it all. cheers -ken
I wanted to add a different spin – using ajax to test this out so, in my index page I added:
<div id="time"> <%= @current_time %> </div> <div id="ajax_update"> <%= periodically_call_remote(:url =>{ :controller => "store", :action => "current_time"}, :frequency => 1) %> </div>
and created an rjs for the controller with this:
page.replace_html "time", Time.now.strftime("%m-%d-%Y %H:%M:%S")
then added this in the store controller:
def current_time end
the ‘periodically_call_remote was just to see if I could do an ajax call without the user having to press a button. This is NOT the right way to do a clock (a javascript timer would be correct) but it was just to show you can do dynamic content update without the user’s input – a really neat trick in Rails….
-Don
the @current time was using the ‘application’ controller approach.
Scott says:
Here’s an example for 2.1:config/environment.rb
store_controller.rbstore_controller.rb
# ... config.time_zone = 'Central Time (US & Canada)'
store.html.erbstore.html.erb
# ... def index @products = Product.find_products_for_sale @current_time = Time.zone.now.strftime("%c") end
# ... <%= yield %> </div> <span id="timestamp"><%= @current_time %></span> </div> </body> </html>
#side a, #timestamp { color: #bfb; font-size: small; } #timestamp { float: left; padding-left: 1em; margin-top: -20px; }
And here’s a [* table for strftime].
Nick C says: Since it’s in the sidebar and part of the layout, not necessarily the action, is it better to put it in the initialize method, like so? That would ensure that other actions that use the same layout also retain the same sidebar.
h4. Peetah says: I’ve tried it several ways, just toying around. I came up with the following and although it’s not pretty, it came out the way I want it.h4. Peetah says: I’ve tried it several ways, just toying around. I came up with the following and although it’s not pretty, it came out the way I want it.
def initialize super @time = Time.now end
<%= Time.now.to_s(:long) + Time.now.strftime(" %p") %>
Maybe someone could explain why or why not use this beside the use of Time.now twice.
PeterC says: Peetah, I guess the best answer would be that it’s not particularly DRY. You might just as well use
<%= Time.now.strftime("%b %d, %Y %H:%M %p") %>
Even this, however, might be considered problematic. For one thing, since you have 24 hour time through the %H parameter, there is no need to specify AM/PM with the %p parameter. Instead you can simplify your expression by taking out the %p, or else changing the %H to %I to give you a 12 hour clock.
Personally, I’d prefer taking the code out of the view entirely, as others have mentioned above – though I would rather create helper methods which could reside in helpers/application_helper.rb. You could create a set of various time/date helper methods with easy-to-remember names, which you then call wherever you need them. For example:
module ApplicationHelper T = Time.now def time_24_hour T.strftime("%H:%M") end def full_date Date.today.to_s(:long) end def simple_date_and_time T.to_s(:short) end def chatty_time_and_date "It's " + time_24_hour + " on " + full_date end
Then you can insert an appropriate time-stamp in any of your views with a simple method call:
<%= chatty_time_and_date %>
and you’d have an output something like:
It's 17:04 on January 09, 2009
Page History
- V19: David Hislop [almost 2 years ago]
- V18: David Hislop [almost 2 years ago]
- V17: David Hislop [almost 2 years ago]
- V16: Dennis Sutch [about 3 years ago]
- V19: Warren Bain [about 4 years ago]
- V18: Jakub Tuček [over 4 years ago]
- V17: Jakub Tuček [over 4 years ago]
- V16: Jakub Tuček [over 4 years ago]
- V15: Jakub Tuček [over 4 years ago]
- V14: Jakub Tuček [over 4 years ago] | https://pragprog.com/wikis/wiki/Pt-C-1/version/1 | CC-MAIN-2016-50 | refinedweb | 2,389 | 63.7 |
C# Fundamentals for Absolute Beginners: (09) for Iterations
In this lesson, we talk about arrays, which are multi-part variables—a "bucket" containing other "buckets," if you will. We demonstrate how to declare and utilize arrays, and we demonstrate a couple of powerful built-in methods that give arrays added features.
Full course outline:
kinda hard to follow
please give numerical examples and explain step by step
you talk way too much from a developer point of view which is extremely hard to grasp for someone without programming experience
Sir, I don't understand the function of "in names" in foreach statement in line 34. Please explain it.
thanks a lot~ it's helpful for me. I do like these series.
Hi Bob, first thanks for the video. it really is good.
One thing in this session, when doing charArray, the line Array.Reverse(charArray); has error with red line on Reverse.
The error is the type or namespace name 'Reverse' does not exist in the namespace 'Array'. Can you help? I have checked the commands should be correct
Hey Bob,
First of all i wanna thank u for doing this, for a lot of ppl like me, who absolutely zero experience in programming this is something of incredible value.
What i wanted to ask (to get a good grasp on each lesson) is there any way u can add some exercises with answers after lessons? Just so we use learned knowledge of lesson and apply it to different task before we move on to next lesson?
Ty again!
Hey Bob. These videos have been great so far, but I've ran into an issue that I've been unable to solve. When I run the following code -
string[] names = new string[] { "Anna", "Kalle", "David", "Sara" };
foreach (string name in names)
{
Console.WriteLine(names);
}
Console.ReadLine();
I get the following result:
System.String[]
System.String[]
System.String[]
System.String[]
For what reason might this be? All help is appriciated :)
I resolved the issue!
It should be "Console.WriteLine(name);" and not "names". Yesterday I figured that the type "name" previously in the code was some kind of command, not a variable. I know realize it's just another "bucket". The code presented now makes much more sense. Great videos Bob!
thks alot for your time, Bob.
Dear Bob,
No doubt its an amazing and one of the best tutorial I have ever found.
just did not understand FOREACH function. why and when should we use it?
thank
@arifuddin: It is an elegant way to iterate through an array or collection, one iteration for each item in the array or collection. This allows you to inspect / write logic that involves each item in the array / collection. You will need this often ... you will frequently be working with "sets" of data (collections, arrays) and will want to search through the data for one item you're looking for, or perform some operation / logic on every item in the array / collection. Once we get into classes / objects / collections you'll see some practical applications. :)
How to fully grasp these concepts.I mean there must be some sort of examples.Secondly arrays have many other things in them like multidimensional arrays etc.How to learn them?
Dear Bob,
I Love the tutorial and I wanted to let you know that your Lost reference was noticed and appreciated! Thank You for doing these videos.
Van Halen? Nice.
I have a mistake in line
Console.Write(varChar);
until i change it into
Console.Write(charArray); only after this the code worked perfectly.
Like the Lost reference xD
Good videos btw really learned from them
Hay Bob,
I have a question about lesson 10, to be precise using new int or new string as a part of this line:
string[] names = { "Anna", "Kalle", "David", "Sara" }
what is its purpose, I have checked it works without it.
Thanks
Very good videos....
@DVelis - if you haven't found out yet, you use the new operator when you don't want the variable/object/whatever to be destroyed when it goes out of scope. If you created a variable in a for loop, for example, and want it to be accessed after the for loop completes, then you would declare it using the new operator. Source:
For those having issues like philliphs ("error with red line on Reverse."), Try changing the following line:
Array.Reverse(charArray);
with
System.Array.Reverse(charArray);
Source:
Who else has got the Lost reference?
When I read the first comment on this video, I couldn’t believe it - the person said you “talk too much”.... Just goes to show we all have different learning styles. I have watched a number of other other C# tutorials and I prefer your teaching method.
I appreciate the overview and detail that you provide.
And it’s free! (The other courses I paid for).
Thanks for the videos Bob;
I'm an old programmer, from way back to the Hex decimal days and Basic, I thought in my spare time I would re-educate my self and do some simple programming for family and friends, your videos are a welcome addition and a good resource in getting me back into the world of programming, Thank You !! | https://channel9.msdn.com/Series/C-Fundamentals-for-Absolute-Beginners/10 | CC-MAIN-2021-10 | refinedweb | 877 | 73.98 |
XML and Database Mapping in .NET

For data access, .NET provides classes like the SqlConnection, SqlCommand, and DataReader (analogous to the Java Connection, PreparedStatement, and ResultSet, respectively). The classes in this namespace, System.Data.SqlClient, whose names all begin with Sql, are meant to work with Microsoft's database of choice, SQL Server.
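For comparison with the JDBC idiom, here is a minimal sketch of the connected style of access these classes support. The connection string and table name are hypothetical, and this assumes a reachable SQL Server instance, so treat it as an illustration rather than a runnable part of this article's program:

```csharp
using System;
using System.Data.SqlClient;

public class ConnectedExample
{
    public static void Main()
    {
        // Hypothetical connection string; adjust for your server
        string connString =
            "Server=localhost;Database=DogShow;Integrated Security=true";

        using (SqlConnection conn = new SqlConnection(connString))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand("SELECT Id, Name FROM Show", conn);
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())   // like JDBC's ResultSet.next()
                {
                    Console.WriteLine("{0}: {1}",
                        reader.GetInt32(0), reader.GetString(1));
                }
            }
        }
    }
}
```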
You'll recall that in previous articles I used a dog show to demonstrate generating classes in Java and C# from an XML schema. This time, since we're building a database, the dog show will require a database schema with tables for shows, breeds, dogs, judges, show rings, and judging assignments.
Normally if you were building a database like this, you would
use SQL or have access to a graphical entity relationship
modeling tool. With the .NET
DataSet class,
however, you can build a database schema dynamically. The
following code demonstrates one way to do that:
using System; using System.Data; public class MakeDataSet { public static void Main(string [] args) { DataSet dataSet = new DataSet("DogShow"); // create Show table DataTable showTable = dataSet.Tables.Add("Show"); DataColumn showIdColumn = showTable.Columns.Add("Id", typeof(Int32)); showTable.Columns.Add("Name", typeof(String)); showTable.PrimaryKey = new DataColumn [] {showIdColumn}; // create Breed table DataTable breedTable = dataSet.Tables.Add("Breed"); DataColumn breedIdColumn = breedTable.Columns.Add("Id", typeof(Int32)); breedTable.Columns.Add("Name", typeof(String)); breedTable.PrimaryKey = new DataColumn [] {breedIdColumn}; // create Dog table DataTable dogTable = dataSet.Tables.Add("Dog"); DataColumn dogIdColumn = dogTable.Columns.Add("Id", typeof(Int32)); dogTable.Columns.Add("Name", typeof(String)); DataColumn dogBreedIdColumn = dogTable.Columns.Add("BreedId", typeof(Int32)); dogTable.PrimaryKey = new DataColumn [] {dogIdColumn}; // create foreign key relationship dataSet.Relations.Add("DogBreed", breedIdColumn, dogBreedIdColumn); // create Judge table DataTable judgeTable = dataSet.Tables.Add("Judge"); DataColumn judgeIdColumn = judgeTable.Columns.Add("Id", typeof(Int32)); judgeTable.Columns.Add("FirstName", typeof(String)); judgeTable.Columns.Add("LastName", typeof(String)); judgeTable.PrimaryKey = new DataColumn [] {judgeIdColumn}; // create ShowRing table DataTable showRingTable = dataSet.Tables.Add("ShowRing"); DataColumn showRingIdColumn = showRingTable.Columns.Add("Id", typeof(Int32)); showRingTable.Columns.Add("Name", typeof(String)); showRingTable.PrimaryKey = new DataColumn [] {showRingIdColumn}; // create Judging table DataTable judgingTable = dataSet.Tables.Add("Judging"); judgingTable.Columns.Add("ShowTime", typeof(DateTime)); DataColumn judgingBreedIdColumn = judgingTable.Columns.Add("BreedId", typeof(Int32)); DataColumn judgingJudgeIdColumn = judgingTable.Columns.Add("JudgeId", typeof(Int32)); DataColumn judgingShowRingIdColumn = 
judgingTable.Columns.Add("ShowRingId", typeof(Int32)); DataColumn judgingShowIdColumn = judgingTable.Columns.Add("ShowId", typeof(Int32)); // create foreign key relationships dataSet.Relations.Add("JudgingBreed", breedIdColumn, judgingBreedIdColumn); dataSet.Relations.Add("JudgingJudge", judgeIdColumn, judgingJudgeIdColumn); dataSet.Relations.Add("JudgingShowRing", showRingIdColumn, judgingShowRingIdColumn); dataSet.Relations.Add("JudgingShow", showIdColumn, judgingShowIdColumn); } }
I mentioned that the
DataSet class can represent a
disconnected view of a database. In fact, as in this case, a
DataSet doesn't really need a database backing it
at all. You can even think of the XML output we're producing as
the database, although it lacks all the usual ACID properties of
a relational database. Still, it makes for a convenient
demonstration of some of the XML functionality.
Having created the
DataSet for the dog show, you
could now go on and populate it with data using code something
like this:
DataRow show = showTable.NewRow();
show["Id"] = 1;
show["Name"] = "O'Reilly Invitational Dog Show";
showTable.Rows.Add(show);
That works well enough, although it's a lot of work, and there's
nothing being done at compile time to guarantee that you
actually have a column named "Id" in the
DataTable. It would be much better if you could
access tables and columns in a more type-safe way. Writing the
code to do that could be a lot of work, but you'd only have to
do it once. Actually, the .NET framework can do a lot of the
heavy lifting for you. In addition to the work you saw it do in
the last article, the
xsd tool can generate a
subclass of
DataSet for a specific W3C XML Schema.
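To make the compile-time gap concrete, consider this small, self-contained sketch (the misspelled column name is deliberate): the string indexer compiles no matter what name you pass it, and a typo only surfaces as an exception at run time.

```csharp
using System;
using System.Data;

public class IndexerPitfall
{
    public static void Main()
    {
        DataTable showTable = new DataTable("Show");
        showTable.Columns.Add("Id", typeof(Int32));

        DataRow row = showTable.NewRow();
        try
        {
            row["Nmae"] = 1;   // the typo compiles fine...
        }
        catch (ArgumentException e)
        {
            // ...but only fails here, when the program runs
            Console.WriteLine("Caught: " + e.Message);
        }
    }
}
```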
But first you need a schema. The
DataSet class has
a
WriteXmlSchema() method which creates an W3C XML
Schema from the
DataSet. You simply need to add a
line to the end of the
MakeDataSet program to write
the schema to a file:
dataSet.WriteXmlSchema("DogShow.xsd");
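As a quick aside, ReadXmlSchema() performs the inverse operation, rebuilding the table structure from a schema file. This sketch is not part of the article's program; the one-table DataSet and file name are made up for illustration:

```csharp
using System;
using System.Data;

public class SchemaRoundTrip
{
    public static void Main()
    {
        // Build a one-table DataSet and write its schema out
        DataSet original = new DataSet("DogShow");
        DataTable breed = original.Tables.Add("Breed");
        breed.Columns.Add("Id", typeof(Int32));
        breed.Columns.Add("Name", typeof(String));
        original.WriteXmlSchema("BreedOnly.xsd");

        // ReadXmlSchema rebuilds the same structure from the file
        DataSet copy = new DataSet();
        copy.ReadXmlSchema("BreedOnly.xsd");
        Console.WriteLine(copy.Tables["Breed"].Columns.Count);  // prints 2
    }
}
```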
Now you can generate a
DogShow class with the
following command line:
xsd /dataset DogShow.xsd
The resulting code, placed by default in the file
DogShow.cs, is quite lengthy, and I won't include
the source listing here (you can read it here). Suffice it to say that
xsd has generated a class which is a subclass of
DataSet, which you can use to directly access
tables and columns. It's quite convenient.
Here's the source code for a program which uses the generated
class to create an instance of
DogShow and add data
to it. After that, it writes the data to an XML file.
using System; using System.Data; public class DogShowWriter { public static void Main(string [] args) { DogShow dogShow = new DogShow(); // insert a show row DogShow.ShowRow show = dogShow.Show.AddShowRow(1, "O'Reilly Invitational"); // insert a breed row DogShow.BreedRow breed = dogShow.Breed.AddBreedRow( 1, "English Springer Spaniel"); // insert a couple of dog rows dogShow.Dog.AddDogRow( 1, "Wil-Orion's Angus Highlander", breed); dogShow.Dog.AddDogRow( 2, "Len-Lear's Webmaster", breed); dogShow.Dog.AddDogRow( 3, "Ch. Sallylyn's Condor", breed); // insert a judge row DogShow.JudgeRow judge = dogShow.Judge.AddJudgeRow(1, "John", "Smith"); // insert a show ring row DogShow.ShowRingRow showRing = dogShow.ShowRing.AddShowRingRow(1, "Ring 1"); // insert a judging row DateTime judgingTime = new DateTime(2002,10,20,14,00,00); dogShow.Judging.AddJudgingRow(judgingTime, breed, judge, showRing, show); // write the data dogShow.WriteXml("DogShow.xml"); } }
You can see that with the generated
DogShow class,
it's much easier to write and debug your code, for a couple of
reasons. First, since the compiler will warn you when you
reference a table or column that doesn't exist, you're much less
likely to run into runtime errors. Second,
xsd has
generated constructors for each table's rows, so you don't need
to go through the tedious effort of referencing each column
through the row's indexer property.
You should note the last line of code, where I've called
dogShow.WriteXml("DogShow.xml");. It creates an XML
file representing the
DataSet that I just created
and filled with data. Here's the file:
<?xml version="1.0" standalone="yes"?>
<DogShow>
  <Show>
    <Id>1</Id>
    <Name>O'Reilly Invitational</Name>
  </Show>
  <Breed>
    <Id>1</Id>
    <Name>English Springer Spaniel</Name>
  </Breed>
  <Dog>
    <Id>1</Id>
    <Name>Wil-Orion's Angus Highlander</Name>
    <BreedId>1</BreedId>
  </Dog>
  <Dog>
    <Id>2</Id>
    <Name>Len-Lear's Webmaster</Name>
    <BreedId>1</BreedId>
  </Dog>
  <Dog>
    <Id>3</Id>
    <Name>Ch. Sallylyn's Condor</Name>
    <BreedId>1</BreedId>
  </Dog>
  <Judge>
    <Id>1</Id>
    <FirstName>John</FirstName>
    <LastName>Smith</LastName>
  </Judge>
  <ShowRing>
    <Id>1</Id>
    <Name>Ring 1</Name>
  </ShowRing>
  <Judging>
    <ShowTime>2002-10-20T14:00:00.0000000-04:00</ShowTime>
    <BreedId>1</BreedId>
    <JudgeId>1</JudgeId>
    <ShowRingId>1</ShowRingId>
    <ShowId>1</ShowId>
  </Judging>
</DogShow>
This file can also be read back into a
DataSet --
or into an instance of the generated
DogShow class
-- with the
ReadXml() method.
So now we've got an XML file which represents the O'Reilly Invitational Dog Show. Unfortunately, it's not the same as the one in the last article, which was more tree-oriented. The XML generated here is more table-oriented, which is entirely appropriate, given that we are talking about a relational database. It would be nice to be able to transform the original tree-oriented XML into the new table-oriented schema. Seems like there ought to be a tool to do that.
Thanks to the fact that all of .NET's XML classes are tightly
integrated, you can use XSLT to transform a document on disk
into an
XmlDocument in memory, and then pass that
to another class. So, given an appropriate XSLT stylesheet
named "DogShow.xsl", the following code will read it right into
the
DataSet:
using System;
using System.Data;
using System.Xml;
using System.Xml.Xsl;

public class DogShowTransformer
{
    public static void Main(string [] args)
    {
        // Create a DataSet instance
        DogShow dogShow = new DogShow();

        // Read the data
        XmlDocument document = new XmlDocument();
        document.Load("DogShow.xml");

        // Read the stylesheet
        XslTransform xslt = new XslTransform();
        xslt.Load("DogShow.xsl");

        // Transform the data
        XmlReader reader = xslt.Transform(document, null);

        // Load the DataSet
        dogShow.ReadXml(reader);
    }
}
DataSet does quite a bit more than I can cover
here. For example, I mentioned its ability to keep track of
changes while disconnected from the database. The
DiffGram is responsible for this ability. The
DiffGram is itself an XML document, and it is used
to serialize
DataSets across a web service. The
DiffGram consists of three sections: one containing
the current data,
<DataInstance>; one containing
any changes to the
DataSet,
<diffgr:before>; and one listing any errors,
<diffgr:errors>. Any changes in the
<diffgr:before> section and errors in the
<diffgr:errors> section are related back to the
data in the
<DataInstance> section by the
diffgr:id attribute.
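Putting those three section names together, the overall shape of a DiffGram looks roughly like this (a schematic sketch only; the namespace declarations and row-level change attributes a real DiffGram carries are omitted, and the Dog rows are purely illustrative):

```xml
<diffgr:diffgram>
  <DataInstance>
    <!-- current data: the Dog row as it stands now -->
    <Dog diffgr:id="Dog1">
      <Id>1</Id>
      <Name>Wil-Orion's Angus Highlander</Name>
    </Dog>
  </DataInstance>
  <diffgr:before>
    <!-- the original version of any changed row, matched back by diffgr:id -->
    <Dog diffgr:id="Dog1">
      <Id>1</Id>
      <Name>Old Name</Name>
    </Dog>
  </diffgr:before>
  <diffgr:errors>
    <!-- any row-level errors, again related back by diffgr:id -->
  </diffgr:errors>
</diffgr:diffgram>
```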
I haven't touched on the
XmlDataDocument class yet,
either. This class enables you to synchronize multiple views
of the same
DataSet, much as I demonstrated with
XSLT earlier. (The main reason I did not use
XmlDataDocument is that it depends on the names of
elements being the same in both schemas, so that the elements
can be mapped to each other properly. As you will recall, in the
previous article, the XML schema had different element names.)
There's still a whole lot more to be covered, including the
SqlCommand's
ExecuteXmlReader()
method, which returns an
XmlReader instance you can
use to read data from a database with the regular .NET XML
tools.
I continue to be impressed with the extent of .NET's XML integration. All the major technologies are represented, and Microsoft has used XML in innovative ways throughout the framework. I believe that the Java XML community can certainly learn a thing or two from .NET.
One of the challenges of comparing the handling of XML in Java and C# is the fact that XML processing is not built into the Java class library. When deciding what features to compare, I've often resorted to searching the Web to find comparable Java tools for features that are a standard part of the .NET Common Language Runtime.
In my last article, I wanted to compare XML databinding in Java and C#. I knew that the .NET CLR contained excellent support for generating classes from an XML Schema, and I knew that Castor also supported databinding. As I examined Castor, I saw that it also supported binding Java objects to relational databases using something called JDO, and I put this fact aside to investigate in a future article.
Unfortunately, I forgot to read the fine print:
Does Castor JDO comply with the SUN JSR-000012 specification?
No, Castor JDO doesn't comply with the SUN's JDO specification.
Although Castor JDO carries very similar goals as SUN's JDO, it has been developed independently from the JSR.
And this pretty much tells the story when it comes to support for XML in Java. There are many implementations; some predate the applicable standards, and some conform to the standards.
XML.com Copyright © 1998-2006 O'Reilly Media, Inc. | http://www.xml.com/lpt/a/1054 | CC-MAIN-2014-35 | refinedweb | 1,865 | 50.63 |
The Sprite Editor described in one of my previous articles. This time, I've taken some liberty in creating the sprites for the game, and have modified the map just for the hell of it, but it's still essentially the same game..
So, if all you want to do is reminisce about an old favourite, have a look at this, but if you think way-points and a mini CD-map can help your code, then you have an excuse to do both. I didn't have a copy of Who Framed Roger Rabbit? handy, so I didn't get any sound-bytes, and though I could have tried YouTube, we'll just call that an excuse to leave Roger at home and take Jessica out instead. He was always a third wheel anyway.
Aside from the new characters and the fact that the player and the badder baddies can shoot diagonally, it's pretty much exactly the way I remember it. So go ahead, and try not to have fun playing it.
The idea of way-points is one that I've described in passing in an article called Battlefield Simulator.
The way-points exist, at this point, solely on an image, and must be transferred into memory by first scanning the entire bitmap until we've found all the red dots and have a complete array of all the way-points in the game. Next, the 'rebuild' step works out each way-point's 'neighbours' and the shortest path to each of them:
public struct udtShortestPathToWayPoint
{
public classWayPoint wp;
public int index;
public string strPath;
}

For any trip from A to B, it would tell you where to go from wherever you are now.
The integer values contained in the nextWPTable[,] array in Night Stalker refer to the indices of the current way-point's neighbor list. Consider the code taken from classRobot() of the Night Stalker game where the robot needs to find the path to the correct way-point along his route to his final destination.
This sample code demonstrates two references to the table. The first sets the path described in the "N", "E", "S", "W" directions, listed in a string field called strPath. The WPnext field of the udrMove structure is of type classWayPoint and this way-point's neighbors list is being referenced using an index gotten from nextWPTable[,] using WPnext.index (the WPs[] array index of the way-point we're looking at) as the column component of the 2-dimensional array, and WPtarget.index (the destination way-point's index in the WPs[] array) as the row component for the same 2-dimensional nextWPTable. This string tells the robot what path to travel, pixel-by-pixel, to get to the next way-point along its way to the ultimate target destination.
The second reference sets a temporary WPNext to the same intermediate way-point along the route, which will now be the way-point that subsequent passes through this bit of code for this robot will use.
To further simplify the example, each way-point's index is used as its 'name'.
The blue example starts at WP #1, and wants to go to WP #5, so the AI looks at the table entry (1,5) and reads '4', which means that it will first have to travel through WP #4 on its way to #5. Then, when on WP#4, the AI cross references current (4) with target (5), and looks at table entry (4,5) to read that '5' is the next and final way-point. Similarly, for the orange example which starts at 2 and wants to get to 6, the AI looks at (2,6) and reads '3', then looks at (3,6) and reads '6'.
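The walk those two examples describe can be sketched in a few lines of Python (an illustration of the table idea only, not code from the game; the table entries below mirror the blue and orange examples):

```python
# next_wp[a][b]: the first way-point to head for when travelling
# from way-point a toward way-point b (way-points named by index).
next_wp = {
    1: {5: 4},   # from 1 toward 5: go to 4 first
    4: {5: 5},   # from 4 toward 5: 5 is next (and final)
    2: {6: 3},   # from 2 toward 6: go to 3 first
    3: {6: 6},   # from 3 toward 6: 6 is next (and final)
}

def route(start, target):
    """Walk the table hop by hop until the target is reached."""
    path = [start]
    current = start
    while current != target:
        current = next_wp[current][target]
        path.append(current)
    return path

print(route(1, 5))  # [1, 4, 5] -- the blue example
print(route(2, 6))  # [2, 3, 6] -- the orange example
```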
The size field (sz) describes, in pixels, how large the object is to be represented in the miniaturized collision detection map, and the point field (ptRelPos) describes the position of the object's center relative to the top left corner of this rectangle (sz). Here is a typical example:
udrCDInfo.ptRelPos = new Point(4, 4);
udrCDInfo.sz = new Size(7, 14);
which places Jessica's center at (4,4) on a rectangle of size (7,14). The options to view these while running the debugger are in the formNightStalker's constructor:
public formNightStalker()
where cLibCD.bolDrawCDInfo draws yellow rectangles around all bullets, robots, and Jessica, while the call to cLibCD.drawCDOnMap(ref Map) draws small yellow 5x5 pixel squares where collision squares appear on the mini bmpCD map.
The other colors which the CDmap includes are for 'snares' which only cause collisions during a brief period after Jessica has woken up from having been put to sleep by the bats or spider and is 'groggy' for a time. These are painted 'green' by the drawCDOnMap() function.
Here's a zoomed partial view of the CD map.
The only real difficulty in creating and using this collision detection scheme was in writing the 'bunker-buster' bullets, which the higher-up and more deadly fiends fire at Jessica. I wasn't sure how I was going to do this, but then decided on using separate colors to describe the bunker's collision pixels. They can be seen in the bmpCD 'roof' at the top of the bunker.
DamageBunker() is where it all happens. The resources directory has a mini bunker wall image, a game-sized bunker image, and at run-time, we have a bmpCD copy of the mini CD map. At the start of damageBunker(), the location of the damage is known, so the only thing left to do is to actually damage the bunker. This is quite easy: fill a white ellipse of some preset size on the spot the collision occurred, and move on.
Now, knowing where the collision has occurred, we do the current bullet's damage on this copied and uniform colored map, by filling a white ellipse at that location, coloring the damage 'white'.
This bunker now needs to be used for two reasons: drawn onto it. This fill rectangle can be of any color as long as the entire mask bitmap uses the same color, which I'll call 'clrBunkerInterior' here. Then, when this bitmap is complete, it calls MakeTransparent() with the same clrBunkerInterior color to replace the default 'white', making this light-grey color transparent so that when it is subsequently painted over top of a copy of the complete undamaged game-sized bunker image, only the white gets drawn, and we have a game-sized image of the damaged bunker.
This game-sized damaged bunker image is made transparent with white as the transparent color, during every game-cycle's animation call, so that we only see whatever is left, and the white 'hole'.
I'm not sure how to deal with the 'LoaderLock'.
Jessica bad? And she said she was just drawn that way!
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Christ Kennedy wrote: can you step through and see what happens? where does the debugger stall?
Opened 10 years ago
Closed 10 years ago
#4072 closed (invalid)
Path making the symlink works only on Python 2.3
Description
In the instructions to install the Django development version from svn [1], the line of code to create the symlink from django_src/django uses a path that includes python2.3. It should be noted that this should be replaced with the path to the Python version the user has; otherwise django-admin.py would fail with this error:
Traceback (most recent call last):
File "/usr/local/bin/django-admin.py", line 2, in ?
from django.core import management
ImportError: No module named django.core
It is trivial to fix the problem but it could take some users by surprise.
This is the line with the problem:
ln -s `pwd`/django_src/django /usr/lib/python2.3/site-packages
[1]
It clearly says just under that line:
(In the above line, change python2.3 to match your current Python version.) | https://code.djangoproject.com/ticket/4072 | CC-MAIN-2016-50 | refinedweb | 158 | 65.93 |
Sounds like an environment problem
I would *hazard* the following guesses:
Your five "Out of environment space"s correspond to 5 (or possibly 4) .jar
files in your ANT/lib installation- what its
saying is that your classpath is too big, and it cant append onto the end of
it. Therefore the ANT lib files aren't
getting included on your classpath, and it wont find the ant main class when
it tries to run it.
This is bound to be a problem on Windows 98, as Windows 98 has really bad
support for environment variables.
Possible solution: set CLASSPATH= before running and try again (i.e. delete
it!), or set CLASSPATH to contain
the jar files in ANT/lib at the beginning before you run it. Basically
shorten your CLASSPATH- I think Win98 imposes a
limit of 255 characters on it (but not sure, cos I dont use it)
Hopefully, if its what I think it is, then everything should clear up after
you do this.
Otherwise there are alternative methods of running ANT by defining the
classpath in the ANT_OPTS variable,
or even using a special java flag "-ext.dirs" (I think, check this!!) as an
ANT_OPTS variable- which specified a directory
where all jar files should be picked up from, but I guess that's a last
resort.
Hope this is of some help
Cheers
-Geoff
> -----Original Message-----
> From: Balvinder S Sahota [mailto:balu@telus.net]
> Sent: 17 September 2002 06:41 PM
> To: ant-user@jakarta.apache.org
> Subject: Set up ant in Windows 98 environment
>
> Hi,
>
> I am a first time user and trying to build a Java application. I am
> getting the following error message when try to compile and run a simple
> program:
>
> Error Messages are:
> Out of environment space
> Out of environment space
> Out of environment space
> Out of environment space
> Out of environment space
> Exception in thread "main" java.lang.NoClassDefFoundError:
> org/apache/tools/ant/Main
>
> The build File: build.xml
> Contents:
> <project default="run" basedir=".">
>
> <property name="app.name" value="MyFstApp"/>
>
> <target name="compile" description="* Compiles the source code">
> <javac srcdir="."/>
> </target>
>
> <target name="run" depends="compile" description="* Executes the
> application">
> <java classname="${app.name}"/>
> </target>
>
> </project>
>
>
> Program File: MyFstApp.java
> Contents:
> public class MyFstApp {
> public static void main( String[] args ) {
> // Display the string.
> System.out.println( "This is my first Java application!" );
> }
> }
>
>
> The environment is:
> OS: Windows 98
> Ant Version: Ant 1.5 extracted off jakarta-ant-1.5-bin
> Java Version: j2sdk1.4.0_02 extracted off j2sdk-1_4_0_02-windows-i586
>
> Setting in autoexec.bat file are:
> SET ANT_HOME=c:\ant15
> SET JAVA_HOME=c:\j2s14002
> SET CLASSPATH=c:\LrngJava\lab
> SET PATH=%PATH%;%ANT_HOME%\bin;%JAVA_HOME%\bin
>
> I have no problem in compiling and running MyFstApp on command line by
> using:
>
> javac MyFstApp.java
> java MyFstApp
>
> But when I try to use ant, I get the above mentioned error. Your help to
> get me going will be highly appreciated.
>
> Best regards,
> Bal.
Sheepdog 0.2.0
Shepherd GridEngine
Make Grid Engine a bit more useful from Python.
Requirements
On the host:
On the worker nodes:
- No requirements beyond the standard library
- Tested on Python 2.7, Python 3.3
- Should also work on Python 2.6 and 3.2, 3.4
License
MIT, see LICENSE file.
Overview
Running large map style operations on a Grid Engine cluster can be frustrating. Array jobs can only give scripts an input like some range() function call, but this is rarely sufficient. Collecting results is also a huge pain. Suddenly there are shell scripts and result files everywhere and you feel an overwhelming sense of mediocrity.
Sheepdog aims to make life better for a somewhat specific use case:
- You’re using Python. Hopefully even Python 3.3.
- You’ve got access to a Grid Engine cluster on some remote machines. They can also run Python, somehow or other. The cluster computers and your client computer can all communicate over a network.
- You have a function of several parameters and you want to run it many times with different arguments. Results should come back nicely collated, and are reasonably small (you’re not too worried if argument or result objects get copied in memory).
- You’re a PhD student in Div F at CUED desperately trying to use fear effectively.
To accomplish these aims, Sheepdog:
- Takes your function and N tuples of arguments, marshals both
- Creates a mapping range(N) to arguments
- Starts a network interface (over HTTP)
- Starts a size N array job on the Grid Engine cluster, running the client
- Each client talks to the server to map its array job ID into an actual set of arguments, and fetches the Python function to execute as well
- The function is executed with the arguments
- The result is sent back over the network
- Results are collated against arguments
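The heart of that flow, mapping an array job's integer task id back to a concrete argument tuple, can be sketched in plain Python (an illustration of the idea only, not Sheepdog's actual API):

```python
# Each Grid Engine array-job task knows only its integer id; the server
# maps that id back to the argument tuple it should run f() with.
args = [(1, 1), (1, 2), (2, 2)]        # N = 3 tasks

def run_task(job_id, f):
    a, b = args[job_id]                 # id -> arguments
    return job_id, f(a, b)              # result keyed by id for collation

results = dict(run_task(i, lambda a, b: a + b) for i in range(len(args)))
print([results[i] for i in range(len(args))])   # [2, 3, 4]
```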
This is very similar to:
- pythongrid. Almost identical. Sheepdog doesn’t have to be run on the cluster head, though. And can’t resubmit jobs or anything fancy like that. And isn’t dead.
- gridmap. A fork of pythongrid that is actually active and looks quite nice! Maybe look at gridmap.
- Celery. Yes. Pretty similar.
- rq. Quite similar.
- Resque. But Resque is written in Ruby, boo.
- Every other distributed map compute queue thing ever written.
Usage
Ensure the GridEngine workers have Python available.
Then,
import sheepdog

def f(a, b):
    return a + b

args = [(1, 1), (1, 2), (2, 2)]
config = {"host": "fear"}

results = sheepdog.map_sync(f, args, config)
print("Received results:", results)
# Received results: [2, 3, 4]
There is also support for transferring other functions and variables (using the namespace parameter ns of map_sync) and imports can be handled using global, for example:
def f(a, b):
    import numpy as np
    global np
    return g(a, b)

def g(a, b):
    return np.array((a, b)) ** 2

args = [(1, 2), (3, 4)]
namespace = {"g": g}
config = {"host": "fear"}

results = sheepdog.map_sync(f, args, config, namespace)
See the documentation for full details.
Documentation
View Sheepdog on ReadTheDocs.
- Author: Adam Greig
- License: MIT
- Package Index Owner: adamgreig
- DOAP record: Sheepdog-0.2.0.xml | https://pypi.python.org/pypi/Sheepdog/0.2.0 | CC-MAIN-2016-26 | refinedweb | 526 | 64.71 |
Hi, so I am just trying to figure out what is wrong with my program. I am trying to implement the node class in C++. All I want to do is learn how to implement it and get it working. I am not trying to use it to create a list class; I just want to learn how to work with linked lists and the node class for certain interview questions. Anyways, I seem to have a problem with my next pointer. When I insert a new node it seems to work, as I kept a length field to see if it's working and the length is incremented, but I must be doing something wrong with the next and front pointers. Please help, and let me know what I may be doing wrong.
#ifndef NODE_H_ #define NODE_H_ #include <iostream> using namespace std; template <class T> class node { public: int length; T nodeValue; node<T> *front, *next; // Default constructor node(): next(NULL), front(NULL), length(0) {} // Constructor. Initialize nodeValue and next. node(const T& item, node<T> *nextNode = NULL): nodeValue(item), next(nextNode) {} // Add a new node void addNodeAtFront(const T& value); virtual ~node(); }; template <typename T> void node<T>::addNodeAtFront(const T& value){ node<T> *newNode; newNode = new node<T>(value,next); if (front == NULL){ // Set the new node to be the front front = newNode; if (newNode->next == NULL) cout<<"newNode->next is NULL"<<endl; //cout<<newNode<<endl; length++; } else { // Set the new node to point at the next node newNode->next = front; // Set the new node to be the 'front' front = newNode; //cout<<newNode<<endl; length++; } } template <typename T> node<T>::~node() { // TODO Auto-generated destructor stub } #endif /* NODE_H_ */
Here is the main program
#include <iostream> #include "node.h" using namespace std; int main(){ node<int> *p; p = new node<int>(5); cout<<"Here is your value: "<<p->nodeValue<<endl; for (int i = 0; i < 10; i++) p->addNodeAtFront(i); cout<<"p->length is: "<<p->length<<endl; if (p->next == NULL) cout<<"P->next is NULL"<<endl; else cout<<"P->next is not NULL"<<endl; return 0; }
I had some code to write the list i.e.
while (P != NULL){ cout<<p->nodeValue<<endl; p = p->next; }
but I erased it, and that's why I added the check to see if p->next is NULL (and it is).
here is the output:
Here is your value: 5
newNode->next is NULL
p->length is: 10
P->next is NULL
Opened 6 years ago
Closed 6 years ago
Last modified 4 years ago
#11311 closed (fixed)
Deleting model instance with a string id and m2m relation fails
Description
Say a model has a CharField as its primary key and the model also has a many-to-many relation to some other "normal" model. Deleting an instance of the model with a string primary key like "abc" fails with an exception, but not with a string id that can be converted into an int, e.g. "1". It seems like the code assumes that primary keys are always int()?
Using django from svn trunk r10982.
The following is a very small test case (models.py) that demonstrates the issue.
from django.db import models class Line(models.Model): name = models.CharField(max_length=100) class Worksheet(models.Model): id = models.CharField(primary_key=True, max_length=100) lines = models.ManyToManyField(Line, blank=True, null=True)
After running syncdb (this happens with both sqlite3 and postgresql-8.3, btw), the following is the output from manage.py shell:
In [1]: from x.y.models import *

In [2]: w = Worksheet(id='abc')

In [3]: w.save()

In [4]: w
Out[4]: <Worksheet: Worksheet object>

In [5]: w.delete()
ERROR: An unexpected error occurred while tokenizing input
The following traceback may be corrupted or invalid
The error message is: ('EOF in multi-line statement', (20, 0))
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)

/private/tmp/x/<ipython console> in <module>()

/Library/Python/2.5/site-packages/django/db/models/base.pyc in delete(self)
    567
    568         # Actually delete the objects.
--> 569         delete_objects(seen_objs)
    570
    571     delete.alters_data = True

/Library/Python/2.5/site-packages/django/db/models/query.pyc in delete_objects(seen_objs)
   1035         pk_list = [pk for pk,instance in items]
   1036         del_query = sql.DeleteQuery(cls, connection)
-> 1037         del_query.delete_batch_related(pk_list)
   1038
   1039         update_query = sql.UpdateQuery(cls, connection)

/Library/Python/2.5/site-packages/django/db/models/sql/subqueries.pyc in delete_batch_related(self, pk_list)
     68             where.add((Constraint(None, f.m2m_column_name(), f), 'in',
     69                 pk_list[offset : offset + GET_ITERATOR_CHUNK_SIZE]),
---> 70                 AND)
     71             if w1:
     72                 where.add(w1, AND)

/Library/Python/2.5/site-packages/django/db/models/sql/where.pyc in add(self, data, connector)
     54         if hasattr(obj, "process"):
     55             try:
---> 56                 obj, params = obj.process(lookup_type, value)
     57             except (EmptyShortCircuit, EmptyResultSet):
     58                 # There are situations where we want to short-circuit any

/Library/Python/2.5/site-packages/django/db/models/sql/where.pyc in process(self, lookup_type, value)
    267         try:
    268             if self.field:
--> 269                 params = self.field.get_db_prep_lookup(lookup_type, value)
    270                 db_type = self.field.db_type()
    271             else:

/Library/Python/2.5/site-packages/django/db/models/fields/related.py in get_db_prep_lookup(self, lookup_type, value)
    160             return [pk_trace(value)]
    161         if lookup_type in ('range', 'in'):
--> 162             return [pk_trace(v) for v in value]
    163         elif lookup_type == 'isnull':
    164             return []

/Library/Python/2.5/site-packages/django/db/models/fields/related.py in pk_trace(value)
    137         if lookup_type in ('range', 'in'):
    138             v = [v]
--> 139         v = field.get_db_prep_lookup(lookup_type, v)
    140         if isinstance(v, list):
    141             v = v[0]

/Library/Python/2.5/site-packages/django/db/models/fields/__init__.pyc in get_db_prep_lookup(self, lookup_type, value)
    210             return [self.get_db_prep_value(value)]
    211         elif lookup_type in ('range', 'in'):
--> 212             return [self.get_db_prep_value(v) for v in value]
    213         elif lookup_type in ('contains', 'icontains'):
    214             return ["%%%s%%" % connection.ops.prep_for_like_query(value)]

/Library/Python/2.5/site-packages/django/db/models/fields/__init__.pyc in get_db_prep_value(self, value)
    359         if value is None:
    360             return None
--> 361         return int(value)
    362
    363     def contribute_to_class(self, cls, name):

ValueError: invalid literal for int() with base 10: 'abc'
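The bottom frame of that traceback is easy to reproduce in isolation; here is a toy illustration of the failure mode (not actual Django code):

```python
# IntegerField-style value preparation: every pk is funnelled through int(),
# which only works when a CharField pk happens to look like an integer.
def get_db_prep_value(value):
    if value is None:
        return None
    return int(value)

print(get_db_prep_value("1"))     # 1 -- a digit-string pk survives
try:
    get_db_prep_value("abc")
except ValueError as e:
    print(e)                       # invalid literal for int() with base 10: 'abc'
```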
Change History

Changed 6 years ago by russellm

Ok... this one opens up a nest of vipers.

This is a regression caused by [10952], which closed #10785. That ticket in turn makes reference to #10243, and brokenness in get_db_prep_lookup(). Now that I'm digging deeper, I'm starting to see the magnitude of the problem that Malcolm was referring to.

For v1.1, we may need to simply revert [10952] and live with the edge case that won't work.

comment:4 Changed 6 years ago by ronny

Thanks, Russell. It's working again now.

comment:5 Changed 4 years ago by jacob

- milestone 1.1 deleted

Milestone 1.1 deleted
import "github.com/mjibson/go-dsp/spectral"
Package spectral provides spectral analysis functions for digital signal processing.
func Pwelch(x []float64, Fs float64, o *PwelchOptions) (Pxx, freqs []float64)
Pwelch estimates the power spectral density of x using Welch's method. Fs is the sampling frequency (samples per time unit) of x. Fs is used to calculate freqs. Returns the power spectral density Pxx and corresponding frequencies freqs. Designed to be similar to the matplotlib implementation below. Reference: See also:
Segment x segmented into segments of length size with specified noverlap. Number of segments returned is (len(x) - size) / (size - noverlap) + 1.
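The segment-count formula quoted above can be checked with a quick sketch (Python used here purely for illustration; integer division is assumed):

```python
def num_segments(n, size, noverlap):
    """Number of segments of length `size` in a signal of length `n`."""
    step = size - noverlap
    return (n - size) // step + 1

print(num_segments(1024, 256, 0))    # 4 non-overlapping segments
print(num_segments(1024, 256, 128))  # 7 half-overlapping segments
```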
type PwelchOptions struct {
    // NFFT is the number of data points used in each block for the FFT. Must be
    // even; a power 2 is most efficient. This should *NOT* be used to get zero
    // padding, or the scaling of the result will be incorrect. Use Pad for
    // this instead.
    //
    // The default value is 256.
    NFFT int

    // Window is a function that returns an array of window values the length
    // of its input parameter. Each segment is scaled by these values.
    //
    // The default (nil) is window.Hann, from the go-dsp/window package.
    Window func(int) []float64

    // Pad is the number of points to which the data segment is padded when
    // performing the FFT. This can be different from NFFT, which specifies the
    // number of data points used. While not increasing the actual resolution of
    // the psd (the minimum distance between resolvable peaks), this can give
    // more points in the plot, allowing for more detail.
    //
    // The default value is 0, which sets Pad equal to NFFT.
    Pad int

    // Noverlap is the number of points of overlap between blocks.
    //
    // The default value is 0 (no overlap).
    Noverlap int

    // Specifies whether the resulting density values should be scaled by the
    // scaling frequency, which gives density in units of Hz^-1. This allows for
    // integration over the returned frequency values. The default is set for
    // MATLAB compatibility. Note that this is the opposite of matplotlib style,
    // but with equivalent defaults.
    //
    // The default value is false (enable scaling).
    Scale_off bool
}
Package spectral imports 5 packages (graph) and is imported by 6 packages. Updated 2019-12-09.
Now that you have gotten a handle on the main concepts of WebSphere Integration Developer and its suite of tools from the first three articles, you are ready to dive deeper into business processes. If you haven't read the fourth article that covered business state machines, don't worry; you can go back and read it later because it is not a prerequisite. In fact, if you just skimmed over the earlier articles and didn't have time to complete the simple application, you can just download the OrderProcessing module at the end of this article and use that as the starting point to build the business process later.
As you may already know, a business process is any system or procedure that an organization uses to achieve a larger business goal. You can automate a business process or you can involve a number of steps that need to be manually completed by one or more users. Business processes can be short running, or take hours, days, weeks or more to complete. Business processes are great for driving your business, but building them within a service-oriented architecture (SOA) puts your application into overdrive. After all, the real power comes from being able to integrate the processes seamlessly with other components and modules to automate getting work done.
With WebSphere Integration Developer, you can create a business process using the business process editor. A business process provides the primary means to coordinate enterprise services and describe your business logic. So, what does that mean? Well, a business process is an implementation of a service component that consists of a series of steps or activities that are run in a specific order. The activities can call other services, perform computations, or do just about any kind of programming work. The activities in a process can run sequentially or in parallel. You can also implement branching or looping in the flow of control between activities. A business process can be part of a larger business process, and it can contain nested business processes.
In the sections that follow, you will learn about:
- The parts that make up a business process
- Business process concepts
- The business process editor
- Building your own business process.
Anatomy of a business process
If you read the previous article about business state machines, you might be interested to learn that business state machines are a special case of a business process. To let you in on a little secret, under the covers a business state machine is actually implemented as a business process. Both business state machines and business processes are important techniques for defining your business logic. You are probably scratching your head and wondering, "If they are both important yet similar, how can I expect to know when to use a business process instead of a state machine?"
Well, if your business logic involves responding to events, and the response depends on the current state of the process, then it may be useful to implement the logic as a business state machine. State machines are also useful when the logic is cyclic in nature; that is, where an object or part of the system goes through a series of states repeatedly. For example, a vending machine awaits coins, then lets you buy a drink, then returns your change before it once again waits patiently for its next junk food victim. Business processes, on the other hand, are useful for all other cases, especially cases where your business logic is a series of steps that you want to execute sequentially or in parallel.
SOA consists of many services connected together that talk to each other to achieve an overall goal. As you know from previous articles, a business process is one way that you can implement a service component.
A business process consists of the following elements, which we will explore in the upcoming sections:
- Activities
- Partners
- Variables
- Correlation sets
- Handlers.
A business process component consists of a set of activities, each of which does some work. Together, these activities represent your business logic. The work each activity does is completely up to you. It could involve performing a computation, calling a business partner's service, or perhaps asking a person in the organization to perform some manual work.
You can break each activity down into still more activities. For example, an activity in your process might be to bill a customer for an order. Before your company went cutting-edge, the overall billing activity might have been broken down into looking up the customer's address, printing an invoice, and then mailing it. The first two activities might now be service calls, while the last would probably be a human task. Activities can execute sequentially, or in parallel. For example, an activity in your ordering process might notify your inventory system that an item needs to be re-ordered while the shipping activity is executing in parallel.
For each type of activity, you can enable event monitoring using the Event Monitoring tab in the properties view. This enables your process to emit Common Event Infrastructure events as it runs. Depending on the activity type, different options are available. For example, an invoke activity can emit entry and/or exit events. An invoke activity emits an entry event when a business process is about to execute or enter the invoke activity, whereas it emits an exit event when the business process is about to finish the execution of the invoke activity.
You can add activities to a process by selecting them from the palette and dropping them onto the business process editor canvas, or right-clicking on the canvas and selecting Add - [activity type].
Now let's look at the various types of activities that you can use in your processes.
Service activities enable your business process to communicate with other services and vice versa. Without service activities, your process would live a lonely, quiet life. There are three types of service activities:
- Receive
- Reply
- Invoke.
Receive activity
A receive activity is an entry point to a process; it is the point where the process starts or continues. You need one receive activity per operation you define in the process's interface. In the process editor, you can specify which operation corresponds to which receive activity. That means that when a call is made to one of the process's operations, the corresponding receive activity accepts the call, and the process continues running from there. A process requires at least one receive activity to start. A receive activity can also occur in the middle of a business process. In this case, if the process encounters a receive activity while it is running, the process stops and waits for the corresponding operation to be called.
An example should make this easier to understand.
In the example in Figure 1,
SimpleProcess has an interface
that has two operations:
start and
continue. These
operations are receive activities that run
in sequential order. Let's take a look at what will happen
when the process runs.
- When another component calls the start operation, it creates a new process instance.
- Next, the input parameter, which you can see at the bottom of the figure, has its value assigned to the variable Input1. The input and output variables are created for you when you create the process.
- The CallService activity runs and, when it finishes, the process stops running until it receives a call to the continue operation.
- Once a call is made to the continue operation, the process continues to the PrepareResponse activity.
Figure 1. Service activities
Reply activity
When a receive activity belongs to a request-response operation, a reply activity returns the output of the operation. Figure 1 shows a reply activity as the last node in the simple process. As Figure 2 shows, it sends the response for the start operation using Output1 as the output parameter. A reply activity doesn't necessarily need to be at the end of the process. A process could start with a receive activity, and then return a response before proceeding to do other work. You could have more than one reply activity for each receive activity, such as when your process has multiple paths. The idea is that, when another component calls a request-response operation of a process's interface, it eventually needs to get a response for that operation (it could get a fault returned instead of a reply; we'll talk about that shortly).
Figure 2. Reply activity details
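For readers curious about the underlying source, the receive and reply pairing from Figures 1 and 2 might look roughly like the following BPEL4WS 1.1 sketch. This is hand-written for illustration, not tool-generated; the partner link, port type, and variable names are assumptions based on the figures.

```xml
<sequence>
  <!-- Start: creates a new process instance when the start operation is called -->
  <receive name="Start" partnerLink="SimpleProcess"
           portType="tns:SimpleProcess" operation="start"
           variable="Input1" createInstance="yes"/>

  <invoke name="CallService" partnerLink="ServicePartner"
          portType="tns:Service" operation="doService"
          inputVariable="Input1" outputVariable="Output1"/>

  <!-- Continue: the instance pauses here until the continue operation is called -->
  <receive name="Continue" partnerLink="SimpleProcess"
           portType="tns:SimpleProcess" operation="continue"
           variable="Input2"/>

  <empty name="PrepareResponse"/>

  <!-- Returns Output1 as the response to the start operation -->
  <reply name="Reply" partnerLink="SimpleProcess"
         portType="tns:SimpleProcess" operation="start"
         variable="Output1"/>
</sequence>
```

Notice that the reply names the start operation, not continue: a reply always pairs with the receive of the request-response operation whose caller is still waiting for an answer.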
Invoke activity
An invoke activity simply calls an operation on another service. The services you can call with an invoke activity depend on your processes' partners, which we will explain shortly.
The way you implement an invoke activity is shown in
Figure 3, where the
CallService invoke activity is
defined to invoke the
doService operation of the
ServicePartner.
Figure 3. Details for an invoke activity
A structured activity contains other activities. The structured activities include:
- Sequence
- Choice
- Receive choice
- Parallel
- While loop
- Scope.
Sequence activity
The simplest structured activity is a sequence. A sequence activity contains other activities that run sequentially in the order in which they appear within the sequence. The activities it contains may be simple or they may be other structured activities. One thing we should point out: looking back at Figure 1, the complete process consists of a sequence of activities between the initial and final nodes. The editor canvas is actually one big sequence activity where you add more simple or complex activities. The sequence activity is hidden to keep the diagram clean. In fact, all the structured activities that follow, except for parallel, contain a hidden sequence wherever light-grey arrows connect activities.
Figure 4 shows a simple sequence activity containing two
nodes. The
CallFirst activity runs first
followed by the
CallSecond activity.
Figure 4. A sequence activity
Choice activity
A choice structured activity (also known as a switch) controls which path the process takes, based on a condition. Simply put, a choice activity lets you decide the next set of activities that will run. A choice activity contains case elements that consist of an expression that evaluates to either true or false, followed by a sequence of activities. The first case element that evaluates to true wins, and its sequence of activities run next. A choice activity can also contain an otherwise element, which is the path that is taken when no case element evaluates to true.
Take Figure 5 for example,
ShippingChoice
is a choice activity and
SmallOrder is a
case element.
SmallOrder has a condition
that is expressed using a visual snippet. The condition
specifies that if the quantity is greater than or equal
to 1 and less than 10, then the return value will be true
and the
CourierOrder activity will
run next. The
fourth article
describes in detail the visual snippet editor,
which is what the example uses to define the condition
for the
SmallOrder case. Figure 5 also shows an
Otherwise element, whose path will be taken
when neither of the two case elements evaluate to true.
Figure 5. A choice activity and case condition
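In BPEL source terms, the choice in Figure 5 corresponds to a switch element. In this sketch, XPath conditions stand in for the visual snippet shown in the figure; the partner, variable, and part names are assumptions:

```xml
<switch name="ShippingChoice">
  <!-- The first case whose condition evaluates to true wins -->
  <case condition="bpws:getVariableData('order','order','/quantity') &gt;= 1
                   and bpws:getVariableData('order','order','/quantity') &lt; 10">
    <invoke name="CourierOrder" partnerLink="CourierPartner"
            portType="tns:Courier" operation="ship" inputVariable="order"/>
  </case>
  <case condition="bpws:getVariableData('order','order','/quantity') &gt;= 10">
    <invoke name="FreightOrder" partnerLink="FreightPartner"
            portType="tns:Freight" operation="ship" inputVariable="order"/>
  </case>
  <!-- Taken when no case condition evaluates to true -->
  <otherwise>
    <throw name="ThrowOrderSizeFault" faultName="tns:OrderSizeFault"/>
  </otherwise>
</switch>
```

The cases are evaluated in order, so overlapping conditions are resolved by position, not by specificity.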
Receive choice activity
Related to both the choice and receive activities is the receive choice activity. A receive choice activity (also known as a pick) looks and works a lot like a choice activity. The difference is that in place of the case elements, there are one or more receive elements and no otherwise element. Each of the receive elements in a receive choice activity accepts a certain type of message (a particular operation of the process's interface). When a process reaches a receive choice activity, execution stops and waits to receive a message. The difference between a plain receive activity and a receive choice activity is that, with a receive choice activity, any one of a number of operations could be received. The first operation the process receives wins, just like a plain choice activity, and the process follows its path.
Figure 6 shows an example of a receive choice activity
called
OrderAction. When
OrderAction is reached, the
process will stop and wait for a call to either the
Proceed or the
Cancel operation.
If the
Proceed
operation is called first, then the
ProcessOrder invoke
activity will run, followed by the
Update activity.
Figure 6. A receive choice activity
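A receive choice maps to the BPEL pick element. The OrderAction activity in Figure 6 might be sketched as follows; the partner link, port type, and operation names are assumptions, and the cancel branch uses an empty activity as a stand-in for whatever cancellation work is needed:

```xml
<pick name="OrderAction">
  <!-- The first operation to arrive wins -->
  <onMessage partnerLink="OrderProcessing" portType="tns:OrderProcessing"
             operation="proceed" variable="order">
    <sequence>
      <invoke name="ProcessOrder" partnerLink="OrderPartner"
              portType="tns:Orders" operation="process"
              inputVariable="order"/>
      <invoke name="Update" partnerLink="OrderPartner"
              portType="tns:Orders" operation="update"
              inputVariable="order"/>
    </sequence>
  </onMessage>
  <onMessage partnerLink="OrderProcessing" portType="tns:OrderProcessing"
             operation="cancel" variable="order">
    <!-- Placeholder for the cancellation path -->
    <empty name="CancelOrder"/>
  </onMessage>
</pick>
```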
Parallel activity
Sometimes, you don't want all your activities to run in order, as they do in a sequence activity. When your process has groups of activities that can run concurrently, or the flow might branch to activities on other paths, you can place them in a parallel activity (also known as a flow). Activities inside a parallel activity can still run in sequential order by connecting them with links.
A link is a directional connection that you can draw from a source activity to a target activity. When the source activity finishes running, the target activity at the other end of the link runs. You can create links from a single source activity to multiple target activities or multiple source activities to a single target activity. The only thing that you cannot do is create a cycle, meaning the source activity links to the target activity which links back to the source (similar to an endless loop).
Links can have a link condition, which is an expression to control whether the link is allowed. If the link condition returns false, that means the link cannot be followed. In this case, if this is the only link between a source and a target activity, the target activity does not run. This situation gets a bit complicated when it comes to multiple incoming links.
Let's imagine a situation where a target activity has
many incoming links such as
Activity4 in Figure 7. What
would happen if some of the links are followed and some
are not? By followed, we mean that the activity at the
source of the link finished running and there was no
link condition returning false. By default, the target
activity runs when any one of the incoming links is
followed. A join condition lets you specify when the
target of one or more links should run. You can create a
join condition using Java code, a visual snippet, or
selecting from a list of simple choices.
If a join condition is not satisfied, then a join failure fault is thrown. You can tell the process not to throw a join failure fault, and instead, simply skip the activity and carry on to the next one, by selecting Yes for Suppress Join Failure on the Join Behavior tab of the activity's properties.
That's a lot to digest, so this is probably a good time for
an example. Figure 7
shows a parallel activity that contains four activities.
After
Activity1 completes,
Activity3 runs, since
there is no condition on the link between
Activity1 and
Activity3.
Activity2 runs concurrently, but only if
the
amount value is less than 5 (as the link
condition shows in the bottom of the figure). Figure 8 shows
that the join condition for
Activity4 is set
to
All, which means that
Activity4
will run only when both of the incoming links are followed.
However, if the link condition between Activity1 and Activity2 evaluates to false, then the link between Activity2 and Activity4 is never followed. In this case, the join condition can never be satisfied and a join failure occurs.
Figure 7. A parallel activity
Figure 8. A join condition
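Under the covers, Figures 7 and 8 correspond to a BPEL flow containing links. The sketch below uses empty activities as stand-ins for the four activities; the transitionCondition plays the role of the link condition, the joinCondition expresses the All behavior, and the variable and part names are assumptions:

```xml
<flow>
  <links>
    <link name="oneToTwo"/>
    <link name="oneToThree"/>
    <link name="twoToFour"/>
    <link name="threeToFour"/>
  </links>
  <empty name="Activity1">
    <!-- Followed only when amount < 5 (the link condition in Figure 7) -->
    <source linkName="oneToTwo"
            transitionCondition="bpws:getVariableData('input','amount') &lt; 5"/>
    <source linkName="oneToThree"/>
  </empty>
  <empty name="Activity2">
    <target linkName="oneToTwo"/>
    <source linkName="twoToFour"/>
  </empty>
  <empty name="Activity3">
    <target linkName="oneToThree"/>
    <source linkName="threeToFour"/>
  </empty>
  <!-- "All" join condition: both incoming links must be followed -->
  <empty name="Activity4" suppressJoinFailure="no"
         joinCondition="bpws:getLinkStatus('twoToFour')
                        and bpws:getLinkStatus('threeToFour')">
    <target linkName="twoToFour"/>
    <target linkName="threeToFour"/>
  </empty>
</flow>
```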
While loop activity
When you want a group of activities to run multiple times, until some condition no longer holds true, you can use a while loop activity. A while loop activity contains other activities and a condition. When the condition evaluates to false, the loop terminates and the next activity after the while loop runs.
Let's look at a simple example. Figure 9 shows a while
loop that stops iterating whenever the
isComplete
variable has a value of
true (since the
inverse
node returns
false ). Each time the loop
iterates, it calls the
CheckServiceComplete service,
which returns a boolean value that is assigned to the
isComplete variable. Thus, whenever the service returns
true, the loop exits and the
Reply activity
runs.
Figure 9. A while loop activity
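In source form, the loop in Figure 9 might be sketched like this. The not() in the condition plays the role of the inverse node in the visual snippet; the partner and operation names are assumptions:

```xml
<!-- Iterate until isComplete becomes true -->
<while name="UntilComplete"
       condition="not(bpws:getVariableData('isComplete'))">
  <!-- Each iteration asks the service whether the work has finished -->
  <invoke name="CheckServiceComplete" partnerLink="ServicePartner"
          portType="tns:Service" operation="checkComplete"
          inputVariable="order" outputVariable="isComplete"/>
</while>
<!-- When the condition evaluates to false, the Reply activity runs next -->
```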
Scope activity
A scope activity is a structured activity that can enclose any other activities. A scope activity lets you define local variables, local correlation sets, and various handlers. The activities within a scope have access to any variables that belong to that scope. A scope activity can contain more scope activities within it, so activities within a scope can also access variables of all enclosing scopes.
In the details for a scope, you can enable two options: isolated and compensable. When you choose isolated, access to variables is controlled so that, when simultaneous activities are running, only one can access the variables at a time. When you choose compensable, you can invoke compensation handlers for the scope.
We'll revisit scope activities when we discuss variables, correlation sets and each type of handler later in the article.
Inevitably, situations arise that prevent your process from reaching completion. A fault is an anticipated error that can occur. Fortunately, there are specialized activities to deal with these cases:
- Throw
- Rethrow
- Compensate
- Terminate.
Throw activity
A throw activity lets you signal that something has gone wrong in your process. If the operation was a request-response type and had a fault part along with the input and output parts of the interface, then a throw activity can signal an error condition to the caller of the operation. Rather than just returning a fault, you might want to handle some problems as part of your process, in which case you can create a fault handler (more about fault handlers later, but you can probably guess what they do) to catch the thrown fault. A fault must have a name and it can optionally contain a variable that holds information related to the error.
Earlier in the choice activity section, we mentioned in the
example that an
OrderSizeFault would be thrown for order
sizes that weren't in the range covered by each case
element. Figure 10 shows the details for the
ThrowOrderSizeFault activity.
Figure 10. Throw activity
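The ThrowOrderSizeFault activity in Figure 10 boils down to a one-line throw in BPEL; the optional faultVariable carries data about the error (the variable name here is an assumption):

```xml
<throw name="ThrowOrderSizeFault"
       faultName="tns:OrderSizeFault"
       faultVariable="orderSizeInfo"/>
```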
Rethrow activity
A rethrow activity is the same as a throw activity, except that it occurs within a fault handler. It enables you to rethrow a fault that is caught by a fault handler so that any enclosing scopes or callers of the process can handle it. For example, the OrderSizeFault handler might log the exception, and then rethrow it so that the order processing component, which called the process, doesn't proceed to bill the customer. We'll explain this further in the fault handling section.
Compensate activity
A compensate activity lets you invoke a compensation operation for an activity or a compensation handler for a scope. You can only place it inside a compensation handler or fault handler. Compensation is an "undo" action for work that has already successfully completed. For example, suppose your process involves shipping orders after payment is received and an activity to accept payment from a customer has completed successfully. Then, something goes wrong and the complete order can't be shipped. A compensation handler might do something like reimburse the customer for the missing items. We'll talk more about compensation handlers shortly.
For a compensation activity, you set the target activity for the compensation, which is either a single activity or a scope, as Figure 11 shows.
Figure 11. A compensation activity
Terminate activity
A terminate activity lets you stop a process instance as soon as possible without performing any compensation or fault handling.
Some other useful activities include
- Assign
- Snippet
- Human task
- Wait
- Empty.
Assign activity
An assign activity lets you copy values from one variable
to another or to initialize variables. In the
fourth article,
you saw something similar when you assigned values to
the inputs and copied values from the outputs of service
calls within the business state machine. As an example
of what an assign activity can do, suppose you want to
update order information after shipping completes.
Figure 12 shows how to copy the
quantityShipped value
from the
shippedOrder business object to the
quantity
attribute of the
order business object.
Figure 12. An assign activity
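The copy in Figure 12 corresponds to an assign with a single copy element. The part names and XPath queries below are assumptions based on the business objects shown:

```xml
<assign name="UpdateOrder">
  <copy>
    <!-- Read quantityShipped from the shippedOrder business object... -->
    <from variable="shippedOrder" part="shippedOrder"
          query="/quantityShipped"/>
    <!-- ...and write it to the quantity attribute of the order -->
    <to variable="order" part="order" query="/quantity"/>
  </copy>
</assign>
```

An assign can contain several copy elements; they run in order, so a later copy can read a value that an earlier one just wrote.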
Snippet activity
When you need more complex logic for an activity than other activities can provide, you can use a snippet activity, which lets you delegate work to a small program that you create. You can choose plain Java™ to define what the snippet activity does, or you can use the visual snippet editor that we covered in the fourth article.
Human task activity
In the second and third articles, we introduced you to human task components. You can also include human tasks directly in a process. A human task activity is used when work is to be performed by a person.
Wait activity
You can use a wait activity when you need your process
to stop running for some period of time. In the details for the
activity, you can simply specify a duration to wait, or
you can specify a specific time and date when the
process should continue to run. You can even use the
visual snippet editor to compute the duration to wait.
Figure 13 shows a wait activity that waits until December 1, 2006 at noon before the process continues.
Figure 13. A wait activity
Empty activity
You might need an activity that does nothing at all, such as when you have a construct that requires an activity, but there is no work to be done. For example, as we will describe in the fault handling section, you might want to ignore certain faults and just carry on with your process. Since a fault handler must contain an activity, you can just insert an empty activity to suppress the fault. You can also use empty activities as placeholders for activities whose details you'll fill in later.
A service component defines a set of interfaces that your process must implement. It may also define a set of references to other services that your process can call. Within a process, we use the term partner to describe the other services that may be calling your interfaces or that you may be calling. There will be one interface partner for each interface on the component to which your process belongs. Each reference of the component that the process implements corresponds to one reference partner within the process. Put very simply, when a client calls your process, you can think of it as an interface partner that called you. When you call another service, you will do so using a reference partner. So, an interface partner contains the incoming operations, and a reference partner contains the outgoing ones.
As an example, the component for the process in Figure
1 has an interface with the
start, and
continue
operations (for the
Start and
Continue
activities, respectively), and those operations belong to the
SimpleProcess interface partner. Reference partners are
what you use to call the other services, as you saw in
Figure 3, where we defined the
CallService invoke activity
to invoke the
doService operation of
ServicePartner.
When you generate a process implementation for a component by right-clicking on a component in the assembly editor, and selecting Generate Implementation - Process, all the component references appear under Reference Partners and are available for service calls. Likewise, when you create a process first, then drag the process on to the Assembly editor, it creates component references for all the reference partners. This is just one of many ways that WebSphere Integration Developer makes things a little bit easier when creating and connecting services. Of course, you can add more references and partners as needed after creating the component and process.
A variable is a container for business data that is used within a process (or any component type for that matter). You can declare variables by right-clicking in the Variables section of the process editor and selecting Add variable. We mentioned in the scope activity section that you can declare local variables within a scope. A local variable is one that is only available for use (that is, you can assign to it, or fetch its value) within the scope in which it is declared, and within nested scopes as well.
When you add a variable in the Variables section, you are adding it to the currently selected scope. For example, when you select different scopes, you will see the Variables section change to display the variables that belong to each scope. A global variable, on the other hand, can be accessed anywhere within your process. To declare one, make sure no scope activities are selected when you add a variable. As we mentioned when we described receive activities, global variables are created automatically for the input and output parameters of the process's operations.
When you create a variable, you need to declare what type it
is before you can use it. There are two kinds of
variables: data type and interface. A data type variable
is a business object or a simple type such
as string or integer. An interface variable has a
type based on an input or output message type in the
WSDL interface file. If you look back at Figure 3, you'll see that
Use Data
Type Variables is checked. This enabled us to set the
order business object as the variable.
A handler is a set of activities that is associated with a particular activity or the process as a whole. A handler runs when certain situations occur. The types of handlers are:
- Fault
- Event
- Compensation.
When a problem or exceptional situation occurs while a process runs, you can use a fault handler to undo partial and unsuccessful work for the scope in which the fault occurred. A fault handler is an optional part of a scope or invoke activity. You create one by right-clicking on a scope or invoke activity and selecting Add Fault Handler. A fault handler is triggered when a fault is thrown by the runtime infrastructure, when a service is called, or when a throw activity runs.
A fault handler contains one or more catch elements. You can add one catch for each fault that could potentially occur within the scope or the activity. Each catch contains a sequence where you can add activities to do whatever work needs to be done for the particular fault. This is where you can use the compensate, rethrow, or empty activities, in addition to any other activity. You can also add a catch all element to deal with any faults that are not caught by any catch element.
As an example, Figure 15 shows a simple fault handler
for the
ShipOrder activity.
Figure 14 shows the interface for the process that contains the handler.
Normally, the process replies with the
shippedOrder business
object, but if the
ShipOrder service (whose operation
also has the
invalidCustId fault) throws an
invalidCustId fault, then the process replies with
the
invalidCustId fault instead.
Figure 14. Interface with fault
Figure 15. A fault handler
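In BPEL, the handler in Figure 15 can be attached directly to the invoke activity as a catch element. When the ShipOrder service throws invalidCustId, the reply returns the fault defined on the interface in Figure 14; names not shown in the figures are assumptions:

```xml
<invoke name="ShipOrder" partnerLink="ShippingTaskPartner"
        portType="tns:ShippingTask" operation="shipOrder"
        inputVariable="order" outputVariable="shippedOrder">
  <!-- Triggered only when this particular fault is thrown -->
  <catch faultName="tns:invalidCustId" faultVariable="invalidCustId">
    <!-- Reply with the fault instead of the normal shippedOrder output -->
    <reply partnerLink="Shipping" portType="tns:Shipping"
           operation="shipOrder" variable="invalidCustId"
           faultName="tns:invalidCustId"/>
  </catch>
</invoke>
```

A scope-level faultHandlers element has the same catch/catchAll structure; attaching the catch to the invoke just narrows where the fault is handled.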
We saw how to use receive and receive choice activities
to accept calls made to a process. You probably
wondered, "what if the process isn't stopped and waiting
for a particular operation call?" This is exactly what
event handlers are for. An event
handler looks a lot like an exception handler, except
that it starts running when an
OnEvent element receives
a call to the corresponding operation. An OnEvent
element is equivalent to a receive activity, so you need
a corresponding operation in one of the process's
interfaces.
Another element you can add to an event handler is a timeout. A timeout is implemented in the same manner as a wait activity. The difference is that a timeout in an event handler works independently of the rest of the process.
You can add an event handler to any scope when a process is long running (as we will explain, long running means that the process is interruptible and can wait for external asynchronous inputs). An event handler starts running when the activity enters the scope and stops running when the activity exits the scope.
In Figure 16, if you call the
cancelOrder operation on the
OrderProcessing interface, and either the
CheckInventory or the
ShipOrder activities are running,
then the
terminate activity runs and the process ends.
Figure 16. An event handler
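A rough sketch of the handler in Figure 16: WebSphere Integration Developer surfaces this as an OnEvent element, while in BPEL4WS 1.1 source it is an onMessage event handler that is active only while the scope runs. The partner and variable names are assumptions, and empty activities stand in for the two invokes:

```xml
<scope name="ShippingScope">
  <eventHandlers>
    <!-- Active from scope entry to scope exit, independent of the main path -->
    <onMessage partnerLink="OrderProcessing" portType="tns:OrderProcessing"
               operation="cancelOrder" variable="cancelledOrder">
      <terminate/>
    </onMessage>
  </eventHandlers>
  <sequence>
    <!-- Stand-ins for the CheckInventory and ShipOrder invoke activities -->
    <empty name="CheckInventory"/>
    <empty name="ShipOrder"/>
  </sequence>
</scope>
```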
As we mentioned when we talked about the compensate activity, compensation lets you undo a completed activity. You can add a compensation handler to an invoke or scope activity to take whatever steps are necessary to reverse the completed work. The scope or invoke activity defining the compensation handler must complete before the compensation handler can run. Therefore, if a scope or an invoke throws a fault, compensation for that activity cannot run since it never completed its work. Once the activity installs a compensation handler, you can invoke the handler using a compensate activity.
When a compensation handler runs, it is as though it is a continuation of the scope's execution. That is, you have access to the variables that were visible in the scope, and they will contain the same values as when the scope completed.
Figure 17 shows a compensation handler for the
inner scope (whose name is
CallAndNotifyScope), and contains the
CorrectNotify activity. In the outer scope,
a fault handler contains a compensate activity whose
target is set to the inner scope. If something goes
wrong after the inner scope has completed and the fault
handler runs, then the
CorrectNotify activity runs.
Figure 17. A compensation handler
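The arrangement in Figure 17 can be sketched as two nested scopes: the inner scope carries the compensation handler, and the outer scope's fault handler invokes it with a compensate activity. Empty activities and the partner names below are placeholders and assumptions:

```xml
<scope name="OuterScope">
  <faultHandlers>
    <catchAll>
      <!-- Runs CallAndNotifyScope's compensation handler, undoing its work -->
      <compensate scope="CallAndNotifyScope"/>
    </catchAll>
  </faultHandlers>
  <sequence>
    <scope name="CallAndNotifyScope">
      <compensationHandler>
        <invoke name="CorrectNotify" partnerLink="NotifyPartner"
                portType="tns:Notify" operation="correctNotify"
                inputVariable="order"/>
      </compensationHandler>
      <!-- Stand-in for the scope's normal work -->
      <empty name="CallAndNotify"/>
    </scope>
    <!-- A fault in later activities triggers the catchAll above -->
    <empty name="MoreWork"/>
  </sequence>
</scope>
```

The key rule is visible in the structure: CorrectNotify can only run after CallAndNotifyScope has completed normally, because compensation undoes finished work.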
Earlier in Figure 1, we showed a process that had two receive operations. One question you might ask is, "when there are multiple instances of a process running, how do you know which instance will receive the call to the second operation?" If you read the fourth article on business state machines, you'll remember that we had exactly the same problem when sending messages (calling operations) to multiple instances of running state machines. To solve the problem, state machines use a correlation set. Not surprisingly, the same concept applies to business processes (which, as we mentioned, provides the underlying implementation of a state machine).
A correlation set lets you specify which messages belong
to which process instance. It consists of a property, on
which correlation is based, and aliases for that
property. An alias is a part of a message that
represents the correlation property. For example, in
Figure 18, which is the correlation set for the process
in Figure 1, the activity uses the
orderNum property to
correlate the incoming requests. Figure 19 shows
creating an alias (as a result of clicking New),
which is the
orderNumber attribute of the
Order
business object. In Figure 18,
orderNumber
correlates both
start and
continue operation calls.
Note that for each alias, you could use different business objects. All that matters is that each alias has
the same type, which in this case is
int.
Figure 18. A correlation set
Figure 19. A correlation property alias
Once you have a correlation set, you need to specify which activity initiates it. When the initiating activity runs, the value of its alias is captured; from then on, any incoming message whose alias contains that same value is routed to that process instance. Figure 20 shows setting the correlation set to be initiated when the Start activity runs.
Figure 20. Using a correlation set
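Putting Figures 18 through 20 together in source terms: the property and its alias are declared in the WSDL, the correlation set in the process, and each receive states whether it initiates the set. The message and part names below are assumptions:

```xml
<!-- In the WSDL: the correlation property and its alias -->
<bpws:property name="orderNum" type="xsd:int"/>
<bpws:propertyAlias propertyName="tns:orderNum"
                    messageType="tns:startRequestMsg"
                    part="order" query="/orderNumber"/>

<!-- In the process: the correlation set -->
<correlationSets>
  <correlationSet name="OrderSet" properties="tns:orderNum"/>
</correlationSets>

<!-- Start captures the order number and initiates the set... -->
<receive name="Start" partnerLink="SimpleProcess"
         portType="tns:SimpleProcess" operation="start"
         variable="Input1" createInstance="yes">
  <correlations>
    <correlation set="OrderSet" initiate="yes"/>
  </correlations>
</receive>

<!-- ...and Continue routes to the instance with the matching number -->
<receive name="Continue" partnerLink="SimpleProcess"
         portType="tns:SimpleProcess" operation="continue"
         variable="Input2">
  <correlations>
    <correlation set="OrderSet" initiate="no"/>
  </correlations>
</receive>
```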
Let's look at how this works at runtime to make sense of
all this. Suppose the application calls the
start operation once and
the value of
orderNumber is
123. The process then calls
start
again with an
orderNumber value of
124. Later, if the
value of
orderNumber contains
123 when the application calls
continue,
then the first process instance receives the
call.
The property that is used for
correlation could change as the process continues. For
example, you might have a process that is correlated on
a customer identifier when the process starts. Later, if
the customer orders something, the rest of the process
might need to be correlated on the order number. Notice
in Figure 19 that we selected Input as the direction
(hence the
Direction in Figure 20 was set to
Receive).
You could also select Output to initiate another correlation
set when a reply of an invoke
activity is received.
Long running processes and microflows
There are two types of business processes: long running and microflows. A microflow is a business process that runs as a single transaction. A microflow is non-interruptible, which means that it cannot contain a wait or human task activity or more than one receive activity, since each of these requires the process to pause. It also can only have compensation on activities, not compensation handlers, since no state is preserved as the process runs.
A long running process, on the other hand, can stop running and all features are permitted. The state is preserved as the process runs. If you restart the server, the process will still be running and in the same state where it left off. Typically, components implemented as long running processes are called asynchronous, which means that a client would call them, and then proceed to do other work while it waits for a reply.
One final thing to point out before you get back to work and build a business process: WebSphere Integration Developer supports the Business Process Execution Language for Web Services (BPEL4WS) Specification, but it also provides convenient enhancements. You can disable these extensions if you want to ensure your process can run on another BPEL-compliant server by checking Disable WebSphere Process Server BPEL Extensions when you use the New Business Process wizard. As a side note, in case you haven't discovered it, rather than right-clicking a component and selecting Generate Implementation, as we have shown you, you can also create implementations using a wizard.
Here's a list of some of the enhancements that are available:
- Embedded human tasks
- Java expressions and Java snippets
- Microflows
- Valid-from setting
- Additional properties for each process element such as display name, description, and documentation.
Let's have some fun and use what we learned to enhance
the Order Processing application -- the one you put together
in the
third article.
We implemented the
ShippingProcess component
with a trivial business process: it just
makes a call to the
ShippingTask service, and then
notifies the
ProcessOrder component that the ShippingTask
service shipped the order. Let's build on that and implement it with more
complex logic in the process. We also included the complete application
(orderprocessingcompleteprocess.zip)
as a download if you just want to skip ahead and
browse the complete business process.
Suppose your boss tells you that it is possible that not all items in an order will be shipped at once. If an item is out of stock, the company policy is to ship what is available, and then ship the rest whenever the stock is replenished. If it turns out, however, that it will be more than five days until the item to be shipped will be restocked, then the process completes and notifies the customer that it did not ship all of the items.
First, let's import the project that we built in the
third article. To save you some time and let you
concentrate on just the new business process, the latest
version of the module has the necessary additional
interfaces and business objects already added, so be
sure to start with the project in the
orderprocessingemptyprocess.zip file.
- Download the
orderprocessingemptyprocess.zipfile from the Downloads section.
- Click File => Import => Project Interchange, and then click Next.
- Click Browse next to the From zip file field. Browse to the
orderprocessingemptyprocess.zipfile that you just downloaded, and click Open. This zip file is the Project Interchange file.
- Check OrderProcessing, and then click Finish. The OrderProcessing module opens in the Business Integration view and, as the project builds, you see a "Building workspace" message in the lower right corner of the workbench. Wait for building to complete.
- Expand the OrderProcessing module and then double-click OrderProcessing to open the assembly editor.
- Double-click the ShippingProcess component in the Assembly editor to open the business process editor. Figure 21 shows the business process that you will enhance.
Figure 21. The ShippingProcess business process
Let's quickly recap what we have in Figure 21. The
business process adheres to the
Shipping interface. You
can double-click the Shipping interface in the Business
Integration view to open it in the interface editor to
browse it. You'll see that it has one operation,
shipOrder, which takes an
order business object as
input. So, the fact that we used this interface means
that a call to the
shipOrder operation of the Shipping service invokes the process. The data
contained within the
Order object will be available for
use within the process.
Because the
ShippingOrder component was wired to the
ProcessOrder
and
ShippingTask components, the generation of the process implementation
created two reference partners:
ProcessOrderPartner and
ShippingTaskPartner
respectively. As you might recall from the
third article,
you will use these to make service calls to
those components.
Based on the rules specified earlier, you need to have
the process continue trying to ship items for the order
until either all items are shipped, or until the
restocking time is too long to wait. You can use a
WhileLoop activity for this. The loop needs to determine
when to exit, so in the next steps, you will also create
the
shippingComplete variable which, when set to
true,
will cause the loop to exit. Feel free to review the
section on the visual snippet editor in the
fourth article
before you proceed, since we won't elaborate on
the steps we take to create the visual loop condition.
Creating and initializing variables
Before we get to that though, we need some global variables to use throughout the process. Let's create them in the next steps:
- Right-click in the Variables section and select Add Variable, and then change the name to
shippingComplete.
- In the Details section of the Properties view for the new variable, ensure Data Type is checked, click Browse, and then select boolean from the list of types.
- In the same manner, create a
shippingFaultvariable with the ShippingFault type.
Just a note about that last step: in the original
OrderProcessing module, the
ShippingTask service
returned an
Order business object, but that business
object doesn't contain some information that you need for
the proposed changes, namely the quantity shipped or
restocking information. Therefore, in the new version of
the application you downloaded, there is a new
ShippedOrder business object that contains the
quantityShipped and
restockDays attributes. We also
changed the interface for the
ShippingTask component to return this new business object.
Next, you need to initialize some of the variables that the process uses later.
- Right-click the ShippingTask activity, and then select Insert Before => Assign. Change the name to
Initialize.
- In the From list in the details for the Initialize activity, select Fixed Value, and then type
falsefor the value.
- In the To list, select Variable, and then select shippingComplete as Figure 22 shows.
Figure 22. The Initialize assign activity details
You have just specified that the
shippingComplete
variable has a value of
false immediately after
the process moves past the
Receive activity.
Now we can get to the loop part of the process:
- Right-click the ShippingTask activity and select Insert Before => While Loop. Rename it to
LoopUntilShipped.
- In the Details section of the Properties view for the loop, click Create a New Condition.
- Drag the shippingComplete variable from the Variables section of the visual snippet editor to the canvas.
- Right-click the editor and select Add => Standard, select inverse under the logic category, click OK, and then click the editor canvas.
- Delete the false node since it isn't needed.
- Connect the shippingComplete node to the inverse node, and the inverse node to the return node so that your loop condition is the same as Figure 23.
- Save the business process editor
Figure 23. A loop condition
Your process should now look as in Figure 23. You might
want to look ahead to Figure 29 to see what the final
process looks like to help keep things in perspective. You need
the
inverse node because you want the loop
to exit when
shippingComplete is
true, but
the loop needs a value of
false for the loop to terminate.
The red
'x' in
LoopUntilShipped occurs because there are no
activities within the loop. This is an error
because you now have an infinite loop. After all,
with no logic inside the loop, there is no way for
shippedOrder to be set to
true.
Implementing logic within the loop
The next steps
begin to fill in the logic within the loop, which is to
invoke the
ShippingTask service and then determine
whether to wait and try again. Since we have already
covered visual snippets in two of the previous articles,
we will not bore you with very detailed steps to create
snippets any more.
- Drag the ShippingTask activity inside the LoopUntilShipped activity.
- Right-click within the LoopUntilShipped activity, select Add => Snippet, and then rename it to
CalcTotalShipped.
- Fill in the visual snippet logic as in Figure 24. The
addnode is under the math category of the standard visual snippets.
Figure 24. Calculating the amount shipped
You now have an activity to do the shipping work. The
value returned by the
ShippingTask activity contains the
number of items
ShippingTask shipped, so the snippet will
increment the
quantityShipped variable. You
will use that variable to make the decision of what to
do next following the steps that create the choice
activity:
- Right-click again within the loop activity, select Add => Choice, and then rename it to
DetermineNextAction.
- Right-click the DetermineNextAction activity, select Add => Case, right-click again and select Add => Otherwise.
- In the Description tab of the properties for the first Case, type
NotEnoughShipped, and then, in the Details tab, click Create a New Condition. This opens the visual snippet editor.
- Fill in the snippet logic as in Figure 25. The
andnode is under the logic category of the standard visual snippets.
- In the same manner, rename the second case to
EnoughShipped, and then fill in the snippet logic as in Figure 26.
Figure 25. The NotEnoughShipped case
Figure 26. The EnoughShipped case
The logic for the first case element is to return
true
if not enough items were shipped and the number of days
left for the items to be restocked is less than five.
The other case element of the loop handles
where the full order was shipped. There's no condition
to define for the
Otherwise element, since it is the path
that will be taken if the first two cases do not
evaluate to
true.
The next step is to create some
activities for each case. In fact, if you have saved the
process, you will see a red âx' next to the
NotEnoughShipped case because at least one activity is
expected. The next steps fill in the activities for
each path that your logic can take in the choice activity.
If the
NotEnoughShipped case is
true, then we want to
wait a while, and then try shipping the rest of the
items again. For this, we need to add a wait activity.
To ship the rest of the items, we just need to let the
loop iterate again, since the activity that calls to the
shipping component is at the start of the loop.
- Right-click the NotEnoughShipped case element and select Add => Wait.
- In the Details section of the Properties view, select Java for the expression language and then select Duration.
- Change the value
0to
10in the created expression. This value is the number of seconds to wait.
When the wait activity is reached, the process stops
running for ten seconds, and then, since there are no
further activities after the wait, and since the
shippingComplete variable is still
false, the next
iteration of the loop occurs. When all the items
have shipped, we want to exit the loop. To do this,
we simply set
shippingComplete to
true. Let's fill in
the case when enough items are shipped:
- Right-click the EnoughShipped case element, select Add => Assign, and then change the name to
SetShippingComplete.
- In the same manner as the Initialize activity earlier, set the shippingComplete variable to
true.
The only case that is left, when the first two cases are
false, is that more items were shipped than were
ordered, or not enough were shipped, but the restocking
time is too long. This is a problem, so we need to throw
a fault here. To keep things simple, we will just report
that a problem occurred.
- Add another assign under the Otherwise element and rename it to
SetErrorMsg.
- In the details for the assign activity, set the From list to Fixed Value and type
"Shipping problem"(including the quotes) for the fixed value.
- Set the To list to Variable and select shippingFault => message.
- Click New at the bottom-left of the assignment details, then create another assignment of the fixed value of
trueto the shippingComplete variable. (Assignment activities can have multiple assignments.)
- Add a throw activity under the SetErrorMsg activity.
- In the throw activity details, select User-defined for the fault type, leave the default namespace, and set the fault name to
shippingFault.
- For the fault variable, click Browse and select shippingFault from the list.
The next step is to define the fault handler to catch
the
shippingFault:
- Right-click the LoopUntilShipped activity, and then select Insert Before => Scope.
- Left-click the LoopUntilShipped activity, and then drag it into the scope activity.
- Right-click the scope activity and select Add Fault Handler.
- In the details for the Catch element in the fault handler, select User-defined for the fault type. Note that clicking the orange and red 'x' at the top-right of the scope activity toggles between showing and hiding the fault handler.
- Type
ShippingFaultfor the Fault Name and then type
shippingFaultfor the Variable.
- Browse to ShippingFault for the data type.
- Under the ShippingFault element in the fault handler, add a snippet activity, and then fill in the details as in Figure 27.
Figure 27. Fault snippet contents
Notifying the client that the order shipped
The last step is to notify the client that the order
shipped with the updated order values (the quantity
shipped will now contain the amount that was actually
shipped). The
NotifyShipped activity is
already in place as the last activity of the process to
do just that. Your process should now look like the
one in Figure 28.
Figure 28. The complete shipping process
Let's quickly test the component now. If you are
familiar with running the entire module as the
third article
showed you, you can try running that. We'll just
quickly show you the steps to test the
ShippingProcess
component here.
- Right-click on the ShippingProcess component in the OrderProcessing assembly editor and select Test Component.
- In the test client that opens, enter
1for orderNumber,
10for quantityOrdered, leave quantityShipped as
0and enter any values for productID and customerID.
- Click Continue. Select a server when the deployment location dialog opens and click Finish.
- When the test client for the
ShippingTaskcomponent receives the emulate event (since you selected to test one component in the module, the rest will be emulated by default), enter
9for quantityShipped and
3for restockDays.
- Click Continue.
This data satisfies the
NotEnoughShipped case and thus causes the
Wait activity to run. Ten seconds later, the loop
runs again.
- When the
ShippingTaskcomponent receives the emulate event again, enter
1for quantityShipped, leave restockDays at
0, and then click Continue.
- When the
ProcessOrdercomponent receives the emulate event, just click Continue.
Figure 29. Testing the business process
The process has now completed. It has shipped the customer their ten items and notified the
ProcessOrder component.
Business processes are a key part of your business integration application. You can use them to the define steps of your business logic, to coordinate and choreograph other services, and to involve people. Business processes can execute their activities in sequence or in parallel, over a very short period of time, or a much longer duration spanning days, weeks or longer. This article and the previous article on business state machines showed you two important ways to define your application logic.
Information about download methods
Learn
- Business Process Execution Language for Web Services version 1.1
- Business Process with BPEL4WS: Understanding BPEL4WS
- WebSphere Integration Developer product information
- WebSphere Process Server product information
- WebSphere Process Server: IBM's new foundation for SOA
- Build a Hello World SOA application
- Service Component Architecture
- Common Event Infrastructure
Get products and technologies
Discuss
Richard Gregory is a software developer at the IBM Toronto Lab on the WebSphere Integration Developer team. His responsibilities include working on the evolution and delivery of test tools for WebSphere Integration Developer.
Jane is a Staff Software Developer at IBM Canada Ltd. She is responsible for developing the Business Process Executable Language (BPEL) and Business Rules debugger in WebSphere Integration Developer. Previously, she was the team lead of the WebSphere Studio Technical Support team. Jane received a bachelor in Electrical Engineering from the University of Waterloo in year 2000. She has extensive publishing experience, including numerous developerWorks articles. Jane was the lead author of an IBM Press book, An Introduction to Rational Application Developer, A Guided Tour.
_32<<. | http://www.ibm.com/developerworks/websphere/techjournal/0607_gregory/0607_gregory.html | crawl-003 | refinedweb | 8,306 | 52.29 |
Sometimes data is best shaped where the data is in the form of a wide table where the description is in a column header, and sometimes it is best shaped as as having the data descriptor as a variable within a tall table.
To begin with you may find it a little confusing what happens to the index field as we switch between different formats. But hang in there and you’ll get the hang of it!
Lets look at some examples, beginning as usual with creating a dataframe.
import pandas as pd df = pd.DataFrame() names = ['Gandolf', 'Gimli', 'Frodo', 'Legolas', 'Bilbo', 'Sam', 'Pippin', 'Boromir', 'Aragorn', 'Galadriel', 'Meriadoc'] types = ['Wizard', 'Dwarf', 'Hobbit', 'Elf', 'Hobbit', 'Hobbit', 'Hobbit', 'Man', 'Man', 'Elf', 'Hobbit'] magic = [10, 1, 4, 6, 4, 2, 0, 0, 2, 9, 0] aggression = [7, 10, 2, 5, 1, 6, 3, 8, 7, 2, 4] stealth = [8, 2, 5, 10, 5, 4 ,5, 3, 9, 10, 6] df['names'] = names df['type'] = types df['magic_power'] = magic df['aggression'] = aggression df['stealth'] = stealth
When we look at this table, the data descriptors are columns, and the data table is ’wide’.
print (df)
Stack and unstack
We can convert between the two formats of data with stack and unstack. To convert from a wide table to a tall and skinny, use stack. Notice this creates a more complex index which has two levels the first level is person id, and the second level is the data header. This is called a multi-index.
df_stacked = df.stack() print(df_stacked.head(20)) # pront forst 20 rows OUT: dtype: object
We can convert back to wide table with unstack. This recreates a single index for each line of data.
df_unstacked = df_stacked.unstack() print (df_unstacked)
Returning to our stacked data, we can convert our multi-index to two separate fields by resetting the index. By default this method names the separated index field ’level_0’ and ’level_1’ (multi-level indexes may have further levels as well), and the data field ’0’. Let’s rename them as well (comment out that row with a # to see what it would look like without renaming them). You can see the effect below:
reindexed_stacked_df = df_stacked.reset_index() reindexed_stacked_df.rename( columns={'level_0': 'ID', 'level_1': 'variable', 0:'value'},inplace=True) print (reindexed_stacked_df.head(20)) # print first 20 rows OUT: ID variable value 0 0 names Gandolf 1 0 type Wizard 2 0 magic_power 10 3 0 aggression 7 4 0 stealth 8 5 1 names Gimli 6 1 type Dwarf 7 1 magic_power 1 8 1 aggression 10 9 1 stealth 2 10 2 names Frodo 11 2 type Hobbit 12 2 magic_power 4 13 2 aggression 2 14 2 stealth 5 15 3 names Legolas 16 3 type Elf 17 3 magic_power 6 18 3 aggression 5 19 3 stealth 10
We can return to a multi-index, if we want to, by setting the index to the two fields to be combined. Whether a multi-index is preferred or not will depend on what you wish to do wit the dataframe, so it useful to know how to convert back and forth between multi-index and single-index.
reindexed_stacked_df.set_index(['ID', 'variable'], inplace=True) print (reindexed_stacked_df.head(20)) OUT: value ID variable
Melt and pivot
melt and pivot are like stack and unstack, but offer some other options.
melt de-pivots data (into a tall skinny table)
pivot will re-pivot data into a wide table.
Let’s return to our original dataframe created (which we called ’df’) and create a tall skinny table of selected fields using melt. We will separate out one or more of the fields, such as ’names’ as an ID field, as below:
unpivoted = df.melt(id_vars=['names'], value_vars=['type','magic_power']) print (unpivoted) OUT: names variable value 0 Gandolf type Wizard 1 Gimli type Dwarf 2 Frodo type Hobbit 3 Legolas type Elf 4 Bilbo type Hobbit 5 Sam type Hobbit 6 Pippin type Hobbit 7 Boromir type Man 8 Aragorn type Man 9 Galadriel type Elf 10 Meriadoc type Hobbit 11 Gandolf magic_power 10 12 Gimli magic_power 1 13 Frodo magic_power 4 14 Legolas magic_power 6 15 Bilbo magic_power 4 16 Sam magic_power 2 17 Pippin magic_power 0 18 Boromir magic_power 0 19 Aragorn magic_power 2 20 Galadriel magic_power 9 21 Meriadoc magic_power 0
And we can use the pivot method to re-pivot the data, defining which field identifies the data to be grouped together, which column contains the new column headers, and which field contains the data.
pivoted = unpivoted.pivot(index='names', columns='variable', values='value') print (pivoted_2) OUT: variable magic_power type names Aragorn 2 Man Bilbo 4 Hobbit Boromir 0 Man Frodo 4 Hobbit Galadriel 9 Elf Gandolf 10 Wizard Gimli 1 Dwarf Legolas 6 Elf Meriadoc 0 Hobbit Pippin 0 Hobbit Sam 2 Hobbit
One thought on “32. Reshaping Pandas data with stack, unstack, pivot and melt” | https://pythonhealthcare.org/2018/04/08/32-reshaping-pandas-data-with-stack-unstack-pivot-and-melt/ | CC-MAIN-2020-29 | refinedweb | 814 | 51.11 |
Microsoft’s Windows Phone 7 (WP7) OS garnered good reviews upon introduction, but thus far, WP7 phones made by HTC, Samsung, LG, and Dell haven’t sold very well. Some speculate that’s because of poor marketing and/or lack of enthusiasm on the part of the wireless carriers. Others think potential buyers are waiting for the first WP7 phones from Nokia, which has committed to making Windows Phone its principal platform. While these are undoubtedly factors, many of us have held back on adopting WP7 because of all the features, considered standard on Android, iPhone, and other platforms (and even on its own predecessor, Windows Mobile) that are missing in action in WP7 v1.
Recently the company unveiled its first major update, code-named Mango. Which of these critical deficiencies will it fix? Will it be enough to pep up sales? And which important shortcomings does it fail to address?
What Mango brings to the table
Mango is still in the testing stages and won’t be available on phones for at least a few more months. A launch date hasn’t been announced but most industry pundits are predicting an autumn release date to compete with the iPhone 5 (expected to be out in September) and Android holiday offerings. According to Steve Ballmer, Mango includes more than 500 new features. But how many of those really matter to users?
The user interface
The distinguishing feature of the WP7 interface — which is being carried over to the next version of the Windows desktop operating system — is Live Tiles. These are more useful than icons because they can provide updated information about the apps they represent. Tiles have been improved in Mango, with tile notifications now supporting two-sided application and secondary tiles. Tiles pinned to the start screen flip periodically, making them more animated and more informative, and an app can have more than one tile pinned to the start screen (for example, if you want a tiles for weather information in two locations).
Multitasking at last
Mango does address one of the most often criticized shortcomings of WP7 by adding multitasking support. Multitasking, along with the ability to copy and paste, were two basic “missing” features that Microsoft absolutely had to include as quickly as possible in order to be competitive. (Copy and paste was added to WP7 via its first minor update, “NoDo,” released in March).
Even though the first version of the iPhone didn’t multitask, either, WP7 came out of the gate competing with iOS 4, which supports multitasking. Android users already had the feature, and perhaps more important, those coming to WP7 from Windows Mobile were used to having it. And even though some would argue that multitasking doesn’t matter on a handheld device, and even that it causes more trouble than it’s worth, it was important for Windows phones to be able to check off that box in a features comparison list of the major phone platforms. The challenge was to be able to implement multitasking without draining the battery and using up all the memory. Here’s a video that explains how multitasking works in Mango.
What does it mean to the user? You can now run audio apps in the background, with music continuing to play when you launch other apps, or start a file download that continues after you navigate away to a different app.
In my opinion, the email client was already one of the most impressive things about WP7. Its version of Mobile Outlook is clean and easy to read and navigate. I like the one-line preview of the message below the subject line, the large font used for the sender’s name, and the ease with which you can switch from all contents of the Inbox to unread, flagged, or urgent messages. The WP7 mail client is shown in Figure A on the left, in comparison with the HTC Droid Incredible’s email client on the right.
Figure A
Mango makes WP7’s already good email client even better.
Mango adds several improvements to the email experience:
- The Inbox shows mail from multiple accounts on one page (universal Inbox).
- You can view conversation threads, like you do in Outlook 2010, and expand or collapse them.
- You can view all communications between you and a particular contact, including not just email but also SMS, Windows Live, and Facebook communications.
- Text-to-speech feature that reads incoming messages and speech-to-text for composing or replying to messages.
- You can set up two tiles, one for your work email account and one for your personal account.
All of these features will make it much easier for on-the-go professionals to use email more quickly and effectively. I’m excited about the speech recognition integration, which helps solve the problems that arise from the small size of the keyboard and screen on a phone.
A better browsing experience
WP7 shipped with the mobile version of Internet Explorer 7. Mango adds IE 9, which will offer some of the same benefits as the desktop version of IE 9, including HTML5 and CSS 3 support, as well as hardware acceleration for graphics that will enhance performance. In fact, tests have shown Mango’s IE 9 outperforming Safari in iOS 4 and possibly iOS 5 as well. It reportedly doesn’t have Flash and Silverlight support (at least, at this time).
Mango changes the look of the browser, moving the address box from the top of the page to the bottom. The three soft buttons that were at the bottom of the browser window (Add Favorite, Favorites, and Tabs) are gone, giving you more room for the display of the web page. The address box no longer disappears when you switch to landscape orientation, so you can still enter addresses when you’re in landscape mode.
Brian Klug over on AnandTech put the mobile IE 9 browser through its paces and shares the results of performance and standards compliance testing in this article. His conclusion: IE 9 makes big improvements to the web browsing experience.
Developers, developers, developers
The added features and functionality discussed above are aimed at enhancing the user experience, but the success of Windows Phone will ultimately be closely tied to the apps available for the platform, and that means Microsoft has to woo developers. No matter how much users want a particular app, developers can’t deliver it without the necessary APIs. Mango adds new APIs that will allow for development of additional types of apps:
- VoIP and video chat apps that need direct access to the network.
- Apps that need local SQL CE databases.
- Apps that need direct access to the camera or gyro.
- Apps that need (read-only) access to contacts and calendar information.
See this MSDN article for a list of new namespaces and classes.
The Windows Phone 7.1 Developer Tools provide for a great deal of new functionality. Apps will be able to use TCP and UDP protocols to communicate over sockets, enabling two-way communications with cloud services or multi-player gaming. Developers can also use Silverlight and XNA in a single app instead of choosing one or the other, and Visual Basic is available for both Silverlight and XNA Framework apps. Cryptography APIs allow apps to store login credentials in encrypted form so that users don’t have to log in every time they use the app.
Developers will also be happy to know that apps that work on Windows Phone 7.0 will continue to work on Windows Phone 7.1 devices.
How to make it even better
We all have our own priorities and private wish lists for features we’d like to see added to Mango (or the following update). Even though I’m impressed with some of the improvements, I’m disappointed that a couple of deal-breaking problems still haven’t been addressed. Before I can commit to a Windows Phone as my primary handheld device, it must have:
- Tethering capability. This is vital. When I travel — or on the rare occasions that my home Internet connection goes down — I need to be able to connect my laptop and/or tablet to the Internet using my phone’s data connection. There’s no compromising on this one.
- Access to the full file system from my computer. I hate the requirement to use Zune to transfer files between phone and PC, just as the requirement to iTunes was a deal-killer for me when I considered an iPhone. I can plug my Android phone into the computer via USB and access its files in Windows Explorer. I could do it with my old WinMo device, too. That’s what I want — no, that’s what I require — from a Windows phone.
There are other features that would be very nice to have, but those are the biggies. Once we get that out of the way, we can focus on making a pretty and usable interface even more so. Many of the things I’d like to see wouldn’t be difficult at all to do. Why do we have so few (and such ugly) color choices for the tiles? Why in the world can’t we set a background picture behind the tiles? Heck, some folks would even like to see the Aero UI on Windows phone, and a couple of things I really miss a lot when going from a Droid to a Windows phone are the notification bar at the top and the all-important fourth button (the menu button). Take a look at how one creative high school student envisions a more attractive look for Windows phone.
Summary
Microsoft has a reputation for not really getting any product right until the third try: Windows 3.x, IE 3, and many more. Progress is made incrementally, but it’s not until that magical v3 that things really come together. In one sense, Mango is v2 of Windows Phone (as opposed to Windows Mobile). It’s the first major update to the completely redesigned OS. It goes a long way toward making Windows Phone more competitive with iPhone and Android, but it doesn’t go quite far enough to win me (and many other Windows fan) over. I’m hoping the third time will be a charm, and the next major update — running on slick new Nokia hardware — will have all my “musts” and more so I can finally say I’m “all in” with Windows Phone.
Do you think Mango will put Microsoft back in the smartphone game? Post your thoughts in the discussion. | http://www.techrepublic.com/blog/smartphones/will-mango-put-microsoft-back-in-the-smartphone-game/3010 | crawl-003 | refinedweb | 1,772 | 60.14 |
Get rotational vector that matches normal [SOLVED]
On 09/02/2015 at 03:58, xxxxxxxx wrote:
I have a plane.
I can get the normal vector like this:
obj = GetObjectByName("Plane")
obj_mat = obj.GetMg()
p = obj.GetPolygon(0)
points = obj.GetAllPoints()
# Get global coords of points.
p1, p2, p3 = points[p.a] * obj_mat, points[p.b] * obj_mat, points[p.c] * obj_mat
# Calc the plane normal from three of the points.
vec_normal = (p2 - p1).Cross(p3 - p1).GetNormalized()
print vec_normal
How do I get the same angle with a null oriented the same way as the plane, or straight from the plane rotation?
I tried:
obj = GetObjectByName("Plane")
obj_mat = obj.GetMg()
obj_rot_vec, w = c4d.utils.MatrixToRotAxis(obj_mat)
print obj_rot_vec
But, this didn't give me the same rotation vector. The 'w' rotation isn't needed.
I just want to get a 'normal' vector from the direction the null is pointed.
Thanks.
On 09/02/2015 at 05:54, xxxxxxxx wrote:
Hello Christopher,
This matrix transformation might help you.
It gives you a null whose Z axis is rotated into the face-normal direction:
import c4d
from c4d import utils

def main():
    if not op: return
    if not op.IsInstanceOf(c4d.Opolygon): return
    print op
    op_mat = op.GetMg()
    p = op.GetPolygon(0)
    points = op.GetAllPoints()
    # Get global coords of points.
    p1, p2, p3 = points[p.a] * op_mat, points[p.b] * op_mat, points[p.c] * op_mat
    # Calc the plane normal from three of the points.
    vec_normal = (p2 - p1).Cross(p3 - p1).GetNormalized()
    print vec_normal
    newMatrix = c4d.Matrix()
    newMatrix.v3 = vec_normal  # in Z direction (already unit length)
    # Normalize the cross product so the basis stays orthonormal
    # (otherwise the null picks up a hidden scale).
    newMatrix.v1 = c4d.Vector(0, 1, 0).Cross(newMatrix.v3).GetNormalized()  # in X direction
    newMatrix.v2 = newMatrix.v3.Cross(newMatrix.v1)  # in Y direction
    null = c4d.BaseObject(c4d.Onull)
    null.SetMg(newMatrix)
    doc.InsertObject(null)
    print newMatrix
    c4d.EventAdd()

if __name__ == '__main__':
    main()
Best wishes
Martin
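Martin's frame construction is plain vector algebra. Here is a rough stand-alone sketch of the same two-cross-product idea in ordinary Python (hypothetical tuple-based helpers, no c4d required), mainly to show why the result is an orthonormal X/Y/Z basis:

```python
# Hypothetical plain-Python stand-in for the c4d frame construction:
# build an orthonormal X/Y/Z basis from a face normal.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    l = (v[0]**2 + v[1]**2 + v[2]**2) ** 0.5
    return (v[0]/l, v[1]/l, v[2]/l)

def frame_from_normal(n):
    v3 = normalize(n)                     # Z axis points along the normal
    v1 = normalize(cross((0, 1, 0), v3))  # X axis: world-up crossed with Z
    v2 = cross(v3, v1)                    # Y axis, already unit length
    return v1, v2, v3
```

Note the degenerate case: if the normal is parallel to world Y, the first cross product is zero and the frame is undefined; real code should fall back to a different up vector there.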
On 09/02/2015 at 11:26, xxxxxxxx wrote:
Thanks Martin, but what if I never have a plane?
I want to calculate the same 'normal' rotation from an actual object rotation.
I'd rather not create a plane, transfer the matrix, calculate the normal, and then delete the plane.
That seems kind of hackish.
Chris
On 09/02/2015 at 11:37, xxxxxxxx wrote:
Hi Chris,
I'm not quite sure I understand your problem right.
What do you mean by 'normal' rotation?
The rotation of an object is given by three vectors.
#the components of a c4d matrix (rotation in x,y,z and the offset)
matr = op.GetMg()
print matr.v1, matr.v2, matr.v3, matr.off
If you want to set an object (op2) to the same rotation as another object (op), you simply overwrite the
matrix like:
matr = op.GetMg()
op2.SetMg(matr)
Hope this helps?
Martin
On 09/02/2015 at 12:27, xxxxxxxx wrote:
I'm using this plane class to calculate distance and intersections with a plane:
The problem is, to initialize the class it wants a rotation. The only rotation format I've found that will make it work properly is a calculated 'normal' rotation like above. But, I'm not starting with a plane, I'm starting with a rotated object in my program, and calculating distance and intersection to a virtual plane aligned to the same rotation as the object.
I've tried this:
obj_rot_vec, w = c4d.utils.MatrixToRotAxis(obj_mat)
But the rotation returned is not the same as a calculated 'normal' rotation with my method above.
So to use this class, I've had to create a plane, set its rotation to the same as the object, calculate the rotation of the plane poly normal, and then give this value to the class to initialize it.
I'm missing something.
Isn't there some way to get the rotation that the class wants from the rotated object directly.
It isn't eulers.
It isn't the 3 rotation vectors in the matrix.
For a polygon, it is the direction of the normal.
But I don't have a polygon, I have a rotated object.
It is a rotation without the 'heading' or 'w' value.
I thought...
c4d.utils.MatrixToRotAxis
would return the same thing a calculating a plane poly normal.
But it doesn't.
What am I missing?
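As an aside on why MatrixToRotAxis gives a different vector: it returns the axis the rotation happens around, not the direction the object ends up facing. A plain-Python sketch, using a standard rotation-about-Y matrix rather than the c4d API:

```python
import math

# For a 90-degree rotation about world Y, the axis-angle axis is (0, 1, 0),
# but the rotated Z axis (the "facing" direction a polygon normal would give)
# is (1, 0, 0). The two only coincide for special orientations.

a = math.pi / 2
# Columns of the rotation matrix R_y(a): where the X, Y and Z axes end up.
x_col = (math.cos(a), 0.0, -math.sin(a))
y_col = (0.0, 1.0, 0.0)
z_col = (math.sin(a), 0.0, math.cos(a))

rotation_axis = y_col  # what axis-angle decomposition returns
facing = z_col         # what the polygon-normal approach returns
print(rotation_axis, facing)
```

So extracting a rotation axis and extracting a facing direction are genuinely different operations on the same matrix.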
On 09/02/2015 at 12:58, xxxxxxxx wrote:
I see.
it needs a normal and a position, two simple vectors, to initialize.
n is the normal of the plane and pos is a point lying on the plane.
def __init__(self, pos, n):
    super(Plane, self).__init__()
    self.pos = pos
    self.n = n.GetNormalized()
    if DEBUG: print "self.pos = %r, self.n = %r" % (pos, n)
If you want to use the XY plane corresponding to the object and its rotation,
just use the Z vector of the object's matrix as n, which is z_vec = op.GetMg().v3,
and the offset as pos: offset = op.GetMg().off
If you want to use the XZ plane corresponding to the object and its rotation,
just use the Y vector of the object's matrix as n, which is y_vec = op.GetMg().v2,
and the offset as pos: offset = op.GetMg().off
and so on....
Best wishes
Martin
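Once the plane is described this way, by a point on it and a unit normal, signed distance reduces to a single dot product. A plain-Python sketch of that idea (hypothetical helper functions, not the plane class the thread links to):

```python
# Signed distance from a point to a plane given as (pos, n) with unit n:
# distance = dot(point - pos, n). Positive on the side n points toward.

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

# Plane through pos with unit normal n (here: the world XZ plane).
pos, n = (0.0, 0.0, 0.0), (0.0, 1.0, 0.0)

def signed_distance(point):
    return dot(sub(point, pos), n)

print(signed_distance((3.0, 5.0, -2.0)))  # 5.0: height above the XZ plane
```

Ray-plane intersection follows the same pattern: solve dot(origin + t * direction - pos, n) = 0 for t.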
On 09/02/2015 at 14:13, xxxxxxxx wrote:
Martin,
Of course... if you just take one of the 3 matrix rotation vectors you are getting the equivalent of a plane without the rotation around the axis perpendicular to the plane.
Thanks, I knew it was something simple like that.
There is a DataGridView in which the columns are created manually, something like this:
dataGridView.Columns.Add("Column 1", "Heading 1");
dataGridView.Columns.Add("Column 2", "Heading 2");
dataGridView.Columns.Add("Column 3", "Heading 3");
How can I make it so that, when the data source is assigned through the DataSource
property
dataGridView.DataSource = mySource;
values from the desired properties of my source go into the desired columns of the DataGridView? For example, let's say my data source contains properties A, B, and C, and I want:
- property A to be displayed in the column named "Column 3"
- property B in the column named "Column 2"
- and property C in the column named "Column 1"
P.S. Now, to achieve the desired result, you have to loop through the data source and add data from it to the DataGridView as follows:
dataGridView.Rows.Add("Value for cell 1", "Value for cell 2", "Value for cell 3");
Answer 1, authority 100%
You need property DataPropertyName . Let me give you a small example. Let’s have a class that describes our data:
public class MyClass
{
    public MyClass(string a, string b, string c)
    {
        A = a;
        B = b;
        C = c;
    }

    public string A { get; set; }
    public string B { get; set; }
    public string C { get; set; }
}
Add columns to DataGridView:
dataGridView.Columns.Add(new DataGridViewTextBoxColumn
{
    DataPropertyName = "A",
    HeaderText = "Header 1"
});
dataGridView.Columns.Add(new DataGridViewTextBoxColumn
{
    DataPropertyName = "B",
    HeaderText = "Header 2"
});
dataGridView.Columns.Add(new DataGridViewTextBoxColumn
{
    DataPropertyName = "C",
    HeaderText = "Header 3"
});
Let’s create a collection and specify it as a data source:
var data = new List<MyClass>
{
    new MyClass("1", "2", "3"),
    new MyClass("4", "5", "6"),
    new MyClass("7", "8", "9")
};
dataGridView.DataSource = data;
That’s all. | https://computicket.co.za/c-datagridview-data-binding/ | CC-MAIN-2022-27 | refinedweb | 288 | 52.19 |