We are using Toggl purely for Time Tracking into JIRA. No Reporting or anything else.
But we still have to create Projects in Toggl and map them with JIRA to have it working properly.
I'm not clear why this is necessary, as the JIRA Issue Number should be unique enough to map them to JIRA Issues.
In my understanding it should be possible to remove the requirement to create Toggl Projects and the mapping, as this creates a lot of hassle for us and leads to lost time in JIRA.
Thanks a lot.
Michael
Hello Michael
Joggler is designed to map JIRA projects to Toggl projects. Typically you have a JIRA project with several issues that you are working on. On each of the Issues, the Joggler Timetracking button is displayed. When you click it, Joggler connects to Toggl and looks up the corresponding Toggl project that is configured in the Joggler settings.
We believe that most development teams have a number of projects that each contains a bunch (potentially a big bunch) of issues. In such a case, the initial minimal overhead to map JIRA to Toggl projects is negligible compared to the amount of time that is tracked on the issues.
Hope this helps,
Till
Hi Till
Thanks for your fast reply
So I understand that it is a performance issue? So the time to map a Toggl Entry to potentially all JIRA Issues would be too big? So you minimize the number of issues that could match by first having only a subset of JIRA Issues within the project?
It's just a problem with the process: a person who creates a new Project in JIRA has to go to Toggl, create a project, go back to JIRA, and map the project before they can start Toggl. Also, the person who has the rights to create a project does not necessarily have the rights to edit the settings of add-ons like Toggl.
We've been using it for a couple of weeks and it's great, but this issue prevents me from really getting into it and fully deciding that it is worth the money to put into Joggler.
Hi Michael,
sorry, I was not clear enough: it is a design decision, not a performance issue.
Joggler assumes that you would want to report the time on the project you are working on. Hence you have to sync the two namespaces (project names in Toggl and project names in Jira) manually once. We believe it is worth the initial effort.
Cheers,
Till.
Hi, I've been struggling a lot with trying to create a scope / namespace / environment / something for embedded scripts to run in. Putting in instances with (in 2.0) SetVariable is fine, but so far I've not managed to put classes there (I wish classes were objects like in Python so things would just work... in fact I'm considering porting this script server thing other people have earlier started in C# to IronPython to perhaps get rid of these problems). So I hope there is either something nice in the new Microsoft.Scripting API that I've not found yet, or something in the .NET reflection system that I am overlooking.

Have tried this kind of thing:

System.Type Timer = new System.Threading.Timer((System.Threading.TimerCallback)this.DummyTimerCallback).GetType();
Script.SetVariable("Timer", Timer);

... but of course that just puts the (useless?) reflection class in the namespace, not the actual class that could be constructed? Perhaps that typeinfo type can be used to instantiate with some Invoke thing, but obviously I don't want to expose that to scripters. Dunno if I could hide that in some wrapper functions implemented on the Python side.

Previously I've used this kind of technique to expose some modules for completely trusted scripts (this is from 1.1, in a project where we used Mogre - we are not using it in this project):

Mogre.Timer t = new Mogre.Timer(); // just to get the Mogre module
python.LoadAssembly(t.GetType().Assembly); // gives 'Mogre'

So after that, in 1.1, 'import Mogre' worked in the scripts and that was fine there. But now I'd like to give the scripts Threading.Timer, but not the whole System assembly. I've seen the CreateModule etc. methods in ipy, and also now the new CreateScope thing in Microsoft.Scripting, but did not find any documentation on that - and anyway the same problem seems to be there: I can't put references to classes as values in the scopes.

Should I indeed try porting the whole thing to IronPython to perhaps get rid of this problem, or is there some nice way to expose assemblies or parts of them? Sorry if this is FAQ stuff but I did not find it in any of the embedding examples etc.

Patiently looking forward to being shown the right path,
~Toni
If you've used React before, chances are that you've required some form of state management.
If we take, for example, an eCommerce site, it’s likely that your application is going to have to look after a variety of things, such as what items are in stock, and what items a user has added to their shopping cart. These require state management, which leads us — in React — to use class-based components.
An example of a class-based component for our eCommerce store may look something like this:
class App extends Component {
  constructor(props) {
    super(props);
    this.state = {
      items: [
        {
          id: 0,
          name: 'Banana',
          price: 25,
          stock: 2
        },
        {
          id: 1,
          name: 'Pineapple',
          price: 90,
          stock: 5
        },
        {
          id: 2,
          name: 'Orange',
          price: 20,
          stock: 8
        },
        {
          id: 3,
          name: 'Apple',
          price: 50,
          stock: 1
        },
      ],
      cart: [],
      total: 0,
    };
  }
}
So far, this makes sense. But what about the components in our app that simply handle presentation and do not require any internal state?
Well, that’s where we can start to use functional components instead.
Continuing on with our eCommerce example, each product that we show in the app is likely to be its own component — let’s refer to this component as Product.js.
Now inside of Product.js, there may very well be smaller sub-components, such as buttons that can be clicked on to add/remove items to the shopping cart.
Let’s examine a component we may have called ProductAdd.js, which is used to add a product to the shopping cart. Now we may initially, out of habit, look to create a class component for this button which could look something like this:
import React, { Component } from 'react';

class ProductAdd extends Component {
  render() {
    return (
      <button onClick={(e) => this.props.addToCart(this.props.id)}> + </button>
    );
  }
}

export default ProductAdd;
It’s not the end of the world if we write this, but given that it requires no internal state, we could rewrite this as a functional component instead. It would then end up looking something like this:
import React from 'react';

const ProductAdd = (props) => {
  return (
    <button onClick={(e) => props.addToCart(props.id)}>+</button>
  );
};

export default ProductAdd;
One thing you may also notice here is that we are still able to pass props to the component which can be in the form of either data or a function.
So with the exception of handling the internal state of a component, functional components can do the same things that a class component can do.
With the example we’ve used here, we could decide to go even further up the chain and rewrite our Product.js as a functional component, as the state of the file would have been passed down from the main App.js file that we showed at the start of the article. So there’s quite a bit of refactoring that we could be doing here.
But given that our entirely class-based component application is working just fine, why would we bother taking the time to refactor?
Let’s take a look at three reasons why.
1. No Class means no ‘this’
It’s always advantageous if you don’t have to use ‘this’ when writing your Javascript code. And fine, you may be reading this and feel that you already have a good enough grasp of the ‘this’ keyword. But when it comes to debugging and overall readability, not having to reason about the scope of ‘this’ is always a plus.
We've all had moments in time where we've had to debug something in React and found that some part of our app wasn't working as expected because we'd referred to a bit of state as something, rather than this.something. The issue of this is non-existent with functional components.
And to add another bonus, not having to use this means that we also don’t have to use bind, which is an even more confusing concept to wrap your head around. So two fewer things to wrap your head around, which means two fewer tangles, which means cleaner, clearer code. Win win!
2. Fewer lines = better performance
As you may have noticed from our ProductAdd functional component, it had two fewer lines than our class-based equivalent. The two fewer lines here were a result of us not having to wrap our JSX inside of a render() function.
Two fewer lines may not seem like much here, but if you have an eCommerce site, where each product is its own component, we could quite easily have in excess of 1000 components. So those two fewer lines would total up to 2000 lines saved!
Another plus to this is that the fewer lines of code a developer has to read and write, the quicker and easier their code is to understand.
Now besides the obvious improvement of potentially using fewer lines of code when using a stateless functional component, it’s been well documented that functional components in React (as of Oct 2018) do not provide an improvement in terms of internal performance.
However, it has been equally well documented that stateless functional components may soon offer improved performance in future iterations of React. This boost will be the result of there being no state or lifecycle methods to worry about.
So with this in mind, it’s worth getting used to using them now as a means of future-proofing your codebase and general understanding of React best practices.
Nevertheless, functional components still transpile down to less code than class components, which means functional components = smaller bundles.
3. Easier to read, easier to understand, easier to reason about, easier to test
As we have seen, stateless functional components are simply functions that return JSX. Because there is no state being manipulated in any way, this makes them easier to read and understand.
Because the component does not rely on any internal state, this means that they’re easier to reason with, as we know that any state being passed into a stateless functional component has arrived in the form of a prop being passed in by a parent component. This means that we can go further up the tree when it comes to debugging.
And ultimately, because these components are simply functions that return JSX, this makes them really easy to test because you are simply looking to assert that a function returns what you want it to.
And there we have it!
There are three advantages to using functional components - and three reasons why you should look to add them to your arsenal of tools today!
The QModemCallBarring class implements the call barring settings for AT-based modems. More...
#include <QModemCallBarring>
Inherits QCallBarring.
The QModemCallBarring class implements the call barring settings for AT-based modems.
This class uses the AT+CLCK command from 3GPP TS 27.007.
QModemCallBarring implements the QCallBarring telephony interface. Client applications should use QCallBarring instead of this class to access the modem's call barring settings.
See also QCallBarring.
Construct a new modem call barring handler for service.
Destroy this modem call barring handler.
Convert type into its two-letter 3GPP TS 27.007 string form. This function is virtual to allow for the possibility of modems that support more call barring types than those specified in 3GPP TS 27.007. Returns an empty string if type is not supported.
See also QCallBarring::BarringType. | https://doc.qt.io/archives/qtextended4.4/qmodemcallbarring.html | CC-MAIN-2021-43 | refinedweb | 132 | 54.08 |
1A C++ coursework, Michaelmas Term 2015-16
This document makes no attempt to exhaustively cover C++ - the recommended book by Deitel and Deitel is 1300 pages long and even that is incomplete - but it does try to prepare you for your future programming work and the exams. To get the complete course material that the examination covers, make sure you read full contents of this page (with the extras), which you might not have done when you first did these exercises.
The first few sections introduce just enough C++ for you to write small programs and run them. The idea is to give you the confidence to learn more yourself - like learning to snowplough as the first stage when skiing, or learning not to be scared of the water when learning to swim. Once you can compile and run programs, you can copy example source code and experiment with it to learn how it works. Like skiing and swimming, you can only learn programming by doing it. If you need more exercises like the early ones, look in the More exercises section. The later sections revisit topics, adding more detail.
The C++ language and the methodologies described here will appear strange if you have never written a computer program. Don't attempt to "understand" everything from first principles. More explanation will be given later here or in the CUED Tutorial Guide to C++ Programming. For now, concentrate on learning how to use the language. The document includes some quick tests so that you can check your understanding, and some sections of extra information.
The course comprises 6 timetabled sessions, each beginning with a mini-lecture.
Work through the document at your own speed. Getting ahead of this schedule is fine, but you shouldn't fall far behind it. When you've finished exercises 1-4 get a demonstrator to mark your work (4 marks). Try to get at least that far on your first day of programming. When you've had exercises 1-4 marked, continue with the other exercises. If you don't finish exercise 5 on day 1, finish it before day 2. Contact Tim Love (tl136) for help. When you've finished exercises 5-9, get them marked (5 marks), then finish exercises 10-12 before the final marking (3 marks). If you finish early, you're strongly advised to try some More exercises.
Start sessions in the DPO by clicking on the button at the top-left of the screen and then clicking on the CUED 1st Year option, then "Start 1AComputing". This will put some icons on the screen for you, and give you a folder called 1AC++Examples full of example source code. It also creates a folder called 1AComputing, a good place to store any course-related files.
If you want to work from home, see our Installing C++ compilers page.
In the course of this work your screen might become rather cluttered. The Window management section of the New User Guide has some useful tips.
Variables [ back to contents]
Variables are places to store things. The line
int num;
creates a variable called num in which an integer can be stored. Note the final semi-colon - in C++, semi-colons are a little like full-stops at the end of English sentences. You can also have variables that store a
- float ("floating point" number - a real number)
- char (character)
- string (text)
etc. To set an existing variable to a value, use an = sign. E.g.
num=5;
or
num=num+1;
The latter line might look a bit strange at first sight, but it isn't saying that the LHS is the same as the RHS. It's an assignment - it's setting the LHS to the value of the RHS - i.e. adding 1 to num.
You can create and set a variable in one line. E.g.
float num=5.1;
C++ is fussy about variable names - they can't have spaces or dots in them, nor can they begin with a digit. It distinguishes between upper and lower case characters - num and Num are different variables. It's also fussy about the type of the variable - you can't put text into an integer variable, for example.
Strings and Characters [ back to contents]
Strings are sequences of characters. If you add 2 strings using +, the result will be that the 2nd string is appended to the 1st. strings are more sophisticated than simple data like ints and floats (technically speaking, a string is an object). They have extra functionality associated with them. For example, if you want to find the length of a string s, you can use s.length(). Here's an example of appending to, then finding the length of, a string
string s="hello";
s=s+" world";
int l=s.length();
To find a particular character in a string (the 3rd, for example) use this method
string s="hello";
char thirdCharacter=s[2];
Note that the numbering of the characters starts at 0.
Whereas strings have double quotes around them, characters have single quotes, so to create a character variable and set it to x you need to do
char c='x';
Output [ back to contents]
Use cout (short for "console output") to print to the screen. Before each thing you print out you need to have << (having 2 "less-than" symbols together like this is nothing to do with the mathematical "less-than" operator) . So
cout << 5;
prints out a 5, and
cout << num;
prints out the value of the num variable. If you want to print text out, put double-quotes around it - e.g.
cout << "hello";
prints hello. To end a line and start a new one, use the special symbol endl.
cout << endl;
You can print several things out at once, so
int num=5;
cout << "The value of num=" << num << endl;
prints
The value of num=5
Putting it all together [ back to contents]
All the examples so far have been fragments. Now you're going to write complete programs.

All the C++ programs that you're likely to write will need the following framework. The Input/Output and string functionality is not actually part of the core language. To use it you need the following lines at the start of your code.
#include <iostream>
#include <string>
using namespace std;
In C++ a function is a runnable bit of code that has a name. The code might calculate a value (like a function in mathematics does) but it might just perform a task (like printing something to the screen). Every C++ program has a function called main. When the program is started, the main function is run first. So your program needs a main function which will look like this.
int main()
{
   ...
}
Don't worry for now what this all means. Just remember that all your programs will probably need lines like these.
Now we'll write a minimal program. At the moment your 1AComputing folder is nearly empty (you'll need the secretmessage file soon). Start geany and use Save as to create an empty file called program1.cc in the 1AComputing folder.
You'll get the following window
geany has lots of features to help with writing C++ programs, including an editor. Type (or copy-paste) the following text into it. It's a complete program with some initial setting-up lines and a main function containing some code. Note that // and anything after it on a line is ignored by the compiler - you can add comments to your programs this way to remind yourself about how they work.
#include <iostream>  // We want to use input/output functions
#include <string>    // We want to use strings
using namespace std; // We want to use the standard versions of
                     // the above functions.

// My first program! This is the main function
int main()
{
   int i=3; // Create an integer variable and set it to 3
   // print the value of i and end the line
   cout << "i has the value " << i << endl;
}
In the Build menu, choose the Build option (or use the icon). This will try to compile your code.
If you've made no typing mistakes then you'll see "Compilation finished successfully" and a file called program1 will be created, which is in a form that the computer's chip can understand. You'll be able to click on the icon to run this program. If it prints out i has the value 3 you've produced your first program!
Note that geany
- colour-codes the program to make it more readable. If you don't get colours it's because you haven't named the file with a ".cc" suffix - geany expects C++ filenames to have that suffix
- shows line numbers but those line numbers aren't in the C++ source file.
- saves the file automatically whenever you do a build
Though the compiler doesn't care about the layout, you should make your code easy to read by copying the layout style of the provided code. In particular, use indentation consistently.
Errors and Warnings [ back to contents]
You may not get everything right first time. Don't be worried by the number of error messages - the compiler is trying to give as much help as it can. You can get a lot of error messages from one mistake so just look at the first error message. Clicking on the error message in geany will move the editor's cursor to the corresponding line. Often, the compiler will report that the error is in a line just after where the mistake is. If you cannot spot the mistake straight away, look at the lines immediately preceding the reported line.
The most common errors at this stage will be due to undeclared variables or variables incorrectly declared, and missing or incorrect punctuation. Check to see that brackets match up, and check your spelling. For example, if you have a source file called program2.cc with
ant i;
instead of
int i;
on line 3 our compiler might give the message
program2.cc:3: error: ant does not name a type
This message tells you
- the filename (program2.cc)
- the line number where the compiler had trouble (line 3)
- a description of the problem.
It may not tell you exactly what's wrong, but it's a clue. Sometimes the compiler doesn't give very helpful messages at all. E.g. if you write
cin >> endl;
instead of
cout << endl;
the compiler will give you a page of obscure messages like this
/usr/include/c++/4.3/bits/istream.tcc:858: note: std::basic_istream<_CharT, _Traits>& std::operator>>(std::basic_istream<_CharT, _Traits>&, _CharT&) [with _CharT = char, _Traits = std::char_traits<char>]
Don't panic. All you can do is look at the first line number that's mentioned in the list of errors, and study that line of code.
When you think you've identified the trouble, correct it, save the file and build again.
Even if your code is legal and builds without error, your code may do the wrong thing - perhaps because you've put a '+' instead of a '-'. One of the most effective things to do in this situation is to use cout to print out the values of certain variables to help you diagnose where the problem is. Don't just passively stare at your code - make it print out clues for you.
Many common bugs are explained on our C++ Frequently Asked Questions page. Look there first before asking a demonstrator for help. Many more tips are in the Troubleshooting section.
Sometimes compilers will warn you about something that's legal but suspicious. It's worth worrying about such warnings, though they might not be a problem. For instance, if you create a variable called cnew and don't use it, the compiler might report
warning: unused variable 'cnew'
Input [ back to contents]
Use cin (short for "console input") to get values from the keyboard into variables. Before each thing you input you need to have >>. The variables need to be created beforehand. E.g.
int num;
cin >> num;
will wait for the user to type something (they have to press the Return key after). If they type an integer, it will be stored in the num variable.
You can input several things on one line, like so
int num, angle, weight;
cin >> num >> angle >> weight;
Exercise 1 - Adding [ back to contents]
You now know enough to write your own programs. Use geany's "New" option to create a new file. Save it as adding.cc in your 1AComputing folder. You're going to write a program that prints the sum of 2 integers typed in by the user.
I suggest you start by taking a copy of program1.cc and removing the contents of the main function. Your function needs to
- Create 2 integer variables
- Ask the user to type in 2 integers
- Use cin to read the values into your variables
- Print an output line looking rather like this
The sum of 6 and 73 is 79
Write the code to do this now.
Decisions [ back to contents]
Use the if keyword. Here's an example
if(num<5) {
   cout << "num is less than 5" << endl;
}
The curly brackets are used to show which code is run if the condition is true. You can have many lines of code within the curly brackets. Instead of using < (meaning 'less than') to compare values you can use
- <= (meaning 'is less than or equal to')
- > (meaning 'is greater than')
- >= (meaning 'is greater than or equal to')
- != (meaning 'isn't equal to')
- == (meaning 'is equal to').
A common error in C++ is to use = to check for equality. If you do this on our system with geany, the compiler will say
warning: suggest parentheses around assignment used as truth value
but many compilers won't say anything. Train yourself to use == when making comparisons (some languages use 3 equals signs so count your blessings). Be careful to avoid mixed-type comparisons - if you compare a floating point number with an integer the equality tests may not work as expected.
You can use else in combination with if - e.g.
if(num<5) {
   cout << "num is less than 5" << endl;
} else {
   cout << "num is greater than or equal to 5" << endl;
}
You can combine comparisons using boolean logic. Suppose you want to run a line of code if num is between 3 and 5. The following doesn't work as expected though the compiler won't complain!
if (3<num<5) ...
Instead you need to use
if (3<num and num<5) ...
As well as and, the C++ language understands or and not.
Don't put a semi-colon straight after if(...)
Arithmetic [ back to contents]
Use + (add), - (subtract), * (multiply), / (divide), and % (modulus - i.e. getting the remainder in integer division). Note that there's no operator for exponentiation (in particular, 3^2 doesn't produce 9).
Although addition, subtraction and multiplication are the same for both integers and floats, division is different. If you write
float a=13.0, b=4.0, result;
result = a/b;
then real division is performed and result becomes 3.25. You get a different result if the operands are defined as integers:
int a=13, b=4;
float result;
result = a/b;
result is assigned the integer value 3 because in C++, arithmetic performed purely with integers produces an integer as output. If at least one of the numbers is a real, the result will be a real. This explains why later in this document you'll sometimes see 2.0 being used instead of 2 - it forces real division to be done.
While Loops [ back to contents]
For repetitive tasks, use loops. Easiest is the while loop - code that repeatedly runs while some condition is true. Here's an example
int num=1;
while (num<11) {
   cout << num << endl;
   num=num+1;
}
When the computer runs this particular while loop, it continues printing num and adding one to it while num is less than 11, so it prints the integers from 1 to 10.
The indentation of the lines isn't necessary, but it makes the code easier for humans to read, and helps you match up opening and closing braces.
When you write a loop, always make sure that it will eventually stop cycling round, otherwise your program might appear to "freeze". Without the num=num+1 line in this example, num would always be 1 and the loop would cycle forever. If your program does get stuck in a loop like this, use geany's button to kill the program.
Don't put a semi-colon straight after while(...)
Exercise 2 - Times Table [ back to contents]
Use geany's "New" option to create a new file. Save it as timestable.cc in your 1AComputing folder. You're going to use a while loop to print out the first 10 entries in the 6 times table - i.e.

1 x 6 = 6
2 x 6 = 12
3 x 6 = 18
4 x 6 = 24
5 x 6 = 30
6 x 6 = 36
7 x 6 = 42
8 x 6 = 48
9 x 6 = 54
10 x 6 = 60
If you put the while loop example inside a main routine like the one above, your program will nearly be finished - all you need to do is change the cout line so that it not only prints the variable that goes from 1 to 10, but the rest of the line too. Some of the rest of the line doesn't change. The final number does, but it can be expressed in terms of the first number on the line.
Functions [ back to contents]
As we mentioned earlier, in C++ a function is a runnable bit of code that has a name. Most programs have many functions. Now you're going to write your own ones. First we'll produce a times table (the 7 times table this time) using functions. Here's a function called timesBy7 that multiplies its input by 7.
int timesBy7(int number)
{
   return number*7;
}
Conceptually, C++ functions are rather like maths functions or functions on your calculator. You can think of them as generating output from their input. They might need 1 number as input (like the square root function) or several, or none. Note that in C++, functions are said to "return" their output
back to the thing that asked for them. They can return one thing or nothing. Execution of the function code ends when a return statement is reached, or when the function code ends.
The first line of the function above is compact and contains a lot of information.
- int - this is saying that the function is going to calculate an integer value rather than (say) a string.
- timesBy7 - this is the name of the function
- (int number) - this is saying that timesBy7 needs to be given an integer as input, and inside this function the integer is going to be known as number. Some functions don't need any inputs. Other functions might need many inputs. This function needs exactly one integer.
If we wanted to use the function to store the result of 9*7 we could just do
int a=timesBy7(9);
Here's a little program that uses this function to display some multiples of 7. Though short, it illustrates what you need to do when writing your own functions. From now on, just about all your programs in every computing language you learn will use functions, so study this example carefully
#include <iostream>
#include <string>
using namespace std;

// This function multiplies the given number by 7 and returns the result
int timesBy7(int number)
{
   return number*7;
}

int main()
{
   int num=1;
   while (num<11) {
      cout << timesBy7(num) << endl;
      num=num+1;
   }
}
This file contains 2 functions. Like all C++ programs, execution begins at the main function. From the main function, timesBy7 is called. It needs to be given an integer (in this case num) as input. In this example its output is immediately printed out. You can call it in other ways - for example, you could save its output into a variable called answer then print answer
...

int main() {
   int num=1;
   int answer;
   while (num<11) {
      answer=timesBy7(num);
      cout << answer << endl;
      num=num+1;
   }
}
Note that the timesBy7 function appears in the file before it's called. That's because the C++ compiler doesn't like calling a function if it doesn't know about it first. In fact, it doesn't need to know everything about the function, just the information contained on its first line. That first line can be used by itself (with a semi-colon at the end) to "summarise" the function. It's called a prototype.
It's often convenient to have the function prototypes at the top of the file, and the full function code elsewhere so that's the style we'll adopt from now on.
We're now going to write a program with a function that will tell us whether numbers are even or odd. The program's output will be

0 is even
1 is odd
2 is even
3 is odd
4 is even
5 is odd
6 is even
7 is odd
8 is even
9 is odd
10 is even
In the main function below we set i to 0, 1, ... 10 checking each time to see if i is even. The code uses if and else which are quite easy to understand, I hope. You can read the "if" line as saying "if i is even, then do ...". This code fragment assumes that C++ has a function called is_even which returns true or false.
int main() {
   int i=0;
   while(i<11) {
      if (is_even(i)) { // Or you could have if (is_even(i)==true) {
         cout << i << " is even " << endl;
      }
      else {
         cout << i << " is odd " << endl;
      }
      i=i+1;
   }
}
Read through the code until you understand it. Don't hesitate to trace your finger along the route the computer takes through the code, keeping a note of what value i has.
Unfortunately there's no function called is_even so we'll have to write it ourselves. We could do it many ways. Here we'll use the % operator, which gives us the remainder after integer division. If we do number%2 and the answer is 0, 2 divides exactly into the number, so the number is even. We want the is_even function to give us a true/false answer. In C++ there's a type of variable called bool (short for Boolean) which can store such answers. Here's the is_even function.
bool is_even(int number) {
   if ( (number % 2) == 0) {
      return true;
   }
   else {
      return false;
   }
}
The prototype of this function is
bool is_even(int number);
We now have nearly all the code. Below on the left is the complete program with the main and is_even functions we've prepared. To the right is an animation showing what happens when the program runs for a few cycles
#include <iostream>
#include <string>

using namespace std;

bool is_even(int number); // the function prototype

int main() {
   int i=0;
   while(i<11) {
      if (is_even(i)) { // the function call
         cout << i << " is even " << endl;
      }
      else {
         cout << i << " is odd " << endl;
      }
      i=i+1;
   }
}

// the is_even function's definition
bool is_even(int number) {
   if ( (number % 2) == 0) {
      return true;
   }
   else {
      return false;
   }
}
The comments in the code mark the 3 key aspects of writing your own functions -
- The declaration of the function above the main program. The declaration (also known as the prototype) tells the compiler about the function and the type of data it requires and will return on completion. Note that the declaration doesn't make the code run.
- The function call in the main body of the program determines when to branch to the function and how to return the value of the data computed back to the main program. Note that when you call a function you don't mention datatypes (like int, bool, etc) - you don't write is_even(int i) to call the function.
- The definition of the function. The definition consists of a header which specifies how the function will interface with the main program and a body which lists the statements to be executed when the function is called.
The animation's green arrow starts in the main routine at the stop sign and goes round and round the loop. It jumps to and from the is_even function. Inside that routine it sometimes follows the if route and sometimes the else route, depending on whether the number is even or odd.
Don't worry if you can't keep up with the animation. The important thing to realise is that the code isn't simply run from top of the file to the bottom - some lines are run many times. It's also worth noting that the i variable exists only in the main function - that's why it keeps appearing and disappearing at the top of the animation. It's described as a local variable.
When you create a function, don't give it a name that's already in use. Don't, for example, call it int because that's part of the C++ language. Don't call it something general (like vector, or max) because C++ often uses such names internally.
Function definitions can't be nested. In the example code above, for example, the main function has to be finished with a final curly bracket before the is_even function can be started.
See the C++ Tutorial Guide for more information.
Exercise 3 - functions [ back to contents]
Adapt the previous example so that instead of identifying even numbers it identifies multiples of 4. Give the function a sensible name. Call the program multiplesof4.cc.
Arrays [ back to contents]

Suppose you want to store 12 related integers - one for each month of the year, say. Rather than creating 12 separately named variables, you can create them all at once by doing int month[12]; which creates an array of 12 integers. Inside the computer the memory layout will be something like this
The integers have the names month[0], month[1] ... month[11] because array indexing starts at 0. Array items are usually referred to as elements. They're stored contiguously in memory. Arrays and loops go well together. To set all the elements of the month array to 0, you could do
int i=0;
while (i<12) {
   month[i]=0;
   i=i+1;
}
This is much shorter than doing
month[0]=0;
month[1]=0;
...
month[11]=0;
Be careful when using arrays - if you create an array of 5 elements but you use 6 elements, you're using memory that you haven't asked for, memory that might be being used by something else. A crash is likely.
Self-test 2 [ back to contents]
Look at this code.
#include <iostream>
#include <string>

using namespace std;

int main() {
   int x,y;
   y=3;
   x=y*2;
   y=5;
   cout << x << endl;
}
- How many variables are used?
Reading from files [ back to contents]
To be able to use the file-reading facilities you need to add
#include <fstream>
to the top of your file. Then you can do the following to get a line of text from a file (in this case a file called secretmessage) and put it into a string (in this case called message).
string message;
ifstream fin;  // a variable used for storing info about a file.
               // There's nothing special about the name 'fin' -
               // any name would do.
fin.open("secretmessage"); // trying to open the file for reading
// The next line checks if the file's been opened successfully
if(not fin.good()) {
   // print an error message
   cout << "Couldn't open the secretmessage file." << endl;
   cout << "It needs to be in the same folder as your program" << endl;
   return 1;  // In the main function, this line quits
              // from the whole program
}
// we've managed to open the file. Now we'll read a line
// from the file into the string
getline(fin, message);
This code includes a check to see if the file exists. There are many other file-reading facilities, but that's all you'll need for now.
More about functions [ back to contents]
The functions we've seen so far have 0 or 1 input values and 1 output value, but C++ is more flexible than that.
- Often the purpose of a function is to work out a value and return it, but sometimes a function will perform a task (like printing to the screen) and won't have anything useful to return. In such situations, the function needn't return anything. If a function called printOut returns nothing and needs no input, its prototype would be
void printOut();

where the void word means that nothing is returned.
- Functions can have several input values. For example, a function with the prototype
float pow(float x, float y);

needs to be given 2 floating point numbers. It returns a floating point number too.
The following table shows how C++ represents various types of function diagrams
The variables created in a function can only be used in that function - they're local variables. If you have "int i;" in your main function and another "int i;" in another function the 2 i variables will be independent.
Try the "function" teaching aid to get some practice.
Troubleshooting - a checklist [ back to contents]
The longer your programs become, the harder it will be to fix bugs and the more important it will be to write tidy code. Here are some things to check
- Variables
- Are they created at the right time? Do they have the right initial value?
- Are they the right type? (int rather than float maybe, or an array instead of a simple value)
- Are they sensibly named? Are you clear about what each variable's for and what its contents represent?
- Program Organisation
- Do you have a main function? Does it begin with int main()?
- Have you #included the right files? Do you have using namespace std;?
- You can create variables outside of functions, but is all your code inside functions?
- Do you know when one function ends and another begins?
- Do all your blocks of code (in while loops, if .. else constructions, etc) start and end where they're meant to?
- Have you put any comments in your code?
- Functions
- Do they have the right inputs and outputs?
- Does the prototype's input/output specification match the function code's specification?
- Are you calling the function? How do you know? (add cout commands). Remember that when you call a function you need brackets after the name even if the function requires no input values.
- If you're using random numbers are you calling srandom exactly once?
- Arrays and Loops
- Do your loops go round forever?
- Do you go off the end of an array? (if you do, a "segmentation fault" or an "out of range" error may be reported)
- Did you remember that the 1st item in an array has an index of 0?
- Punctuation
- Are you mixing up = and ==?
- Are you using semi-colons correctly?
- if and while need to be followed by a condition in brackets. Have you added the brackets?
- Strategies and workflow
- If the code's not compiling, make it compile by removing (or commenting-out) problematic lines until it does. Then restore a line or 2 at a time. Re-build after each addition. Print out some variables to see if the code is behaving.
- If you've changed your code but your program's behaviour hasn't changed, maybe you haven't created a new version of the program. Remember, the Compile button doesn't create a new program, but the Build menu item does - F9 is a short-cut for it.
- When the code runs but does the wrong thing, do a dry run - run through the code on paper as if you were the computer, keeping a note of variables' values, and checking that the correct route is taken through the program.
- Read through the C++ Frequently Asked Questions and the C++ CUED Crib
Exercise 4 - Code Solving (reading files) [ back to contents]
In your 1AComputing folder there's a file called "secretmessage". Your mission is to write a program called decode.cc to decode the contents of this file. It contains a single line of text that's been encoded using a simple shifting algorithm where each letter has been replaced by the letter N places before. Complete each stage before starting the next.
- Using the code in the previous section, read the message in from the file into a string using getline (the message is only one line long). Display the encoded message on your screen.
- Create an integer N (the amount the characters have been shifted by) and set it to a value. For now, let's just guess the number 7.
- You can use the string as if it were an array of characters. Pick out the characters in the string one at a time using a loop like this
char c;
// The next variable is declared as unsigned (i.e. non-negative)
// so that the compiler doesn't warn us later
unsigned int num_chars_processed=0;
// While we're not at the end of the string, get the next character.
while(num_chars_processed < message.length()) {
   c=message[num_chars_processed];
   num_chars_processed=num_chars_processed+1;
}

Each character is represented in the computer as an integer, so you're allowed to perform arithmetic on the value.
- Inside the while loop, process each character c.
- If the character is a space (i.e. if it's equal to ' '), print it
- If the character is not a space, set c to be c plus N. If the resulting character is more than 'Z' (i.e, if (c > 'Z')), subtract 26 from c so that the letter cycles round. Print the resulting c out.
Build and run the program. If you're lucky you'll get a readable message. It's more likely that you'll get junk because N will need to be a different number. Rather than manually having guess after guess (changing N, recompiling and running until you get the answer) try the following. If you've understood things so far, this will require 3 more lines of code (all rather similar) and a few brackets - a minute or two's work. If you don't know what a prototype is, or why prototypes are needed, you'll need to look back, look at the FAQ or search the web.
- Restructure the code to make it neater. This is something developers often have to do - re-engineer their work so that it does the same as before, but in a tidier way. Rather than have one big function you're going to split your code into 2 shorter functions - decode_and_print and main.
- Write the prototype for a function called decode_and_print that takes the coded message and N as its input parameters, and doesn't return anything.
- Write the decode_and_print function definition using the lines you've already written. Its job is just to print the decoded message, not read from the file.
- Call decode_and_print from your main function to shift the characters in the string by 7. You should get the same result as before!
- Now change the main routine so that it tries decoding with N=1, then N=2 up to N=25, printing the decoded message each time. Do this using a loop. Determine the N that works for your message.
When you've completed programs 1 to 4 get them marked. Make sure that you have restructured the code - a single 40-line main function isn't good enough.
Exercise 5 - Word lengths [ back to contents]
Your next task is to read the words in a file, find their lengths and print the frequency of word-lengths from 1 to 10. Call the program wordlengths.cc
Tips
- Create a file with some words in it - just a few words initially, several per line if you want. This file needs to be in the same folder as your program. Remember to save the file.
- Read the words one by one into a string using code like this
#include <fstream> // so that file-reading works

string str;
ifstream fileInput; // a variable of a type that lets you input from files
fileInput.open("filename"); // this tries to open a file called "filename"
// The next line uses 'not', a keyword
if(not fileInput.good()) {
   // print an error message
   cout << "Couldn't open the file." << endl;
   cout << "Does one exist in the same folder as your program?" << endl;
   return 1;
}
// the code below reads the next word from the file and puts it into the
// variable 'str'. It does so while there are words left in the file
// Note that 'fileInput >> str' works like 'cin >> str' would,
// except it's reading from a file instead of the keyboard
while(fileInput >> str) {
   // print it out to check that the code's working
   cout << str << endl;
}

Check that this works as expected before going on to the next stage. You'll need to write a main function to contain this code, and include some files at the top.
If you want to know how this works in more detail, read a book or ask a demonstrator.
- Create an array of variables called frequency to store the frequencies. Easiest is to arrange things so that frequency[1] is used to store the number of words of length 1, frequency[2] is used to store the number of words of length 2, etc. Initialise these variables to 0. Change your code in the while loop so that 1 is added to the appropriate variable each time you've read in a word and found its length. For example, if the length of a word is 3 you'll need to add 1 to frequency[3]. More generally, if the word length is len, you need to add 1 to frequency[len].
Once all the words have been read, this array of variables will hold the final frequency counts.
- Print a simple table of the results like the following, using a loop

Length  Frequency
1       3
2       6
...
10      1
- When you have this working with a little file of words, try it with a bigger file - you could even try downloading something from Project Gutenberg. What are you going to do if len is more than the number of elements in the frequency array?
If you haven't got the hang of using arrays to store how many times things happen, read the "I don't understand how to count things and store the frequencies in an array" item.
Standard functions [ back to contents]
By putting extra lines at the top of your file you gain access to further functions that have already been written. Here are some examples.
- #include <cmath>
With this you can use maths routines - sin(x) (which uses radians); pow(x,y) (which raises x to the power y), sqrt(x), etc.
- #include <cstdlib>
With this you can access functions that generate random numbers. Before the random number generator is used for the first time, call
srandom(time(0));

so that you get a different sequence of random numbers each time you run the program. Don't call srandom more than once. Each time the function random() is called, it will return a random positive integer (in the range 0 to 32767 or so). Work out what the following function does and how it works.
int RollDie() {
   int randomNumber, die;
   randomNumber = random();
   die = 1 + (randomNumber % 6);
   return die;
}

You'll be using this code later, so if you've any doubts about what this code does, write a main function that calls it, add some cout statements to print useful variables out, then build and execute the code.
The numbers produced by the random routine are only pseudo-random. Here we're using the computer's real time clock as the seed. See wikipedia's Random number generation page if you want more details. For more information about functions in general, see
- A section of the C++ tutorial guide
- A more sophisticated animation (which might only work in the DPO)
- An answer to a frequently asked question
Standard data types and mixing types [ back to contents]
We've already mentioned that there are several types of C++ variables: int for integers, float for floating point numbers, char for characters and bool for booleans (true or false). There are also doubles (which are floating
point numbers too, but potentially more accurate than floats) and longs (which store integers that might be too big or small to fit into an int variable). You can also declare that variables will only store non-negative values by using the unsigned keyword. For example, unsigned int height; creates a variable whose contents won't be negative. size_t is a datatype used by many C++ routines when a non-negative integer is being used (e.g. for the length of a string).
C++ is quite strict about types. What does the following program print out?
#include <iostream>
using namespace std;

int main() {
   int i=1;
   int j=2;
   cout << "i/j=" << i/j << endl;
}
Did you hope it would print out i/j=0.5? Actually it prints out i/j=0 because in C++ an arithmetic operation with only integer operands results in a (possibly rounded-down) integer. If at least one operand is a floating point number, the answer's a floating point number, so you could make this program print out i/j=0.5 by changing it to
#include <iostream>
using namespace std;

int main() {
   int i=1;
   int j=2;
   cout << "i/j=" << i*1.0/j << endl;
}
Computers usually store floating point numbers using a base 2 representation. Just as 1/3 can't be expressed in base 10 using a finite number of digits, so there are many numbers that computers can't accurately store in base 2 using a few bytes. The next exercise illustrates the difficulties.
Exercise 6 - Accuracy [ back to contents]
In maths, x*11 -x*10 will always be x. Will the result be the same on computers?
In a main function create a floating point variable called number and set it to 0.1. Create a loop that will run 20 times. Inside the loop, reset number so that it becomes 11 times itself minus 1, then print out the number and how many times the loop's been executed - e.g.
number = 0.1 after 5 iterations
Compile and run the program and record the results. Now try 2 changes
- Change number so that it's a double. Recompile and re-run the program, recording the results.
- Finally, initialise number to 0.5 and reset it so that it becomes 11 times itself minus 5. Recompile and re-run.
Try to explain these findings (there'll be more about this issue later).
Exercise 7 - Estimating pi [ back to contents]

You're going to estimate pi by simulating the dropping of a pen onto a floor made of parallel floorboards (look up "Buffon's needle" for the theory). The pen will hit a crack (an edge of a floorboard) if the closest distance (D) from the pen's centre to a line is less than half the pen's length times sin(theta), where theta is the angle between the pen and the lines. Write the program in stages, compiling and testing as you go.
- In a file called pi.cc write a function to simulate the dropping of the pen. It could have the prototype
bool dropthepen();

returning true if it touches a crack in the floor and false otherwise.
There are 2 random factors to take account of - the angle of the pen (0-90 degrees) and the distance of the pen's centre from a line (between 0 and 0.5; our floorboards will be 1 unit wide). The function will need to calculate sin(theta). The C++ sin function expects its argument to be in radians. You can get round that by doing
float angleindegrees=(90.0*random())/RAND_MAX; // a number between 0 and 90
float angleinradians=angleindegrees*M_PI/180;  // a number between 0 and pi/2

These expressions use variables made available by the inclusion of cstdlib and cmath, namely RAND_MAX (the biggest integer that the random routine produces), and M_PI (the value of pi).
Create a variable D and set it to (0.5*random())/RAND_MAX (a random number between 0 and 0.5).
Using D and sin(angleinradians) you can now work out whether the pen lands on a crack.
- Add a main routine that calls your function. Before you call the function, call srandom(time(0)) once to initialise the random number generator (remember, you'll need #include <cstdlib> at the top of the file to access these random number routines). Use a while loop to call the function many times, counting the total number of drops and the number of times the pen hits a crack; for this geometry pi is approximately 2*drops/hits. If pi comes out to 3, make sure you're using 2.0 rather than 2 when calculating (when you divide an int by an int in C++, you get an int. By involving a floating point number in the calculations, floating point arithmetic will be performed).
- Now do 10,000 runs. You should get a more accurate answer for pi.
Your program is likely to have the following layout - some included files, a prototype and 2 functions.
#include <iostream>
using namespace std;
// other included files

// prototype
bool dropthepen();

// main function
int main() {
   // Initialise random number generator, call dropthepen many times,
   // gather statistics and print the answer
}

// dropthepen function
bool dropthepen() {
   // return true if pen lands on crack, otherwise return false
}
Exercise 8 - Monopoly © [ back to contents]
You're playing Monopoly (if you don't know the game or where the hotels are, read the notes). Write a program in a .cc file to run 10,000 simulations.
Whenever you have a non-trivial program to write, think about how it can be broken down into stages, and how you can check each stage.
- You've already seen the RollDie function that simulates the rolling of a single die. Copy it into your new file. Now write a function called Roll2Dice to simulate the rolling of 2 dice (call RollDie twice and return the sum of the answers). Before going any further, test it. If it doesn't work, neither will your full program! Here's a main function you could use to test it
int main() {
   srandom(time(0));
   cout << "Roll2Dice returns " << Roll2Dice() << endl;
}

You'll need to add prototypes for RollDie and Roll2Dice too.
- Write a function (runTheGauntlet, say) that simulates one trip round the board and works out how many hotels you land on. Decide how the simulation should work, then express that strategy using C++. Call it from the main function a few times and print the outcome to see if the results are reasonable.
To see whether you've landed on a hotel, use something like
if (location==1 or location==3 or location==6 ...

and not
if (location==1 or 3 or 6 ...

The latter isn't illegal but it doesn't do what you might expect.
- Now call that function 10,000 times using something like
- Now call that function 10,000 times using something like

int output=runTheGauntlet();

in a loop. You don't want to print each outcome but you do want to store how many times no hotels were landed on, how many times only 1 hotel was landed on, etc. Create an array to store these frequencies (as in the word-lengths exercise) and print the results as a table once all the runs are done. You don't have to print the columns out neatly, but if you want to do so, 2 facilities might help
- setw - this lets you set the minimum number of characters produced by the next piece of output.
- setfill - with this you can choose the character that will fill the gaps caused by using the setw command
#include <iomanip>
...
cout << setfill(' ') << setw(10) << i;

If you finish early, you could investigate variations - the effect of starting from a different square, say, or the consequences of adding the rule that rolling 3 consecutive doubles puts you in jail.
Call by Reference [ back to contents]
Suppose we want to write a function that will triple the value of a given variable. We could try the following
#include <iostream>
using namespace std;

// prototype
void triple(int i);

int main() {
   int i=3;
   cout << "In main, i is " << i << endl;
   triple(i);
   cout << "In main, i is still " << i << endl;
}

void triple(int i) {
   i=i*3; // this changes triple's own i, not main's
}

This doesn't work - main's i stays at 3. Run it if you're not convinced. The problem is that main's i is a different variable to the i in triple - each is a variable that's local to the function it's in. That the 2 variables have the same name is a coincidence (if the i variable in the triple function were renamed, the program would behave the same way). To let the function change main's variable, we pass the variable "by reference", as in the following code.
#include <iostream>
using namespace std;

// prototype
void triple(int& i); // added ampersand

int main() {
   int i=3;
   cout << "In main, i is " << i << endl;
   triple(i); // NO added ampersand
   cout << "In main, i is now " << i << endl;
}

void triple(int& i) { // added ampersand
   i=i*3; // here i is an alias for main's i
}
When a variable is passed using "call by value", the function is given the variable's value. The function doesn't know what the original variable was, so the function can't change it. When a variable is passed using "call by reference" the function can "refer to" the original variable, so it can be changed.
Why is "call by reference" useful? One reason is that it lets functions "return" more than one value. If, for example, you write a function that has 3 input parameters that are passed "by reference", the function can change all 3 of them, and those changes will be visible outside the function.
Alternative notations [ back to contents]
C++ has alternative ways to do some things
- Comments - Instead of using // to comment out a line, you can use /* ... */ to comment out a block of text
- Increment/decrement - there are shortcuts to changing a variable's value: i++ adds 1 to i, i-- subtracts 1, and i+=3 adds 3 (there are -=, *= and /= shortcuts too).
- After if, while, etc we've always put the next block of code in curly brackets. If (and only if) the block consists of one statement, the brackets aren't needed. So
if(2+2==4) {
   cout << "Correct!" << endl;
}

can be written as

if(2+2==4)
   cout << "Correct!" << endl;

or even

if(2+2==4) cout << "Correct!" << endl;
More Loops [ back to contents]
In a while loop you sometimes want to abort early. There are 2 commands to do this
- break - this breaks out of the loop completely
- continue - this breaks out of the current cycle and starts the next one.
The best way to understand these commands is to see them in action. If you run this program, what would it print out? If you're not sure, run it and see!
#include <iostream>
using namespace std;

int main() {
   int i=0;
   while(i<10) {
      i=i+1;
      if(i==2)
         continue;
      if (i==4)
         break;
      cout << "i=" << i << endl;
   }
   cout << "End of looping" << endl;
}
Another way to do looping is to use a for loop. Earlier we had this while loop.
int num=1;
while (num<11) {
   cout << num << endl;
   num=num+1;
}
Notice that it has
- Initialisation code - int num=1
- Code to control termination - num<11
- Code that's run each cycle to make the next cycle different - num=num+1
With a for loop all this code that controls the cycling is brought together in a compact form. Here's the for loop equivalent of the above while loop
for (int num=1; num<11; num=num+1) {
   cout << num << endl;
}
or more commonly
for (int num=1; num<11; num++) {
   cout << num << endl;
}
Note the format - the brackets after for contain 3 parts, separated by semi-colons: the initialisation code, the condition that's checked before each cycle, and the code that's run at the end of each cycle.
"for" loops are more common than while loops (people like having the "loop-controlling" code together) so you'll have to get used to them. Try the "for" loop teaching aid to get some practice, and try re-writing some of the earlier exercises using for loops instead of while loops.
C++'s for loop is very flexible. The "loop variable" doesn't need to start at 1, nor do you need to add 1 to it each time you go round the loop. For example, the following code prints 11, 10 ... 1.
for (int num=11; num>0; num--) {
   cout << num << endl;
}
Note that the num variable in the code above is created within the for loop, and no longer exists when the for loop ends; it's local to the loop.
More about Arrays [ back to contents]
- An array can be passed to a function as an input parameter. Arrays are always "passed by reference". Here's an example.
#include <iostream>
using namespace std;

void timesarrayby2(int numbers[]); // The square brackets are needed
                                   // because 'numbers' is an array.
                                   // Note that there's no ampersand

int main() {
   int nums[10];
   for(int i=0; i<10; i++) {
      nums[i]=i;
   }
   timesarrayby2(nums);
   // Note that the numbers in the array have changed.
   for(int i=0; i<10; i++) {
      cout << nums[i] << endl;
   }
}

void timesarrayby2(int numbers[]) {
   for(int i=0; i<10; i++) {
      numbers[i]=2*numbers[i];
   }
}
Earlier we used 1-dimensional arrays but you can create arrays with more dimensions. For example, int table[2][3]; creates a 2D array that stores 6 integers in 2 rows of 3 columns. If you want to set all of the elements to 0 you could use "nested loops" - loops inside loops. The following code sets table[0][0] to zero, then table[0][1] to zero, etc
for (int row=0; row<2; row++)
   for (int column=0; column<3; column++)
      table[row][column]=0;
More about Strings [ back to contents]
Strings are quite sophisticated variables. You've already seen how, if you have a string called s, you can find its length by calling the s.length() function, but there are many other string functions too. The fragments below illustrate a few of them
string s="a few words";
// starting at position 7, extract a substring of s that is 2 characters long
string t=s.substr(7,2); // t is now "or" because positions are counted from 0

size_t found=s.find('w'); // find the position of the first w
// The next line uses a special constant "string::npos" - see
// the documentation about strings if you really need to know more
if (found==string::npos) { // this is how to check whether anything's been found
   cout << "There is no w in the string" << endl;
}
else {
   cout << "There's a w in position " << found << endl;
}

found=s.find_last_of('w'); // look for the last w
if (found==string::npos) {
   cout << "There is no w in the string" << endl;
}
else {
   cout << "The last w is in position " << found << endl;
}
More Decisions [ back to contents]
- There are older, common alternatives to the boolean operators: && for and, || for or, and ! for not.
- You can "nest" ifs. In the following code, the 2nd if line is only reached if num < 5 is true.
if (num < 5) {
   cout << "num is less than 5" << endl;
   if(num < 3) {
      cout << "and num is less than 3" << endl;
   }
   else {
      cout << "but num is equal to or greater than 3" << endl;
   }
}
else {
   cout << "num is greater than or equal to 5" << endl;
}

Follow the route that the execution of this code takes when num is 6, then 4, then 2.
- Sometimes you might want to perform a different action for each of many possible values of an integer variable. You could use if a lot of times. Alternatively you can use switch. Here's an example
#include <iostream>
using namespace std;

int main() {
   int i=0;
   while(i<10) {
      switch (i) {
         case 0:
            cout << "i is zero";
            break;
         case 1:
            cout << "i is one";
            break;
         case 2:
            cout << "i is two";
            break;
         default:
            cout << "i isn't 0, 1, or 2";
            break;
      }
      i=i+1;
      cout << endl;
   }
}

When i is 0, the "case 0" block is run. In this situation the break doesn't break out of the surrounding while loop, it breaks out of the switch. Without the break, execution would "fall through" into the case 1 block. Try to work out what happens in this code before running it. What does it print out?
- If you decide to quit from the program completely, use exit(1); (in the main function, return will do this, but it's a special case). For this to work you need #include <cstdlib> at the top of your file.
Variable scope and lifetime
The scope of a variable is the region of code from which a variable can be accessed. Variables created in a function or a loop generally cease to exist when the loop or function ends. Furthermore, variables inside one function can't be accessed from another function. This feature helps you write safer code. If you create a variable at the top of a file outside of all functions, it will be available to all of your functions all the time. These so-called global variables are best avoided.
Classes
If the range of types that C++ provides is too restrictive you can invent new types. When you use a language (like English or C++) you are often attempting to represent or model the real world. The more the modelling language structurally resembles what it's modelling, the easier the modelling is likely to be. If for example your program was dealing with 2-dimensional points, it would help to have a type of variable to represent a point. In C++ you can create such a type like this
class Point {
public:
   float x;
   float y;
};
Note that this doesn't create a variable, it creates a new type of variable. Point is called a class. To create a variable of type Point you do the same kind of thing that you did when creating variables of type int, etc. To create an int called i you would do
int i;
To create a Point called p you do
Point p;
The following code fragment shows how to set p's component fields to values.
p.x=5;
p.y=7;
Whereas in arrays all the elements needed to be of the same type and each element was identified by a number, in classes they can be of different types and are each given a name. Suppose you have the following data
Name     Anna   Ben    Charlie
Height   1.77   1.85   1.70
Age      20     18     15
You can create a class designed to contain this information as follows
class Person {
public:
   string name;
   float height;
   int age;
};
You could then create a variable to represent Anna's information by doing
Person anna;
anna.name="Anna";
anna.height=1.77;
anna.age=20;
If you wanted to print Anna's height later on, you could do
cout << "Anna's height = " << anna.height << endl;
Note that the following doesn't work because cout doesn't know what to do when asked to print a non-standard thing like a Person
cout << anna << endl;
You need to print each component individually. If we have several people, it's useful to create an array of Persons. The syntax when creating arrays of new types is the same as when creating arrays of built-in types like ints. The following line creates an array called family big enough to store information about 3 people;
Person family[3];
To set the age of the first person to 20 you'd do
family[0].age=20;
Exercise 9 - A phone directory
This program revises file-handling and gets you to create and use your own datatypes. Create a text file at least 10 lines long where each line contains an integer (a phone number) and a name, with the name in double-quotes. For example, the file might begin
332746 "Tim Love"
999 "Emergency Services"
- Design a data structure using class to contain a number and a name (how many elements should the structure contain? What types should they be? You can store the number in a string or an int - the latter leads to more work).
Create an array of these data structures. If you've called the array directory the beginning of the array would look rather like the diagram. The array needs to be big enough to store all the entries.
- Write some code that reads the information from the file and stores it in the appropriate place in the array (the information from the first line into the array's first component, and so on). Make sure that you don't try to store more lines of data than you have space for in your array.
- At the end, after all the information has been stored, visually check that the information that's finally in the array matches the data in the file by printing out the information that is in the array.
Tackle the task a step at a time, checking your work as you reach each milestone. You've already used the getline function to read a line of text from a file into a string variable. If you call it repeatedly it reads successive lines. getline returns true only if it successfully reads a line, so to read all the lines in a file, you can use the following idea
ifstream fin;
...
string str;
while (getline(fin,str)) {
   cout << "current line is " << str << endl;
   ...
}
The "while" loop will end as soon as the getline function returns false (i.e. when it couldn't read a line in because it had reached the end of the file). Extracting the phone number and name from that string will require some work that you can test separately. You may need to read the More about Strings section again. When you store the name, don't store the double-quotes.
This exercise can give rise to error messages that you've not seen before
- A segmentation error is usually caused by going off the end of an array
- "terminate called after throwing an instance of 'std::out_of_range'" is usually caused by trying to make substr read beyond the end of a string.
If you want to store the phone number as an integer rather than a string, note the following
- The maximum int that you can store might not be big enough to store long numbers (so you might wish to have only short numbers in your file)
- If the number starts with an initial 0, that 0 will disappear when you convert to an integer (so you might want to choose numbers that don't begin with a 0)
- A method to convert strings to integers is on CUED C++ Frequently Asked Questions page. It uses features you've not been taught.
When you've completed the exercise, get a demonstrator to mark your last 5 programs.
Bits, bytes and floats
You may sometimes want to access the individual bits of a byte. You'll need to do so when programming robots in the 2nd year, and questions about bits are often in 1st year exams. A byte contains 8 bits, each of which can be off or on (a binary digit - 0 or 1). The program below shows how to display the bits of a byte. It uses the & (aka bitand) operator, which performs a bit-wise and with the 2 operands (there's also a | (aka bitor) operator, which performs a bit-wise or).
#include <iostream>
using namespace std;

int main() {
   unsigned char num=43;
   unsigned char bitmask=128; // the bit pattern 10000000
   for (int thebit=7; thebit>-1; thebit--) {
      // Now see if (num bitand bitmask) is non-zero
      if (num bitand bitmask) {
         cout << "1";
      }
      else {
         cout << "0";
      }
      bitmask=bitmask/2;
   }
   cout << endl;
}
It goes through the bits of the byte called num checking the value of each bit (most significant first) and printing it out, so that in the end you get a binary representation of the decimal number 43.
You can use the bit operators to switch bits off or on. If you do num bitand x and x is all 1s, then the answer will be the same as num. If x has exactly 1 bit set to 0, then num bitand x will be the value of num with that bit set to 0. So if you wanted to set the 4th bit from the right to 0 in num you could do
num = num bitand 247;
Similarly, to set the 3rd bit from the right to 1 you could do
num = num bitor 4;
As you can see, for natural numbers there's an obvious way to store the value - as binary. It's not so obvious how to deal with negative integers - see Wikipedia's Two's complement page for details. And there's always the problem that int values are usually stored in 4 bytes, so there's a limit how accurately you can store integers. There's sometimes a long long type available, which uses 8 bytes but that still won't give you unlimited range.
A float variable occupies 4 bytes too. It's not obvious how those bytes might be used to store reals, and even more so than with int there are accuracy issues. The code below (which goes beyond what you need to know about C++) displays the bits of a floating point number byte by byte. You can use it to study floating point representation (note that on PCs the most significant byte is at the highest memory address - i.e. it's printed out last by this program)
#include <iostream>
using namespace std;

void printbinary(unsigned char num) {
   unsigned char bitmask=128;
   for (int thebit=7; thebit>-1; thebit--) {
      if (num bitand bitmask)
         cout << "1";
      else
         cout << "0";
      bitmask=bitmask/2;
   }
}

int main() {
   unsigned char* cp;
   float f=43;
   cp=reinterpret_cast<unsigned char*>(&f);
   for (int byte=0; byte<4; byte++) {
      cout << "Byte " << byte << "=";
      printbinary(*(cp+byte));
      cout << endl;
   }
}
If you run this code it will print out
Byte 0=00000000
Byte 1=00000000
Byte 2=00101100
Byte 3=01000010
Those values in base 10 are 0, 0, 44, 66. How do those relate to the real number 43? It's all described in the IEEE 754 specification. If you try the IEEE 754 Converter you'll get the idea. One bit denotes the sign, 8 bits represent the exponent (using 2 as the base) and the other bits represent the mantissa (the value). You need to be aware of the consequences of such a format; namely that there are limits to the range and accuracy of the numbers. Just as 1/3 can't be represented in base 10, so 0.1 can't be precisely represented in base 2. So be careful when you use real numbers! See Floating-point Basics for more information.
Enumerations
Enumerations are another way to make code easier to read. Earlier we had an array of integers created using int month[12];. If we wanted to set the July value to 31 we could do
month[6]=31;
(remember, element indexing begins at zero). Alternatively we could use enumerations.
enum Month {Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, Dec};
month[Jul]=31;
Month is a new variable type whose only legal values are Jan, Feb ... Dec. These values are just aliases for 0,1 ... 11, but they're easier for humans to read.
Writing to files
To be able to use the file-writing facilities you need to add
#include <fstream>
to the top of your file. To illustrate writing to files, we'll write the values of the month array (one value per line) into a file called myData. Notice that the line that writes to the file is similar to the use of the cout command when writing on the screen - you just replace cout by the name of the ofstream (which in this example is fileOut).
ofstream fileOut; // create a variable called 'fileOut' to store info about a file.
                  // It's not an int or a float, it's an ofstream; a special type
                  // of variable used when writing to files
fileOut.open("myData"); // try to open the file for writing
if (not fileOut.good()) {
   // if there's a problem when opening the file, print a message
   cout << "Error trying to open the file" << endl;
   return;
}
// The computer will only reach here if the file's ready to use, so use it
for (int i=0; i<12; i++) {
   fileOut << month[i] << endl; // write to the file
}
fileOut.close();
The good() function of an ofstream variable returns false if the last file operation failed.
Semicolons
By now (and with the help of the compiler's error messages) you've probably gained a lot of experience about where semi-colons are needed. Note that
if (i==3)
doesn't need a semi-colon after it because it's not a completed statement. Unfortunately, though it doesn't require a semi-colon, it's not illegal to have one. The following is legal C++
int i=0;
if (i==3); {
   cout << "i is 3" << endl;
}
and will print out i is 3 even though i is 0. Why? Well, the if(i==3) code is an incomplete construction. It expects a statement after it, and in this context a semi-colon is a null-statement. The following layout shows more clearly what happens.
int i=0;
if (i==3)
   ;
{
   cout << "i is 3" << endl;
}
The cout line is run every time because it's not being controlled by the if. There's a similar risk with while. The compiler won't complain that the following code runs forever
int i=0;
while (i<10); {
   i++;
}
The trouble is that the i++; line isn't inside the body of the loop, so i is always 0. The following layout shows more clearly what happens
int i=0;
while (i<10)
   ;
{
   i++;
}
See the I'm confused about commas and semi-colons frequently-asked-question for details.
Exercise 10 - Bubble sort
This exercise introduces no new programming concepts - it's practice at using loops and comparisons, converting ideas expressed in English into a program written in C++.
Using the ideas from the talk (or the web - try Wikipedia), write a program that uses the bubble sort algorithm to sort the phone directory created in the previous exercise (you may as well start this exercise by copying the code from exercise 9 - you need only add about 10 lines to it to finish this exercise). Sort by the phone numbers, smallest first (or if you've stored the numbers as strings, sort them in alphabetical order - you can compare strings alphabetically using < or >). Keep comparing the numbers in neighbouring entries, swapping the complete entries if they're in the wrong order. While developing the program it might help to print the values in the list after each pass, to see if the correct entries are being swapped. How many passes are needed in the worst case? Ideally, your program should stop scanning the values once they're all in the right order, but that's not a requirement. Just ensure that you do sufficient passes to cope with the worst case.
You'll need to be careful when swapping entries. Consider this little program that tries to swap integers (you'll be swapping phone directory entries, but the same problem arises).
int main() {
   int a=1, b=2;
   // Let's try to swap a and b
   a=b;
   b=a;
}
This doesn't work. If you dry-run through it you'll find that both a and b have the value 2 at the end. You'll need to create an extra, temporary variable.
You can add the extra code to your main or create a separate function. If you have created an entry class to contain a person's information, and you created an array of entrys called directory, then the following might be a reasonable prototype if you want to put the bubblesort code into a function.
void bubblesort(entry directory[], int num_of_entries)
The first input parameter has the square brackets to show it's an array. You might call the function by doing bubblesort(directory, 5);
Note that though you need to compare the "number" fields of the entries, you can swap complete entries - you needn't separately swap the number and name fields.
Exercise 11 - Binary search
Another revision exercise. Don't start this until you're sure that bubblesort works. You could start by copying the code from exercise 10.
Write a program that asks the user to type in a number. If that number is in the phone directory you've created earlier, the program should print the corresponding name. If the number isn't in the directory, the program should print a suitable message.
Use the binary search method (so you'll need to sort the items first). Look at the value of the middle item in the list, compare that to the value you're seeking and decide which half of the list you're going to repeat the process in. You might find it useful to create variables (low and high for example) to store where in the array the ends of the current section are, updating one of them each time you reduce the section. Keep going until you've found the item or you can't search any further. While you're developing the program it might be useful to print out low and high each time you change them. Test the function, making sure it does something sensible when asked to look for a number that doesn't exist.
Algorithmic Complexity
So far your programs have taken minutes to write and milliseconds to run, so optimising wasn't crucial. Before long however, you'll be writing programs where efficiency matters. For functions like searching and sorting that process lots of data there are vast differences between the efficiency of the algorithms - pick the wrong one and your program will run for days rather than minutes. It's important to determine how the time taken depends on the amount of data, n. If the relationship is linear, then doubling the amount of data will double the program time. Sadly however, many of the methods that are easy to write have times proportional to the square of the data size. Even if your bubble sort can sort 10 numbers quickly, sorting 1,000 will take 10,000 times longer.
One way to express the efficiency of an algorithm (its Algorithmic Complexity) is to use "Big-O" notation. An O(1) algorithm's time doesn't depend on the amount of data, an O(n) algorithm scales linearly, an O(n²) algorithm quadratically, and so on. "Big-O" notation indicates the order of magnitude of the performance, so constants aren't put in the brackets. It usually describes worst-case behaviour.
The efficiency of an algorithm can be deduced by studying the code. That's not always trivial, but here's a simple example.
// Initialise a square matrix called b of size n
for (int i=0; i<n; i++) {
   for (int j=0; j<n; j++) {
      b[i][j] = 99;
   }
}
By considering how many times each loop is executed (note that one loop is nested inside the other) you should be able to deduce how many times the assignment statement is run and hence the order of this task. Then look at your binary search code (or just think about the algorithm) and work out its order (hint - it's not O(n) like a naive search would be. Think about the effect of doubling the amount of data).
Exercise 12 - Measuring program speed
The C++ standard library has a built-in function to sort items, which often uses the quicksort algorithm. We'll investigate how the time taken to sort n items depends on n, the number of items. Is the time proportional to n? To n²? The claim for the built-in function is that it's O(n*log(n)) (natural logs, but it doesn't really matter). We'll determine by experiment whether this is true, then compare that to the speed of your bubble sort code.
The program below uses some commands you've not seen before, but they'll be useful to you next term and/or next year. Like next term, we're providing you with a program that works but is incomplete. We're also not going to provide step-by-step instructions. The new features involve
- Timing - We've provided a routine you can use to time parts of your programs. It returns the number of microseconds that have elapsed since 1st Jan 1970. Don't worry about how it works.
- vector - C++'s vector is a more sophisticated version of an array. You'll be using it in the 2nd year. We're using it here because it makes the code simpler. You won't need to change any of the lines that use vector in this code. Just note that you can read values from a vector just as you'd read them from an array, using square brackets.
- Standard functions - C++ provides functions to sort, search, shuffle, etc, so it's worth seeing how to use them rather than having to write your own functions all the time.
Here's a program to sort 10 numbers
#include <iostream>
#include <vector>
#include <sys/time.h>
#include <cmath>     // needed for log
#include <iomanip>   // needed for setw
#include <algorithm> // needed for sort

using namespace std;

// Prototypes
long microseconds();
long timeTheSort(long thelength);

int main() {
   long thelength=10; // the number of items: n in the description above
   long timeTaken;
   timeTaken=timeTheSort(thelength);
   // Output the results, formatting so that each number is in
   // a column 10 characters wide.
   // First print the column headings
   cout << "    Length      Time    Time/n   Time/(n*log(n))" << endl;
   cout << setfill(' '); // fill the gaps with spaces
   cout << setw(10) << thelength << endl;
}

long timeTheSort(long thelength) {
   // Create a vector called numbers big enough to store 'thelength' ints
   // You can access the values in it just as if 'numbers' were an array
   vector<int> numbers(thelength);
   long thetime; // needed later

   // After the next lines, the vector number will contain
   // the integers 0 to thelength-1
   for (int i=0; i<thelength; i++) {
      numbers[i]=i;
   }

   // Randomize and output the values
   random_shuffle(numbers.begin(), numbers.end());
   for (int i=0; i<thelength; i++) {
      cout << numbers[i] << endl;
   }

   // Sort and output the values
   sort(numbers.begin(), numbers.end());
   for (int i=0; i<thelength; i++) {
      cout << numbers[i] << endl;
   }
   return thetime;
}

// This function returns microseconds elapsed since the dawn of computing
// (1st Jan, 1970). Don't worry about how it works
long microseconds() {
   struct timeval tv;
   gettimeofday(&tv, 0);
   return 1000000*tv.tv_sec + tv.tv_usec;
}
Compile and run this code. You'll see a list of unsorted numbers, a list of sorted numbers, then part of a table. Make sure that you understand the flow of the code before proceeding. Note that microseconds() isn't used yet - you'll need it later. If you don't know what long means, re-read the Standard data types section.
First, comment out the lines that print the numbers (later you won't want to watch 10,000,000 numbers being printed out). Then complete the line of the table by calling the timeTheSort function. You'll need to add lines to the timeTheSort function to call microseconds() twice - once just before calling sort and once just after - and find the difference between the 2 results, returning the answer. Time only the sort function - don't time the setting-up code too. It should take less than 10 ms. Then complete the final 2 columns of the table. If Time/N comes out to 0, you need to read the Standard data types section again. Your output should be something like
    Length      Time    Time/n   Time/(n*log(n))
        10         8       0.8          0.347436
Using an appropriately placed for loop, add a completed line of data to the table for lengths of 100, 1000, etc up to 10,000,000.
Which length has the smallest time-per-element value? Is the time proportional to n*log(n)?
Now assess the performance of bubblesort. Take the lines from Exercise 10 that performed the sorting, and use them here instead of C++'s sort routine. You'll need to adjust your bubblesort code so that it sorts an appropriately-sized array of numbers rather than an array of directory entries. You needn't create a new function, but if you really want to, a reasonable prototype for it would be
void bubblesort(vector<int> numbers, int thelength);
Instead of working out Time/(n*log(n)), calculate Time/(n*n) (because the speed of bubble sort is supposed to be proportional to n²). You'll find that bubblesort is much slower than sort, so don't sort an array any bigger than 10,000 items. Run the program and look at the results. Is the performance of bubblesort close to the predictions? Can you think of any reason for ever using bubblesort?
Finally, change the code so that the table is written into a file called "ex12table". We'll be checking to see that you've produced the file, so don't delete it afterwards.
More exercises
The early exercises here could be done during the course to reinforce your understanding of basic concepts. The later exercises might be useful after the course to help you prepare for next term.
Simple
- Print the odd integers between 0 and 100 in ascending order
- Print the odd integers between 0 and 100 in descending order
- Get the user to type in a word. Print the word with the letters reversed
- Get the user to type in a word. Count the number of vowels in the word
- Rewrite all the earlier exercises that used while loops so that they use for loops instead.
Classes
- Invent a class called fraction suitable for storing a vulgar fraction. Write a function that, given a variable of type fraction, prints out the value in the form a/b. Write a function that, given 2 variables of type fraction, prints out their sum.
Functions
- Write a function that converts degrees to radians
Puzzle-solving
- By trial and error, find all the Pythagorean triples where the integers are less than 100 (a,b,c is a Pythagorean triple if a*a + b*b == c*c). It's easy to eliminate duplicates (4 3 5 is a duplicate of 3 4 5 ). It's rather harder to eliminate multiples (e.g. 6 8 10 is a multiple of 3 4 5), but the whole program should be less than 20 lines long.
- Ask the user to type in a number within a given range (inclusive). If the user types something invalid, ask them to try again. Keep asking until they type a valid number.
- Write a function which is given a float and prints out the number with commas inserted every 3 digits, counting from the right - e.g. 1,345,197. You'll need to find out how to convert numbers to strings - use the WWW!
- Write a function that adds two fractions and prints the result. E.g. its prototype could be
void addfractions(int numerator1, int denominator1, int numerator2, int denominator2);
so that calling it as addfractions(3,4,5,6) will make it output something like 38/24, or better still 19/12, or even 1 + 7/12. You could create a class called Fraction
- There are many variants of "Poker Dice" - Yahtzee™ for example. The player rolls 5 dice, keeps as many as they want, rolls the remaining ones, keeps as many of those as they want, then rolls again. The aim is to get particular combinations of die-values. To keep things simple we're just going to collect 6s. The chance of throwing 5 6s in the first roll is 1 in 6^5 (i.e. 1 in 7776). Using the rolls as described above, what's the chance of having 5 6s at the end of the 3 rolls? We'll determine the answer experimentally, running 100,000 attempts.
Here is the code for most of the main routine.
int successes=0;
for (int i=0; i<100000; i=i+1) {
   if (fiveSixesThrown())
      successes=successes+1;
}
cout << "After 3 rolls, the success rate is " << successes/1000.0 << "%" << endl;
From this, work out the prototype of the fiveSixesThrown function, then write the function (use the RollDie function that you've already used). Think about the variables that the function might need - I used totalNumberOfSixes, numberOfDiceLeft, and thisRollNumberOfSixes, but it's up to you.
Then complete the program.
Rather than 3 rolls, what's the minimum number of rolls that will give you at least a 50% chance of ending up with 5 6s? The tidiest way to determine this is to change the code so that your fiveSixesThrown function is given an argument to show how many rolls are allowed. You shouldn't need to make many modifications to the function's code. Then you can write code in main like the following to increase the number of rolls until you reach the required level of performance
float percentage=0;
numOfRolls=1;
while (percentage<50) {
   if (fiveSixesThrown(numOfRolls)) {
      ...
   }
   numOfRolls=numOfRolls+1;
}
- Write a program that simulates the rolling of 10 dice 10000 times. Display a frequency table of outcomes (the outcome being the sum of the 10 dice) - e.g.
Outcome   Frequency
      1           0
      2           0
    ...
     60           1
(this was the final exercise in the Mich 2008 term)
What is a view?
Since [P2415R0], added wording.
C++20 Ranges introduced two main concepts for dealing with ranges: range and view. These notions were introduced way back in the original paper, "Ranges for the Standard Library" [N4128] (though under different names than what we have now - what we now know as range and view were originally specified as Iterable and Range¹):
[A Range] type is one for which we can call begin() and end() to yield an iterator/sentinel pair. (Sentinels are described below.) The [Range] concept says nothing about the type's constructibility or assignability. Range-based standard algorithms are constrained using the [Range] concept.
[…]
The [View] concept is modeled by lightweight objects that denote a range of elements they do not own. A pair of iterators can be a model of [View], whereas a vector is not. [View], as opposed to [Range], requires copyability and assignability. Copying and assignment are required to execute in constant time; that is, the cost of these operations is not proportional to the number of elements in the Range.
The [View] concept refines the [Range] concept by additionally requiring the following valid expressions for an object o of type O:
// Constructible:
auto o1 = o;
auto o2 = std::move(o);
O o3; // default-constructed, singular

// Assignable:
o2 = o1;
o2 = std::move(o1);

// Destructible
o.~O();
The [View] concept exists to give the range adaptors consistent and predictable semantics, and memory and performance characteristics. Since adaptors allow the composition of range objects, those objects must be efficiently copyable (or at least movable). The result of adapting a [View] is a [View]. The result of adapting a container is also a [View]; the container - or any [Range] that is not already a [View] - is first converted to a [View] automatically by taking the container's begin and end.
The paper really stresses two points throughout:
This design got muddled a bit when views ceased to require copyability, as a result of “Move-only Views” [P1456R1]. As the title suggests, this paper relaxed the requirement that views be copyable, and got us to the set of requirements we have now in 24.4.4 [range.view]:
But somehow absent from the discussion is: why do we care about views and range adaptors being cheap to copy and assign and destroy? This isn't just idle navel-gazing either, [LWG3452] points out that requiring strict O(1) destruction has implications for whether std::generator [P2168R3] can be a view. What can go wrong in a program that annotates a range as being a view despite not meeting these requirements?
The goal of this paper is to provide good answers to these questions.
N4128 asked the following question:
This creates a view of v that iterates in reverse order. Now: is rng copyable, and if so, how expensive is the copy operation?
Why is this question important? The initial thought might be that rng itself needs to be cheap to copy because we write algorithms that take views by value:
We could have gone that route (and we definitely do encourage people to take specific views by value - such as span and string_view), but that would affect the usability of range-based algorithms. You could not write ranges::sort(v) on a vector<T>, since that is not a view - you would have to write ranges::sort(views::all(v)) or perhaps something like ranges::sort(v.all()) or ranges::sort(v.view()). Either way, we very much want range-based algorithms to be able to operate on, well, ranges, so these are always written instead to take ranges by forwarding reference:
At best, we write algorithms that do require views and it's those algorithms that themselves construct the views that they need - but their API surface still takes ranges (specifically, viewable_ranges, 24.4.5 [range.refinements]) by forwarding reference.
If we don’t care about views being cheap to copy because of the desire to write algorithms that take them by value, then why do we care about views being cheap to copy?
Because we very much care about views being cheap to construct.
Let’s go back to this example:
This is intended to be a lazy range adaptor - constructing rng here isn't intended to do any work, it's just preparing to do work in the future. It's important for this to be "cheap" - in the sense that this should absolutely not end up copying all the elements of v, or really doing any operation on the elements of v. This extends to all layering of range adaptors:
If constructing each of these range adaptors in turn required touching all the elements of v, this would be a horribly expensive construct - and we haven't even done anything yet! This is why we need views to be cheap to copy - range adaptors are the algorithms for views, and we need to be able to pass views cheaply to those.
Currently, in order for a type T to model view, it needs to have O(1) move construction, move assignment, and destruction. If T is copyable, the copy operations also need to be O(1). What happens if a type T satisfies view (whether by it inheriting from view_base, inheriting from view_interface<T>, or simply specializing enable_view<T> to be true), yet does not actually satisfy the O(1) semantics I just laid out?

Consider:
struct bad_view : view_interface<bad_view> {
    std::vector<int> v;
    bad_view(std::vector<int> v) : v(std::move(v)) { }
    std::vector<int>::iterator begin() { return v.begin(); }
    std::vector<int>::iterator end() { return v.end(); }
};

std::vector<int> get_ints();

auto rng = bad_view(get_ints()) | views::enumerate;
for (auto const& [idx, i] : rng) {
    std::print("{}. {}\n", idx, i);
}
bad_view is, as the name might suggest, a bad view. It is O(1) move constructible and move assignable, but it is not O(1) destructible. It is copyable, but not O(1) copyable (though nothing in this program tries to copy a
bad_view - but if it did, that would be expensive!). As a result, this program is violating 16.4.5.11 [res.on.requirements]/2:
2 If the validity or meaning of a program depends on whether a sequence of template arguments models a concept, and the concept is satisfied but not modeled, the program is ill-formed, no diagnostic required.
Ill-formed, no diagnostic required! That is a harsh ruling for this program!
But what actually goes wrong if a program-defined
view ends up violating the semantic requirements of a
view? The goal of a
view is to enable cheap construction of range adaptors. If that construction isn’t as cheap as expected, then the result is just that the construction is… more expensive than expected. It would still be semantically correct, it’s just less efficient than ideal? That’s not usually the line to draw for ill-formed, no diagnostic required.
Furthermore, what actual operations do we need to be cheap? Consider this refinement:
struct bad_view2 : view_interface<bad_view2> {
    std::vector<int> v;
    bad_view2(std::vector<int> v) : v(std::move(v)) { }

    // movable, but not copyable
    bad_view2(bad_view2 const&) = delete;
    bad_view2(bad_view2&&) = default;
    bad_view2& operator=(bad_view2 const&) = delete;
    bad_view2& operator=(bad_view2&&) = default;

    std::vector<int>::iterator begin() { return v.begin(); }
    std::vector<int>::iterator end() { return v.end(); }
};

std::vector<int> get_ints();

auto rng = bad_view2(get_ints())
         | views::filter([](int i){ return i > 0; })
         | views::transform([](int i){ return i * i; });
This whole construction involves moving a
vector<int> twice (once into the
filter_view and once into the
transform_view; both are cheap, since moving a
vector<int> is cheap) and destroying a
vector<int> three times (twice when the source is empty, and once eventually when we’re destroying
rng - it’s this last one that is not O(1)).
In contrast, the ordained method for writing this code is actually:
Now, this no longer involves any moves of a
vector<int>, since
rng will instead be holding a
ref_view into it, so this is in some sense cheaper. But this still, in the end, requires destroying that
vector<int> - it’s just that this cost is paid by destroying
ints rather than destroying
rng in this formulation. That’s not meaningfully different. And moreover, there’s real cost to be paid by the latter formulation: now
rng has an internal reference into
ints, which both means that we have to be more careful because we can dangle (not an issue in the
bad_view2 formulation) and that we have an extra indirection through a pointer which could have performance impact.
Which is ironic, given that it’s the performance consideration which makes
bad_view2 bad.
Let’s consider relaxing the requirements as follows:
- (2.1) T has O(1) move construction; and
- (2.2) T has O(1) move assignment; and
- (2.3) if N moves are made from an object of type T that contained M elements, then those N objects have O(N+M) destruction; and
- (2.4) copy_constructible<T> is false, or T has O(1) copy construction; and
- (2.5) copyable<T> is false, or T has O(1) copy assignment.
Or, alternatively:
- (2.3) an object of type T that has been moved from has O(1) destruction; and
In this formulation,
bad_view is still a bad view (because it is copyable and copying it is expensive - which is important because building up a range adaptor pipeline using lvalue views will try to copy them) but
bad_view2 is actually totally fine (and indeed, it is not more expensive than the alternate formulation).
In this formulation,
std::generator<T> is definitely a
view that does not violate any of the semantic requirements.
This formulation has another extremely significant consequence. [N4128] stated:
[Views] are lightweight objects that refer to elements they do not own. As a result, they can guarantee O(1) copyability and assignability.
But this would no longer necessarily have to be the case. Consider the following:
template <range R>
    requires is_object_v<R> && movable<R>
class owning_view : public view_interface<owning_view<R>> {
    R r_; // exposition only
public:
    owning_view() = default;
    constexpr owning_view(R&& t);

    owning_view(const owning_view&) = delete;
    owning_view(owning_view&&) = default;
    owning_view& operator=(const owning_view&) = delete;
    owning_view& operator=(owning_view&&) = default;

    constexpr R& base() & { return r_; }
    constexpr const R& base() const& { return r_; }
    constexpr R&& base() && { return std::move(r_); }
    constexpr const R&& base() const&& { return std::move(r_); }

    constexpr iterator_t<R> begin() { return ranges::begin(r_); }
    constexpr iterator_t<const R> begin() const requires range<const R> { return ranges::begin(r_); }
    constexpr sentinel_t<R> end() { return ranges::end(r_); }
    constexpr sentinel_t<const R> end() const requires range<const R> { return ranges::end(r_); }

    // + overloads for empty, size, data
};

template <class R>
owning_view(R&&) -> owning_view<R>;
An
owning_view<vector<int>> would completely satisfy the semantics of
view: it is not copyable, it is O(1) movable, and a moved-from object would be O(1) destructible. All without sacrificing any of the benefit that views provide: cheap construction of range adaptor pipelines.
Adopting these semantics, along with
owning_view, would further allow us to respecify
views::all (24.7.5 [range.all]): given a subexpression E, views::all(E) would be expression-equivalent to decay-copy(E) if the decayed type of E models view; otherwise, ref_view{E} if that expression is well-formed; otherwise, owning_view{E}.
The first sub-bullet effectively rejects using lvalue non-copyable views, as desired. Then the second bullet captures lvalue non-view ranges by reference and the new third bullet3 would capture rvalue non-view ranges by ownership. This is safer and more ergonomic too.
Making the above change implies we also need to respecify
viewable_range (in 24.4.5 [range.refinements]/5), since this concept and
views::all need to stay in sync:
5 The viewable_range concept specifies the requirements of a range type that can be converted to a view safely.
What is a view?
Once upon a time, a
view was a cheaply copyable, non-owning range. We’ve already somewhat lost the “cheaply copyable” requirement since views don’t have to be copyable, and now this paper is suggesting that we also lose the non-owning part.
So how do you answer the question now?
There may not be a clean answer, which is admittedly unsatisfying, but it mainly boils down to:
If
v is an lvalue, do you want
rng to copy
v or to refer to
v? If you want it to copy
v, because copying
v is cheap and you want to avoid paying for indirection and potential dangling, then
v is a
view. If you want to refer to
v, because copying
v is expensive (possibly more expensive than the algorithm you’re doing), then
v is not a view.
string_view is a
view,
vector<string> is not.
This proposal has been implemented and passes the libstdc++ testsuite (with suitable modifications).
This also resolves [LWG3452].
Update the value of __cpp_lib_ranges in 17.3.2 [version.syn].
Add
owning_view to 24.2 [ranges.syn]:
#include <compare>              // see [compare.syn]
#include <initializer_list>     // see [initializer.list.syn]
#include <iterator>             // see [iterator.synopsis]

namespace std::ranges {
  // ...

  // [range.all], all view
  namespace views {
    inline constexpr unspecified all = unspecified;

    template<viewable_range R>
      using all_t = decltype(all(declval<R>()));
  }

  template<range R>
    requires is_object_v<R>
  class ref_view;

  template<class T>
    inline constexpr bool enable_borrowed_range<ref_view<T>> = true;

+ template<range R>
+   requires see below
+ class owning_view;
+
+ template<class T>
+   inline constexpr bool enable_borrowed_range<owning_view<T>> = enable_borrowed_range<T>;

  // ...
}
Relax the requirements on
view in 24.4.4 [range.view]:
1 The view concept specifies the requirements of a range type that has the semantic properties below, which make it suitable for use in constructing range adaptor pipelines ([range.adaptors]).
- (2.1) T has O(1) move construction; and
- (2.2) move assignment of an object of type T is no more complex than destruction followed by move construction; and
- (2.3) if N copies and/or moves are made from an object of type T that contained M elements, then those N objects have O(N+M) destruction [Note: this implies that a moved-from object of type T has O(1) destruction - end note]; and
- (2.4) copy_constructible<T> is false, or T has O(1) copy construction; and
- (2.5) copyable<T> is false, or copy assignment of an object of type T is no more complex than destruction followed by copy construction.
3 [Example 1: Examples of views are:

- (3.1) A range type that wraps a pair of iterators.
- (3.2) A range type that holds its elements by shared_ptr and shares ownership with all its copies.
- (3.3) A range type that generates its elements on demand.

A container such as vector<string> does not meet the semantic requirements of view since copying the container copies all of the elements, which cannot be done in constant time. — end example]
Change the definition of
viewable_range to line up with
views::all (see later) in 24.4.5 [range.refinements], inserting the new exposition-only variable template
is-initializer-list<T> [ Editor's note:
remove_reference_t rather than
remove_cvref_t because we need to reject
const vector<int>&& from being a
viewable_range ]:
* For a type R, is-initializer-list<R> is true if and only if remove_cvref_t<R> is a specialization of initializer_list.
5 The viewable_range concept specifies the requirements of a range type that can be converted to a view safely.
Change the last bullet in the definition of
views::all in 24.7.5.1 [range.all.general]: otherwise, views::all(E) is expression-equivalent to owning_view{E}.
Add a new subclause under [range.all] directly after 24.7.5.2 [range.ref.view] named “Class template
owning_view” with stable name [range.owning.view]:
1 owning_view is a move-only view of the elements of some other range.
namespace std::ranges {
  template<range R>
    requires movable<R> && (!is-initializer-list<R>) // see [range.refinements]
  class owning_view : public view_interface<owning_view<R>> {
  private:
    R r_ = R(); // exposition only
  public:
    owning_view() requires default_initializable<R> = default;
    constexpr owning_view(R&& t);

    owning_view(owning_view&&) = default;
    owning_view& operator=(owning_view&&) = default;

    constexpr R& base() & noexcept { return r_; }
    constexpr const R& base() const& noexcept { return r_; }
    constexpr R&& base() && noexcept { return std::move(r_); }
    constexpr const R&& base() const&& noexcept { return std::move(r_); }

    constexpr iterator_t<R> begin() { return ranges::begin(r_); }
    constexpr sentinel_t<R> end() { return ranges::end(r_); }

    constexpr auto begin() const requires range<const R> { return ranges::begin(r_); }
    constexpr auto end() const requires range<const R> { return ranges::end(r_); }

    constexpr bool empty() requires requires { ranges::empty(r_); } { return ranges::empty(r_); }
    constexpr bool empty() const requires requires { ranges::empty(r_); } { return ranges::empty(r_); }

    constexpr auto size() requires sized_range<R> { return ranges::size(r_); }
    constexpr auto size() const requires sized_range<const R> { return ranges::size(r_); }

    constexpr auto data() requires contiguous_range<R> { return ranges::data(r_); }
    constexpr auto data() const requires contiguous_range<const R> { return ranges::data(r_); }
  };
}
2 Effects: Initializes r_ with std::move(t).
[LWG3452] Mathias Stearn. Are views really supposed to have strict 𝒪(1) destruction?
[N4128] E. Niebler, S. Parent, A. Sutton. 2014-10-10. Ranges for the Standard Library, Revision 1.
[P1456R1] Casey Carter. 2019-11-12. Move-only views.
[P2168R3] Corentin Jabot, Lewis Baker. 2021-04-19. generator: A Synchronous Coroutine Generator Compatible With Ranges.
[P2325R3] Barry Revzin. 2021-05-14. Views should not be required to be default constructible.
[P2415R0] Barry Revzin, Tim Song. 2021-07-15. What is a view?
This is why they’re called range adaptors rather than view adaptors, perhaps that should change as well?↩︎
the existing third bullet could only have been hit by rvalue, borrowed, non-view ranges. Before the adoption of [P2325R3], fixed-extent
span was the pub quiz trivia answer to what this bullet was for. Afterwards, is there a real type that would fit here?↩︎ | https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2021/p2415r2.html | CC-MAIN-2022-27 | refinedweb | 2,949 | 51.99 |
JavaScript animations
JavaScript animations can handle things that CSS can't.
For instance, moving along a complex path, with a timing function different from Bezier curves, or an animation on a canvas.
Using setInterval
An animation can be implemented as a sequence of frames -- usually small changes to HTML/CSS properties.
For instance, changing
style.left from
0px to
100px moves the element. And if we increase it in
setInterval, changing by
2px with a tiny delay, like 50 times per second, then it looks smooth. That's the same principle as in the cinema: 24 or more frames per second is enough to make it look smooth.
The pseudo-code can look like this:
let timer = setInterval(function() { if (animation complete) clearInterval(timer); else increase style.left by 2px }, 20); // change by 2px every 20ms, about 50 frames per second
Here's a more complete example of the animation:
let start = Date.now(); // remember start time let timer = setInterval(function() { // how much time passed from the start? let timePassed = Date.now() - start; if (timePassed >= 2000) { clearInterval(timer); // finish the animation after 2 seconds return; } // draw the animation at the moment timePassed draw(timePassed); }, 20); // as timePassed goes from 0 to 2000 // left gets values from 0px to 400px function draw(timePassed) { train.style.left = timePassed / 5 + 'px'; }
Using requestAnimationFrame
Let's imagine we have several animations running simultaneously.
If we run them separately, then even though each one has
setInterval(..., 20), the browser would have to repaint much more often than every
20ms.
That's because they have different starting times, so "every 20ms" differs between animations. The intervals are not aligned. So we'll have several independent runs within
20ms.
In other words, this:
setInterval(function() { animate1(); animate2(); animate3(); }, 20)
...Is lighter than three independent calls:
setInterval(animate1, 20); // independent animations setInterval(animate2, 20); // in different places of the script setInterval(animate3, 20);
These several independent redraws should be grouped together, to make the redraw easier for the browser (and hence smoother for people).
There's one more thing to keep in mind. Sometimes the CPU is overloaded, or there are other reasons to redraw less often (for example, when the browser tab is hidden), so we really shouldn't run the animation every
20ms.
But how do we know about that in JavaScript? There's a specification Animation timing that provides the function
requestAnimationFrame. It addresses all these issues and even more.
The syntax:
let requestId = requestAnimationFrame(callback)
That schedules the
callback function to run in the closest time when the browser wants to do animation.
If we do changes in elements in
callback then they will be grouped together with other
requestAnimationFrame callbacks and with CSS animations. So there will be one geometry recalculation and repaint instead of many.
The returned value
requestId can be used to cancel the call:
// cancel the scheduled execution of callback cancelAnimationFrame(requestId);
The
callback gets one argument -- the time passed from the beginning of the page load in milliseconds. This time can also be obtained by calling performance.now().
Usually
callback runs very soon, unless the CPU is overloaded or the laptop battery is almost discharged, or there's another reason.
The code below shows the time between first 10 runs for
requestAnimationFrame. Usually it's 10-20ms:
<script> let prev = performance.now(); let times = 0; requestAnimationFrame(function measure(time) { document.body.insertAdjacentHTML("beforeEnd", Math.floor(time - prev) + " "); prev = time; if (times++ < 10) requestAnimationFrame(measure); }) </script>
Structured animation
Now we can make a more universal animation function based on
requestAnimationFrame:

function animate({timing, draw, duration}) {

  let start = performance.now();

  requestAnimationFrame(function animate(time) {
    // timeFraction goes from 0 to 1
    let timeFraction = (time - start) / duration;
    if (timeFraction > 1) timeFraction = 1;

    // calculate the current animation state
    let progress = timing(timeFraction);

    draw(progress); // draw it

    if (timeFraction < 1) {
      requestAnimationFrame(animate);
    }
  });
}
Function
animate accepts 3 parameters that essentially describes the animation:
duration : Total time of animation. Like,
1000.
timing(timeFraction) : Timing function, like CSS-property
transition-timing-function that gets the fraction of time that passed (
0 at start,
1 at the end) and returns the animation completion (like
y on the Bezier curve).
For instance, a linear function means that the animation goes on uniformly with the same speed:
function linear(timeFraction) { return timeFraction; }
Its graph:
That's just like `transition-timing-function: linear`. There are more interesting variants shown below.
draw(progress) : The function that takes the animation completion state and draws it. The value
progress=0 denotes the beginning animation state, and
progress=1 -- the end state.
This is the function that actually draws out the animation.
It can move the element:
function draw(progress) { train.style.left = progress + 'px'; }
...Or do anything else, we can animate anything, in any way.
Let's animate the element
width from
0 to
100% using our function.
The code for it:
animate({ duration: 1000, timing(timeFraction) { return timeFraction; }, draw(progress) { elem.style.width = progress * 100 + '%'; } });
Unlike CSS animation, we can make any timing function and any drawing function here. The timing function is not limited by Bezier curves. And
draw can go beyond properties and create new elements, for example for a fireworks animation.
Timing functions
We saw the simplest, linear timing function above.
Let's see more of them. We'll try movement animations with different timing functions to see how they work.
Power of n
If we want to speed up the animation, we can use
progress in the power
n.
For instance, a parabolic curve:
function quad(timeFraction) { return Math.pow(timeFraction, 2) }
The graph:
...Or the cubic curve or an even greater
n. Increasing the power makes it speed up faster.
Here's the graph for
progress in the power
5:
The arc
function circ(timeFraction) { return 1 - Math.sin(Math.acos(timeFraction)); }
The graph:
Back: bow shooting
This function does the "bow shooting". First we "pull the bowstring", and then "shoot".
Unlike previous functions, it depends on an additional parameter
x, the "elasticity coefficient". The distance of "bowstring pulling" is defined by it.
The code:
function back(x, timeFraction) { return Math.pow(timeFraction, 2) * ((x + 1) * timeFraction - x) }
The graph for
x = 1.5:
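Whatever the elasticity coefficient, the curve still runs from 0 to 1, dipping negative in between while the "bowstring" is pulled. A quick standalone check of the endpoints, reusing the function above:

```javascript
function back(x, timeFraction) {
  return Math.pow(timeFraction, 2) * ((x + 1) * timeFraction - x);
}

console.log(back(1.5, 0));       // 0 - start
console.log(back(1.5, 1));       // 1 - end: 1 * ((1.5 + 1) - 1.5)
console.log(back(1.5, 0.3) < 0); // true - the element first moves backwards
```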
Bounce
Imagine we are dropping a ball. It falls down, then bounces back a few times and stops.
The
bounce function does the same, but in the reverse order: "bouncing" starts immediately. It uses a few special coefficients for that:
function bounce(timeFraction) { for (let a = 0, b = 1, result; 1; a += b, b /= 2) { if (timeFraction >= (7 - 4 * a) / 11) { return -Math.pow((11 - 6 * a - 11 * timeFraction) / 4, 2) + Math.pow(b, 2) } } }
Elastic animation
One more "elastic" function that accepts an additional parameter
x for the "initial range".
function elastic(x, timeFraction) { return Math.pow(2, 10 * (timeFraction - 1)) * Math.cos(20 * Math.PI * x / 3 * timeFraction) }
The graph for
x=1.5:
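As a numeric sanity check (a standalone snippet using the two functions above), both curves start at or near zero and end at one, which is exactly what a timing function is expected to do:

```javascript
function bounce(timeFraction) {
  for (let a = 0, b = 1; 1; a += b, b /= 2) {
    if (timeFraction >= (7 - 4 * a) / 11) {
      return -Math.pow((11 - 6 * a - 11 * timeFraction) / 4, 2) + Math.pow(b, 2);
    }
  }
}

function elastic(x, timeFraction) {
  return Math.pow(2, 10 * (timeFraction - 1)) *
         Math.cos(20 * Math.PI * x / 3 * timeFraction);
}

console.log(bounce(0));       // 0
console.log(bounce(1));       // 1
console.log(elastic(1.5, 0)); // 0.0009765625 (= 2^-10), visually zero
console.log(elastic(1.5, 1)); // ≈ 1
```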
Reversal: ease*
So we have a collection of timing functions. Their direct application is called "easeIn".
Sometimes we need to show the animation in the reverse order. That's done with the "easeOut" transform.
easeOut
In the "easeOut" mode the
timing function is put into a wrapper
timingEaseOut:
timingEaseOut(timeFraction) = 1 - timing(1 - timeFraction)
In other words, we have a "transform" function
makeEaseOut that takes a "regular" timing function and returns the wrapper around it:
// accepts a timing function, returns the transformed variant function makeEaseOut(timing) { return function(timeFraction) { return 1 - timing(1 - timeFraction); } }
For instance, we can take the
bounce function described above and apply it:
let bounceEaseOut = makeEaseOut(bounce);
Then the bounce will be not in the beginning, but at the end of the animation. Looks even better:
[codetabs src=""]
Here we can see how the transform changes the behavior of the function:
If there's an animation effect in the beginning, like bouncing -- it will be shown at the end.
In the graph above the regular bounce has the red color, and the easeOut bounce is blue.
- Regular bounce -- the object bounces at the bottom, then at the end sharply jumps to the top.
- After
easeOut-- it first jumps to the top, then bounces there.
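The mirror relationship is easy to verify numerically. This standalone snippet restates the tutorial's wrapper and checks a sample point of the quadratic curve:

```javascript
function quad(timeFraction) {
  return Math.pow(timeFraction, 2);
}

// The tutorial's transform: reflect a timing function around the curve's center.
function makeEaseOut(timing) {
  return function(timeFraction) {
    return 1 - timing(1 - timeFraction);
  };
}

const quadEaseOut = makeEaseOut(quad);

// easeIn starts slow; easeOut finishes slow.
console.log(quad(0.25));        // 0.0625
console.log(quadEaseOut(0.25)); // 0.4375 = 1 - 0.75^2
```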
easeInOut
We also can show the effect both in the beginning and the end of the animation. The transform is called "easeInOut".
Given the timing function, we calculate the animation state like this:
if (timeFraction <= 0.5) { // first half of the animation return timing(2 * timeFraction) / 2; } else { // second half of the animation return (2 - timing(2 * (1 - timeFraction))) / 2; }
The wrapper code:
function makeEaseInOut(timing) { return function(timeFraction) { if (timeFraction < .5) return timing(2 * timeFraction) / 2; else return (2 - timing(2 * (1 - timeFraction))) / 2; } } bounceEaseInOut = makeEaseInOut(bounce);
The "easeInOut" transform joins two graphs into one:
easeIn (regular) for the first half of the animation and
easeOut (reversed) -- for the second part.
The effect is clearly seen if we compare the graphs of
easeIn,
easeOut and
easeInOut of the
circ timing function:
- Red is the regular variant of
circ(
easeIn).
- Green --
easeOut.
- Blue --
easeInOut.
As we can see, the graph of the first half of the animation is the scaled down
easeIn, and the second half is the scaled down
easeOut. As a result, the animation starts and finishes with the same effect.
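A standalone check (reusing the tutorial's wrapper) confirms the endpoints and that the two scaled halves meet exactly at the midpoint:

```javascript
function quad(timeFraction) {
  return Math.pow(timeFraction, 2);
}

function makeEaseInOut(timing) {
  return function(timeFraction) {
    if (timeFraction < .5) return timing(2 * timeFraction) / 2;
    else return (2 - timing(2 * (1 - timeFraction))) / 2;
  };
}

const quadInOut = makeEaseInOut(quad);

// Starts and ends with the same slow quadratic effect.
console.log(quadInOut(0));   // 0
console.log(quadInOut(0.5)); // 0.5
console.log(quadInOut(1));   // 1
```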
More interesting "draw"
Instead of moving the element we can do something else. All we need is to write the proper
draw.
Here's the animated "bouncing" text typing:
[codetabs src=""]
Summary
For animations that CSS can't handle well, or those that need tight control, JavaScript can help. JavaScript animations should be implemented via
requestAnimationFrame. That built-in method allows us to set up a callback function to run when the browser is preparing a repaint. Usually that's very soon, but the exact time depends on the browser.
When a page is in the background, there are no repaints at all, so the callback won't run: the animation will be suspended and won't consume resources. That's great.
Here's the helper
animate function to set up most animations:

function animate({timing, draw, duration}) {

  let start = performance.now();

  requestAnimationFrame(function animate(time) {
    // timeFraction goes from 0 to 1
    let timeFraction = (time - start) / duration;
    if (timeFraction > 1) timeFraction = 1;

    // calculate the current animation state
    let progress = timing(timeFraction);

    draw(progress); // draw it

    if (timeFraction < 1) {
      requestAnimationFrame(animate);
    }
  });
}
Options:
duration-- the total animation time in ms.
timing-- the function to calculate animation progress. Gets a time fraction from 0 to 1, returns the animation progress, usually from 0 to 1.
draw-- the function to draw the animation.
Surely we could improve it, add more bells and whistles, but JavaScript animations are not applied on a daily basis. They are used to do something interesting and non-standard. So you'd want to add the features that you need when you need them.
JavaScript animations can use any timing function. We covered a lot of examples and transformations to make them even more versatile. Unlike CSS, we are not limited to Bezier curves here.
The same is about
draw: we can animate anything, not just CSS properties. | http://semantic-portal.net/javascript-animation-js-animation | CC-MAIN-2022-05 | refinedweb | 1,740 | 56.25 |
*************************
Write.
Input Validation: Do not accept a number less than 1 for the number of days worked.
Input and output should be done with Dialog and Message boxes. Your program should be well documented internally and externally
*************************
My question is where we have to display a table. How can I output a table with all of the days, pennies earned, and total on one output dialog? Is this possible?
Thanks in advance.
import javax.swing.JOptionPane;

public class PenniesForPay {
    public static void main(String[] args) {
        String inputString;
        int pennies;   // Penny accumulator
        int totalPay;  // Total pay accumulator
        int maxDays;   // Max number of days
        int day;       // Day counter

        inputString = JOptionPane.showInputDialog("For how many days will you work? ");
        maxDays = Integer.parseInt(inputString);

        // Validate the input
        while (maxDays < 1) {
            inputString = JOptionPane.showInputDialog("The number of days "
                    + "must be at least 1.\nEnter the number of days: ");
            maxDays = Integer.parseInt(inputString);
        }

        day = 1;
        pennies = 1;
        totalPay = 0;

        while (day <= maxDays) {
            // Display the day number and pennies earned.
            JOptionPane.showMessageDialog(null, "Day:\t" + day
                    + "\nPennies Earned:\t" + pennies);

            // Accumulate the total pay.
            totalPay = totalPay + pennies;

            // Increment for the next day.
            day++;

            // Double the number of pennies.
            pennies = pennies * 2;
        }

        JOptionPane.showMessageDialog(null, "Total pay: $" + totalPay / 100.0);
    }
}
John,
I've been meaning to ask you ... how did you produce the very fine User
Guide? Is that TeXmacs? LyX? raw LaTeX? ConTeXt? emacs magic?
Is there some slick way of getting the listings from the command line
window into the document, especially with the comments colorized? I'm
writing a small local guide, and was wondering ...
-gary
>>>>> "James" == James Boyle <boyle5@...> writes:
James> Is there anyway to place the tick marks so that they are
James> located outside the axes, i.e. on the same side of the axis
James> line as the axis labels?
James> With plots such as imshow and pcolor and even some busy
James> line plots, the interior minor ticks are completely
James> obscured and the exact location of the major ticks is
James> ambiguous.
James> It would be nice to be able to specify the ticks as inside
James> or outside (or both), right or left (or both), top or
James> bottom (or both). This functionality may already be present
James> but I cannot figure out how to invoke it if it is.
I would like to make tick placement more flexible, for example to
support a detachable tick line so the axis line, tick lines and labels
float below the axes boundary. In addition, I would like the ability
to position ticks along this line as above, centered or below, as you
suggest. For now this doesn't exist, but you can hack an
approximation.
The tick markers are TICKUP, TICKDOWN, TICKLEFT, and TICKRIGHT,
and these are constants in matplotlib.lines. You can set the tick
markers, for example, to be TICKDOWN. But you'll have to manually
adjust the y position of the labels to be below them.
The second hack is that this only works in interactive mode. Ticks are
generated dynamically (e.g., for panning and zooming) and the ticks
aren't generated until the plot is shown. In non-interactive mode, the
change of the default tick's line style is not propagating to the new
ticks that are dynamically generated when the line is shown. This
appears to be a bug so I'll look into it. For now, though, you should
be able to get something that works in non-interactive mode.
import matplotlib
matplotlib.interactive(True)
import matplotlib.lines as mpllines
import pylab as pl
ax = pl.subplot(111)
pl.plot([1,2,3])
lines = ax.get_xticklines()
labels = ax.get_xticklabels()
for line in lines:
line.set_marker(mpllines.TICKDOWN)
# labels are in axes coords, where 0,0 is lower left of axes rectangle
# and 1,1 is upper right
for label in labels:
label.set_y(-0.02)
pl.show()
>>>>> "seberino" == seberino <seberino@...> writes:
seberino> Imagine your arrays had points (Cartesian position
seberino> vectors) all over the place at completely random points
seberino> in space. The 'shape' of this plot depends on max and
seberino> min values of each coordinate. I believe Mathematica
seberino> plotting would automagically calculate these max and min
seberino> values and set plot ranges for you. This is why 'shape'
seberino> attribute of Matplotlib/Numarray seems awkward and
seberino> unnecessary to me unless I'm missing something.
There are a variety of issues here.
- The "shape" attribute comes form Numeric/numarray and is outside
the realm of matplotlib. matplotlib plots numerix arrays.
- The pcolor interface is determined by matlab. matlab has a pcolor
function which I have tried to implement faithfully. To the
extent that matplotlib has been successful, this is due in part
because matlab has a good interface for plotting and replicating
it generally, is a good thing.
- Storing the "shape" of a data set allows for memory and efficiency
savings. To take your example of a set of x,y,z points, you are
right you cold reconstruct rectilinear grid from this data -- one
might have to use interpolation but it can be done -- but it would
require a lot of unnecessary computation for data which already
lives on a grid. So pcolor assumes your data are on a rectilinear
grid and it is incumbent upon you to get it into that form.
The meshgrid function takes regularly sampled vector data and
turns it into a rectilinear grid (this is also a matlab function).
The matlab griddata function (which is not yet implemented in
matplotlib) does the same for irregularly sampled data.
JDH
Hi,
When trying to plot the contours of the famous Rosenbrock function:
----------------------------------------
from matplotlib.pylab import *
def rosenbrock(x,y):
return 10.0 * (y-x**2)**2 + (x-1)**2
x = arange( -1.5, 1.5, 0.01 )
y = arange( -0.5, 1.5, 0.01 )
[X,Y] = meshgrid( x, y )
Z = rosenbrock( X, Y )
contour( Z, x=X, y=Y, levels = 50 )
show()
----------------------------------------
I notice some spurious zigzagging lines towards the top of the plot. Any
idea where those might be coming from?
Also, the figure produced by the above script is flipped horizontally.
The corresponding Matlab script produces the correct plot.
Thanks,
Dominique
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/matplotlib/mailman/matplotlib-users/?viewmonth=200501&viewday=14 | CC-MAIN-2017-26 | refinedweb | 870 | 65.42 |
VMware vCloud Architecture Toolkit: High Performance Data with VMware vFabric GemFire Best Practices Guide
VMware vCloud Architecture Toolkit: High Performance Data with VMware vFabric GemFire, October 2011
This product is protected by U.S. and international copyright and intellectual property laws. This product is covered by one or more patents listed at
VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.

VMware, Inc.
3401 Hillview Ave
Palo Alto, CA

Page 2 of 36
Contents

1. Introduction
   1.1 Overview
   1.2 Purpose
   1.3 Target Audience
   1.4 Scope
   1.5 References
2. vFabric GemFire Architecture
   2.1 Overview
   2.2 vFabric GemFire Topologies
3. vFabric GemFire General Administration Guide
   3.1 Overview
   3.2 Installation
   3.3 Configuration
   3.4 Monitoring
   3.5 General Administration and Troubleshooting
4. vFabric GemFire and Spring
5. High Level Tuning
   5.1 Overview
   5.2 JVM Memory Segments
   5.3 JVM Tuning and Best Practices
6. vFabric GemFire on VMware Best Practices
   6.1 Overview
   6.2 Latency Sensitive Applications Best Practices if Virtualized
   6.3 Memory Sizing of Virtual Machines Running vFabric GemFire
   6.4 vCPU Sizing of Virtual Machines Running vFabric GemFire
1. Introduction

1.1 Overview
This High Performance Data with VMware vFabric GemFire guide provides information about best practices for the deployment of data fabric systems. The guide describes the best practices for VMware vFabric GemFire data caching systems and their various design constructs. The document captures four main deployment patterns commonly used to implement enterprise data requirements:
- vFabric GemFire deployed as an enterprise data management system.
- vFabric GemFire deployed as an L2 cache.
- vFabric GemFire deployed for HTTP session management.
- vFabric GemFire deployed as a faster mass data mover, for example, for real-time reporting.

1.2 Purpose
This guide provides best practice guidelines for deploying vFabric GemFire. The recommendations in this guide are not specific to any particular set of hardware or to the size and scope of any particular implementation. The best practices in this document provide guidance only and do not represent strict design requirements, because enterprise data requirements can vary from one implementation to another. However, the guidelines do form a good foundation on which you can build; many of our customers have used these guidelines to successfully implement an enterprise data fabric for their enterprise applications.

1.3 Target Audience
This guide assumes a basic knowledge and understanding of vFabric GemFire, data management concepts, and virtualization with VMware vSphere. Architectural staff can use this document to gain an understanding of how the system works as a whole as they design and implement the various components. Engineers and administrators can use this document as a catalog of technical capabilities.

1.4 Scope
This guide covers the following topics:
- vFabric GemFire Architecture. This section provides a high level best practice architecture for the various topologies that are part of the high performance data solution space.
- vFabric GemFire Best Practices. This section covers various best practices pertaining to setting up a data fabric in production, and GemFire on vSphere best practice considerations.
- vFabric GemFire Monitoring and Troubleshooting Primer. There are times when you have to troubleshoot a particular vFabric GemFire application problem. vFabric GemFire is equipped with several tools, such as GemFire Tool for Monitoring (GFMon), Visual Statistics Display (VSD), and the vSphere esxtop utility, which are very informative when troubleshooting.
- vFabric GemFire FAQ. In this section, we answer some frequently asked questions about the various data fabric deployments.
1.5 References
It is recommended that you become familiar with the following documentation:
- vFabric GemFire User's Guide
- Enterprise Java Applications on VMware
- Best Practices for Performance Tuning of Latency-Sensitive Workloads in vSphere VMs
2. vFabric GemFire Architecture

2.1 Overview
vFabric GemFire is an in-memory distributed data management platform that can be spread across many virtual machines, JVMs, and GemFire servers to manage application objects. Using dynamic replication and partitioning, the platform offers data durability, reliable event notification, continuous querying, parallel execution, high throughput, low latency, high scalability, continuous availability, and WAN distribution. The following figure shows GemFire as the middle data tier that orchestrates data delivery from the backend datastores to the consuming applications. As demand from consuming applications increases, the middle tier data layer expands to meet demand appropriately. For further persistence resiliency, data can be written behind to a backup store such as a relational database for archival purposes. GemFire also provides full persistence durability using its own native shared-nothing persistence mechanism.

Figure 1. vFabric GemFire Architecture

Best Practice BP 1: Common Distributed Data Platform
When data delivery is required to be at the highest speed possible, when milliseconds and microseconds matter, setting up vFabric GemFire as an enterprise data fabric system is the correct approach. By doing so, as shown in Figure 1, you introduce a common in-memory data delivery and consumption layer for all enterprise applications' data needs. This allows you to benefit from the scalability, availability, and speed of execution features of vFabric GemFire.
2.2 vFabric GemFire Topologies
There are three main setup topologies for vFabric GemFire: client/server, peer-to-peer, and multisite. Each of these topologies can be used standalone or combined to form an extended, full featured distributed data management system.

2.2.1 Client/Server Topology
In a client/server topology there are two tiers, a client tier and a server tier. In Figure 2, the client and server tiers are depicted. The client tier communicates with the server tier to search for or update data objects from the server tier. In the client tier, standalone client caches 1, 2, 3, and 4 communicate directly with the server tier.

Figure 2. vFabric GemFire Client/Server Topology
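The client/server split described above is typically wired up declaratively in cache.xml. The following is a minimal sketch, not taken from this guide: element names follow the GemFire 6.x cache.xml schema, and the port numbers, region name, and locator host are illustrative placeholders only.

```xml
<!-- Server side (cache.xml): listen for client connections on an illustrative port. -->
<cache>
  <cache-server port="40404"/>
  <region name="exampleRegion">
    <region-attributes refid="PARTITION"/>
  </region>
</cache>
```

```xml
<!-- Client side (cache.xml): discover servers through a locator via a pool. -->
<client-cache>
  <pool name="serverPool">
    <locator host="locator-host" port="41111"/>
  </pool>
  <region name="exampleRegion">
    <region-attributes pool-name="serverPool"/>
  </region>
</client-cache>
```

Declaring the pool with a locator, rather than with explicit server endpoints, is what gives clients the dynamic server discovery described in BP 2.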
Best Practice BP 2: Client/Server Topology
The client/server topology is the most commonly used for enterprise class applications. The client sends individual operations to the server to update cached data, to satisfy a local cache miss, or to run an ad hoc query. The server streams cache update and continuous query events to the client based on client subscriptions.
For advanced tuning and increased throughput capacity, you can distribute the load of network traffic for your client/server traffic through a different adapter than the peer-to-peer traffic by setting a server bind address. These gemfire.properties lines specify different non-default addresses for the member:

  bind-address=
  server-bind-address=

Use the client/server topology when any of the following are requirements of an enterprise class application:
- Dynamic server discovery. The GemFire server locator utility dynamically tracks server processes and directs clients to new servers, giving clients indirection from explicit server information. Clients need to know only how to connect to the locator services. They do not need to know where servers are running or how many servers are available at any time.
- Server groups. You can assign your servers to logical groups that your clients can refer to in their connection configurations. For example, you might use groups to manually partition your data, with one group of servers hosting one set of data and another hosting another set. Or you might use a group to direct all database-centric traffic to the subset of servers that are directly connected to a backend database. Servers can belong to multiple groups. Your clients need to specify only the group to use and are isolated from having to know which servers belong to which groups.
- Server load balancing. The GemFire server locator tracks current load information for all servers, directing new client connections to the servers with the least load. GemFire provides a default load probe for your servers, which you can replace with your own customized plug-in.
- Server connection conditioning. Client connections can be configured to transparently time out and be replaced with new connections, which allows overall server use to be rebalanced after new servers are started. This helps speed conditioning in situations such as adding servers or recovery from server crashes and other downtime.
- Automated data and query updates. Your clients can subscribe to events in the server. These events can include data updates and updates to results for continuous queries that the client has registered with the server. The server uses subscription queues to send the updates asynchronously.
- Server failover and high availability. When servers crash, the client connections automatically fail over to the remaining servers. If the servers are sending automated updates to the clients, the update requests also automatically fail over. You can configure redundancy in your server subscription queues so that the failover does not interrupt the stream of events from the server side.
Best Practice BP 3: Client/Server Common Sizes
Because vFabric GemFire is horizontally scalable, scalability is limited only by the hardware resources available.
- Configure vFabric GemFire data management nodes to host approximately 1TB of data. While there is no product limit on the number of JVMs deployed in a data management system, up to 32 JVMs have been implemented in various production systems.
- Thousands of clients can connect back to the data management systems accessing data. Client scalability can be managed through connection pools.
- Run within a 64-bit JVM with a heap size of up to 32GB. When using -XX:+UseCompressedOops, a 32GB heap uses 32-bit pointer addressing, which saves large amounts of memory as opposed to using 64-bit pointer addressing. With this approach you can continue to run inside a 64-bit JVM and benefit from larger heap sizes, but with compressed pointer addressing. Any heap size beyond 32GB uses 64-bit pointer addressing because the -XX:+UseCompressedOops optimization is limited to 32GB. This is a limitation of the Java optimization and is not specific to vFabric GemFire, as GemFire supports heap sizes larger than 32GB.

Note: The client local cache generally starts with zero client side data and is enabled only when needed for performance optimization.

2.2.2 Peer-to-Peer
In this topology, two or more intercommunicating vFabric GemFire servers form a distributed system. The data is distributed according to the data region's configured redundancy rules.

Figure 3. Peer-to-Peer vFabric GemFire Distributed System
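The heap guidance in BP 3 translates directly into JVM launch flags. The following launch-script fragment is an illustrative sketch, not taken from this guide; the classpath variable and main class are placeholders, and the only values grounded in the text above are the 32GB bound and the compressed-pointers flag.

```sh
# Illustrative server launch: fixed heap (min = max) is a common practice to
# avoid heap-resize pauses; 32g is the upper bound at which
# -XX:+UseCompressedOops (32-bit pointer addressing) still applies.
java -server \
  -Xms32g -Xmx32g \
  -XX:+UseCompressedOops \
  -cp "$GEMFIRE_CLASSPATH" \
  com.example.MyCacheServerLauncher   # placeholder main class
```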
Best Practice BP 4: Peer-to-Peer Multihomed Machines
If running on multihomed machines, you can specify a non-default network adapter for communication. In non-multicast peer-to-peer situations, communication uses the bind-address property. This address must be the same for all vFabric GemFire servers within the distributed system.

Best Practice BP 5: Peer-to-Peer Sockets
Highly concurrent, high-throughput deployments need conserve-sockets set to false, and should then limit the NIO thread pool servicing clients if (and only if) the number of peer-to-peer sockets and worker threads increases to the point where context switching overhead degrades performance. More nodes in the peer-to-peer cluster imply more connection and worker thread overhead, and thus a possible reason to lower the per-server NIO pool size. Conversely, more powerful hardware with more available cores, running on a more powerful underlying network fabric, implies the ability to increase the per-server NIO pool size.
For peer-to-peer threads that do not share sockets, you can use the socket-lease-time to limit the time that a socket sits idle. When a socket that belongs to an individual thread remains unused for this time period, the system automatically returns it to the pool. The next time the thread needs a socket, it retrieves one from the pool.
The socket-buffer-size property determines the buffer size. Buffers should be at least as large as the largest stored objects and their keys, plus some overhead for message headers. The overhead varies depending on who is sending and receiving, but 32KB should be sufficient. Larger socket buffers allow your members to distribute data and events more quickly, but they also take memory away from other requirements.

Note: This provides excellent performance even for small update sizes, while not killing the potential for larger-sized chunking to optimize bulk operations (putAll()/getAll()/queries) and rebalancing/failover/failback.
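The socket settings discussed in BP 5 live in gemfire.properties. A minimal sketch follows; the lease-time value is illustrative, not a recommendation from this guide, while the 32KB buffer floor comes from the text above.

```properties
# Dedicate a socket per thread instead of sharing a small socket pool
# (recommended above for highly concurrent deployments).
conserve-sockets=false
# Return an idle per-thread socket to the pool after 60 seconds (illustrative value, ms).
socket-lease-time=60000
# At least as large as the largest stored object + key, plus header overhead.
socket-buffer-size=32768
```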
Best Practice BP 6: Peer-to-Peer hosts File
Verify that every peer-to-peer host has a hosts file entry for itself and for all other hosts on the LAN. The hosts file entries should follow the host + domain format; for example, both "gemserver1" and "gemserver1.vmware.com" are present for each IP address entered in the hosts file.

Best Practice BP 7: Use Locators in Managed Peer-to-Peer Environments
Locators using TCP/IP: with this method you run GemFire locator processes that manage the authoritative list of active peer-to-peer distributed system members. These locators are peer locators. A new member connects to one of the locators to retrieve the member list, which it uses to join the system. Locators are highly recommended for production systems. For production environments, always use at least two locators on different hosts.
Note: The client/server topology is the most commonly used in enterprise applications. There are some rare cases when performance constraints are so strenuous that a single hop is all that can be afforded to meet SLAs. Peer-to-peer topology typically has one network hop between peers, as opposed to client/server topology where there are two network hops, if you assume that there are at least two redundant servers with which a client can communicate. However, when using peer-to-peer topology it is assumed that the rich features of a client/server topology, such as continuous querying, registration of interest, and connectivity through pooled connections, are not needed. These features are available only with the client/server topology. Client/server topology is the most commonly used as it is the most feature rich.

2.2.3 Multisite Topology
In the case of a multisite topology, as shown in Figure 4, there are two sites, each with a distributed system. Within each site, one server member is nominated as the gateway to provide data distribution between sites in case of a failure event, or for other enterprise data distribution requirements. The Site 1 and Site 2 topology can locate both sites within one datacenter, or the sites can be distributed geographically at different datacenters if needed.

Figure 4. vFabric GemFire Multisite Topology
Best Practice BP 8: Multisite Topology
Use the multisite topology in distributed data systems that require a robust failover mechanism at the application data layer.
Use the conflation feature when using a gateway hub so that only the latest updates are passed over to the remote site. With conflation, earlier entry updates in the queue are dropped in favor of updates sent later in the queue. This is problematic for applications that depend on seeing every update. For example, if any remote gateway has a CacheListener that needs to know about every state change, you should disable conflation. To enable conflation, set the batch-conflation attribute to true within the gateway-queue cache configuration element.
In a multisite installation using gateways, messages can back up in the gateway queues if the link between sites is not tuned for optimum throughput. If a receiving queue overflows because of inadequate buffer sizes, it can become out of sync with the sender, and the receiver is unaware of the condition. The gateway's <gateway> socket-buffer-size attribute should match the <gateway-hub> socket-buffer-size attribute of the hubs the gateway connects to. For example (attribute values were elided in the source and are shown as "..."):

  <gateway-hub ...>
    <gateway id="us" socket-buffer-size="...">
      <gateway-endpoint .../>
      <gateway-queue .../>
    </gateway>
  </gateway-hub>
  <gateway-hub ...>
    <gateway id="eu" socket-buffer-size="...">
      <gateway-endpoint .../>
      <gateway-queue .../>
    </gateway>
  </gateway-hub>

Avoid overflowing to disk when possible by adjusting the maximum-queue-memory attribute to accommodate the needed memory. However, should you wish to overflow to disk, you can easily do so to provide additional data reliability.
For production systems and higher availability, set enable-persistence to true on the gateway-queue element. This causes the gateway queue to persist to the disk store specified in disk-store-name.
Although for ease of illustration we show two sites, typically you would implement n+1 sites to achieve fault tolerance. The multisite topology can also span a WAN with multiple sites in, for example, New York, Tokyo, and London. Refer to Figure 5.
Note: Gateway hubs and gateways communicate through TCP/IP sockets. The gateway hub listens at a specified address and port for gateway communication from remote sites. Gateways are configured with endpoint information matching the remote gateway hub specifications. The gateway sends connection requests to the gateway hubs to establish two-way TCP connections. For information on the multisite configuration, refer to the Configuring Multisite Installations section of the vFabric GemFire User's Guide. In addition to the site-to-site communication, each gateway hub is a member in its own distributed system. Figure 5 shows three global sites in New York, London, and Tokyo. Each site has a primary gateway and a backup gateway. It is important to inspect and tune the configuration parameters of the WAN gateways.

Figure 5. vFabric GemFire in a Global Multisite Configuration
2.2.4 Using vFabric GemFire as a Simple L2 Cache
In Figure 6, a client/server topology is used to configure vFabric GemFire as a Hibernate L2 cache. This configuration has the added benefit of faster performance with relative ease of configuration. It is installed as a Hibernate plug-in, and therefore no code change is required. It also keeps the query results as distributed cache objects, thus improving performance and availability.

Figure 6. Using vFabric GemFire as Hibernate L2 Cache
Best Practice BP 9: Hibernate L2 Cache
Turn on the L2 cache in the Hibernate configuration (hibernate.cfg.xml):

  <property name="hibernate.cache.use_second_level_cache">true</property>

Set region.factory_class to GemFireRegionFactory (hibernate.cfg.xml, version 3.3+):

  <property name="hibernate.cache.region.factory_class">
    com.gemstone.gemfire.modules.hibernate.GemFireRegionFactory
  </property>

Set the cache usage mode to one of:
- Read only. Used when you do not plan to modify the data already stored in persistent storage.
- Read write. Used when you plan to both read from and write to data.
- Non-strict read write. A special read/write mode that has faster write performance. Use this only if no more than one client updates the data at a time.
- Transactional. Allows for transaction based data access.

The cache mode can be set either using annotations or in the Hibernate mapping file.
To set it using the Hibernate mapping file, entity_name.hbm.xml:

  <hibernate-mapping>
    <class name="entity_name" ...>
      <cache usage="read-write|nonstrict-read-write|read-only"/>
      ...
    </class>
  </hibernate-mapping>

To set the mode with annotations:

  import org.hibernate.annotations.Cache;

  @Cache(region = "REGION_NAME",
         usage = CacheConcurrencyStrategy.READ_ONLY | READ_WRITE |
                 NONSTRICT_READ_WRITE | TRANSACTIONAL)
  public class MyClass implements Serializable {
    ...
  }
2.2.5 Using vFabric GemFire as an HTTP Session Cache

Best Practice BP 10: Topologies for HTTP Session Management
Either the client/server, peer-to-peer, or multisite vFabric GemFire topology can be used to achieve HTTP session replication. If dealing with user session data that must be completely fault tolerant, use the multisite vFabric GemFire topology for HTTP session management. Follow the recommended setup in the HTTP Session Management Module section of the vFabric GemFire User's Guide. It is relatively straightforward, with minimal change, to configure HTTP session replication with GemFire on VMware vFabric tc Server. In Figure 7, vFabric GemFire is used for HTTP session replication, which can be easily achieved when plugged into vFabric tc Server.

Figure 7. Using vFabric GemFire for HTTP Session Replication

2.2.6 Using vFabric GemFire as a Faster Data Mover

Best Practice BP 11: Real-Time Reports
The vFabric GemFire client/server topology is best suited for a real-time report setup. This allows you to move rapidly changing data to the consuming end point client cache to present the data in real time. vFabric GemFire features such as continuous querying and function execution can help in the implementation of business critical real-time reports.
3. vFabric GemFire General Administration Guide

3.1 Overview
The following sections summarize some high level best practices. There are additional details in the Administration section of the vFabric GemFire User's Guide.

3.2 Installation
To download vFabric GemFire, go to the VMware download site, and follow the installation instructions in the vFabric GemFire documentation.

3.3 Configuration
The most notable configuration files within vFabric GemFire are gemfire.properties, gemfirelicense.zip, and cache.xml.
- gemfire.properties. Contains the settings required to join a distributed system. Configuration includes system member discovery, communication parameters, security, logging, and statistics. For a detailed description of the parameters within this file, refer to the vFabric GemFire User's Guide.
- gemfirelicense.zip. The license file, which should never be unzipped. Note: this is the license file for vFabric GemFire 6.5. With vFabric GemFire 6.6 and later, licensing is done using serial numbers. Refer to the vFabric GemFire User's Guide for details.
- cache.xml. The declarative cache configuration file. This file contains XML declarations for cache, region, and region entry configuration. It is also used to configure disk stores, database login credentials, server and gateway location information, socket configuration, and so forth.
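The gemfire.properties settings described above can be sketched as follows. This is an illustrative fragment, not from this guide: the locator hosts, ports, and log file name are placeholders, and only properties named elsewhere in this document (locators, log-level, statistics-enabled, conserve-sockets) are grounded in the text.

```properties
# Member discovery through two locators on different hosts (placeholders),
# as recommended in BP 7; disabling multicast forces locator-based discovery.
locators=locator1[41111],locator2[41111]
mcast-port=0
# Logging and statistics; the statistics files feed the VSD tool (see BP 13).
log-file=server.log
log-level=error
statistics-enabled=true
# Socket handling for highly concurrent workloads (see BP 5 and BP 12).
conserve-sockets=false
```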
Best Practice BP 12: Configuration
- Do not unzip the gemfirelicense.zip file; leave it intact.
- Each of the three configuration files has a default name, a set of file search locations, and a system property that can be used to override the defaults. To use the default file specification, place the file at the top level of its directory or jar file. The system properties are standard file specifications that can have absolute or relative pathnames and filenames. If you do not specify an absolute file path and name, the search looks through all the search locations for the file.
- The gemfire.properties file can be specified with the system-level Java property -DgemfirePropertyFile=<valid file/path>. You can override any GemFire property set in the file or by the CacheFactory API with a system-level Java argument that follows the pattern -Dgemfire.<property-name>=<property-value>.
- Deploy the same gemfirelicense.zip on all members of the peer-to-peer topology for vFabric GemFire 6.5. For vFabric GemFire 6.6, use the same license key on all members of the peer-to-peer topology.
- All peer-to-peer members of the distributed system must have the same version of vFabric GemFire. Clients can be up to one major release behind. For example, any 6.x client interoperates with any 6.x or 7.x server, but not with an 8.x server.
- The vFabric GemFire property auto-start=true must be configured for the agents for any version of GemFire 6.5.
- For highly concurrent workloads, set the GemFire property conserve-sockets=false on the data management nodes (DMNs). If the scale is large and too many sockets (and associated threads to service those sockets) are created between the DMNs, tune the CacheServer (a configuration element in the DMN cache.xml configuration) to reduce the NIO thread pool servicing client requests. This places a hard upper limit on the possible number of DMN peer-to-peer communication sockets. Refer to the vFabric GemFire User's Guide for information on the <cache-server ...> element.
- Place the default files either in the current directory from which you start the GemFire server or on the CLASSPATH. If you wish to change the default names of these configuration files, you can set the corresponding properties to override them. Note: these properties are useful to script the deployment or movement of the code base from Dev to QA, and then to production, where there might be a separate set of configuration files for each environment. Depending on which environment is deployed to, you can rotate the appropriate files in gemfirePropertyFile, gemfire.cache-xml-file, or gemfire.license-file.
- Set -Djava.net.preferIPv4Stack=true in the start script for all servers, peers, and locators.
3.4 Monitoring

Best Practice BP 13: vFabric GemFire Monitoring Tools
vFabric GemFire is a specialized product, and it is important that administrators are familiar with the available monitoring tools.
- The vFabric GemFire Tools Guide details the GFMon and Visual Statistics Display (VSD) tools available for monitoring.
- The GFMon tool monitors a vFabric GemFire system in real time, providing health information, detailed operational and configuration data, system alerts, throughput performance, and statistics for system members and connected clients.
- The VSD tool reads GemFire statistics and produces a graphical display for analysis. Configure the vFabric GemFire property statistics-enabled=true to generate statistics files that can be viewed with VSD. This can be critical to troubleshoot potential problem areas or help to diagnose a problem.
- You can also use the VMware vFabric Hyperic GemFire plug-in, which provides a live data user interface for viewing metrics in real time. Refer to the vFabric Hyperic Guide.
- You can also use esxtop to monitor vSphere; refer to the troubleshooting section of Enterprise Java Applications on VMware.

3.5 General Administration and Troubleshooting
After trubleshting it is imprtant t revert t the default lg level, r typically in prductin, t the lg-level=errr. When yu first begin t diagnse a ptential cnnectivity r general prblem with the system, start with telnet t test whether a remte lcatr, server, r agent prt is reachable. Page 20 f 36
Best Practice BP 15: General Troubleshooting
The vFabric GemFire User's Guide has a detailed section on Troubleshooting and System Recovery. Follow those instructions.

Best Practice BP 16: Troubleshooting SYN Cookies
When troubleshooting performance problems, check that you are not impacted by SYN cookies. SYN cookies are the key element of a technique used to guard against SYN flood attacks. Daniel J. Bernstein, the technique's primary inventor, defines SYN cookies as particular choices of initial TCP sequence numbers by TCP servers. In particular, the use of SYN cookies allows a server to avoid dropping connections when the SYN queue fills up. Instead, the server behaves as if the SYN queue had been enlarged. The server sends back the appropriate SYN+ACK response to the client but discards the SYN queue entry. If the server then receives a subsequent ACK response from the client, the server is able to reconstruct the SYN queue entry using information encoded in the TCP sequence number.
To check for the presence of SYN cookies (the port numbers were elided in the source):

  grep SYN /var/log/messages
  Aug 2 12:19:06 w1-vfabric-g1 kernel: possible SYN flooding on port ... Sending cookies.
  Aug 2 12:54:38 w1-vfabric-g1 kernel: possible SYN flooding on port ... Sending cookies.
  Aug 3 10:46:38 w1-vfabric-g1 kernel: possible SYN flooding on port ... Sending cookies.

To determine whether or not SYN cookies are enabled (1 is on, 0 is off):

  $ cat /proc/sys/net/ipv4/tcp_syncookies
  1

To temporarily disable SYN cookies (the change reverts at reboot):

  # echo 0 > /proc/sys/net/ipv4/tcp_syncookies

To permanently disable SYN cookies, add or modify the following in /etc/sysctl.conf:

  # Controls the use of TCP syncookies
  net.ipv4.tcp_syncookies = 0
4. vFabric GemFire and Spring

Best Practice BP 17: vFabric GemFire and Spring
- Use Spring to configure GemFire servers and regions rather than manually creating them with application code. This allows you to centralize your application service configuration, as opposed to having the Spring context configuration plus a separate cache.xml file. You can eliminate the need to implement Declarable on your cache loaders, cache listeners, and cache writers; Spring takes care of binding these components into GemFire regions through dependency injection. These components can also be shared among regions as singletons using Spring's normal DI techniques. You can leverage advanced techniques to configure GemFire via SpEL, selectively exposed configuration parameters, and so on.
- Use the Spring GemFire project to move your GemFire configuration into the Spring context more easily. Refer to the Spring GemFire project home page.
- Use the GemFire schema extension to the Spring context configuration to simplify the configuration of the various GemFire components further, with validated configuration property names (Section 1.1 in the Spring GemFire documentation).
- Use GemfireTemplate to simplify interactions with the GemFire APIs. GemfireTemplate includes many best practice techniques for dealing with resource management and multiple threads in a virtual machine working with GemFire. GemfireTemplate eliminates the need for you to deal with checked exceptions, provides the best practice technique for ensuring thread safe access to GemFire resources in a single virtual machine, and provides utility methods to access and manage data more simply.
- Use transaction management with Spring and Spring GemFire. This provides a portable, well integrated way to add transactions to your application, to promote well defined behavior for multiple threads modifying the data. It also automatically configures the GemFire server with best practice recommendations for safest use in a multithreaded environment by enabling copy-on-read, which prevents client threads from inadvertently editing data contents in a non-transactional way.
- Use InstantiatorFactoryBean to automatically generate an efficient Instantiator. Reflection is the default technique used to serialize and deserialize data across the entire distributed data management system. If data serialization is a bottleneck in your application, the general best practice recommendation is to implement custom instantiator logic to speed up the serialization process. The InstantiatorFactoryBean takes a list of domain types (and a unique integer ID for each type, to efficiently serialize type info as an integer instead of a string) and automatically generates instantiators using the ASM bytecode library, which avoids having to use reflection to serialize and deserialize object data.
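The Spring-based configuration described in BP 17 typically uses the gfe XML namespace from the Spring GemFire project. The following is a minimal sketch under the assumption of a Spring GemFire 1.x-era schema; the region name and bean ids are illustrative and not taken from this guide.

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:gfe="http://www.springframework.org/schema/gemfire"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
         http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans.xsd
         http://www.springframework.org/schema/gemfire
         http://www.springframework.org/schema/gemfire/spring-gemfire.xsd">

  <!-- Creates (or attaches to) the GemFire cache for this JVM. -->
  <gfe:cache id="gemfireCache"/>

  <!-- Declares a region; listeners/loaders/writers can be injected as beans
       here instead of implementing Declarable. -->
  <gfe:replicated-region id="exampleRegion" cache-ref="gemfireCache"/>

  <!-- GemfireTemplate bound to the region for simplified, thread-safe access. -->
  <bean id="gemfireTemplate"
        class="org.springframework.data.gemfire.GemfireTemplate">
    <property name="region" ref="exampleRegion"/>
  </bean>
</beans>
```

Keeping the cache, region, and template declarations in one Spring context is what replaces the separate cache.xml file described above.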
HP ExpertOne exam preparatin guide Architecting HP Server Slutins Exam descriptin This certificatin attests that yu can: gather and analyze business and technical requirements and then plan and design
Getting Started Guide
AnswerDash Resurces Cntextual help fr sales and supprt Getting Started Guide AnswerDash is cmmitted t helping yu achieve yur larger business gals. The utlined pre-launch cnsideratins | http://docplayer.net/988539-Vmware-vcloud-architecture-toolkit-high-performance-data-with-vmware-vfabric-gemfire-best-practices-guide.html | CC-MAIN-2017-17 | refinedweb | 5,805 | 53.1 |
Excelsior
Correspondent
JAMMU,
Mar 24: Flying
Cats Airhostess Training School is conducting a free.....more
BoB
restarts
Bahrain operation
DUBAI,
Mar 24:
India's leading public sector bank, Bank of Baroda, has
opened a wholesale branch in Bahrain, making the Gulf
state the 25th country where it has international
presence.....more
High
density polymers improve on better offtake
NEW
DELHI, Mar 24:
High density polymer prices improved by a rupee per kg on
the wholesale market here today on fresh offtake ........more....more
BP may enter into
pact with Cals for
refinery at Kolkata
NEW
DELHI, Mar 24:
Global energy major BP Plc is likely to enter into a
strategic tie-up with Cals Ltd, a Spice Energy holding
company, for its .....more
Dal
prices remain
quiet in thin trade
NEW
DELHI, Mar 24:
Steady trend prevailed in the wholesale pulses market
here today as prices hovered in a narrow range on little
bouts of trading to close at previous levels.,.....more
Wheat
dara eases
on sluggish demand
NEW DELHI, Mar 24: Wheat dara prices today declined
by Rs 10 a quintal on the wholesale grain market here due
to lack of buying interest amid increased .....more
Sugar
turns weak
on ample supply
NEW
DELHI, Mar 24:
Sugar prices today softened by Rs 10 per quintal on the
wholesale market here due to ample supply from mills amid
poor demand. ....more
Thymol
falls on reduced offtake........
Dry
dates declines on reduced demand.......
Tata-JLR
deal to be complete within a few days, says
Parliamentary
panel against taking Madrid trademark
obligation........
Flying Cats Training School
holds workshop
Excelsior Correspondent
JAMMU,
Mar 24: Flying
Cats Airhostess Training School is conducting a free
workshop on personality development, career councelling
and make over with a focus on grooming of job seekers.
The workshop which started
on March 15, this yaer is being conducted by experts and
trainers in the feild of communication, leadership skills
and personality development. Narinder Kour, a faculty
member appealed to the students to avail themselves of
this opportunity.
She added that Avaition
Industry, which is biggest booming industry has enough
career opportunities in it and aspirants can make career
as Airhostess, Stewart or in hotel industry.
She further stated that
India is slowly inching towards becoming next boom
industry in Aviation and Flying Cats is the only school
in the sstate which has affliation with the Annamalai
University.
There is no age limit set
for the candidates and anyone holding matriculation
certificate can participatre in the workshop, the press
note added. The said workshop will continue till April
13, this year.
BoB restarts Bahrain
operation
DUBAI,
Mar 24:
India's leading public sector bank, Bank of Baroda, has
opened a wholesale branch in Bahrain, making the Gulf
state the 25th country where it has international
presence.
Bank of Baroda, the fifth
largest bank in India, first established an offshore
banking unit in Bahrain in 1980, which was closed in 1993
due to a forex crisis in India.
The new license, granted
by the CBB, marks the return of Bank of Baroda to
Bahrain.
Indian Ambassador
Balkrishna Shetty opened the new branch along with
Central Bank of Bahrain (CBB) Deputy Governor Dr Anwar Al
Sadah in Manama yesterday.
"The CBB welcomes the
return of Bank of Baroda to Bahrain. This reflects the
bank's confidence in Bahrain and the resilience of our
financial market," Sadah said.
The wholesale banking
industry represents 80.5 per cent of the consolidated
assets of Bahrain's banking system, which stood at USD
220 billion in August last year, an increase of 32 per
cent over the same period in 2006. (PTI)
High density polymers
improve on better offtake
NEW
DELHI, Mar 24:
High density polymer prices improved by a rupee per kg on
the wholesale market here today on fresh offtake by
helmet makers and other consuming industries against
restricted supply.
However, low density and
other polymer prices closed steady on small buying and
selling spree.
Marketmen said increased
inquiries from consuming industries against short supply
edged up high density polymer prices.
Hd blowing moved up by a
rupee to settle at Rs 85 a kilo on fresh buying interest.
Hd moulding deshi also inched up by a rupee one to close
at Rs 81 a kilo.
Hd moulding colour too
showed a rise of a rupee at Rs 85 a kilo on reduced
supply.
Following are today's
quotations in Rs per kg:
LD No 40 - 91.00, LD No
400 - 90.00 LLDP blowing 83.00, colour 74.00 HD Blowing
82.00, HD moulding (deshi) 80.00, HD moulding (colour)
84.00, PP No 100 81.00, PP Colour 105, ABS (Indian) 97,
acrylic 130-140, colour 139-144, crystal (Indian) 80,
colour 90, poly carbonate 180-190, Nylon No-6 160, Nylon
No 66 -181-191, PVC resin deshi 56.00, PVC pest grade 85.
(PTI) market.
Indias commercial
vehicle market stood at 290,000 units last year,
including 5,000 premium buses. Hyundai, which said in
February it was talking to Indian firms for a possible
venture for commercial vehicles, expected the market for
"luxury" buses, to be mainly used for tourism,
to reach 10,000 units in 2010.
Under the agreement,
Hyundai will provide parts and production technology for
Aero buses while Caparo India will manufacture and sell
the vehicles, Hyundai said in a statement.
Caparo India will build a
plant in the southern Indian city Chennai, near
Hyundais second plant in the country, to start
production in early 2009. The companies expect
Caparos plant to produce 5,100 units by 2013.
Hyundai Motor ranked
second in the Indian passenger car market in 2007,
trailing Maruti Suzuki India Ltd <MRTI.BO>. It aims
to enter the top 10 rankings in the local commercial
vehicle market by 2010.
Hyundai shares ended up
1.2 percent at 73,400 won before the announcement
compared with a 0.6 percent rise in the broader market .
(AGENCIES)
BP
may enter into pact with Cals for refinery at
Kolkata
NEW
DELHI, Mar 24: Global energy major BP Plc is
likely to enter into a strategic tie-up with Cals
Ltd, a Spice Energy holding company, for its
proposed one billion dollar, five million tons
oil refinery at Haldia near Kolkata.
BP may supply
crude oil to the refinery which needs 2.5 million
tons of heavy (high sulphur) crude and a similar
supply of light (low sulphur) crude, a Cals Ltd
press release said here.
Cals plans to
export petrol and diesel produced in the Euro-4
complaint refinery and BP may sign an offtake
agreement for these two products, it said. The
refinery would also produce jet fuel, LPG and pet
coke for the domestic market.
Cals Ltd plans to
import a 90,000-barrel-per-day refinery from
Bayernoil, Germany, which will be dismantled at
Ingolstadt on the river Danube, Germany and
shipped to Haldia for reconstruction. The
refinery will become Bengals second largest
oil refinery after the Indian Oil
Corporations existing one in Haldia.
The company plans
to commission the refinery by end 2009. Cals Ltd
has hired UK refinery engineers KBC to upgrade
the plant, to be able to refine lower-quality
crude oil.
To fund the
project, Cals Ltd raised 200 million dollars
through issuing a global depository receipt on
the Luxembourg Stock Exchange in November,
attracting investments from Dubai Investment
Group, part of Dubai Holding, and Londons
RP Capital. It is now hoping to raise a further
100 to 200 million dollars from a strategic
investor.
Cals
Refineries has roped in former IOC chairman
M S Ramachandran as the chairman of the board.
Some senior managers from RPL and Essar have also
joined the company in the run-up to the
implementation of the project, the release said.
Cals Refineries
has signed an MoU with the Haldia Development
Authority and West Bengal Industries Development
Corp to facilitate handing over of land and other
incentives for the installation of the
five-million tons a year refinery at Haldia.
The refinery is
likely to be be expanded to 10 million tons by
2010-end and to 20 million tons by 2013.
Cals has already
spent around Rs 360 crore on equipments, basic
engineering and initial project enabling work.
Site activities would commence by this April and
shipments of equipment were expected to arrive at
Haldia by this July onwards. (PTI)
Dal prices remain quiet in
thin trade
NEW
DELHI, Mar 24:
Steady trend prevailed in the wholesale pulses market
here today as prices hovered in a narrow range on little
bouts of trading to close at previous levels.
Marketmen said little
buying interest against sufficient ready stocks kept the
prices around last levels.
Following are today's
quotations (per quintal):
Urad Maharashtra
2225-2400, Rangoon 2350-2375, Urad chilka (local)
2900-3100 , best 3100-3500, dhoya local 3100-3400, best
3500-3600, Moong Maharashtra 2350-2650, Rajasthan
2100-2350, dal moong chilka local 2800-3050, best
3100-3400, moong dhoya local 2900-3150, best quality
3100-3550, masoor small 3050-3200, bold 3400-3500, dal
masoor local 3900-4100, best quality 4200-4500, Malka
local 4100-4300, best 4350-4500, Moth 1900-2000, Arhar
Maharashtra 2750-2800, Rangoon 2550-2625, dal arhar dara
3650-3850 and patka 3700-4000.
Gram 2650-2700, gram dal
(local) 2975-3025, best quality 3100-3300, besin (35 kg)
shakti bhog 1330, rajdhani 1350, Rajmah chitra Pune
3300-3900, China 3600-3950, red 3200-3300, kabli gram
small 2750-3500, dabra 2775-2875, imported 4600-4700,
lobia 2200-2600, peas white 2350-2400 and green
2400-2500. (PTI)
Wheat
dara eases on sluggish demand
NEW DELHI, Mar 24: Wheat dara prices today
declined by Rs 10 a quintal on the wholesale
grain market here due to lack of buying interest
amid increased supply.
Marketmen
said reduced offtake by rolling flour mills amid
selling pressure from stockists pulled down wheat
dara prices.
Wheat
dara fell from Rs 1140-1145 to settle at Rs
1130-1135 a quintal on slackness in demand.
Chakki
atta delivery pegged lower at Rs 1125-1132 from
Rs 1140-1145 a 90 kilo bag.
Rollling
flour mills prices eased to finish at Rs 1125
-1130 from Rs 1135-1142 a 90 kilo bag in line
with general trend.
Following
are todays quotations per quintal (in Rs):
wheat MP (deshi) 1350-1600, wheat dara (for
mills) 1130-1135, chakki atta (delivery)
1125-1132, Chakki atta Rajdhani (10 kgs) 145,
shakti bhog (10 kgs) 155, roller flour mill
1125-1130, maida 1215-1240 (90 kilos) and sooji
1240-1275 (90 kgs).
Rice
basmati (lal quila) 7000, Shri Lal Mahal 7000,
Basmati common 6700-7000, Permal raw 1450-1500,
permal wand 1675-1725, sela 2200-2300 and rice
IR-8 1300-1375, Bajra 650-675, Jowar yellow
675-700, white 1250-1300, Maize 800-825 Barley
(UP) 1170-1185 and Rajasthan 1180-1185. (PTI)
Sugar
turns weak on ample supply
NEW
DELHI, Mar 24: Sugar prices today softened by Rs
10 per quintal on the wholesale market here due
to ample supply from mills amid poor demand.
Marketmen said
increased supply from mills against reduced
offtake by stockists and bulk consumers pulled
down sugar prices.
Sugar ready medium
and second grade price dipped from Rs 1,630-1,700
and Rs 1,610-1,690 to settle at Rs 1,620-1,690
and Rs 1,600-1,680 a quintal respectively.
Mill delivery
medium and second grade price also eased to
settle at Rs 1,460-1,565 and Rs 1,450-1,560
instead of Rs 1,470-1,575 and Rs 1,460-1,570 a
quintal respectively.
In mill gate
section, khatauli, Simbhawali, Bijor and Amroha
were traded lower at Rs 1,615, Rs 1,590, Rs 1,505
and Rs 1,500 a quintal.
Following are
today quotations per quintal:
Sugar ready M-30
1620-1690 and S-30 1600-1680.
Mill delivery M-30
1460-1565 and S-30 1450-1560.
Sugar mill gate
prices (excluding duty): Modi Nagar 1520,Bagpat
1455, Daurala 1545, Chandpur 1370, Titabi 1600,
Mawana 1570, Simbhawali 1590 Khatauli 1605,
Badaiun 1370, Sattha 1355, Ruderavilash 1375,
Bijnor 1505, Amroha 1500 and Samali Rs 1560.
(PTI)
Thymol
falls on reduced offtake
NEW
DELHI, Mar 24: Thymol prices today fell by Rs 10 a
kg to close at Rs 390 per kg on the wholesale
chemical market due to reduced industrial
offtake.
Traders said
adequate stocks position against reduced demand
brought down thymol prices.
However, prices of
other chemicals ruled steady on scattered deals.
Following are
today's quotations:
Ammonia bicarb (25
kg) 345 Ammonium chloride (50 kg) 1,800, acetic
acid (1 kg) 42, boric acid technical (50 kg)
4,100-4,600, borex granular (50 kg) 2050 Caustic
soda flake (50 kg) 1185 citric acid (50 kg)
(China) 2,650-2,800, citric acid deshi (50 kg)
2,600-2800, camphor slab (1 kg) 170-175, camphor
powder (1 kg) 150, glycerine (1 kg) 78-80,
hexamine (1 kg) 82, hydrogen peroxide (1 kg)
31-32, mercury (34.5 kg) 28,600, menthol bold
crystal (per kg) 605 menthol flake (1 kg) 585 and
Mentha oil (1 kg) 505.
Paraffin wax ( 1
kg)Iran 65
paraffin wax ( 1
kg)China 72
paraffin wax ( 1
kg) Indian 67
residue wax (p
tonne) 34,000
soda ash (50 kg)
(Tata) 880
soda ash (50 kg)
(Gujarat) 870
soda ash (50 kg)
(Dcw) 870
soda ash (50 kg)
(Birla) 870
Sodium Nitrite (50
kg) 1400-1550
Sodium silicate
(Qtl) 950-1100
stable bleaching
powder (shriram) (25 kg) 310 stable bleaching
powder (chambal) 330
stable bleaching
powder (modi) 310
tartaric acid
france (1 kg) 421
thymol (1 kg) 390
titanium dioxide
(ttk) (1 kg) 98
titanium dioxide
(k-brand) (1 kg) 89
titanium dioxide
(china) (1 kg) 89
titanium dioxide
(TR-92) 108
titanium dioxide
(rc-822) (1 kg) 108
oxalic acid
(pcpl-red) 50 kg 2500
oxalic acid
(pcpl-blue)50 kg 2500
Zinc oxide (kg)
122-135. (PTI)
Dry
dates declines on reduced demand
NEW
DELHI, Mar 24: Dry dates dipped by Rs 100 per
quintal to close at Rs 2,200-7,000 a quintal on
the wholesale dry fruit market today on sluggish
demand amid fresh arrival from southern regions.
Marketmen said
fall in demand from retailers and lower advices
from producing regions brought down the prices.
However, prices of
other commodities moved in a narrow range on
alternate bouts of trading and settled at
previous levels.
Following are
today's quotations per 40 kg bag: Almond
(California) new 8,500 Almond (gurbandi) 5,000
Almond (girdhi) 3,150, Almond kernel (California)
297-298 Almond kernel (gurbandi) (kg) 280-325 and
Abjosh Afghani 6,000-13,000. Chilgoza raw-new (1
kg) 380, chilgoza (roasted) (1 kg) 750, cashew
kernel 1 kg (no 180) 425-430, cashew kernel (no
210) 380-385, cashew kernel no.(240) 315-320,
cashew kernel (no 320) 275-280, cashew kernel
broken 2 pieces 235-240, cashew kernel broken 4
pieces 205-235, cashew kernel broken 8 pieces
180-210, copra (qtl) 4,900-5,000, coconut powder
(25 kg) 1,100-2,000, dry dates red (qtl)
2,200-7,000, fig 3,500-12,000, kishmish kandhari
local 5,300-6,000, kishmish kandhari special
11,000-14,000, kiahmish indian yellow 2500-2800,
kishmish indian green 2,800-3,800, pistachio
Irani 480-510, pistachio Hairati 480-520,
pistachio Peshawari 510-555, pistachio dodi
(roasted) 340-350, walnut new 110-200, walnut
kernel new (1kg) 350-500. (PTI)
Tata-JLR
deal to be complete within a few days, says
LONDON,
Mar 24: Indian conglomerate Tata and US car
maker Ford are expected to complete the deal of
the latters luxury brands Jaguar and Land
Rover within a few days, a media report said
today.
According to The
Guardian newspaper, Ford is expected to complete
the sale of Jaguar and Land Rover to Tata
"within a few days."
"The two
sides are understood to be keen to complete the
deal in time to meet Fords dollars (660 million
pounds) and 1.5 billion dollars, though the
recent success of Land Rover, on the back of new
model launches, may have pushed the price tag
closer to two billion dollars,". (PTI)
Parliamentary
panel against taking Madrid trademark obligation
NEW
DELHI, Mar 24: A Parliamentary panel has asked the
Government not to take the Madrid Protocol
obligation for clearing a trade mark application
within 18 months of filing till requisite
infrastructure is built.
"The
Government should not accede to the Madrid
Protocol, till the Trade Marks Registry is
equipped with adequate, skilled manpower and
requisite infrastructure and enabled to handle
the pressure of dealing with trade mark
applications, both domestic and international,
within a period of 18 months," the
Parliamentary Standing Committee on Commerce has
said.
The committee has
given its observations on a bill which was
referred to it for amending the Trade Marks Act,
1999 for facilitating Indians as well foreign
nationals to secure simultaneous protection of
trade marks throughout the world.
India wants to
join the Madrid Protocol of 1989, administered by
the International Bureau of the World
Intellectual Property Organisation (WIPO)-- a
specialised agency of the United Nations.
The protocol
enables the nationals of member countries to
obtain protection of trade marks within the
prescribed period of 18 months by filing a single
application with one fee and in one language.
However, accession
to the Madrid Protocol will entail amendments to
the Trade Marks Act. The bill for amending the
law was introduced in Lok Sabha in August last
year and was referred to the standing committee.
The committee said
the Trade Marks Registry in the country is not
adequately equipped to cope with the mandate of
issuing certificates of registration within the
stipulated period due to constrains of manpower
and infrastructure. (PTI)
| home | state | national | business| editorial | advertisement | sports |
| international | weather | mailbag | suggestions | search | subscribe | send mail | | http://www.dailyexcelsior.com/web1/08mar25/busi.htm | crawl-001 | refinedweb | 3,002 | 54.76 |
Machine learning routines work on numbers rather text, so we may frequently have to convert our text to numbers. Below is a function for one of the simplest ways to convert text to numbers. Each word is given an index number (and here we give more frequent words lower index numbers).
This function uses ‘tokenized’ text – that is text that has been pre-processed into lists of words. Tokenization also usually involves other cleaning steps, such as converting all words to lower case and removing ‘stop words’, that is words such as ‘the’ that have little value in machine learning. If you need code for tokenization, please see here, though if all you need to do is the break a sentence into words then this may be done with:
import nltk
tokens = nltk.word_tokenize(text)
Here is the function to convert strings of tokenized text:
import nltk import numpy as np import pandas as pd def text_to_numbers(text, cutoff_for_rare_words = 1): """Function to convert text to numbers. Text must be tokenzied so that test is presented as a list of words. The index number for a word is based on its frequency (words occuring more often have a lower index). If a word does not occur as many times as cutoff_for_rare_words, then it is given a word index of zero. All rare words will be zero. """ # Flatten list if sublists are present if len(text) > 1: flat_text = [item for sublist in text for item in sublist] else: flat_text = text # get word freuqncy fdist = nltk.FreqDist(flat_text) # Convert to Pandas dataframe df_fdist = pd.DataFrame.from_dict(fdist, orient='index') df_fdist.columns = ['Frequency'] # Sort by word frequency df_fdist.sort_values(by=['Frequency'], ascending=False, inplace=True) # Add word index number_of_words = df_fdist.shape[0] df_fdist['word_index'] = list(np.arange(number_of_words)+1) # replace rare words with index zero frequency = df_fdist['Frequency'].values word_index = df_fdist['word_index'].values mask = frequency <= cutoff_for_rare_words word_index[mask] = 0 df_fdist['word_index'] = word_index # Convert pandas to dictionary word_dict = df_fdist['word_index'].to_dict() # Use dictionary to convert words in text to numbers text_numbers = [] for string in text: string_numbers = [word_dict[word] for word in string] text_numbers.append(string_numbers) return (text_numbers)
Now let’s see the function in action.
# An example tokenised list text = [['hello', 'world', 'Michael'], ['hello', 'world', 'sam'], ['hello', 'universe'], ['michael', 'makes', 'a', 'good', 'cup', 'of', 'tea'], ['tea', 'is', 'nice'], ['michael', 'is', 'nice']] text_numbers = text_to_numbers(text) print (text_numbers) Out: [[1, 2, 0], [1, 2, 0], [1, 0], [3, 0, 0, 0, 0, 0, 4], [4, 5, 6], [3, 5, 6]]
3 thoughts on “108. Converting text to numbers”
This code works great, I only have one problem though, is there a way to make a code that converts the numbers just recently created back into the original words. (for example, change to numbers for neural network, but then how do I make it so that I can identify what those numbers mean?)
Hello, In the `text_to_numbers` function you could create a reversed dictionary (and so that will return the word for each number), with:
`reverse_dict = {v: k for k, v in word_dict.items()}`
Do that after the line `word_dict = df_fdist[‘word_index’].to_dict()`
Then add `reverse_dict` to the return statement. You can then use that to decode the numbers.
Hope that helps!
Thanks, It works beautifully
This has been a great help in my self teaching goal of neural networks. Love the work you are doing! | https://pythonhealthcare.org/2018/12/20/108-converting-text-to-numbers/ | CC-MAIN-2020-29 | refinedweb | 558 | 63.49 |
by Arup Nanda
Part 2 of a two-part series that presents an easier way to learn R by comparing and contrasting it to PL/SQL.
Published September 2017
Welcome to the second installment of this series. In this installment, you will learn about the more advanced concepts of the R language such as evaluating conditions, executing loops, and creating program units such as functions.
But before we start, let's explore a rather trivial activity in any interactive program: accepting an input from the user at runtime. The R function for that is called
readline(). It prompts the user for an input and reads the input to be stored as a value. Here is an example:
> v1 <- readline(prompt = "what's the good word> ")
what's the good word>
You can enter a value at the blinking cursor. Suppose you enter
1. The variable
v1 will be assigned the value of 1. If you type
v1, you will see the value. Remember from Part 1 that you can just type the variable name and the value will be displayed. There is no need for a
print() function.
> v1 [1] "1"
But note the double quotes. This is a character string. You can confirm that by using the
class() function, which you learned about in Part 1:
> class(v1)
[1] "character"
If you want to use the value as a number, then you must convert it to a number by using the
as.numeric() function, which you learned about in Part 1, or the
as.integer() function. Here is an example:
> v1 <- as.numeric(v1)
Confirm that it's a numeric value now:
> class(v1) [1] "numeric"
Alternatively, you can use this:
> v1 <- as.integer(v1)
> class(v1)
[1] "integer"
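One caveat worth knowing: both as.numeric() and as.integer() return NA (with a warning) when the string cannot be parsed as a number, so it is prudent to check the result with is.na() before using it. A quick sketch:

```r
# A non-numeric string cannot be converted; the result is NA
v1 <- suppressWarnings(as.integer("abc"))
v1         # NA
is.na(v1)  # TRUE, so validate before doing arithmetic with v1
```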
Like the
IF statement in PL/SQL, the most basic conditional operation in R is also
IF. In PL/SQL, the general structure of the IF condition is:
IF conditional expression THEN
   some statements
ELSE
   some statements
END IF;
Here is some sample code:
-- pl1.sql
declare
 v1 number;
begin
 v1 := &inputvalue;
 if (v1 < 101) then
  dbms_output.put_line('v1 is less than 101');
 end if;
end;
Here is the output (after you input 100 at the prompt):
Enter value for inputvalue: 100
old   4: v1 := &inputvalue;
new   4: v1 := 100;
v1 is less than 101
Here is how you write the same logic in R:
#r1.txt
v1 <- as.integer(readline(prompt = "enter a number> "))
if (v1 < 101) {
  print ("v1 is less than 101")
}
Here is the output:
[1] "v1 is less than 101"
There are a few things to note here before we further:
ifcondition, the action block is enclosed by two curly braces:
{and
}. This is the standard convention in R. This is similar to the convention in C language.
;). A line ends with the normal end-of-line character.
When there is
IF, there must be
ELSE. In PL/SQL, you would write like this:
--pl2.sql
declare
 v1 number;
begin
 v1 := &inputvalue;
 if (v1 < 101) then
  dbms_output.put_line('v1 is less than 101');
 else
  dbms_output.put_line('v1 is not less than 101');
 end if;
end;
The R syntax is also the same (else), but there is a catch you should be aware of. Here is the equivalent R code:
v1 <- 100
if (v1 < 101) {
print ('v1 is less than 101')
print ('Not Indented')
} else {
               print ('v1 is greater than 101')
               print ('Way Too much Indented')
}
I used the indentation messages to show you how indentations are not important in R, just as in PL/SQL. But I want to show you a very important differentiator. Note the presence of a curly brace before the
else in the code above. The ending curly brace of the
IF condition tells R to stop evaluating and start processing. If you have an
ELSE, it must come in the same line after the curly brace; otherwise, the R interpreter will not be able to evaluate the
else. Note what happens when you put
else in the next line:
# r2a.txt
v1 <- 100
if (v1<=100) {
print ('v1 is less than or equal to 100')
print ('Not Indented')
}
else {
               print ('v1 is greater than 100')
               print ('Way Too much Indented')
}
Output:
[1] "v1 is less than or equal to 100" [1] "Not Indented" Error: unexpected 'else' in "else" Execution halted
The
else was not properly handled, because it was not in the same line as the ending curly brace of the
IF. This is a very important syntax difference from PL/SQL that you should be aware of. Most developers familiar with other languages such as C, where the curly braces are used as well, make this mistake.
What if you want to put another condition? There could be another
IF after the
ELSE:
#r3.txt
v1 <- 100
if (v1 < 100) {
  print ('v1 is less than 100')
} else if (v1 == 100) {
  print ('v1 is equal to 100')
} else {
  print ('v1 is greater than 100')
}
Executing that code, we get this:
C:\>rscript r3.txt
[1] "v1 is equal to 100"
When you have to put in a line but nothing needs to be done, you usually use a
NULL statement in PL/SQL.
--pl4.sql
declare
 x number := 10;
 y number := 11;
begin
 if (x<y) then
  dbms_output.put_line('Yes');
 else
  null;
 end if;
end;
/
The
null statement in line 9 is required. You have to put a valid PL/SQL statement within the
IF and
END IF statements. Otherwise, the code will produce an error. In R, you don't need to put anything between the curly braces. The following is the equivalent code in R.
# r4.txt
x <- 10
y <- 11
if (x<y) {
  print ("Yes")
} else {
}
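In fact, when nothing needs to happen in the else branch, R lets you drop the else block entirely; the empty braces above are shown only to mirror the PL/SQL structure. A minimal sketch:

```r
x <- 10
y <- 11
# No else branch at all; if the condition is FALSE, nothing happens
if (x < y) {
  print("Yes")
}
```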
The
CASE statement of PL/SQL is pretty powerful. It allows you to define several conditions in one expression. Here is an example.
--pl5.sql
declare
 n1 number;
begin
 n1 := 5;
 case
  when (n1<=25) then
   dbms_output.put_line('n1 is within 25');
  when (n1<=50) then
   dbms_output.put_line('n1 is within 50');
  when (n1<=75) then
   dbms_output.put_line('n1 is within 75');
  else
   dbms_output.put_line('n1 is greater than 75');
 end case;
end;
/
There is an equivalent of the CASE statement in R. It's called the switch() function. But unlike CASE, the switch() function behaves differently depending on whether its first argument is an integer or a character string. Let's first see the effect with an integer input:
CASE statement in R. It's called the
switch() function. But unlike
CASE, the
switch() function works in different ways when the first argument is an integer or a character. Let's first see the effect with an integer input:
> v1 <- switch (1,'first','second','third','fourth')
> v1
[1] "first"
The
switch statement returned
"first" because the first argument is the integer 1. Therefore, the
switch statement picked the first argument from the list of selections. Similarly, if you choose other numbers,
switch will choose the corresponding values.
# r5.txt
> v1 <- switch (2,'first','second','third','fourth')
> v1
[1] "second"
> v1 <- switch (3,'first','second','third','fourth')
> v1
[1] "third"
> v1 <- switch (4,'first','second','third','fourth')
> v1
[1] "fourth"
What if you pass a number for which there is no corresponding selection, for example, 0? Let's see:
# r5a.txt
> v1 <- switch (0,'first','second','third','fourth')
> v1
NULL
> v1 <- switch (5,'first','second','third','fourth')
> v1
NULL
Note the output, which shows
NULL. In Part 1, you saw a value called
NA, which was roughly the equivalent of
NULL in PL/SQL. The R
NULL has no clear equivalent; it's sort of undefined.
The first argument does not have to be an integer. It could also be an implied integer, for example, a boolean value. Remember, boolean
TRUE and
FALSE evaluate to 1 and 0, respectively. If the first parameter is an expression that results in a boolean value, it is converted to an integer. If the expression is
3 < 5, the result will be
TRUE, that is, 1; so
switch will pick the first argument from the choices:
# r5b.txt
> v1 <- switch (3<5,'first','second','third','fourth')
> v1
[1] "first"
Unfortunately it doesn't work in all cases. What if the expression evaluates to
FALSE? It will then be converted to 0; but there is no 0th choice. So it will return a
NULL:
# r5c.txt
> v1 <- switch (3>5,'first','second','third','fourth')
> v1
NULL
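Because switch() returns NULL for a FALSE expression, it is an awkward tool for a two-way boolean choice. A more idiomatic alternative worth noting: in R, if is itself an expression that returns a value, and ifelse() is its vectorized cousin. A sketch:

```r
# if is an expression in R, so it can return a value directly
v1 <- if (3 > 5) "first" else "second"
v1   # "second"

# ifelse() applies the test element by element across a vector
ifelse(c(3, 7) > 5, "big", "small")   # "small" "big"
```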
The
switch function works differently when the first input is a character. In this format, you will need to provide the return values for several input values. You can also provide the default value for when none of the input values match. For instance, here is an R expression that returns the position of vowels, that is, 1 for a, 2 for b, and so on. It should return 0 if the input is not a vowel.
# r6.txt
> v1 <- switch('a',a=1,e=2,i=3,o=4,u=5,0)
> v1
[1] 1
> v1 <- switch('z',a=1,e=2,i=3,o=4,u=5,0)
> v1
[1] 0
This format of the
switch() function is closer to the
CASE statement in PL/SQL when there is a default value.
Consider the usual looping code in PL/SQL using the FOR ... LOOP ... END LOOP construct. The general structure is this:
FOR i in StartingNumber .. EndingNumber LOOP
  ...
END LOOP;
The R equivalent also uses for; but there is no LOOP keyword and, consequently, no END LOOP either. As in the case of the if statement, the end of the conditional expression is marked by the opening curly brace. Also, like the if statement, the block of program statements to be repeated is enclosed inside curly braces.
But before we start writing loops, we need to know how to create a range from a start number to an end number. That's the job of the seq() function:
> seq(10,20)
[1] 10 11 12 13 14 15 16 17 18 19 20
Let's see a very simple loop that generates the 11 values from 10 to 20 and displays them one by one.
Here's the PL/SQL code:
-- pl7.sql
begin
  for i in 10..20 loop
    dbms_output.put_line('i= '||i);
  end loop;
end;
Here's the R code:
for (i in seq(10,20)) {
  print(i)
}
Here's the output:
[1] 10
[1] 11
[1] 12
[1] 13
[1] 14
[1] 15
[1] 16
[1] 17
[1] 18
[1] 19
[1] 20
The seq() function is very useful in R. You will be using it a lot to create data, especially when trying to fit models. Let's see some more parameters of this function. If you want to step over values, you can pass an optional third parameter that specifies the increment. Let's say we want to print 10 to 20 in steps of 2:
> seq(10,20,2)
[1] 10 12 14 16 18 20
Similarly, we can use the third parameter to count downward. To produce numbers from 20 down to 10, decrementing by 1, the third parameter should be -1:
> seq(20,10,-1)
[1] 20 19 18 17 16 15 14 13 12 11 10
If you want to break out of a loop, just use the break statement, the equivalent of the EXIT statement in PL/SQL. Suppose you want to enter a number and check whether it has a multiple between 10 and 20. In R, %% is the modulo operator, equivalent to the mod() function in PL/SQL; the expression v1 %% v2 returns 0 if v1 is a multiple of v2. You want to iterate through the loop from 10 to 20, but stop when you find a multiple. That's where the break statement comes in.
# r8.txt
n <- as.integer(readline("Enter a number> "))
for (i in 10:20) {
  print(i)
  if (i %% n == 0) {
    break
  }
}
Executing it produces the following:
> source ("r8.txt")
Enter a number> 7
[1] 10
[1] 11
[1] 12
[1] 13
[1] 14
The second type of loop we will cover is a variant of FOR but without a start and end: the WHILE loop. It allows you to loop as long as a condition is met (the condition can be set to always be true for an endless loop). Here is an example that prints the numbers 0 through 10.
PL/SQL code:
-- pl9.sql
declare
  i number := 0;
begin
  while (i<11) loop
    dbms_output.put_line('i= '||i);
    i := i+1;
  end loop;
end;
/
The output:
i= 0
i= 1
i= 2
i= 3
i= 4
i= 5
i= 6
i= 7
i= 8
i= 9
i= 10
In R, the keyword is the same: while. Like the FOR loop, the code inside the WHILE loop is marked by the curly braces, equivalent to the BEGIN and END markers of PL/SQL. As in PL/SQL, the indentation is merely for readability; it is not part of the syntax.
# r9.txt
i <- 0
while (i<11) {
  print(i)
  i <- i+1
}
The output:
[1] 0
[1] 1
[1] 2
[1] 3
[1] 4
[1] 5
[1] 6
[1] 7
[1] 8
[1] 9
[1] 10
Suppose you want to put a condition in the loop that will make the program break out of the loop when the condition is satisfied. For instance, in the previous program, you want to break from the loop when the variable i is a multiple of 5. In PL/SQL, you can do that in two different ways:
exit when ConditionIsSatisfied
if (ConditionIsSatisfied) then exit
Functionally they are the same. In R, the keyword break stops the loop and jumps to the first line after the loop. We will examine the approaches in both these languages.
In PL/SQL using approach 1:
--pl10a.sql
declare
  i number := 1;
begin
  while (i<11) loop
    exit when mod (i,5) = 0;
    dbms_output.put_line('i= '||i);
    i := i+1;
  end loop;
end;
/
The output:
i= 1
i= 2
i= 3
i= 4
In PL/SQL using approach 2:
--pl10b.sql
declare
  i number := 1;
begin
  while (i<11) loop
    dbms_output.put_line('i= '||i);
    i := i+1;
    if mod (i,5) = 0 then
      exit;
    end if;
  end loop;
end;
/
In this particular case, the output of the two approaches is the same; but the approaches are different and can behave differently. In the first approach, the condition for breaking is checked at the start of each loop iteration. In the second, it's evaluated after the counter is incremented. So you have to be careful when coding either approach: the change in logic might be subtle, but it is important and can introduce bugs into a program.
In R, there is no equivalent of the first version; the second version is what you would use. You already saw an example in the FOR loop. Let's see the same for the WHILE loop. Here is the R code. Execute it yourself on the R command line and see the results.
# r10.txt
i <- 1
while (i<11) {
  print(i)
  if (i%%5 == 0) {
    break
  }
  i <- i+1
}
You might have seen another case where you need to repeat a loop indefinitely until a break condition occurs. In those cases, a WHILE loop with a condition that always evaluates to TRUE will help. Here is the general shape in PL/SQL:
WHILE (TRUE) LOOP
  ...
  IF Condition THEN
    EXIT;
  END IF;
END LOOP;
In R, you can write it the same way:
while(TRUE) {
  if (Condition) {
    break
  }
}
There is a simpler way in R: the repeat clause. Here is an example:
# r11.txt
n <- as.integer(readline("Enter an integer> "))
i <- 1
repeat {
  if (i == n) {
    cat("It took me ", i, " iterations to find your number\n")
    break
  }
  i <- i+1
}
Executing it produces the following:
Enter an integer> 29
It took me 29 iterations to find your number
Another element of the loop is the next statement, which allows you to skip an iteration based on a condition. Let's take the same example we saw for the break statement, but say we don't want to count multiples of 5. In other words, we count how many iterations we had to do to reach the number entered by the user; but we will not count iterations 5, 10, 15, and so on.
# r12.txt
n <- as.integer(readline("Enter an integer> "))
i <- 0
j <- 1
repeat {
  i <- i+1
  if (i %% 5 == 0) {
    next
  }
  if (i == n) {
    cat("It took me ", j, " iterations to find your number\n")
    break
  }
  j <- j+1
}
Executing it produces this:
> source('r12.txt')
Enter an integer> 29
It took me 24 iterations to find your number
Remember the PL/SQL continue statement? It is used inside a loop to instruct the program to jump to the end of the current iteration and continue with the rest of the loop iterations as usual. It is equivalent to the next statement in R. Let's see a small example.
The PL/SQL code:
-- pl13.sql
declare
  mynum number := 3;
begin
  for i in 1..10 loop
    if mod (i,mynum) = 0 then
      dbms_output.put_line('multiple found as '||i);
      continue;
      dbms_output.put_line('we are continuing');
    end if;
    dbms_output.put_line ('No multiple found as '||i);
  end loop;
end;
/
Executing the code:
No multiple found as 1
No multiple found as 2
multiple found as 3
No multiple found as 4
No multiple found as 5
multiple found as 6
No multiple found as 7
No multiple found as 8
multiple found as 9
No multiple found as 10
The R code:
# r13.txt
mynum <- 3
for (i in 1:10) {
  if (i%%mynum == 0) {
    cat ("Multiple found as ", i, "\n")
    next
  }
  cat ("No multiple found as ", i, "\n")
}
Executing the R code:
> source('r13.txt')
No multiple found as 1
No multiple found as 2
Multiple found as 3
No multiple found as 4
No multiple found as 5
Multiple found as 6
No multiple found as 7
No multiple found as 8
Multiple found as 9
No multiple found as 10
As is the case in most languages, R provides repeatable code segments, similar to procedures and functions in PL/SQL. As you already know, a PL/SQL procedure does not return anything (it can have an OUT parameter, but that's not a return value, so it's not the same thing), and a PL/SQL function returns a single value. In R, the equivalent of both PL/SQL procedures and functions is simply called a function. An R function may or may not explicitly return anything.
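To illustrate that point, the same R function can play either role: called for its side effect, like a procedure, or for its value, like a function. Here is a sketch (the function name is made up for illustration):

```r
greet <- function(name) {
  msg <- paste("Hello,", name)
  cat(msg, "\n")   # side effect, like a PL/SQL procedure call
  invisible(msg)   # the value is still there if the caller wants it
}

greet("World")        # just prints: Hello, World
v <- greet("World")   # prints again, and also captures the value
v                     # [1] "Hello, World"
```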
In this article, we will cover how to write functions and use them in your programs. As in Part 1 of this series, we will see how to do something in PL/SQL and then do the equivalent in R.
A function definition in PL/SQL has this general syntax format:
function FunctionName (
  Parameter1Name in DataType,
  Parameter2Name in DataType,
  ...
) return ReturnDatatype is
  localVariable1 datatype;
  localVariable2 datatype;
begin
  ... function code ...
  return ReturnVariable;
end;
A procedure definition in PL/SQL has this general syntax:
procedure ProcedureName (
  Parameter1Name in DataType,
  Parameter2Name in DataType,
  ...
) is
  localVariable1 datatype;
  localVariable2 datatype;
begin
  ... procedure code ...
end;
The PL/SQL function definition syntax is somewhat convoluted, in my opinion, although the rest of the body is pretty standard. R follows this simpler syntax:
FunctionName <- function (Parameter1Name, Parameter2Name, ...) {
  ... function code ...
  return (ReturnVariable)
}
Note some important properties of the R function definition compared to the PL/SQL equivalent:
- The function code is enclosed in { and }. Unlike PL/SQL, there is no BEGIN ... END construct.
- The placement of the { sign is merely for readability; it is not significant. This is the same style followed in R to mark IF ... THEN ... ELSE blocks or loops.
- The value is returned by return (ReturnVariable).
Now that you've got the basic idea of the syntax vis-à-vis PL/SQL, let's start with a very simple procedure in PL/SQL that accepts a principal amount and an interest rate, computes the interest amount and the new principal after the interest is added, and displays the new principal.
Here is how we do it in PL/SQL. Note that I deliberately chose to use the R naming convention (for example, pPrincipal, not a PL/SQL-style variable name such as p_principal).
PL/SQL code:
-- pl14.sql
declare
  procedure calcInt (
    pPrincipal number,
    pIntRate number
  ) is
    newPrincipal number;
  begin
    newPrincipal := pPrincipal * (1+(pIntRate/100));
    dbms_output.put_line ('New Principal is '||newPrincipal);
  end;
begin
  calcInt(100,10);
end;
/
Here is the output:
New Principal is 110
R code:
# r14.txt
calcInt <- function (pPrincipal, pIntRate) {
  newPrincipal <- pPrincipal * (1+(pIntRate/100))
  paste("New Principal is ",as.character(newPrincipal))
}
We save this as r14.txt and call it using the source() function you learned about in Part 1 of this series.
> source('r14.txt')
> calcInt(100,10)
[1] "New Principal is 110"
Sometimes you need to give a parameter a default value. This value is in effect if the caller does not explicitly pass the parameter. Building on the previous procedure, suppose we want to make the parameter pIntRate optional, that is, give it a certain value (such as 5) when the user does not explicitly mention it. In PL/SQL, you declare the parameter this way:
ParameterName DataType := DefaultValue
In R, it's almost the same, but the operator for a default value in R is the equals sign (=), not :=, so that's what you need to use. Also remember that you don't mention the data type for parameters. Here is the general syntax:
ParameterName = DefaultValue
You can write the PL/SQL procedure this way:
--pl15.sql
declare
  procedure calcInt (
    pPrincipal number,
    pIntRate number := 5
  ) is
    newPrincipal number;
  begin
    newPrincipal := pPrincipal *(1+(pIntRate/100));
    dbms_output.put_line('New Principal is '||newPrincipal);
  end;
begin
  -- don't mention the pIntRate parameter;
  -- it defaults to 5
  calcInt(100);
end;
/
R code:
# r15.txt
calcInt <- function (pPrincipal, pIntRate = 5) {
  newPrincipal <- pPrincipal * (1+(pIntRate/100))
  paste("New Principal is ",as.character(newPrincipal))
}
One important property of functions in R is that the default values can be variables as well. This is not possible in PL/SQL. For instance, in PL/SQL the following will be illegal:
-- pl16.sql
declare
  defIntRate number := 5;
  procedure calcInt (
    pPrincipal number,
    pIntRate number := defIntRate
  ) is
...
But it's perfectly valid in R. Let's see how:
# r16.txt
defIntRate <- 5
calcInt <- function (pPrincipal, pIntRate = defIntRate) {
  newPrincipal <- pPrincipal * (1+(pIntRate/100))
  paste("New Principal is ",as.character(newPrincipal))
}
The variable defIntRate dynamically influences the behavior of the function. If you change the value of this variable, the function's default changes as well. Consider the following example:
> calcInt(100)
[1] "New Principal is 105"
Now let's change the value of this variable to 10 and re-execute the function.
> defIntRate <- 10
> calcInt(100)
[1] "New Principal is 110"
The new value of the variable took effect in the function.
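This works because R evaluates a default argument expression lazily, at the time the function is called, not when it is defined. The following sketch (with made-up names) makes the timing visible:

```r
rate <- 5
f <- function(x, r = rate) x * r   # 'rate' is looked up only when f runs

f(10)         # 50  -- uses the current rate, 5
rate <- 10
f(10)         # 100 -- the new value is picked up on the next call
```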
You already know that in PL/SQL, you do not have to provide parameter values in the order in which the parameters were defined in the procedure. You can pass values by specifying the parameter by name. For instance, if a procedure F1 takes the parameters P1 and P2, in that order, you can call the procedure this way with the parameter values Val1 and Val2, respectively:
F1 (Val1, Val2);
But you can also call it with explicit parameter name assignments:
F1 (P2 => Val2, P1 => Val1);
This explicit naming allows you to order the parameters any way you want when calling the procedure. It also allows you to skip some non-mandatory parameters. In R, the equivalent syntax is this:
F1 (P2=Val2, P1=Val1)
So, only the arrow operator (=>) is changed to the equals sign (=). Let's see examples in both PL/SQL and R.
PL/SQL example:
--pl17.sql
declare
  procedure calcInt (
    pPrincipal number,
    pIntRate number := 5
  ) is
    newPrincipal number;
  begin
    newPrincipal := pPrincipal *(1+(pIntRate/100));
    dbms_output.put_line('New Principal is '||newPrincipal);
  end;
begin
  calcInt(pIntRate=>10, pPrincipal=>100);
end;
/
The output is this:
New Principal is 110
R example:
# r17.txt
calcInt <- function (pAccType = "Savings", pPrincipal, pIntRate = 5) {
...
> calcInt(pPrincipal=100)
[1] "New Principal is 110"
> calcInt(pPrincipal=100, pAccType = "Checking")
[1] "New Principal is 105"
One useful pattern in PL/SQL is to compute a default only when a value is not explicitly provided. Take, for instance, the case where the user didn't specify anything for the interest rate, and you want the default to be based on something else, such as the account type. If the account type is Savings (the default), the interest rate should be 10 percent; otherwise, it should be 5 percent. Here is how you would write the procedure:
-- pl18.sql
declare
  procedure calcInt (
    pPrincipal number,
    pIntRate number := null,
    pAccType varchar2 := 'Savings'
  ) is
    newPrincipal number;
    vIntRate number;
  begin
    if (pAccType = 'Savings') then
      if (pIntRate is null) then
        vIntRate := 10;
      else
        vIntRate := pIntRate;
      end if;
    else
      if (pIntRate is null) then
        vIntRate := 5;
      else
        vIntRate := pIntRate;
      end if;
    end if;
    newPrincipal := pPrincipal * (1+(vIntRate/100));
    dbms_output.put_line('New Principal is '|| newPrincipal);
  end;
begin
  calcInt(100);
  calcInt(100, pAccType => 'Checking');
end;
/
The equivalent of the following line:
pIntRate number := null,
in R is this:
pIntRate = NULL
The R equivalent of PL/SQL's IS NULL test is the function is.null(). Here is the complete R example (note the capitalization of "NULL"):
# r18.txt
calcInt <- function (pAccType = "Savings", pPrincipal, pIntRate = NULL) {
  if (pAccType == "Savings") {
    if (is.null(pIntRate)) {
      vIntRate <- 10
    } else {
      vIntRate <- pIntRate
    }
  } else {
    if (is.null(pIntRate)) {
      vIntRate <- 5
    } else {
      vIntRate <- pIntRate
    }
  }
  newPrincipal <- pPrincipal * (1+(vIntRate/100))
  paste("New Principal is ",as.character(newPrincipal))
}
> source('r18.txt')
> calcInt(pPrincipal=100)
[1] "New Principal is 110"
> calcInt(pPrincipal=100, pAccType = "Checking")
[1] "New Principal is 105"
So far, we have talked about procedures in PL/SQL, which do not return anything. In contrast, functions in PL/SQL return a value. Here is a simple example of a function that returns the interest rate for the account type passed to it:
--pl19.sql
declare
  function getIntRate(
    pAccType in varchar2
  ) return number is
    vRate number;
  begin
    case pAccType
      when 'Savings' then vRate := 10;
      when 'Checking' then vRate := 5;
      when 'MoneyMarket' then vRate := 15;
    end case;
    return vRate;
  end;
begin
  dbms_output.put_line('Int Rate = '||getIntRate('Savings'));
  dbms_output.put_line('Int Rate = '||getIntRate('Checking'));
  dbms_output.put_line('Int Rate = '||getIntRate('MoneyMarket'));
end;
/
Here is the output:
Int Rate = 10
Int Rate = 5
Int Rate = 15
The equivalent of the following code line:
return vRate;
in R is, fortunately, similar, but not exactly the same:
return (vRate)
Note the parentheses. Here is the R function:
# r19.txt
getIntRate <- function (pAccType) {
  if (pAccType == "Savings") {
    vRate <- 10
  } else if (pAccType == "Checking") {
    vRate <- 5
  } else if (pAccType == "MoneyMarket") {
    vRate <- 15
  }
  return (vRate)
}
Executing the R code produces this:
> getIntRate("Savings")
[1] 10
You can try the other values:
> getIntRate("Checking") > getIntRate("MoneyMarket")
Another way to write the same function logic is to use the switch() function you learned about earlier in this article; it's the equivalent of the CASE statement in PL/SQL.
# r19a.txt
getIntRate <- function (pAccType) {
  vRate <- switch (pAccType, "Savings"=10, "Checking"=5, "MoneyMarket"=15, 0)
  return (vRate)
}
A very important concept of functions in R is that the return value is implicit. By default, a function returns the value of its last evaluated expression, even if you don't write an explicit return statement. Let's see a very simple function that takes a number; two variables are assigned inside the body, and there is no explicit return.
# r20.txt
f1 <- function(inVal) {
  v1 <- inVal * 2
  v2 <- v1 * 2
}
As you can see, the function has no return statement. Now let's call the function:
> f1(2)
Nothing is displayed, because the value of an assignment is returned invisibly and the function prints nothing. Now let's capture the return value of the function in a variable v3:
> v3 <- f1(2)
> v3
[1] 8
How did the function return 8, when we didn't write a return statement? It's because the last assigned value was v2, and that was implicitly returned. By default, every function implicitly returns the last value it evaluated. So does that mean we need never write the return statement? Not quite. An explicit return is still useful in function code, for example, to exit from the middle of a function before its last line, and to make the returned value obvious to the reader.
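One common case for an explicit return, early exit from the middle of a function, is sketched below (the function name is made up for illustration):

```r
# Return the first multiple of n between lo and hi, or -1 if there is none.
firstMultiple <- function(n, lo, hi) {
  for (i in lo:hi) {
    if (i %% n == 0) {
      return(i)   # explicit return: leaves the function immediately
    }
  }
  -1              # last expression, reached only if the loop finds nothing
}

firstMultiple(7, 10, 20)    # 14
firstMultiple(30, 10, 20)   # -1
```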
As you write multiple levels of code in R, such as subprograms calling other subprograms, you might find the same names being defined for variables inside these subprograms. In that case, which of the values assigned to the variables is relevant? This is where you have to understand the scope of variables, a very important concept to remember. Let's start with a simple example function that accepts an input value, stores it in a local variable called v1, and prints it.
#r21.txt
f1 <- function(inVal) {
  v1 <- inVal
  cat ("v1=", v1, "\n")
}
Executing the code produces this:
> source("r21.txt")
> f1(1)
v1= 1
What will happen if you have another variable outside the function with the same name, v1? Will the function use the value set inside the function or simply get the value from outside? Let's change the code and see the output. We first set the value of v1 to 10 outside the function and set it to 1 inside:
#r22.txt
v1 <- 10
f1 <- function(inVal) {
  v1 <- inVal
  cat ("v1=", v1, "\n")
}
When we execute the code, what should we get? Let's see:
> source("r22.txt")
> f1(1)
v1= 1
The output is 1, which is the value assigned inside the function. The prior assigned value, 10, was not considered. This is the expected behavior in pretty much any language, including PL/SQL. So it's no surprise.
However, what happens if a variable called v1 is not even created inside the function, as shown below?
#r22a.txt
v1 <- 10
f1 <- function(inVal) {
  cat ("v1=", v1, "\n")
}
Note that the variable v1 is not defined inside the function, yet it is referenced in the cat() call. In PL/SQL, if you reference a variable not defined in the subprogram (or an enclosing block), you get a compilation error. Let's see what happens in R. Executing the R code results in this:
> source("r22a.txt")
> f1(1)
v1= 10
Whoa! What happened? We did not get an error. Instead, R pulled up the variable v1 defined outside the function. This is a very important property of R, and very unlike PL/SQL. You should pay attention to this behavior, because it can cause many bugs if it is not understood properly. Let's recap. If a variable is referenced inside a function, R first looks to see if that variable is defined inside the function. If so, that value is used. Otherwise, R looks at the immediately enclosing level of code to see if the variable is defined there. If it is found, that value is used. If it is not found there, the next enclosing level is checked, and so on.
Let's recap what we explored in this article. As in the previous article, we examined elements of PL/SQL and contrasted them with their R equivalents.
Let's test your understanding with these simple questions.
1. Consider the following code:
#q1.txt
f1 <- function (inVal) {
  v1 <- inVal * 2
  cat ("Inside f1, v1=", v1, "\n")
}
f2 <- function (inVal) {
  v1 <- inVal * 2
  cat ("Inside f2, v1=", v1, "\n")
}
f3 <- function (inVal) {
  v1 <- inVal * 2
  cat ("Inside f3, v1=", v1, "\n")
}
f3(f2(f1(2)))
Here is the output:
> source ("q1.txt")
Inside f1, v1= 4
Inside f2, v1=
Inside f3, v1=
Why don't we see the values of v1 in the other functions?
2. Consider the following function:
# q2.txt
f1 <- function (inVal) {
  v1 <- inVal * 2
}
Note that there is no return statement, so this code doesn't explicitly return anything. However, we still call it and assign the return value to another variable, v2:
> v2 <- f1(2)
> v2
[1] 4
How come the function returned 4?
3. You are starting an R session from scratch. You gave the following command:
# q3.txt
if (x<y) {
  print('yes')
}
And the output was this:
Error: object 'x' not found
Why was the error produced? Isn't it true that R defines variables when they are referenced?
4. What will be the result of the following code?
# q4.txt
v1 <- 10
f1 <- function (inVal) {
  v1 <- 4
  2 * v1 * f2(inVal)
}
f2 <- function (inVal) {
  inVal * v1
}
f1(2)
5. What will be the output of the following?
> v1 <- 2
> v2 <- switch(v1,100,200,300,400)
> v2
6. Along the same lines, here is modified code in which you ask the user to input the value of v1 instead of hardcoding it.
# q6.txt
> v1 <- readline("Enter a number> ")
Enter a number> 2
> v2 <- switch(v1,100,200,300,400)
But it failed with the following message:
Error: duplicate 'switch' defaults: '100' and '200'
What happened? The only change you made was to accept the value; and now it's producing an error.
7. You are writing a statement to check whether the number input by the user is less than 100. Here is the code you wrote:
# q7.txt
> v1 <- as.integer(readline("Enter a number> "))
Enter a number> 5
> v2 <- switch((v1<100), "Yes, less than 100", "No, greater than 100")
> v2
[1] "Yes, less than 100"
It worked correctly. It reported that the number entered by the user (5) is less than 100. So, you re-executed the statement with a different input:
> v1 <- as.integer(readline("Enter a number> "))
Enter a number> 200
> v2 <- switch((v1<100), "Yes, less than 100", "No, greater than 100")
> v2
NULL
Note the output. It's NULL, not the desired output. Why?
8. What is the difference between the break and next statements?
9. You have all the R commands in a file called, say, myscript.R. How can you run the script without entering the commands one by one?
10. I am trying to write a simple function that merely prints the word "Hello." So the function doesn't accept any parameters. Here is how I started typing, but I got an error:
> printHello <- function
+ {
Error: unexpected '{' in:
"printHello <- function
{"
Why did I get the error? I don't have any parameters, so there is nothing to pass anyway.
1. Note the function definitions. There are no return statements inside the functions, so the v1 value in the subsequent functions was not populated. The correct code would have been this:
#q1a.txt
f1 <- function (inVal) {
  v1 <- inVal * 2
  cat ("Inside f1, v1=", v1, "\n")
  return (v1)
}
f2 <- function (inVal) {
  v1 <- inVal * 2
  cat ("Inside f2, v1=", v1, "\n")
  return (v1)
}
f3 <- function (inVal) {
  v1 <- inVal * 2
  cat ("Inside f3, v1=", v1, "\n")
  return (v1)
}
f3(f2(f1(2)))
Here is the output now:
> source("q1a.txt")
Inside f1, v1= 4
Inside f2, v1= 8
Inside f3, v1= 16
2. Even though the function does not have an explicit return statement, the last assigned value is returned implicitly. Because v1 was assigned last, it was returned.
3. No; R creates variables when they are assigned, not when they are referenced. In this code, you simply referenced x and y without assigning any value to them, so they were never created. The following would have been valid code, in which the values of x and y are assigned first:
# q3a.txt
x <- 1
y <- 2
if (x<y) {
  print('yes')
}
4. It will be 160. Here is why. Inside the f1 code, there is a reference to f2, so R goes on to evaluate f2(2). Inside the f2 code, there is a reference to the variable v1, but no variable v1 is defined inside f2. So R looks up the variable v1 defined at the beginning of the code (that is, 10), and f2(2) returns 2 * 10, that is, 20. Control then passes back to function f1. There is a variable named v1 inside f1, so that value (4) is used: f1(2) evaluates to 2 * 4 * f2(2). Because f2(2) returned 20, the expression is 2 * 4 * 20 = 160.
5. It will be 200. When you pass an integer as the first argument to switch, it uses that number to decide which position to look up. In this case, you passed 2; hence, it looks up the second position, which has the value 200, and the switch function returns 200.
6. The function readline() returns a value of character data type, so v1 is a character. The switch() function behaves differently when the first argument is a character instead of a number, so the positional arguments in the switch call were wrong for character input. The correct code converts the input to an integer:
# q6a.txt
> v1 <- as.integer(readline("Enter a number> "))
Enter a number> 2
> v2 <- switch(v1,100,200,300,400)
> v2
[1] 200
7. For selection by position, the switch function works on integers only, not on logical values. In the first case, (5<100) is TRUE, which becomes 1; therefore, switch() picks up the first value in the list: "Yes, less than 100". In the second case, the expression was FALSE, which equates to 0, and there is no 0th option, so switch() returned NULL.
So, how would you write that code to test whether the input number is less than or greater than 100? One option is the if ... else construct. But if you want to use switch(), you should use this:
# q7a.txt
v2 <- switch((v1<100)+1, "No, greater than 100", "Yes, less than 100")
Now the code will yield the right results.
8. The break statement stops the loop and exits it completely; execution continues with the code after the loop. The next statement simply jumps to the end of the current iteration, so control goes back to the beginning of the loop for the next iteration.
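A side-by-side sketch of the two statements in the same loop shape:

```r
for (i in 1:5) {
  if (i == 3) next    # skip this iteration only
  print(i)            # prints 1, 2, 4, 5
}

for (i in 1:5) {
  if (i == 3) break   # abandon the loop entirely
  print(i)            # prints 1, 2
}
```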
9. There are two options. From the operating system command line:
C:\> rscript myscript.r
Or, from the R prompt:
> source("myscript.r")
In both options, we assume that the script file is in the current directory.
10. Even if you don't need parameters (or arguments) in an R function, you still need to use parentheses:
printHello <- function()
Hello Grails!?
Setting Things Up
Unlike with Rails, the NetBeans Groovy plugin does not come bundled with Groovy and the Grails framework - these must be pre-installed on your system (don't worry, it's easy):
- Download and install Groovy
- Download and install Grails
- Download and install NetBeans 6.1 M1.
- Install the Groovy and Grails plugin (Tools > Plugins > Available Plugins)
- Open the NetBeans Options dialog and select the Groovy category. Set the Groovy and Grails home directories.
Creating the Grails Project
- Choose File > New Project to create a new Grails Application named GroovyWebLog. Your new Grails application will appear in the IDE:
Creating the Model
Or as Grails prefers to call them, Domain classes.
- Right-click the Domain classes node and select "Create new Domain Class":
- Name the class Post.
- When Post.groovy opens in the editor, add a single field, title:
class Post {
String title
}
Creating the Controller
- Right-click the Controllers node and select "Create new controller"
- Name the controller Blog.
- For now, we'll use dynamic scaffolding to create the application views at runtime. Replace the default contents of BlogController with the following:
class BlogController {
def scaffold = Post
}
Note, if our controller name matched the domain class name, we could simply use def scaffold = true. I kept the model and controller names different to match what we did with Rails.
Run the Application
- Right-click the GrailsWebLog project and select Grails > Run Application.
- If all goes well, NetBeans will start the Jetty server that comes bundled with Grails and launch your browser where you will see the Grails welcome page:
- Click the BlogController link:
- Create a New Post:
It's not immediately obvious, but the Id is a link which you can use to view the post details. From the detail page you can then edit and/or delete the post:
Adding Another Field
Our blog needs a body field.
- Edit Post.groovy and add a body field:
class Post {
String title
String body
}
- Return to the browser and depending on what you do, you may get an error. By default Grails is configured to drop and create the database every time a change is made, so if you tried to edit the existing record you were viewing, you got a NullPointerException. However, if you navigate back to the list, you will see that the body field has been detected and you can add a new record again:
A Friendlier Database Configuration
- Open grails-app/conf/DataSource.groovy.
- Change the dbCreate property from "create-drop" to "update".
- Right-click the GrailsWebLog project and choose Grails > Stop Application.
- Right-click the project again and choose Grails > Run Application.
- Now experiment with adding and/or deleting fields - you'll notice that the behavior is much more "Railsesque".
Validating Input
Like Rails, validation in Grails is very straightforward.
- Open Post.groovy and add the following constraints:
class Post {
String title
String body
static constraints = {
title(blank:false)
body(blank:false)
}
}
- Attempt to add a new Post without entering any data:
Customizing the View
Okay, enough with this dynamic scaffolding. Let's generate our controller and view code. Unfortunately, for this step I haven't found the menu option in NetBeans yet, so we'll resort to the command line for now.
- Open your command prompt and navigate to the GrailsWebLog project directory. If you've forgotten where this is, you can find it in the project's Properties dialog.
- Run the command grails generate-all Post:
- Now this created a new controller named PostController. I don't see a way to specify a controller name when using generate-all (or generate-controller). That's fine - I'll leave BlogController to do the dynamic scaffolding and use PostController for my customizations.
- Also created were a set of GSP (Groovy Server Pages) for managing the view for the PostController:
- Open list.gsp and delete everything from the Post List heading to the end of the generated list, replacing it with the following:
<h1>The Groovy Blog</h1>
<g:each in="${postList}" var="post">
  <h2>${post.title}</h2>
  <p>${post.body}</p>
  <small><g:link action="show" id="${post.id}">permalink</g:link></small>
  <hr>
</g:each>
The Completed Application
- Login or register to post comments
- Printer-friendly version
- bleonard's blog
- 5256 reads
by schmidtm - 2008-02-25 03:56Hi *, the problem with having grails and the project on different drives is fixed, see: Please file all issues within issuezilla (category groovy) and give feedback what's missing.
by bvaessen - 2008-02-06 02:20Brian, I have done some extra tries and found out that when I create my project on my D:-drive (in Windows) the error occurs. I tried to create it on my C:-drive as well and then the error does not occur. Seems like a bug to me...
by bvaessen - 2008-02-06 00:04Brian, Sorry for my late response. I just tried to create the Domain Class from the command line, so this works.
by timyates - 2008-02-01 08:48(on OS X 10.4 btw)
by timyates - 2008-02-01 08:48;
by bleonard - 2008-02-01 08:32Ben, are you able to successfully run the command "grails create-domain-class Post" from the command line (make sure you're in the project directory). -Brian
by bleonard - 2008-02-01 08:28Hi
by bvaessen - 2008-02-01 00:28These.
by aaron_broad - 2008-01-31 23:53Hi,
by bleonard - 2008-01-31 11:59Yes,.
by hsalameh - 2008-01-31 10:10?
by hsalameh - 2008-01-31 09:54Why is installing Groovy needed? Doesn't Grails include Groovy already?
by reichertdf - 2009-06-04 13:07Ah, I finally sorted through the CSS that controls the data entry forms. Sorry for the clutter.
by reichertdf - 2009-06-04 08:05I
by bleonard - 2008-02-01 12:52Tim, good catch. The issue's being tracked here in case you want to watch it. -Brian
by timyates - 2008-02-01 11:12Got it :-) I needed the "All" install, not just the Java SE one... Something must be missing from Java SE that the Groovy stuff requires :-) As you were people... ;-)
by jeremygerrits - 2008-02-01 11:06Tim, I had the same problem with 6.1M1 for WinXP. Try the latest build (I installed todays) and it should work just fine.
by timyates - 2008-02-01 10:47Yeah, I've tried removing my entire ~/.netbeans folder, but still no joy... I'll try it again, it's probably me being an idiot...
by bleonard - 2008-02-01 10:22Tim,
by bleonard - 2008-03-13 07:48Yes, I was wondering that as well. I installed the development updated center into the Beta (Tools > Plugins > Settings > Add). Here's the URL:... . It appears to be working fine - I can't vouch for any of the other plugins on that site :-). /Brian
by shemnon - 2008-03-12 07:23Wheres the plugin for 6.1 Beta? | https://weblogs.java.net/blog/bleonard/archive/2008/01/hello_grails.html | CC-MAIN-2014-10 | refinedweb | 1,129 | 71.55 |
Bart De Smet's on-line blog (0x2B | ~0x2B, that's the question)
"Today is my last day at Microsoft. I'm leaving the the mothership and headed out on a new venture. It's been a dream for nearly 6 years, it's been in planning for 18mo, and it starts next week!"
More info on Rob's blog on. All the best, Rob!
Microsoft has released the website to promote IIS 6.0 as the ideal webserver for businesses. In fact, I share this idea and really like the latest release of IIS included with Windows Server 2003. In fact, there is a special release of Windows Server 2003, the Web Edition, that was especially built for this purpose (web and application server). As the matter in fact, IIS still has too much a bad name in terms of security with the Code Red and Nimda in IT pro's minds. That's one of the problems the IIS team took on with the development of IIS 6.0 in the light of the Trustworthy Computing statement of Microsoft (remember SD3+C = Secure by Design, Secure by Default and Secure by Deployment and Communications). If you want to know more about this mathematical looking formula "SD3+C", you can watch the MSDN TV episode by Michael Howard that was released recently on.
So what can you find on the Try IIS website? First of all, the promotional stuff of course with a slogan like "Instead of adding more servers, add a web server that does more.", not bad isn't it? Furthermore you'll find casestudies and links to webcasts on IIS 6 and resources such as the IIS 6 Resource Kit. People who're still doubting about the position of Microsoft in the server market (how dare you?) can find the "facts on Windows and Linux" as well via and the prove ("black on white") of the strenghts of IIS 6 in comparison with a webserver running Apache on a Linux platform (test done by VeriTest a year ago in April).
There are - as Microsoft states - 6 reasons to adopt IIS 6 (I'm sure I'd find some more but 6 reasons for IIS 6 sounds better for promotional stuff I guess):
Words are only words and don't show the real stuff. So, there's only one way to get to learn IIS 6.0 (if you've not done so yet): try it now and order the Eval Kit of!
Today was the wedding of Gunther Beersaerts (working at Microsoft Belux, EPG). Congratulations to Gunther and his bride Petra :-)
Cheers,Bart
The next part of the story ;-). I took the test again this time without a popup blocker. 87% percent again (some questions were the same but most of them were new ones) in a record-time of 9 minutes... Unfortunately the site now mentions 3 tries (a first try was just to try the assessment web app, a second try did not display the popup, the third and last try was the good one).
My tip: make sure popup blocking is disabled if you want to get your name in the ranking lists (assuming you have a high score of course).
(End of story?)
Okay, the reason why I was not listed in the high scores table is the pop-up blocker of Internet Explorer in Windows XP SP2 :-(. I guess the guys of MeasureUp will have some work when XP SP2 is released :-))). A fragment of the reply of my support request:
"The High Score feature is only available for Registered Users, those who enter the Skills Assessment via Passport. If the customer is a Registered User and scores high enough to rank in the High Score list, a High Score pop-up window will be displayed AFTER the assessment is scored. IMPORTANT: If you have an application on your personal computer that blocks pop-up windows, such as Pop-Up Killer, the pop-up window will NOT be displayed. This pop-up window is the only way to register a High Score name."
Maybe I have to take the skill assessment for ASP.NET again, this time without a popup blocker (so Bart, disable that cool MSN toolbar as well...)?
Test your skills on Microsoft technology (development, IT pro, etc) via for free. I took the challenge today and did the "Developing Enterprise Web-based Applications with .NET: ASP.NET - Visual C#.NET" a few minutes ago, with success: 26 out of 30 (87%) in 23 minutes. Not listed in the high scores yet although the previous record was on 77%... However, I'm happy with this result (unfortunately there is not feedback on the mistakes you made).
So, take the test as well and measure your .NET skills right now :-)
Sometimes I like to do something special. Today this was not different :-). There was one of the MSDN DVDs that wasn't ever installed completely over here on my machine, the one that contains the DDK (Driver Development Kit). How dare you Bart? I have quite some SDKs on my machine but that one was missing. Okay, problem solved today. Installing a lot of software has a nice side-effect: the Start menu is becoming quite large which is pretty useful to get a wow-effect during presentations. Okay, just kidding. When you install something on your machine, the most obvious reason to do so is to use it of course. So, I decided to take a look at driver development for Windows Server 2003. Pretty interesting stuff although it's very very complex (especially when you're faced with it for the very first time). Nevertheless, I took the challenge (in fact I'm working on some project and a driver to emulate a device would be handy for the particular application) and did work on some samples. A nice on is the RAMDISK sample that can be found on-line as well on;en-us;Q257405. Although it was originally developed for Windows 2000 it still works on Windows Server 2003 but some minor modifications are recommended (the same holds for use on Windows XP as discussed in the KB article online). Thanks to this sample I now have a B: drive on my system (you can change the drive letter of the RAMDISK through the registry) with a size of 128 MB...
A possible use of a RAMDISK is described in;DE;834886 (unfortunately only available in German AFAIK) to protect privacy on the internet (by storing temporary files and cookies on the RAMDISK). Really handy in some scenario's... Note: there are other implementations of RAMDISKs for Windows 2000, Windows XP and Windows Server 2003 around the net. A simple Google search will give you quite some results.
Going to bed now. The birds are already singing and it's becoming light in the city of Zottegem :-)
Been experimenting with SQL Server Notification Services again tonight. One little conclusion: don't stop the SQL Server instance being used without stopping the notification services first. I tried to do it the other way, with the splendid result of a 100% processor usage for the NSService.exe process... :-( However, I hope that I'll find the time to finish my article on SQL Server NS for MSDN Belux this summer (finally) but due to other projects this has even been delayed...
Mathematic fans will know sparse matrices (that are, matrices that contain a lot of zeros). Files can be sparse as well if they contain a lot of zeros in a row (for example a region of multiple MBs contains only zeros as data). NTFS support sparse files and allows you to compress these files on the disk. In fact, I'm in my fsutil investigation period, so this is just another possibility of the fsutil tool.
Let's create a sparse file (of course we write a program to do this):
using System.IO;
class Sparse{ public static void Main(string[] args) { string file = args[0]; FileStream fs = new FileStream(file, FileMode.CreateNew); BinaryWriter bw = new BinaryWriter(fs); byte ZERO = 0;
bw.Write((byte) 1); for (int i = 0; i < 1024*1024 - 2; i++) bw.Write(ZERO); bw.Write((byte) 1); }}
And create the file using sparse.exe test.sparse.
If you take a look in the Windows Explorer right now, you'll find that there is a file of 1.00 MB with 1.00 MB allocation on the harddisk. Now we can mark the file as being sparse:
fsutil sparse setflag test.sparse
The next thing to do is mark the sparse region:
fsutil sparse setrange test.sparse 1 1048575
Now, Windows Explorer will tell us only 64 KB are allocated on the disk to store the file (the non-zero data + data to know where the sparse region lives). A hex-editor on the disk can be quite useful if you want to see how NTFS stores a sparse file and how it indicates a file is sparse. | http://community.bartdesmet.net/blogs/bart/archive/2004/05.aspx | CC-MAIN-2015-35 | refinedweb | 1,504 | 71.14 |
Foundations¶
You should read through the quickstart before reading this document.
Distributed computing is hard for two reasons:
- Consistent coordination of distributed systems requires sophistication
- Concurrent network programming is tricky and error prone
The foundations of
distributed provide abstractions to hide some
complexity of concurrent network programming (#2). These abstractions ease the
construction of sophisticated parallel systems (#1) in a safer envirotnment.
Communication with Tornado Streams (raw sockets)¶
Workers, the Scheduler, and clients communicate with each other over the network. They use raw sockets as mediated by tornado streams. We separate messages by a sentinel value.
Servers¶
Worker and Scheduler nodes serve requests over TCP. Both Worker and Scheduler
objects inherit from a
Server class. This Server class thinly wraps
tornado.tcpserver.TCPServer. These servers expect requests of a particular
form.
- class
distributed.core.
Server(handlers, max_buffer_size=2069891072.0, **kwargs)[source]¶
Distributed TCP Server
Superclass for both Worker and Center objects. Inherits from
tornado.tcpserver.TCPServer, adding a protocol for RPC.
Handlers
Servers define operations with a
handlersdict mapping operation names to functions. The first argument of a handler function must be a stream for the connection to the client. Other arguments will receive inputs from the keys of the incoming message which will always be a dictionary.
>>> def pingpong(stream): ... return b'pong'
>>> def add(stream, x, y): ... return x + y
>>> handlers = {'ping': pingpong, 'add': add} >>> server = Server(handlers) >>> server.listen.
- class
distributed.core.
rpc(arg=None, stream=None, ip=None, port=None, addr=None, timeout=3)[source]¶
Conveniently interact with a remote server
Normally we construct messages as dictionaries and send them with read/write
>>> stream = yield connect(ip, port) >>> msg = {'op': 'add', 'x': 10, 'y': 20} >>> yield write(stream, msg) >>> response = yield read(stream)
To reduce verbosity we use an
rpcobject.
>>> remote = rpc(ip=ip, port=port) >>> response = yield remote.add(x=10, y=20)
One rpc object can be reused for several interactions. Additionally, this object creates and destroys many streams as necessary and so is safe to use in multiple overlapping communications.
When done, close streams explicitly.
>>> remote.close_streams()
Example¶
Here is a small example using distributed.core to create and interact with a custom server.
Server Side¶
from tornado import gen from tornado.ioloop import IOLoop from distributed.core import write, Server def add(stream, x=None, y=None): # simple handler, just a function return x + y @gen.coroutine def stream_data(stream, interval=1): # complex handler, multiple responses data = 0 while True: yield gen.sleep(interval) data += 1 yield write(stream, data) s = Server({'add': add, 'stream': stream_data}) s.listen(8888) IOLoop.current().start()
Client Side¶
from tornado import gen from tornado.ioloop import IOLoop from distributed.core import connect, read, write @gen.coroutine def f(): stream = yield connect('127.0.0.1', 8888) yield write(stream, {'op': 'add', 'x': 1, 'y': 2}) result = yield read(stream) print(result) >>> IOLoop().run_sync(f) 3 @gen.coroutine def g(): stream = yield connect('127.0.0.1', 8888) yield write(stream, {'op': 'stream', 'interval': 1}) while True: result = yield read(stream)(): # stream = yield connect('127.0.0.1', 8888) # yield write(stream, {'op': 'add', 'x': 1, 'y': 2}) # result = yield read(stream) r = rpc(ip='127.0.0.1', 8888) result = yield r.add(x=1, y=2) print(result) >>> IOLoop().run_sync(f) 3
Everything is a Server¶
Workers, Scheduler, and Nanny objects all inherit from Server. Each maintains separate state and serves separate functions but all communicate in the way shown above. They talk to each other by opening connections, writing messages that trigger remote functions, and then collect the results with read. | http://distributed.dask.org/en/1.10.2/foundations.html | CC-MAIN-2021-17 | refinedweb | 597 | 51.34 |
#include <lqr.h>
It is very important to note that using no rigidity masks at all is equivalent to use a rigidity mask over the whole image with all the values set to 1.0, but, when first adding a rigidity mask to a LqrCarver object, all the pixels outside the affected area will have their rigidity set to zero; therefore, the functions lqr_carver_rigmask_add_xy, lqr_carver_rigmask_add_area and lqr_carver_rigmask_add_rgb_area actually affect the whole image, despite their name.
All the functions must be called after lqr_carver_init and before lqr_carver_resize. If called multiple times over the same area, new values will replace the old ones.
The function lqr_carver_rigmask_add_xy sets the rigidity mask value of the x, y pixel of the image loaded into the LqrCarver object pointed to by carver
The function lqr_carver_rigmask_add_area adds a rigidity mask to an area of the image loaded in the LqrCarver object pointed to by carver.
The parameter buffer must point to an array of doubles of size width * height, ordered first by rows, then by columns.
The offset of the area relative to the image are specified through x_off and y_off. The rigidity mask area can exceed the boundary of the image, and the offsets can be negative.
The values in the given buffer are scaled by the overall rigidity value set when calling the function lqr_carver_init.
The function lqr_carver_rigmask_add can be used when the area to add is of the same size of the image loaded in the LqrCarver object and the offsets are 0.
The functions lqr_carver_rigmask_add_rgb_area and lqr_carver_rigmask_add_rgb are very similar to lqr_carver_rigmask_add_area and lqr_carver_rigmask_add, rigidity value is computed from the average of the colour channels, multiplied by the value of the alpha channel if present. For example, in RGBA images a white, nontransparent pixel is equivalent to a value of 1.0 when using a buffer in lqr_carver_rigmask_add_area.
The return values follow the Liquid Rescale library signalling system.
LqrRetVal(3), lqr_carver_init(3), lqr_carver_bias_add(3) | http://www.makelinux.net/man/3/L/lqr_carver_rigmask_add_rgb_area | CC-MAIN-2015-14 | refinedweb | 321 | 50.16 |
11-01-2018
04:44 AM
For a couple of days, I'm trying to set up a VPN server on a new Server 2016 VPS. It's just a standard Server 2016 without any other roles added.
When I finish the Routing and Remote Access Server setup, it says the RRAS service is ready to start, when I click on OK it starts the server, but then it's everytime hanging on 'Please wait while the Routing and Remote Access server finishes initialization'
No matter how long I wait, it doesn't get beyond that point.
Looking at the logs in the event viewer, I got several warnings and errors from WMI:
Warnings:A provide, WebAdministrationProvider, has been registered in the Windows Management Instrumentation namespace Root\WebAministration to use the LocalSystem account. This account ios privileged and the provider may cause a security violation if it does not correctly impersonate user requests.
Errors:
Event provider GatewayHealthMonitorProvider attempted to register query "select" from MSFT_GatewayHealthEvent" whose target class "MSFT_GatewayHealthEvent" in ///./root/Microsoft/Windows/RemoteAccess/GatewayHealthMonitor namespace does not exist. The query will be ignored.
And then some errors like the above one, but for RAServerPSProvider and RAMgmtPSProvider.
Now I have tried it multiple times to set up, even a new install twice, but I can't seem to get beyond this point.
Does someone know how to solve this problem?
View best response
11-01-2018
08:25 AM
11-02-2018
04:51 PM
I had contact with the VPS provider, after some searching, it turned out it was the network adaptor not being compatible with Windows Server. They assigned another type of network adaptor and now it's running as it should be. | https://techcommunity.microsoft.com/t5/windows-server-for-it-pro/server-2016-rras-setup-problems/td-p/281071 | CC-MAIN-2020-10 | refinedweb | 284 | 59.43 |
New tile based platform engine – part 7 – trampolines
After the AS3 translation of AS2 part 6, it’s time to introduce trampolines.
Just like ladders, trampolines have their rules. Here they are:
1 – A trampoline does not act like a wall, so a player can walk through it just like a cloud
2 – A trampoline should never be placed over another trampoline
3 – When you land on a trampoline, your vertical speed is inverted
How does this affect gameplay?
Just try to play and reach that unreachable platform as shown in the picture
Here it is the actionscript: Read more?
Triqui.com WordPress arcade now in beta
If you followed the Creation of a Flash arcade site using WordPress series you know I am about to release a WP Arcade theme with automatic game submission thanks to a plugin that parses MochiAds feed.
It’s time to open the beta to the public and make the site run.
It’s time to play, rate and comment games on triqui.com
As soon as you comment and rate games, I will be able to improve it thanks to your feedback.
Don’t forget I am about to release both the theme and the plugin for free a few days after the end of this month so your feedback is very appreciated.
Just a couple of technical information: the theme I am designing is a mix between the free version of Revolution Blog WordPress Theme and CSSEY, while the logo was made following Icey Styles in Photoshop tutorial.
New tile based platform engine – AS3 version
First, I would like to say the engine is not finished as I have a lot of tile types to add.
This is the AS3 version of part 6.
I am fixing some glitches and preparing another post about the theory of platform games
This is the code
- package {
- import flash.display.Sprite;
- import flash.ui.Mouse;
- import flash.events.Event;
- import flash.events.KeyboardEvent;
- public class newplat06_as3 extends Sprite {
- var over = "";
- var bonus_speed = 0;
- var press_left = false;
- var press_right = false;
- var press_up = false;
- var press_down = false;
- var press_space = false;
- var x_pos = 0;
- var y_pos = 0;
- var tile_size = 20;
- var ground_acceleration = 1;
- var ground_friction = 0.8;
- var air_acceleration = 0.5;
- var air_friction = 0.7;
- var ice_acceleration = 0.15;
- var ice_friction = 0.95;
- var treadmill_speed = 2;
- var max_speed = 3;
- var xspeed = 0;
- var yspeed = 0;
- var falling = false;
- var gravity = 0.5;
- var jump_speed = 6;
- var climbing = false;
- var climb_speed = 0.8;
- var level = new Array();
- var player = new Array(5,1);
- var h:hero = new hero();
- public function newplat06_as3() {
- level[0] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1];
- level[1] = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1];
- level[2] = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 6, 1, 0, 0, 0, 0, 1];
- level[3] = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 6, 0, 0, 0, 0, 0, 1];
- level[4] = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 6, 0, 0, 0, 0, 0, 1];
- level[5] = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 6, 0, 0, 0, 0, 0, 1];
- level[6] = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 6, 1, 0, 0, 0, 0, 0, 1];
- level[7] = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 0, 6, 0, 0, 0, 0, 0, 0, 1];
- level[8] = [1, 1, 1, 1, 0, 0, 0, 0, 0, 5, 5, 0, 0, 0, 0, 0, 0, 6, 0, 0, 0, 0, 0, 0, 1];
- level[9] = [1, 1, 1, 1, 3, 3, 3, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 4, 4, 4, 4, 1, 1];
- create_level(level);
- addEventListener(Event.ENTER_FRAME,on_enter_frame);
- stage.addEventListener(KeyboardEvent.KEY_DOWN, key_down);
- stage.addEventListener(KeyboardEvent.KEY_UP, key_up);
- }
- public function key_down(event:KeyboardEvent) {
- if (event.keyCode == 32) {
- press_space = true;
- }
- if (event.keyCode == 37) {
- press_left = true;
- }
- if (event.keyCode == 38) {
- press_up = true;
- }
- if (event.keyCode == 39) {
- press_right = true;
- }
- if (event.keyCode == 40) {
- press_down = true;
- }
- }
- public function key_up(event:KeyboardEvent) {
- if (event.keyCode == 32) {
- press_space = false;
- }
- if (event.keyCode == 37) {
- press_left = false;
- }
- if (event.keyCode == 38) {
- press_up = false;
- }
- if (event.keyCode == 39) {
- press_right = false;
- }
- if (event.keyCode == 40) {
- press_down = false;
- }
- }
- public function create_level(l) {
- var level_container:Sprite = new Sprite();
- var level_height = l.length;
- var level_width = l[0].length;
- addChild(level_container);
- for (j=0; j<level_height; j++) {
- for (i=0; i<level_width; i++) {
- if (l[j][i] != 0) {
- var t:tile = new tile();
- t.x = i*tile_size;
- t.y = j*tile_size;
- t.gotoAndStop(l[j][i]);
- level_container.addChild(t);
- }
- }
- }
- x_pos = player[0]*tile_size+tile_size/2;
- y_pos = player[1]*tile_size+tile_size/2+1;
- h.x = x_pos;
- h.y = y_pos;
- level_container.addChild(h);
- }
- public function on_enter_frame(event:Event) {
- ground_under_feet();
- walking = false;
- climbing = false;
- if (press_left) {
- xspeed-=speed;
- walking = true;
- }
- if (press_right) {
- xspeed += speed;
- walking = true;
- }
- if (press_up) {
- get_edges();
- if (top_right == 6 || bottom_right == 6 || top_left == 6 || bottom_left == 6) {
- jumping = false;
- falling = false;
- climbing = true;
- climbdir = -1;
- }
- }
- if (press_down) {
- get_edges();
- if (over == "ladder") {
- jumping = false;
- falling = false;
- climbing = true;
- climbdir = 1;
- }
- }
- if (press_space) {
- get_edges();
- if (!falling && !jumping) {
- jumping = true;
- yspeed = -jump_speed;
- }
- }
- if (!walking) {
- xspeed *= friction;
- if (Math.abs(xspeed)<0.5) {
- xspeed = 0;
- }
- }
- if (xspeed>max_speed) {
- xspeed = max_speed;
- }
- if (xspeed<max_speed*-1) {
- xspeed = max_speed*-1;
- }
- if (falling || jumping) {
- yspeed += gravity;
- }
- if (climbing) {
- yspeed = climb_speed*climbdir;
- }
- if (!falling && !jumping && !climbing) {
- yspeed = 0;
- }
- xspeed += bonus_speed;
- check_collisions();
- h.x = x_pos;
- h.y = y_pos;
- xspeed -= bonus_speed;
- }
- public function ground_under_feet() {
- bonus_speed = 0;
- var left_foot_x = Math.floor((x_pos-6)/tile_size);
- var right_foot_x = Math.floor((x_pos+5)/tile_size);
- var foot_y = Math.floor((y_pos+9)/tile_size);
- var left_foot = level[foot_y][left_foot_x];
- var right_foot = level[foot_y][right_foot_x];
- if (left_foot != 0) {
- current_tile = left_foot;
- } else {
- current_tile = right_foot;
- }
- switch (current_tile) {
- case 0 :
- speed = air_acceleration;
- friction = air_friction;
- falling = true;
- break;
- case 1 :
- over = "ground";
- speed = ground_acceleration;
- friction = ground_friction;
- break;
- case 2 :
- over = "ice";
- speed = ice_acceleration;
- friction = ice_friction;
- break;
- case 3 :
- over = "treadmill";
- speed = ground_acceleration;
- friction = ground_friction;
- bonus_speed = -treadmill_speed;
- break;
- case 4 :
- over = "treadmill";
- speed = ground_acceleration;
- friction = ground_friction;
- bonus_speed = treadmill_speed;
- break;
- case 5 :
- over = "cloud";
- speed = ground_acceleration;
- friction = ground_friction;
- break;
- case 6 :
- over = "ladder";
- speed = ground_acceleration;
- friction = ground_friction;
- break;
- }
- }
- public function check_collisions() {
- y_pos += yspeed;
- get_edges();
- // collision to the bottom
- if (yspeed>0) {
- if ((bottom_right != 0 && bottom_right != 6) || (bottom_left != 0 && bottom_left != 6)) {
- if (bottom_right != 5 && bottom_left != 5) {
- y_pos = bottom*tile_size-9;
- yspeed = 0;
- falling = false;
- jumping = false;
- } else {
- if (prev_bottom<bottom) {
- y_pos = bottom*tile_size-9;
- yspeed = 0;
- falling = false;
- jumping = false;
- }
- }
- }
- }
- // collision to the top
- if (yspeed<0) {
- if ((top_right != 0 && top_right != 5 && top_right != 6) || (top_left != 0 && top_left != 5 && top_left != 6)) {
- y_pos = bottom*tile_size+1+8;
- yspeed = 0;
- falling = false;
- jumping = false;
- }
- }
- x_pos += xspeed;
- get_edges();
- // collision to the left
- if (xspeed<0) {
- if ((top_left != 0 && top_left != 5 && top_left != 6) || (bottom_left != 0 && bottom_left != 5 && bottom_left != 6)) {
- x_pos = (left+1)*tile_size+6;
- xspeed = 0;
- }
- }
- // collision to the right
- if (xspeed>0) {
- if ((top_right != 0 && top_right != 5 && top_right != 6) || (bottom_right != 0 && bottom_right != 5 && bottom_right != 6)) {
- x_pos = right*tile_size-6;
- xspeed = 0;
- }
- }
- prev_bottom = bottom;
- }
- function get_edges() {
- // right edge
- right = Math.floor((x_pos+5)/tile_size);
- // left edge
- left = Math.floor((x_pos-6)/tile_size);
- // bottom edge
- bottom = Math.floor((y_pos+8)/tile_size);
- // top edge
- top = Math.floor((y_pos-9)/tile_size);
- // adjacent tiles
- top_right = level[top][right];
- top_left = level[top][left];
- bottom_left = level[bottom][left];
- bottom_right = level[bottom][right];
- }
- }
- }
The result is the same, so no need to publish it... just download the source code.
New tile based platform engine – part 6 – ladders
It's time to introduce ladders.
Ladders are quite hard to do for one reason: there is a lot of ways of intending ladders.
Can the player jump when on a ladder?
Should a ladder act as a hole if the player walks on it?
These are only two of the many questions you may ask about ladders.
So I have to made some rules:
1) Player climbs the ladder up and down using UP and DOWN arrows
2) Player can jump when on a ladder
3) If the player is falling and encounters a ladder, he will keep falling until he presses UP or DOWN
4) Player can climb a ladder if at least one of its corners is in the ladder Read more
New tile based platform engine – part 5 – clouds
You asked for the clouds, and here they are.
Clouds are marked with a
5 in
level array.
Next stop... ladders... Read more
New tile based platform engine – theory behind the player
This post will explain the theory behind the player of my tile based plaftorm engine, and answer to some questions readers made commenting steps 1 to 4
First, you must know how Flash converts coordinates to pixels.
The player in this platform is a rectangle whose width is 12 pixels and height is 18 pixels.
Its origin is set to
(0, 0) so you should expect to have 6 pixels to the left, 6 to the right, 9 to the top and 9 to the bottom.
This is the theory that will make you fail your collision engine.
In this picture
You can see the red/white cross at in the center of the green rectangle (the player) representing the
(0, 0) origin of the object, but if you look at the red pixel (yes, it's a pixel because the image is magnified) at real
(0, 0) position, you will see it's not centered in the origin point as you could expect but it's moved one pixel to the right and one pixel to the bottom.
This happens because every pixel has a size, and its size is... one pixel.
So the red pixel starts at
(0, 0) but its size makes him end at
(1, 1).
For this reason, the right bottom yellow pixel it's not at
(6, 9) as someone can imagine, but at
(5, 8) while the left bottom one is a
(-6, 8).
Fail to determine the real player edges and you will mess up the collision engine.
The couple of red pixels at the bottom of the rectangle, located at
(-6, 9) and
(5, 9) is not part of the player sprite, it's just a couple of pixels representing the feet of the player. The left and the right one.
Checking both feet can make me know what kind of tile the player has under its feet.
Obviously a player can have one foot on a tile and one foot on another tile.
That's why I had to decide the main foot. If your player has one foot on a normal tile and one foot on a ice tile, do I have to apply ground or ice friction?
I decided to make the left foot as the main one. This means if the left foot is not in the air, that's the foot that will inherit ground friction. If the left foot is in the air, then the main foot will be the right one.
This routine can (and will) be optimized in two ways: the first way checking if both feet are on the ground and looking for the tile in the middle of them.
The second way id determining if the player is facing left or right then assigning the right or left feet as the main one.
The other pixels (the yellow ones) are used to check collisions between the player and the environment... but I will talk about it later.
Now I would like to answer a couple of questions:
Wouldn’t a matrix be better than an array of arrays? It would be all one variable, and it would make a level editor easier to design, in my opinion.
Sure, but it would make harder to design level "on the fly" directly from the source code. Obviously the final version will have a matrix and an editor.
There is an inconsistency in that if the player gets on the conveyor belt throws you off. A related inconsistency is that the -> conveyor belt doesn’t affect you until you are on all the way but the <- conveyor belt starts as soon as you touch it
This is due because of the primary feet issue I described before.
That's all at the moment... next step... clouds... then more theory.
New tile based platform engine – part 4
For all jumping fanatics out there, I developed the jumping routine.
You can jump hitting
SPACE, hope you like it.
From next post, I'll start explaining how I developed this script and obviously I will add a lot of tile types.
Now I need some decent pixel art to give the game a polished look.
And don't forget I am about to translate it into AS3... Read more
New tile based platform engine – part 3
In this 3rd step I introduced the gravity.
Now the hero starts on the top of the screen and he can walk "downhill" thanks to gravity.
During next step I'll introduce jumps, then a massive explication will follow.
And AS3 version, of course.
I have to say, making a platform engine is funny and a little harder than I expected. Read more
New tile based platform engine – part 2
In the second part of the engine, I added boundary walls.
Still raw code at the moment, but you can see it in action here.
It's the same thing as the one published in part 1, I just added walls and collisions.
One note... the hero now has its anchor point on its center, where in the previous example it was in the upper left corner) | http://www.emanueleferonato.com/2008/09/ | crawl-002 | refinedweb | 2,335 | 69.52 |
Abstract: When iterating over a heterogeneous collection, one way to differentiate between object types is with instanceof. Another way is to create a special visitor object that can be applied to the objects in the collection.
Welcome to the 40th edition of The Java(tm) Specialists' Newsletter, sent to over 2550 Java experts in over 70 countries. Despite my plea that you don't unsubscribe, I had a rather surprising number of unsubscriptions, as programmers expressed their outrage at my audacity by voting with their feet. My views are my own and that of my employer - since I am my own employer ;-). I'm working on a program at the moment and I do make sure that our JavaDocs are up to date by running a Doclet that tells me where I've forgotten a tag. Whenever I change a method, I nuke the comments, and then the Doclet tells me where I need to add a comment again.
The ideas in this newsletter were spawned by Inigo Surguy (inigosurguy@hotmail.com) who works in Lemington Spa in the United Kingdom. Inigo is the UK Head of Research and Development of Interactive AG. Inigo also pointed out BCEL to me, used to change byte code "on the fly". I will write about some application of that in future.
A few newsletters ago, I mentioned traffic fines, and that I had had only one speeding fine in all my life. Last Wednesday, I was on my way to a meeting with my auditor, I was late, and, hmmm, make that 2 traffic fines in all my life? The road between where I live and where my auditor works is notorious. The police tell you: "if you have a puncture on that road, please carry on driving slowly until you get to the next town. Don't worry about damaging your wheel - rather break your wheel than ..." Ok, I'm exaggerating a bit, but the point I'm making is that I had never seen a speed trap on that road, because the cops are too scared to hang around long enough to book you. Never, until last Wednesday. I was caught fair & square, doing 160km/h in a 120km/h zone. Fortunately, the cop was in a good mood, so we had a good laugh when he pulled over some cops who were speeding, and he kindly reduced my speed to 139km/h. The speeding fine ended up being ZAR 100, about US$ 8.50. I'd be quite interested to hear from you what type of punishment you would face in your country for getting caught doing 160km/h in a 120km/h zone ... [hk: in case there are any cops on this list, that story was purely ficticious :-]
javaspecialists.teachable.com: Please visit our new self-study course catalog to see how you can upskill your Java knowledge.
Warning: the comparator we use in this code is not implemented according to the specification. It can happen that the elements are stored out of order. A more complicated matching mechanism needs to be used to dispatch to the correct methods.
I'm getting tired. Not tired of writing newsletters, but tired of Java. Tired of writing the same code over and over again. For example:
// ... Iterator it = ages.iterator(); while(it.hasNext()) { Integer age = (Integer)it.next(); System.out.println("Now you're " + age + ", in 3 years time, you'll be " + (age.intValue() + 3)); }
I don't like that
while loop with the iterator.
Don't know why I don't like it, it just looks inelegant to
me. I like the weird
for loop for an iterator even
less:
// ... for(Iterator it = ages.iterator(); it.hasNext();) { Integer age = (Integer)it.next(); System.out.println("Now you're " + age + ", in 3 years time, you'll be " + (age.intValue() + 3)); }
Lastly, I don't like downcasting and I don't like the problems that occur when you have different types in a collection.
Before looking at a solution, I would like to show how I would use iterators normally:
import java.util.*; public class OldVisitingIteratorTest { public static void main(String[] args) { Collection c = new LinkedList(); for (int i=0; i<3; i++) c.add(new Integer(i)); Iterator it = c.iterator(); while(it.hasNext()) { // lots of brackets - looks almost like Lisp - argh System.out.println(((Integer)it.next()).intValue() + 10); } c.add(new Float(2.1)); c.add("Hello"); it = c.iterator(); while(it.hasNext()) { Object o = it.next(); if (o instanceof Integer) { System.out.println(((Integer)o).intValue() + 10); } else if (o instanceof Number) { System.out.println(((Number)o).intValue() + 20); } else if (o instanceof String) { System.out.println(((String)o).toLowerCase()); } else { System.out.println(o); } } it = c.iterator(); while(it.hasNext()) { System.out.println(((Integer)it.next()).intValue() + 10); } } }
The output from that code is:
10 11 12 10 11 12 22 hello 10 11 12 Exception in thread "main" java.lang.ClassCastException: java.lang.Float at OldVisitingIteratorTest.main(OldVisitingIteratorTest.java:32)
Instead of constructing an Iterator and going through the Iterator and doing some operation on its contents, why not pass in an object with an execute() method that is called with each element? After some speed-typing yesterday, while waiting for my students at a Design Patterns course at the Strand Beach Hotel near Cape Town to finish an exercise, I came up with:
import java.util.*; import java.lang.reflect.*; public class VisitingIterator { /** * Ordering methods in "best-fit" order. */ private static final Comparator METHOD_COMPARATOR = new Comparator() { public int compare(Object o1, Object o2) { Class paramType1 = ((Method)o1).getParameterTypes()[0]; Class paramType2 = ((Method)o2).getParameterTypes()[0]; return paramType1.isAssignableFrom(paramType2) ? 1 : -1; } }; /** * Threadsafe version of visit. * @param lock the object on which to synchronize * @param task is an Object with an execute(...) : void method */ public void visit(Collection c, Object task, Object lock) { synchronized(lock) { visit(c, task); } } /** * @param task is an Object with an execute(...) : void method */ public void visit(Collection c, Object task) { TreeSet methods = new TreeSet(METHOD_COMPARATOR); Method[] ms = task.getClass().getMethods(); for (int i=0; i<ms.length; i++) { if (ms[i].getName().equals("execute") && ms[i].getParameterTypes().length == 1) { methods.add(ms[i]); } } Iterator it = c.iterator(); while(it.hasNext()) { boolean found = false; Object o = it.next(); Iterator mit = methods.iterator(); while(!found && mit.hasNext()) { Method m = (Method)mit.next(); if (m.getParameterTypes()[0].isInstance(o)) { try { m.invoke(task, new Object[] { o }); } catch(IllegalAccessException ex) { // we were only looking for public methods anyway throw new IllegalStateException(); } catch(InvocationTargetException ex) { // The only exceptions we allow to be thrown from // execute are RuntimeException subclases throw (RuntimeException)ex.getTargetException(); } found = true; } } if (!found) throw new IllegalArgumentException( "No handler found for object type " + o.getClass().getName()); } } }
Instead of having that ugly
while loop, we can now
pass an object to the VisitingIterator and the correct
execute(...) method is called for each element
in the collection. The OldVisitingIterator now becomes:
import java.util.*; public class VisitingIteratorTest { public static void main(String[] args) { Collection c = new LinkedList(); for (int i=0; i<3; i++) c.add(new Integer(i)); VisitingIterator vit = new VisitingIterator(); vit.visit(c, new Object() { public void execute(Integer i) { System.out.println(i.intValue() + 10); } }); c.add(new Float(2.1)); c.add("Hello"); vit.visit(c, new Object() { public void execute(Object o) { System.out.println(o); } public void execute(Number n) { System.out.println(n.intValue() + 20); } public void execute(Integer i) { System.out.println(i.intValue() + 10); } public void execute(String s) { System.out.println(s.toLowerCase()); } }); vit.visit(c, new Object() { public void execute(Integer i) { System.out.println(i.intValue() + 10); } }); } }
The output from our new style is:
10 11 12 10 11 12 22 hello 10 11 12 Exception in thread "main" java.lang.IllegalArgumentException: No handler found for object type java.lang.Float at VisitingIterator.visit(VisitingIterator.java:62) at VisitingIteratorTest.main(VisitingIteratorTest.java:33)
Perhaps I've been smoking Java for too long, but I much prefer
that code to the
while(it.hasNext()) ... but I have
not had the chance to try this idea out "in the real world". I
will start using it and let you know if it makes code neater
(or not). I know that it will be less efficient, but then, Java
is so slow anyway, I'd rather have cool style than super-optimal
code.... | https://www.javaspecialists.eu/archive/Issue040-Visiting-Your-Collections-Elements.html | CC-MAIN-2020-45 | refinedweb | 1,397 | 50.73 |
The QPEApplication class implements various system services that are available to all Qtopia applications. More...
#include <qtopia/qpeapplication.h>
Inherits QApplication.
List of all member functions.
Simply by using QPEApplication instead of QApplication, a standard Qt application becomes a Qtopia application. It automatically follows style changes, quits and raises, and in the case of document-oriented applications, changes the currently displayed document in response to the environment.
To create a document-oriented application use showMainDocumentWidget(); to create a non-document-oriented application use showMainWidget(). The keepRunning() function indicates whether the application will continue running after it's processed the last QCop message. This can be changed using setKeepRunning().
A variety of signals are emitted when certain events occur, for example, timeChanged(), clockChanged(), weekChanged(), dateFormatChanged() and volumeChanged(). If the application receives a QCop message on the application's QPE/Application/appname channel, the appMessage() signal is emitted. There are also flush() and reload() signals, which are emitted when synching begins and ends respectively - upon these signals, the application should save and reload any data files that are involved in synching. Most of these signals will initially be received and unfiltered through the appMessage() signal.
This class also provides a set of useful static functions. The qpeDir() and documentDir() functions return the respective paths. The grabKeyboard() and ungrabKeyboard() functions are used to control whether the application takes control of the device's physical buttons (e.g. application launch keys). The stylus' mode of operation is set with setStylusOperation() and retrieved with stylusOperation(). There are also setInputMethodHint() and inputMethodHint() functions.
See also Qtopia Classes.
By default, QLineEdit and QMultiLineEdit have the Words hint unless they have a QIntValidator, in which case they have the Number hint. This is appropriate for most cases, including the input of names (new names being added to the user's dictionary). All other widgets default to Normal mode.
See also inputMethodHint() and setInputMethodHint().
See also setStylusOperation() and stylusOperation().
Currently, this is only used internally.
For applications, t should be the default, GuiClient. Only the Qtopia server passes GuiServer.
This signal is emitted when a message is received on this application's QPE/Application/appname QCop channel.
The slot to which you connect this signal uses msg and data in the following way:
void MyWidget::receive( const QCString& msg, const QByteArray& data ) { QDataStream stream( data, IO_ReadOnly ); if ( msg == "someMessage(int,int,int)" ) { int a,b,c; stream >> a >> b >> c; ... } else if ( msg == "otherMessage(QString)" ) { ... } }
Note that messages received here may be processed by qpe application and emitted as signals, such as flush() and reload().
This signal is emitted whenever a category is added, removed or edited. Note, on Qtopia 1.5.0, this signal is never emitted.
This signal is emitted when the user changes the clock's style. If ampm is TRUE, the user wants a 12-hour AM/PM clock, otherwise, they want a 24-hour clock.
Warning: if you use the TimeString functions, you should use TimeString::connectChange() instead.
See also dateFormatChanged().
This signal is emitted whenever the date format is changed.
Warning: if you use the TimeString functions, you should use TimeString::connectChange() instead.
See also clockChanged().
Shows and calls exec() on dialog. An heuristic approach is taken to determine the size and maximization of the dialog.
nomax forces it to not be maximized.
Under Qtopia Phone Edition this function does nothing. It is not possible to grab the keyboard under Qtopia Phone Edition.
See also ungrabKeyboard().
See also setInputMethodHint() and InputMethodHint.
See also setInputMethodHint() and InputMethodHint.
See also setMenuLike().
See also setKeepRunning().
This signal is emitted whenever an AppLnk or DocLnk is stored, removed or edited. linkFile contains the name of the link that is being modified.
See also inputMethodHint() and InputMethodHint.
For example, the phone key input method includes support for the names input methods:
The effect in the phone key input method is to modify the binding of phone keys to characters (such as making "@" easier to input), and to add additional "words" to the recognition word lists (such as "www").
If the current input method doesn't understand the hint, it will be ignored.
See also inputMethodHint() and InputMethodHint.
See also keepRunning().
Menu Like dialogs typically have a single list of options, and should accept the dialog when the select key is pressed on the appropriate item, and when a mouse/stylus is used to click on an item - just like menus. Menu Like dialogs should only have one widget accepting key focus.
By marking a dialog as Menu Like Qtopia will map the Back key to reject the dialog and will not map any key to accept the dialog - you must do that yourself.
The default dialog behaviour is to include a cancel menu option in the context menu to reject the dialog and to map the Back key to accept the dialog.
Only modal dialogs can be Menu Like.
See also isMenuLike().
See also stylusOperation() and StylusMode.
This method temporarily overrides the current global screen saver with the screenSaverHint hint, allowing applications to control screensaver activation during their execution.
First availability: Qtopia 1.6
See also screenSaverHint.
Shows dialog. An heuristic approach is taken to determine the size and maximization of the dialog.
nomax forces it to not be maximized.
This calls designates the application as a document-oriented application.
The mw widget must have this slot: setDocument(const QString&).
See also showMainWidget().
See also showMainDocumentWidget().
See also setStylusOperation() and StylusMode.
This signal is emitted when the time changes outside the normal passage of time, i.e. if the time is set backwards or forwards.
If the application offers the TimeMonitor service, it will get the QCop message that causes this signal even if it is not running, thus allowing it to update any alarms or other time-related records.
This signal is emitted whenever the mute state is changed. If muted is TRUE, then sound output has been muted.
This signal is emitted if the week start day is changed. If startOnMonday is TRUE then the first day of the week is Monday; if startOnMonday is FALSE then the first day of the week is Sunday.
This file is part of the Qtopia platform, copyright © 1995-2005 Trolltech, all rights reserved. | http://doc.trolltech.com/qtopia2.2/html/qpeapplication.html | crawl-001 | refinedweb | 1,034 | 50.84 |
The
JSONAPIAdapter is the default adapter used by Ember Data. It is responsible for transforming the store's requests into HTTP requests that follow the JSON API format.
The JSONAPIAdapter uses JSON API conventions for building the url for a record and selecting the HTTP verb to use with a request. The actions you can take on a record map onto the following URLs in the JSON API adapter: paths can be prefixed with a
namespace by setting the namespace property on the adapter:
import DS from 'ember-data'; export default DS.JSONAPIAdapter.extend({ namespace: 'api/1' });
Requests for the
person model would now target
/api/1/people/1.
An adapter can target other hosts by setting the
host property.
import DS from 'ember-data'; export default DS.JSONAPIAdapter.extend({ host: '' });
Requests for the
person model would now target.
© 2017 Yehuda Katz, Tom Dale and Ember.js contributors
Licensed under the MIT License. | https://docs.w3cub.com/ember/classes/ds.jsonapiadapter/ | CC-MAIN-2019-26 | refinedweb | 153 | 57.16 |
This scratchpad will let us keep track of who's doing what, etc. If you're working on a draft or have bits and pieces of something finished, link it into here under the proper section so that we can keep track of everything.
Who: Frank Manola
Write up the RDF model: intro, background (URIs, XML, namespaces), RDF model
Who: Eric Miller, Dan Brickley
Work on the RDF Schema section.
Who: Sean B. Palmer, Aaron Swartz, Eric Miller
Provide a user scenario to put it all together. Likely will be photo metadata.
Who: All
Collect various hints and tips for collecting into the primer / RDF cookbook. | http://www.w3.org/2001/09/rdfprimer/todo | CC-MAIN-2016-22 | refinedweb | 106 | 81.73 |
I have to get a list of entries from a text file to display in a JFrame. I was able to do that but everything is displaying on one line.
How do I get it to display as it is listed in the text file?
Here is what the text file contains:
Rick Sebastian 49 Rick@home.com 813-111-2222
John Doe 35 John@work.com 813-222-3333
Peggy Bundy 50 Peggy@theoffice.com 813-333-4444
Al Bundy 55 Al@thebar.com 813-444-5555
Jane Doe 27 Jane@thebeach.com 813-555-6666
Here is how it is displayed in the program:
Attachment 1857
Here is the code I'm working with:
Code java:
package contact.display.info; import java.io.*; import javax.swing.*; /** * * @author Rick Sebastian */ public class ContactDisplayInfo extends JFrame { JTabbedPane jtp1=new JTabbedPane(); JPanel jp1=new JPanel(); JTextArea t1=new JTextArea(); ContactDisplayInfo () throws Exception { super("ContactDisplayinfo"); FileReader f=new FileReader("C:/Users/Me/Documents/ContactInformationProgram/ContactInfo.txt"); BufferedReader brk=new BufferedReader(f); String s; while((s=brk.readLine())!=null){ t1.append(s); } jp1.add(t1); jtp1.addTab("Tab1",t1); add(jtp1); //setSize(400, 400); setLocationRelativeTo(null); } public static void main(String args[]) throws Exception { ContactDisplayInfo cdi=new ContactDisplayInfo(); cdi.pack(); cdi.setVisible(true); cdi.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); } }
I set the frame size to 400 400 and that only changed the size of the frame. The text still extended out in one line. i know I need a word wrap of some sort, but I'm not sure how that's done, or how I would get it to group by individual entry. Ideally I would have 5 lines with 5 entries being displayed.
Any help you can offer would be greatly appreciated!
Rick | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/25299-need-line-breaks-output-display-printingthethread.html | CC-MAIN-2016-18 | refinedweb | 290 | 50.23 |
XML::XPath::XMLParser - The default XML parsing class that produces a node tree
my $parser = XML::XPath::XMLParser->new( filename => $self->get_filename, xml => $self->get_xml, ioref => $self->get_ioref, parser => $self->get_parser, ); my $root_node = $parser->parse;
This module generates a node tree for use as the context node for XPath processing. It aims to be a quick parser, nothing fancy, and yet has to store more information than most parsers. To achieve this I've used array refs everywhere - no hashes. I don't have any performance figures for the speedups achieved, so I make no apologies for anyone not used to using arrays instead of hashes. I think they make good sense here where we know the attributes of each type of node.
All nodes have the same first 2 entries in the array: node_parent and node_pos. The type of the node is determined using the ref() function. The node_parent always contains an entry for the parent of the current node - except for the root node which has undef in there. And node_pos is the position of this node in the array that it is in (think: $node == $node->[node_parent]->[node_children]->[$node->[node_pos]] )
Nodes are structured as follows:
The root node is just an element node with no parent.
[ undef, # node_parent - check for undef to identify root node undef, # node_pos undef, # node_prefix [ ... ], # node_children (see below) ]
[ $parent, # node_parent <position in current array>, # node_pos 'xxx', # node_prefix - namespace prefix on this element [ ... ], # node_children 'yyy', # node_name - element tag name [ ... ], # node_attribs - attributes on this element [ ... ], # node_namespaces - namespaces currently in scope ]
[ $parent, # node_parent - the element node <position in current array>, # node_pos 'xxx', # node_prefix - namespace prefix on this element 'href', # node_key - attribute name '', # node_value - value in the node ]
Each element has an associated set of namespace nodes that are currently in scope. Each namespace node stores a prefix and the expanded name (retrieved from the xmlns:prefix="..." attribute).
[ $parent, <pos>, 'a', # node_prefix - the namespace as it was written as a prefix '', # node_expanded - the expanded name. ]
[ $parent, <pos>, 'This is some text' # node_text - the text in the node ]
[ $parent, <pos>, 'This is a comment' # node_comment ]
[ $parent, <pos>, 'target', # node_target 'data', # node_data ]
If you feel the need to use this module outside of XML::XPath (for example you might use this module directly so that you can cache parsed trees), you can follow the following API:
The new method takes either no parameters, or any of the following parameters:
filename xml parser ioref
This uses the familiar hash syntax, so an example might be:
use XML::XPath::XMLParser; my $parser = XML::XPath::XMLParser->new(filename => 'example.xml');
The parameters represent a filename, a string containing XML, an XML::Parser instance and an open filehandle ref respectively. You can also set or get all of these properties using the get_ and set_ functions that have the same name as the property: e.g. get_filename, set_ioref, etc.
The parse method generally takes no parameters, however you are free to pass either an open filehandle reference or an XML string if you so require. The return value is a tree that XML::XPath can use. The parse method will die if there is an error in your XML, so be sure to use perl's exception handling mechanism (eval{};) if you want to avoid this.
The parsefile method is identical to parse() except it expects a single parameter that is a string naming a file to open and parse. Again it returns a tree and also dies if there are XML errors.
This file is distributed as part of the XML::XPath module, and is copyright 2000 Fastnet Software Ltd. Please see the documentation for the module as a whole for licencing information. | http://search.cpan.org/~manwar/XML-XPath-1.37/lib/XML/XPath/XMLParser.pm | CC-MAIN-2016-50 | refinedweb | 611 | 58.11 |
Comments about what the code is up to are included below.
This code is written to simply demonstrate a few things, as a reference. I chose to write it a bit "dirty" for the sake of keeping what it does demonstrate as clear as possible. For example it makes some assumptions about the nature of the data it is working with. In a real program, this may not be the right thing to do.
The code will generate a warning you can ignore:
Note: Lister.java uses unchecked or unsafe operations.
This is a result of not doing complete checking to make sure that the Object types passed to System.out.println() are compatible, i.e., have a toString() method. It's one of the things I'm not worrying about for the sake of this short demo.
/* Playing around with Lists, in basic ways.
We use the ArrayList class as an implementation of List,
since at this point we're just using a List's features,
rather than learning about building our own List-based
class.
-Mark Graybill, Apr. 2010
*/
import java.util.*; // include the package with List in it.
public class Lister{
public static void main(String arg[]){
// Create some Lists to play with.
ArrayList lister = new ArrayList();
ArrayList dave = new ArrayList();
ArrayList kryten = new ArrayList();
// Put some things in the lister List, manually.
lister.add("The End");
lister.add("Future Echoes");
lister.add("Confidence and Paranoia");
lister.add("Thanks for the Memories");
// Print the current lister list.
System.out.print(lister);
System.out.println();
/* Get a sublist from lister.
This will get items at indices 1 and 2. Stops
short of item 3. I.e., gets sublist from element
1 up to, but not including, 3. */
dave.addAll(lister.subList(1,3));
System.out.print("Sublist elements 1 and 2: ");
System.out.print(dave);
System.out.println();
// Put something in the kryten list.
kryten.add("The Rimmer Experience");
kryten.add("Ace");
// See if kryten is in the lister list.
if (lister.containsAll(kryten)){
System.out.println("All of kryten is in lister.");
}
else {
System.out.println("Items in kryten aren't in lister.");
}
// Do the same thing with the dave list.
if (lister.containsAll(dave)){
System.out.println("Items in dave are all in lister.");
}
else {
System.out.println("Items in dave aren't in lister.");
}
// Step through the list with a for-each
System.out.println(); // Get a blank line.
System.out.println("Items in Lister:"); // Title the list.
for (Object epname: lister){ // Print the list.
System.out.println(epname);
}
/* Note: the above code is a little bit "dirty", in that we're counting on
the objects we get from lister to be printable. Since we've had tight
control over what goes in, we can get away with this. If you make a List
that lets anything in, make sure they've all got toString() methods, or
otherwise take care of object types. Generics are a partial solution to
this problem (though they're less than perfect.)
*/
// Get an iterator and use it.
ListIterator iter = lister.listIterator();
// Iterator starts out with the first element in List as its "next()"
System.out.println();
System.out.println(iter.next());
// Doing next() has advaned our iterator in the list,
// so if we do it again:
System.out.println(iter.next());
// We see we have advanced. And we advance again.
// We can it to go through the list, use hasNext() to watch for the end.
System.out.println("\nGoing forward:");
while (iter.hasNext()){
System.out.println(iter.next());
}
// And we can go backward:
System.out.println("\nGoing backward:");
while (iter.hasPrevious()){
System.out.println(iter.previous());
}
// Print the element in front of "Thanks for the Memories".
System.out.println("\nItem before \"Thanks for the Memories\" is:");
while (iter.hasNext()){
if ( iter.next().toString().contentEquals("Thanks for the Memories") ){
iter.previous(); //we found it, now move back to it.
System.out.println(iter.previous()); // Two steps to get back.
break; // We've done it, now get out of the loop.
}
}
} // end of main()
} // end of Lister
Since I'm not using generics in this code, some of the information about what's stored in the list gets lost--everything becomes an "Object". Not using generics limits the usability of a Collection quite a lot, so it's usually best to use generics as it beats writing all the code to work around the lack of known class types for data.
For example, using Java's Generics, I can declare a type for items in the list that Java will enforce:
ArrayList<String> lister = new ArrayList<String>();
In this way, I can now treat items that come out of lister as a String, too:
if (iter.next().contentEquals("Thanks for the Memories")){ ...
Watch for more about Generics in another article. | http://beginwithjava.blogspot.com/2010/04/lists-in-java-sample-code.html | CC-MAIN-2018-39 | refinedweb | 799 | 60.92 |
Embedded Java with GCJ
After building the cross compiler and root filesystem, building your GCJ application will be a bit anticlimactic. We'll start with the traditional hello world:
class hello {
    static public void main(String argc[]) {
        System.out.println("hello from GCJ");
    }
}
Following Java convention, this source resides in the hello.java file. To compile the file, enter:

powerpc-750-linux-gnu-gcj hello.java --main=hello -o hello-java
What's going on with --main=hello? Any class in the program could define a method with a suitable entry point, so the --main=hello option tells the linker to use the main method in the hello class when linking. Leaving off this option results in a link error, “undefined reference to main”, which, to the uninitiated, is confusing, because your class does contain a main.
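To see why the linker needs the hint, consider a program with more than one candidate entry point. This is only a sketch, and the class names are invented for illustration:

```java
// Both classes define a valid `main`, so the toolchain cannot guess which
// one is the program's entry point. On a JVM you choose by naming the class
// on the command line; with GCJ you choose at link time, e.g.
// --main=Greeter or --main=Launcher.
class Greeter {
    static String entry() { return "Greeter.main"; }

    public static void main(String argc[]) {
        System.out.println("started via " + entry());
    }
}

class Launcher {
    static String entry() { return "Launcher.main"; }

    public static void main(String argc[]) {
        System.out.println("started via " + entry());
    }
}
```

Linking with --main=Greeter produces a binary that announces the Greeter entry point; relinking the same objects with --main=Launcher switches the entry point without touching the source.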
Download this file to the target and run it from the chrooted shell. You'll see:
# ./hello-java
hello from GCJ
At this point, development carries on much like any other Java project, with the exception of invoking the GCJ cross compiler instead of the native javac compiler.
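For instance, anything within libgcj's coverage of the standard library is fair game. Here is a small sketch (the class name is hypothetical) that builds the same way as the hello example with either toolchain:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// A small program exercising the collections classes in libgcj. Raw types
// are used deliberately: generics postdate the GCJ releases this article
// targets.
class Sort {
    static List sorted(List in) {
        List out = new ArrayList(in);  // copy, so the input is untouched
        Collections.sort(out);
        return out;
    }

    public static void main(String argc[]) {
        List names = new ArrayList();
        names.add("javac");
        names.add("gcj");
        names.add("jikes");
        System.out.println(sorted(names));  // prints [gcj, javac, jikes]
    }
}
```

Compiling it for the target is the same invocation as before, e.g. powerpc-750-linux-gnu-gcj Sort.java --main=Sort -o sort.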
In this example, the root filesystem weighs in at more than 20MB. Because many embedded systems use Flash memory, which is considerably more expensive on a per-megabyte basis than disk-based storage systems, a minimally sized root filesystem is frequently an important business requirement. One easy way to reduce the size of your root filesystem is to link your application statically. Although this may seem counterintuitive at first, as you'll have an extra copy of libc code in your application, recall that libgcj.so contains the entire Java standard library. Most applications use a fraction of the standard Java library, so using static linking is a great way to winnow out the unused code in the library. Just be sure to strip the resulting binary; otherwise, you'll be shocked at the size due to the amount of debugging information in libgcj.so.
From the article, you've seen that creating software for an embedded system using GCJ is something that can be reasonably accomplished using tools already present in the Open Source community. Although there are a few minor nits, configuring the root filesystem doesn't require a heroic effort; you just need a slightly different set of libraries than you otherwise would. For applications requiring a smaller root filesystem, we've seen how you can use static linking of your application to greatly reduce the root filesystem's size. In short, GCJ is a practical solution for using Java on a resource-constrained embedded system—worthy of consideration for your next project.
Gene Sally has been working with Linux in one form or another for the last ten years. These days, Gene focuses his attention on helping engineers use Linux on embedded targets. Feel free to contact Gene at gene.sally@gmail
I compiled it and it didn't work. I repeat: it did NOT work.
Rather unfortunate.
Typographical Error
While everyone noted the sample "hello" program does not compile as-is, I'm surprised no one expressed why but instead offered alternatives. The reason the sample program does not compile is because there are two typographical errors in it (I'm sure it was unintended - an artifact as a result of using a Word Processor such as MS Word).
Class hello {
Static public void main(String argc[]) {
System.out.println("hello from GCJ");
}
}
The keywords "Class" and "Static" should not be capitalized. It should look like this:
class hello {
static public void main(String argc[]) {
System.out.println("hello from GCJ");
}
}
The capitalization changed the keywords "class" and "static" into Object names. The compiler was trying to find a "Class.class" to make a "Class" object named "hello" and couldn't. The same with "Static."
This is a great article, and the author did a great job writing it. Unfortunately formatting the document (and the code, especially) is a bear. Just getting the code snippet above to look the way it does was a lot of work - way more work than just typing.
~lum
Won't compile with gcj-4.1 on my Debian
Hmmm... maybe this is not related to the specifics of the embedded environment at all, but I noticed that the hello world example won't compile with "gcj-4.1 hello.class --main=hello -o hello-java" on my Debian testing system:
hello.class:1: error: Class or interface declaration expected.
Class hello {
^
hello.class:2: error: Class or interface declaration expected.
Static public void main(String argc[]) {
^
hello.class:3: error: Class or interface declaration expected.
System.out.println("hello from GCJ");
^
hello.class:4: error: Class or interface declaration expected.
}
^
hello.class:5: error: Class or interface declaration expected.
}
^
5 errors
Dunno what's wrong
After a quick search on the net, I found that, written this way, it compiles:
public class hello {
static public void main(String argc[]) {
System.out.println("hello from GCJ");
}
}
This worked for me:
class hello
{
static public void main( String argc[] )
{
System.out.println( "hello from GCJ" );
}
}
Notice that none of the keywords like "class" are capitalized. Also, the typical naming convention for Java is to call the source file hello.java; running the javac command turns it into byte code in hello.class. We're not using javac, so there won't be a hello.class file. So, if you name your file this way, the compile step would be:
gcj hello.java --main=hello -o hello-java
Hope this helps.
After reading this article, I just realized how we can simplify our seemingly complicated problems, which in fact can be solved very easily! It actually helps us to use GCJ, which is really a part of the GCC compiler suite, in a Linux project! The advantage is that it can be coded in a high-level language like Java! It gives us a detailed tutorial about its advantages and pitfalls, the host and target configuration and, lastly, if you are happy with what you have read, the step by step instructions to build your GCJ cross compiler! Very cool and informative read indeed!
Statically linking
I tried statically linking a simple "Hello, World!" program but received the following error:
/usr/bin/ld: cannot find -lgcj
Here's the command I used to compile:
gcj -static-libgcj -o hello --main=hello hello.java
Any ideas? | http://www.linuxjournal.com/article/8757?page=0,2&quicktabs_1=2 | CC-MAIN-2015-18 | refinedweb | 1,082 | 56.25 |
In the past few days, I couldn’t publish posts in our Flutter tutorial series because I was busy with an ASP.NET Core project. Anyway, let’s continue learning Flutter. In this post, we’ll learn how to use image assets in Flutter applications.
What is an asset?
Assets are the resources that we add locally with our application so that they can be used even if the application works offline. Assets can be an image, files, fonts etc.
Adding an image
To add an image asset, right-click on the project folder, select New -> Directory and name it images. Now add an image to the folder (just copy and paste… simple).
Using image assets
To use an asset in Flutter, it should be specified in pubspec.yaml. So, to add image assets to our application, open pubspec.yaml and uncomment the following lines.
  # assets:
  #  - images/a_dot_burr.jpeg
  #  - images/a_dot_ham.jpeg
This is where you can add image assets of your application.
Did you know?
In Android Studio, you can uncomment any line by selecting it and pressing Ctrl + /.
Now modify the lines as shown below. Replace mickey-mouse.png with the name of your image file.
  assets:
    - images/mickey-mouse.png
As you have made changes to pubspec.yaml, click Packages Get, located at the top-right corner of the code editor. If you get an error at this point, make sure that there is a proper indentation of two spaces (2 at the beginning of assets and 4 at the beginning of - images/mickey-mouse.png).
Final coding
Come back to main.dart and add the following code. In this code, I’ve simply defined a material app and placed an Image widget.
import 'package:flutter/material.dart';

void main() {
  runApp(
    MaterialApp(
      title: "My Application",
      home: Scaffold(
        appBar: AppBar(title: Text("My App")),
        body: Container(
          child: Image.asset("images/mickey-mouse.png"),
        ),
      ),
    ),
  );
}
In this tutorial, we’ve learned how to use image assets in Flutter applications. This was a very basic tutorial and we’ll learn more about aligning, scaling and styling image assets later. | https://www.geekinsta.com/using-image-asset-in-flutter/ | CC-MAIN-2021-31 | refinedweb | 344 | 59.3 |
HST Server Side Includes Support
Introduction
HST provides basic support for generating SSI (Server Side Includes) statements, so that external SSI processors like Apache, Nginx, LightHttpd and IIS can include certain components in an asynchronous way.
Server Side Includes (SSI) is a simple interpreted server-side scripting language used almost exclusively for the Web.
You can configure HST Components to be served as SSI markup instead of fully-rendered HTML markup. In this case, the SSI markup in the response will be processed by the external SSI processor, which replaces the SSI markup with the fully retrieved HTML markup in the final phase.
SSI has a simple syntax:
<!--#directive parameter=value parameter=value -->
Directives are placed in HTML comments so that if SSI is not enabled, users will not see the SSI directives on the page, unless they look at its source.
How to Make an HST Component Render SSI Markup?
An HST Page is constructed from a tree of HST Components, managed in Hippo Repository.
First of all, you should mark an HST Component as asynchronous by setting the hst:async property. If you set an HST Component as asynchronous, the HST Component and its descendants will be rendered asynchronously.
hst:async: true
Above configuration by default results in an asynchronously rendered HST Component window via client-side AJAX script calls.
See Asynchronous HST Components and Containers for details on asynchronous HST Components.
Now, asynchronous HST Components can also be rendered/aggregated on the server side by an external SSI Processor.
If you want to change the default AJAX asynchronous rendering behavior to SSI processing, you should add the following property:
hst:asyncmode: ssi
The default value of hst:asyncmode is ajax.
If an HST Component is configured as asynchronous with an hst:asyncmode of ssi, the rendered markup of the HST Component window will be SSI markup in the first phase like the following example:
<!--#include virtual="/news?_hn:type=component-rendering&_hn:ref=r1_r2" -->
The SSI Include source is an HST Component Rendering URL, which invokes the rendering of a specific HST Component window (the one having the 'r1_r2' namespace) instead of rendering the whole page.
So, HST's page response includes the HTML markup for all descendant HST Component windows except for the SSI-based asynchronous ones. The markup of the asynchronous HST Component windows is replaced by SSI markup as in the above example.
Now, the whole page can be processed by external SSI processors such as Nginx or Apache httpd. The external SSI processor will invoke the SSI Include URL (e.g, /news?_hn:type=component-rendering&_hn:ref=r1_r2) and replace the SSI markup by the retrieved HTML markup to serve the final page output to the client. | https://documentation.bloomreach.com/12/library/concepts/web-application/hst-2-server-side-includes-support.html | CC-MAIN-2019-47 | refinedweb | 450 | 51.18 |
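As an illustration (this fragment is not from the HST documentation, and the location and backend address are assumptions), an nginx front end that post-processes the SSI directives in proxied HST responses could be configured along these lines:

```nginx
# Illustrative only: let nginx resolve the <!--#include virtual="..." -->
# directives that HST emits for ssi-mode asynchronous component windows.
location / {
    ssi on;                                   # enable SSI processing
    proxy_pass http://127.0.0.1:8080/site/;   # HST web application (assumed)
}
```

With ssi on, nginx issues a subrequest for each component-rendering URL and splices the returned markup into the page before serving it to the client.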
31 March 2010 12:48 [Source: ICIS news]
SHANGHAI (ICIS news)-- China’s top offshore oil and gas producer CNOOC Limited said on Wednesday its net profit for the financial year of 2009 dropped 33.6% year-on-year to yuan (CNY) 29.5bn ($4.32bn) primarily due to a significant decline in the international oil price.
For the year ended 31 December 2009, the company’s total revenue was CNY105.2bn, representing a 16.5% decrease over the previous year, CNOOC said in a disclosure to the Hong Kong Stock Exchange.
The company’s oil and gas sales decreased 16.8% to CNY83.9bn in 2009 from CNY100.8bn in 2008, primarily as a result of significantly lower average realised oil prices in 2009, according to the disclosure.
CNOOC said in the filing the average realised price in 2009 for its crude oil decreased by 32.2% to $60.61/barrel compared with 2008, while the average realised price for its natural gas increased 4.7% to $4.01 per thousand cubic feet.
The company’s net production of oil and gas in 2009 rose 17.2% year-on-year.
“In 2010, we aim to increase our production by 21%-28%, and after then will continue such growth at a compound annual rate of 6%-10% for the next five years,” said Fu Chengyu, chairman and CEO of the company.
($1 = CNY6.83)
Async loops, and why they fail! Part 1
Mixing loops with async calls in JavaScript produces unexpected results; here's how to handle looping and reducing. In the second part of the article we'll deal with the complementary map() and filter() functions; in the third we'll work with some() and every(), and in the fourth and last article we'll consider find() and findIndex().
By the way, we will be using the latest version of Node.js, including Promise.allSettled(), Promise.prototype.finally(), and .mjs modules, allowing us to use import and export, so we'll also get a chance to try out these newer features! Thus, even if you don't get to actually require mixing loops and async calls, all this work will prove an interesting exercise in JavaScript coding.
Viewing the problem
In order to see what the problem is, let's start by having a fake async call that will just wait a short time (timeToWait) and return a given value (dataToReturn). For testing purposes, sometimes we'll also want to be able to make the call fail, so we'll include a third parameter (fail) that will be false by default. We will use the code below for most examples.
We will also want a logging function including the current time, and we can make do with the following.
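A minimal sketch of the two helpers just described might look like this; the names fakeAPICall and logWithTime are assumptions, but the parameters (timeToWait, dataToReturn, and fail defaulting to false) come straight from the text:

```javascript
// Hypothetical reconstruction of the mock async call: wait timeToWait ms,
// then resolve with dataToReturn, or reject if fail is true.
const fakeAPICall = (timeToWait, dataToReturn, fail = false) =>
  new Promise((resolve, reject) =>
    setTimeout(
      () => (fail ? reject(new Error("error")) : resolve(dataToReturn)),
      timeToWait
    )
  );

// Log a message prefixed with the current time.
const logWithTime = (...args) =>
  console.log(new Date().toTimeString().substr(0, 8), ...args);
```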
Now that we have these functions, let’s see some code. The following sequence works perfectly well — but we expected that, since there are no loops anywhere!
Running this, we get the results below, which are OK. The async calls don’t go out in parallel; the first requires 1 second, the next 2 seconds after that, and so on, so the whole experiment takes around 19 seconds.
13:05:28.264 START #1 -- sequential calls
13:05:29.269 data #1
13:05:31.270 data #2
13:05:34.274 data #3
13:05:39.274 data #5
13:05:47.283 data #8
13:05:47.283 END #1
We can also try this out with a common for() loop, and it will also work — which is to be expected, since no higher order functions are involved either.
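A sketch of the plain for(...) version, under the same assumptions (mock call, scalable delay unit); since the await sits in an ordinary loop inside an async function, it behaves exactly like the sequential version:

```javascript
const fakeAPICall = (ms, data) =>
  new Promise(resolve => setTimeout(() => resolve(data), ms));

// await inside a common for(...) loop works: there is no callback for the
// loop to "forget" to wait for.
const runWithFor = async (unit = 1000) => {
  const values = [1, 2, 3, 5, 8];
  const results = [];
  console.log("START #2 -- using a common for(...)");
  for (let i = 0; i < values.length; i++) {
    const data = await fakeAPICall(values[i] * unit, `data #${values[i]}`);
    console.log(data);
    results.push(data);
  }
  console.log("END #2");
  return results;
};
```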
Results are similar; calls go out in sequence as before, and so on.
13:05:47.284 START #2 -- using a common for(...)
13:05:48.285 data #1
13:05:50.286 data #2
13:05:53.290 data #3
13:05:58.292 data #5
13:06:06.296 data #8
13:06:06.297 END #2
However, let's now try with forEach()!
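A sketch of the failing attempt (again an assumption reconstructed from the output); messages are collected in an array here so the premature END is easy to inspect:

```javascript
const fakeAPICall = (ms, data) =>
  new Promise(resolve => setTimeout(() => resolve(data), ms));

// forEach() ignores the promise returned by its async callback, so the
// "loop" finishes immediately and END is pushed before any data arrives.
const runWithForEach = (unit = 1000) => {
  const messages = [];
  messages.push("START #3 -- using forEach(...)");
  [1, 2, 3, 5, 8].forEach(async v => {
    messages.push(await fakeAPICall(v * unit, `data #${v}`));
  });
  messages.push("END #3"); // reached before ANY of the calls has finished
  return messages;
};
```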
Oops!! The loop ends before any async calls are done!
13:06:06.297 START #3 -- using forEach(...)
13:06:06.298 END #3
13:06:07.299 data #1
13:06:08.298 data #2
13:06:09.298 data #3
13:06:11.298 data #5
13:06:14.298 data #8
The unexpected problem is well known; for example, in MDN we read “forEach expects a synchronous function - forEach does not wait for promises. Kindly make sure you are aware of the implications while using promises (or async functions) as forEach callback.”
This kind of problem also affects map(), reduce(), and others, so let's see how to work around this!
Looping
How can we solve the forEach() problem? As we'll be dealing with promises, the result of that method will be a promise itself. We want to go sequentially through the array, calling the provided callback each time — but not until the previous callback has finished. A simple way to manage this is to chain the new call to the previous one. We can use finally() so we'll be able to deal with failures (ignore them) as well.
We make do by using .reduce() and starting with a resolved promise. For each element in the array, we call the async function in the .finally() call for the previous promise. (We could also work with both .then() and .catch() but we'd have to duplicate code.) After a promise succeeds, the next function call will go out, traversing the whole array.
In all cases, we’ll be giving two implementations for each function — one by adding to the Array.prototype (though modifying a prototype is not usually recommended…) and one as a stand-alone function, and you can select the one that you prefer.
Let's see this alternative implementation work! We'll have a getForEachData() call that will get values from our mock API call. Just for variety, we'll have the call fail if we pass 2 as its argument. Full code is below.
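Pulling the description together, the stand-alone forEachAsync() and a simplified demo might look like the sketch below; getForEachData() is trimmed down (no per-call success/failure logging) relative to the run shown next, and its exact shape is an assumption based on the prose:

```javascript
// Sketch of the stand-alone forEachAsync() described above: reduce() chains
// each call in the .finally() of the previous promise, so calls go out one
// at a time, and a failed call doesn't stop the traversal.
const forEachAsync = (arr, fn) =>
  arr.reduce(
    (promise, value, index) => promise.finally(() => fn(value, index, arr)),
    Promise.resolve()
  );

// Simplified stand-in for getForEachData(): fails when passed 2.
const getForEachData = (v, i, a) => {
  console.log(`Calling - v=${v} i=${i} a=[${a}]`);
  return v === 2
    ? Promise.reject(new Error("error"))
    : Promise.resolve(`data #${v}`);
};

// The returned promise ends up rejected if any call failed, so we catch it.
forEachAsync([1, 2, 3, 5, 8], getForEachData).catch(() => {});
```

Note that Promise.prototype.finally() waits for a promise returned by its callback before settling, which is what keeps the calls sequential.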
Both implementations produce the same kind of result, so let’s see just one run.
17:26:16.476 START -- using .forEachAsync(...) method
17:26:16.480 Calling - v=1 i=0 a=[1,2,3,5,8]
17:26:17.482 Success - data #1
17:26:17.482 Calling - v=2 i=1 a=[1,2,3,5,8]
17:26:19.484 Failure - error
17:26:19.484 Calling - v=3 i=2 a=[1,2,3,5,8]
17:26:22.488 Success - data #3
17:26:22.488 Calling - v=5 i=3 a=[1,2,3,5,8]
17:26:27.494 Success - data #5
17:26:27.494 Calling - v=8 i=4 a=[1,2,3,5,8]
17:26:35.503 Success - data #8
17:26:35.503 END
Success! The sequence of logs is what we expected: an initial START, then all five calls, and a final END. And, as a plus, a very similar algorithm will work as an alternative for .reduce() — let's see how.
Reducing
Reducing an array to a single value using .reduce() also requires going through all its values sequentially. I'll admit, however, that calling a remote endpoint to do the reducing isn't a very likely situation, but let's just accept that for completeness' sake.
The exact type of code we wrote above will serve — but we have the reducing process with the initial value, and each promise has to pass the updated result to the next call. We can write the following, then.
If you compare the reduceAsync() code with the previous code for forEachAsync(), two things appear:
- we provide a promise, resolved to the initial value for reducing, to reduce()
- we aren't using .finally because we want to pass a value to the next promise; if the previous call was successful, we pass the updated accumulator, and if the call failed, we ignore it and pass the (unchanged) accumulator.
We can see this work; the code below uses our new implementation.
Our (fake) reducing call just sums the accumulator and the new value. When the passed value is 2, the call “fails” instead. The result of both loops is similar; let’s see just one.
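A sketch combining the reduceAsync() just described with the summing demo (the addAsync name is an assumption; the logic follows the prose):

```javascript
// Sketch of reduceAsync() as described: the chain starts from a promise
// resolved to the initial value; a successful call passes the updated
// accumulator along, and a failed call passes the accumulator on unchanged.
const reduceAsync = (arr, fn, initialValue) =>
  arr.reduce(
    (promise, value, index) =>
      promise.then(accum =>
        fn(accum, value, index, arr).then(
          result => result, // success: use the updated accumulator
          () => accum       // failure: ignore it, keep the old accumulator
        )
      ),
    Promise.resolve(initialValue)
  );

// The (fake) reducing call: adds the new value, but "fails" when passed 2.
const addAsync = (accum, v) =>
  v === 2 ? Promise.reject(new Error("error")) : Promise.resolve(accum + v);

reduceAsync([1, 2, 3, 5, 8], addAsync, 0).then(total =>
  console.log("END --", total) // logs "END -- 17"; the 2 was ignored
);
```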
17:37:35.646 START -- using .reduceAsync(...) method
17:37:35.650 Calling - v=1 i=0 a=[1,2,3,5,8]
17:37:36.652 Success - 1
17:37:36.653 Calling - v=2 i=1 a=[1,2,3,5,8]
17:37:38.655 Failure - error
17:37:38.655 Calling - v=3 i=2 a=[1,2,3,5,8]
17:37:41.658 Success - 4
17:37:41.658 Calling - v=5 i=3 a=[1,2,3,5,8]
17:37:46.663 Success - 9
17:37:46.663 Calling - v=8 i=4 a=[1,2,3,5,8]
17:37:54.671 Success - 17
17:37:54.671 END -- 17
All values — except 2, which was ignored because of the faked failure — were added, and the final result is 17; we’re done!
What about using .reduceRight() with async calls? In reduceAsync(), just change .reduce() to .reduceRight(), and you’ll have your reduceRightAsync().
Summary
In this first article, we've seen that some higher order functions fail when working with async calls or promises, and we developed alternative implementations for reduce() and forEach(). In the next article in the series we'll provide alternatives for map() and filter(), which also won't work correctly if used asynchronously.
References
This article is partially based on Chapter 6, “Programming Declaratively — A Better Style” of my “Mastering JavaScript Functional Programming” book, for Packt; some implementations are different.
Check MDN for the description of array.forEach(), array.reduce(), and array.reduceRight().
Code for all articles in the series is available at my repository: | https://medium.com/dailyjs/async-loops-and-why-they-fail-part-1-6909a7d134f2 | CC-MAIN-2021-17 | refinedweb | 1,413 | 77.23 |
Practical TypeScript
Peter walks through a simple Web page that retrieves and updates data on the server to summarize his best practices for creating the client-side portion of an ASP.NET application.
As I write this, the TypeScript team has published its roadmap on how it will get from the current version of TypeScript to version 1.0. Presumably, that means a stable, production-ready release of TypeScript is in sight. It seems like a good time, therefore, to sum up building client-side applications using TypeScript.
When I started this column back in April 2013, I wanted to ensure that I could, in TypeScript, do all the necessary things required by client-side applications, including leveraging a variety of JavaScript libraries. I wanted to answer the question, "Do I want to use TypeScript?"
Well, I do. Even in beta, I like the IntelliSense support, and I like having my code checked at compile time rather than run time. In Visual Studio 2013, I like that I can debug TypeScript code rather than having to switch to the generated JavaScript code (and I also like the way Visual Studio 2013 integrates retrieving TypeScript definition files for JavaScript libraries).
I'm still waiting for Visual Studio to include a test-driven development framework for testing TypeScript outside the browser. While I wait on that integration, Chutzpah and Qunit provide a testing environment for much of what I want to build. I wish that the TypeScript support in the applications I build for my clients who want me to program in Visual Basic was as slick as it is in my C# projects (that will come with version 1.0, I assume). I also wish that I had a girlfriend named "Lola." But, in the meantime, I'm happy enough with what I've got.
On the Server
With Entity Framework (EF) 5, the data-access support for my application consists of an entity class that maps to a table in my database, and a DbContext object to handle the conversion from rows to objects:
public class Customer
{
public int Id {get; set;}
public string FirstName {get; set;}
public string LastName { get; set; }
public string IsActive { get; set; }
public int CustomerType { get; set; }
}
public partial class SalesOrderContext:DbContext
{
public DbSet<Customer> Customers {get; set;}
}
This is boring, repetitious code to write, especially for the entity class, and the EF team has traditionally supplied a tool for generating this code (and, where that tool isn't available, you can just use EF's database-first designer to do the same thing). Typically, I end up modifying some of this tool-generated code to create more complex relations than "one entity = one table," so the tool's output isn't necessarily my final code. It's still a great starting point.
I've also started using the Breeze JavaScript library for managing my client-side objects, and will be integrating Breeze into future applications (and for all of you reading this and already using Breeze... well, yeah, I'm late to the party). To use Breeze, I create a Web API controller that makes my DbContext available to Breeze. I need to add three methods to the controller: one method to return my Customer entity objects, another method to handle updates and a third method to return the metadata that Breeze needs. I also need a constructor that wraps my DbContext object in Breeze's provider.
Listing 1 shows a sample controller with all of that code. In the method that returns Customer objects, I've added a restriction that limits the results to active customers and sorts the results by Customer name. Any query that Breeze makes to this method will be merged with this server-side query, which means, for example, that my sorting will be done by the database engine (almost always the best tool for that job).
[BreezeController]
public class CustomerManagementController : ApiController
{
EFContextProvider<SalesOrderContext> cm;
public CustomerManagementController()
{
cm = new EFContextProvider<SalesOrderContext>();
}
[HttpGet]
public string Metadata()
{
return cm.Metadata();
}
[HttpGet]
public IQueryable<Customer> Customers()
{
return from cust in cm.Context.Customers
where cust.IsActive == "T"
orderby cust.LastName, cust.FirstName
select cust;
}
[HttpPost]
public SaveResult SaveChanges(JObject saveBundle)
{
return cm.SaveChanges(saveBundle);
}
}
Adding Breeze to my project also adds a routing rule (beginning with the string "breeze") that directs clients to my Breeze controller.
I'm not a completely RESTful kind of guy, so I'll no doubt be adding transaction-oriented operations to this Web API service. But, out of the box, I can do a lot with this service.
ViewModel on the Client
On the client, I need to define a TypeScript interface to use with my server-side entity objects. This is the one for my Customer entity (I put it in a file called SalesOrderEntities.ts):
module SalesOrderEntities
{
export interface ICustomer
{
Id: number;
FirstName: string;
LastName: string;
IsActive: string;
CustomerType: number;
}
}
I'm currently shopping around for a Visual Studio add-in that, when aimed at an entity class (or classes), will generate a TypeScript interface for me.
I use Knockout to create a class that exposes two things: methods that retrieve objects from my server, and properties to hold those objects once I retrieve them -- a ViewModel, in other words.
At the start of my ViewModel code (in a file called SalesOrderVM.ts), I put the reference tags that give my TypeScript code access to the definition files for Breeze and Knockout, along with a reference to the file holding my interfaces (these aren't required in Visual Studio 2013). To save some typing, I use TypeScript's import statement to provide me with shorthand versions for Breeze's namespace and the namespace that I declared my Customer interface in:
/// <reference path="../typings/breeze/breeze-1.2.d.ts" />
/// <reference path="../typings/knockout/knockout.d.ts" />
/// <reference path="SalesOrderEntities.ts" />
module SalesOrderMvvm
{
import b = breeze;
import ent = SalesOrderEntities;
I need two properties on my ViewModel, both defined as KnockoutObservables: one to hold the collection of Customer objects (I called that property "customers") and one to hold the current Customer object (a property I called "cust"). Thanks to TypeScript, I can specify data types for all of these items and initialize them just by setting them up as optional parameters to my ViewModel's constructor. My constructor also initializes a variable to hold the Breeze client-side EntityManager that keeps track of my client-side objects and handles retrieving Customer objects from my Web API service:
export class CustomerVM
{
constructor(
public customers: KnockoutObservableArray<ent.ICustomer> =
ko.observableArray<ent.ICustomer>([]),
public customer: KnockoutObservable<ent.ICustomer> =
ko.observable<ent.ICustomer>(),
private em: b.EntityManager =
new b.EntityManager(""))
{ }
I could set up additional properties on my ViewModel for each of the properties on my Customer object. So far, I haven't needed to, and I'm hoping to continue to avoid that. As it is, I have to keep my client-side interface synchronized with my server-side entity class as I add and remove columns from the Customer table; I don't want to have to keep my ViewModel in sync, also.
Now I need a method to fetch Customer objects from my service. In a previous column, I constructed a Breeze query and then passed it to the Breeze EntityManager for processing. Breeze's fluent API includes a method called "using" that lets me pass the EntityManager to the query in the same statement that creates the query. Taking advantage of that method, the code that retrieves all the Customer objects from the server and then stuffs them into my ViewModel's "customers" property looks like this:
fetchAllCustomers()
{
b.EntityQuery.from("Customers")
.using(this.em)
.execute()
.then(dt => { this.customers(dt.results); });
}
The method for saving changes to the objects under Breeze's control is even simpler, since all I have to do is call the saveChanges method on Breeze's EntityManager (Breeze tracks changes to the objects it retrieves):
saveChanges()
{
this.em.saveChanges();
}
My final step is to make my ViewModel known to Knockout. To do that, I add one more TypeScript file to the project (UIIntegration.ts) with code to instantiate my ViewModel, and pass it to Knockout when the page starts up. That's one line of code in a jQuery ready function (along with the necessary references to TypeScript definition files and my code):
/// <reference path="../typings/knockout/knockout.d.ts" />
/// <reference path="../typings/jquery/jquery.d.ts" />
/// <reference path="SalesOrderMvvm.ts" />
$(function () {
ko.applyBindings(new SalesOrderMvvm.CustomerVM());
});
Tying to the UI
Of course, for this code to work in the browser, my HTML page needs script references to the JavaScript libraries I'm using and the JavaScript files generated from my TypeScript code:
<script src="Scripts/jquery-2.0.3.js"></script>
<script src="Scripts/knockout-2.3.0.js"></script>
<script src="Scripts/q.min.js"></script>
<script src="Scripts/breeze.min.js"></script>
<script src="Scripts/Application/SalesOrderEntities.js"></script>
<script src="Scripts/Application/SalesOrderMvvm.js"></script>
<script src="Scripts/Application/UIIntegration.js"></script>
Knockout lets me declaratively bind parts of my ViewModel to HTML elements in my page by adding an attribute called data-bind to the elements. To bind a ViewModel method to an event fired by an element in the page, I use Knockout's event binding, passing two things: the name of the element's event and the name of a method on my ViewModel. This example tells Knockout to bind the click event on two buttons to my ViewModel's fetchAllCustomers and saveChanges methods:
<input type="button" value="Get Customers"
  data-bind="click: fetchAllCustomers" />
<input type="button" value="Save Changes"
  data-bind="click: saveChanges" />
Knockout supports more complex declarative bindings and allows you to mix in procedural code. The following example, for instance, does many things to a dropdown list:
<select id='CustomerList'
data-bind="options: customers,
value: customer,
optionsCaption: 'Select a Customer',
optionsText:
function (cust) { return cust.FirstName() + ' ' + cust.LastName() }"/>
One reason I like Knockout is that it's primarily a declarative system: Specify what binding you want, set the parameters and it all works -- or should. There are problems with declarative systems, of course. You're limited to what the framework allows you to declare (as opposed to procedural code, where you can probably program around any problem, eventually). Debugging is often impossible in declarative programming. If you get the syntax right, everything works; get the syntax wrong and nothing works (compile time error messages are also often non-existent). And, of course, every declarative system has its own special syntax that you need to become familiar with. Still, I'm coming to value declarative programming, and Knockout is a great example of how it works. (For more about Knockout, see Kelly Adams and Mark Michaelis' two-part introduction.)
I need another binding to display data from the currently selected Customer and let the user update it. To bind a textbox, I use Knockout's value binding, this time passing one thing: the name of the ViewModel property holding the currently selected Customer object with the property on that Customer object that I want to bind to. However, since there will be times when no Customer is selected and that property is set to null, I actually pass Knockout a conditional statement that checks to see if there's anything in my customer property, and displays a message if there isn't:
<input id='CustId' type="text"
data-
And, with that, the user can now retrieve server-side entities with a button click, select a Customer from a dropdown list and update the Customer's last name from a textbox.
It's Still in Beta
If you've been following along in this series, I made some changes to my environment for this column, though I still continue to work in Visual Studio 2012. I updated all my NuGet packages, because Visual Studio started whining about my project using incompatible packages (I also applied Update 3 for Visual Studio 2012). Along the way, NuGet wanted to upgrade me to EF 6, but at the time, no compatible version of Breeze existed for EF 6 (there is a compatible version now). In the meantime, I stuck with EF 5 for this sample. Even with the upgrades, I still had TypeScript teething problems. My computer's memory would gradually fill up with instances of tsc*32.exe (the TypeScript compiler, I assume) and something called conhost.exe. I'd eventually have to terminate those processes in Task Manager or reboot my machine. TypeScript 0.9.5 seems to have addressed these issues.
While poking around in the definition file for Breeze, I noticed that it contained a reference to the TypeScript definition file for Q (Q is a JavaScript library that Breeze depends on for promise-based, asynchronous programming). That reference assumed that the Q definition file was in the same folder as the Breeze definition file. That's not the case in my project (the Q definition file is in the folder where NuGet put it), so I updated the Breeze reference to point to the Q definition file's actual location in my project. That change meant that I no longer needed a separate reference to the Q definition file in my code; other than that, nothing changed.
This application is trivially simple, so in my next columns, I'll do something more interesting: implement support for adding and deleting customers and then expand it to a master-detail page that displays all the sales orders for a selected customer.
AIR with JavaScript and AJAX (noob design issues)
chris.s.jordan Nov 13, 2009 3:54 PM
Okay, so as the subject states, I'm a noob when it comes to designing an AIR app. My question is kind of two fold:
First, as a matter of design, I've got a main window that has several drop-down type menus "File", "Preferences", "Help" that kind of thing. My plan was to keep reusing the main window for each of my different screens (unless a pop-up/dialog type screen was called for)
Anyway, the application I'm writing will, in part, handle a database of patrons. So under one of the menus (File in this case) I've got a "Patron" option. Clicking on "Patron" fires a function called newPatron() which in turn calls my function ebo.displayScreen('patron.htm'). This latter function takes the filename passed in and reads that file then dumps it's contents out to the main screen.
So, my main window consists (in part) of the following html:
<body onload="ebo.doLoad();">
<div id="content"></div>
</body>
then my displayScreen function looks like this:
function displayScreen(filename){
var my = {};
// get a handle on the file...
my.file = air.File.applicationDirectory.resolvePath(filename);
// get a handle on the stream...
my.stream = new air.FileStream();
//open the stream for read...
my.stream.open(my.file, air.FileMode.READ);
// read the data...
my.data = my.stream.readUTFBytes(my.stream.bytesAvailable);
// close the stream...
my.stream.close();
// update the screen (I'm using jQuery here)
$("#content").empty().append(my.data);
}
So anyway, this works like a champ. I click on "Patron" from my file menu and the screen changes to display the contents of patron.htm.
Currently, patron.htm just contains the following:
<div style="text-align:left;">
<input type="button" value="add" onclick="ebo.add(1,2);" />
</div>
<div id="result"><div>
ebo.add looks like this:
function add(a,b){
var my = {};
my.result = a + b;
$("#result").empty().append(my.result + "<br />");
}
So, if anyone hasn't guessed by now, the code contained in the ebo namespace gets included on the main screen when the application loads, and my problem is that despite the fact that once the patron.htm file is loaded in the content div by clicking on the menu option, my button on that screen refuses to work. I've even tried just having the button do an alert directly,
<input type="button" value="add" onclick="alert('AIRRocks!');" />
but even that fails!
So, I added some code to the main page to test a button from there...
<body onload="ebo.doLoad();">
<input type="button" value="add" onclick="ebo.add(1,10);" />
<div id="result"></div>
<div id="content"></div>
</body>
So, now when the main screen loads, I get an "add" button and when I click it the number 11 appears in the "result" div. When I click on the Patron menu option, the new html and javascript are loaded into the "content" div, but the add button in that area refuses to work!
What gives? I'm sure I'm missing something. So I guess the two questions are: is my scheme of loading new content into the main window by reading the contents of a file, flawed in some way? Or am I just missing something about making calls from this dynamically loaded content? It *looks* like it should work fine, but if I can't make javascript calls from the resultant content, then the concept is no good.
I realize this has been a somewhat long winded post, but hopefully it describes in enough detail the problem I'm having. I should maybe add that I've looked at what's in the DOM using the AIR HTML/JS Application Inspector and it looks like everything should work perfectly.
I hope someone out there can help me and might have the patience to explain where I've gone wrong. I might also mention that the only book I've read (or am reading) on AIR with JavaScript and AJAX is "Adobe AIR (Adobe Integrated Runtime) with Ajax: Visual QuickPro Guide"... it really hasn't covered aspects of what makes good design (like what's the best way to reuse the main application window to hold interactive content)... but anyway, there you have it.
Again, I hope someone can help me out.
Thanks!
Chris
1. Re: AIR with JavaScript and AJAX (noob design issues)commadelimited Nov 15, 2009 11:46 AM (in response to chris.s.jordan)
Chris...
I've not done this sort of thing yet in AIR, but if you're losing the namespace, try doing a $.load() in the main page instead of replacing the current one. That way you can keep your vars. Alternately you could load the new page a new window, which would keep references to vars set in the parent.
2. Re: AIR with JavaScript and AJAX (noob design issues)chris.s.jordan Nov 15, 2009 11:51 AM (in response to commadelimited)
Thanks for responding, Andy. I don't think I'm losing my namespace. That
thought had crossed my mind, which is why (and I thought I put this in my
original post) I tried putting a somple alert in the onclick event of the
button in my "patron.htm" file... but that simple alert doesn't even work.
:o(
Do you still think it's an issue with the namespace?
3. Re: AIR with JavaScript and AJAX (noob design issues)chris.s.jordan Nov 18, 2009 1:13 PM (in response to chris.s.jordan)
Does my original post need clarification? My post has been viewed 59 times (how many of those are me, I have no idea), but only one guy has even tried to help (thanks btw Andy).
Are there any OTHER forums out there that deal with the AIR platform? | https://forums.adobe.com/thread/524280 | CC-MAIN-2018-34 | refinedweb | 982 | 72.76 |
Hi,
Just wondering if anyone can help me out in turning this program into a working program.
Basically, everything works, apart from inserting the results in the last SYSTEM() line.
Any help would be greatly appreciated.
Code:
#include <cstdlib>
#include <iostream>
using namespace std;
int main()
{
string a;
string b;
string c;
cout << "Enter source path (where files will be moved from)\n";
cout << ": ";
cin >> a;
cout << "Enter destination path (where files will be moved to)\n";
cout << ": ";
cin >> b;
cout << "Enter minimum access age in days\n";
cout << "(This will move all files not accessed in x number of days)\n";
cout << ": ";
cin >> c;
cout << c << a << b;
system("robocopy //MOVE //MINLAD:" << c << a << b"");
system("PAUSE");
return EXIT_SUCCESS;
} | http://cboard.cprogramming.com/cplusplus-programming/74325-no-match-operator-compile-problem-printable-thread.html | CC-MAIN-2015-35 | refinedweb | 120 | 59.47 |
Now that we’ve got a solid grasp on how resolvers work, we can get into integrating a web application frontend. For this tutorial, we’ll be using React, and Apollo’s new hooks library. For those unfamiliar with hooks, these are a new way to build out logic inside React components. To get more up to speed on hooks, take a look at the React docs on them. This library is in beta at the time of writing, but very stable at this point and it is unlikely the features are using will change. I wanted to make sure that we are writing React code that’s as future-proof as possible, and hooks are here to stay. It will also make our code more concise for easier understanding.
Follow this GraphQL Series:
- 1. The Missing GraphQL Introduction
- 2. Resolvers: An in-depth look
- 3. Client Side Integration with Apollo Hooks
- 4. GraphQL Subscriptions with Apollo Server and Client
Table of Contents
Setup
Since we are adding a separate app into our repo, for those of you following along using the
resolver-end branch on the course github. First we will make a new folder in the root of the repo called
server and move every file into it, except for the
.gitignore. Now, we can continue with creating the web app.
For those just joining in, please clone this repo and
git checkout 3-client-integration-start.
Let’s get started by creating a new react app in the root folder:
npx create-react-app web
You should now have
server and
web inside the cloned repository. Now we can go inside the
web folder, and install the required dependencies for Apollo Client:
rm -rf node_modules npm install apollo-client apollo-link-http apollo-cache-inmemory graphql graphql-tag @apollo/react-hooks npm install
Since we started with npm on the server, this will keep it the same inside create-react-app, so that its easy to continue following along.
Next, let’s delete every file inside of
web/src except for
index.js and
App.js. We’ll replace the
index.js with the following:
import React from 'react'; import ReactDOM from 'react-dom'; import App from './App'; ReactDOM.render(<App />, document.getElementById('root'));
Next up, let’s create a file called
apolloSetup.js:
import { HttpLink } from 'apollo-link-http'; import { ApolloClient } from 'apollo-client'; import { InMemoryCache } from 'apollo-cache-inmemory'; const httpLink = new HttpLink({ uri: '', }); export default new ApolloClient({ cache: new InMemoryCache(), link: httpLink, });
Apollo tries to support a lot of use cases, so they have split out the setup into a few different packages. We will be adding a couple more later on, but for now this is a pretty simple setup.
We create a link that tells Apollo Client where our server is located. Then, when we create the client. We pass it the
httpLink and an in memory cache. Apollo supports other caching options which can be useful for offline apps. With this setup, a page refresh will clear our cache, but clicking back and forth between pages would be able to use the cache, instead of making calls to our server for data.
With our
src/apolloSetup.js in place, we can now update our
App.js:
import React from 'react'; import apolloClient from './apolloSetup'; import { ApolloProvider } from '@apollo/react-hooks'; import Books from './pages/Books'; const App = () => ( <ApolloProvider client={apolloClient}> <Books /> </ApolloProvider> ); export default App;
This wraps our app with Apollo, so that we can start using it. For this tutorial, we will make a few components and put them inside the Provider. In a more realistic React app, we would probably place some sort of router, like @reach/router inside of our provider.
Querying our Server
With our setup out of the way, let’s make our first query. Make a file called
web/src/pages/useBooksQuery.js:
import gql from 'graphql-tag'; import { useQuery } from '@apollo/react-hooks'; export const query = gql` query Books { books { title author } } `; export default () => useQuery(query);
Now we can make
Books.js inside of the same pages folder:
import React from 'react'; import useBooksQuery from './useBooksQuery'; const Books = () => { let { data } = useBooksQuery(); if (!data || !data.books) return null; return data.books.map(book => ( <div> <h3>{book.title}</h3> <p>{book.author}</p> </div> )); }; export default Books;
At this point, your
web/src directory should look like this:
├── App.js ├── apolloSetup.js ├── index.js └── pages ├── Books.js └── useBooksQuery.js
The Power of Apollo
Let’s open up the
server directory and run
npm start.
Next, open a new terminal window to start our web app. Type
npm start inside the
web directory.
You should see the two books on screen that our server returned!
Now we can take a step back and compare a REST request like we did in the first post. This time we understand how a real query looks with real code.
A REST call for this same data may have looked like this inside of a React component:
import { useState, useEffect } from 'react'; const useBooksRequest = () => { let [books, setBooks] = useState(); useEffect(() => { const getBooks = async () => { let response = await fetch(''); let books = await response.json(); setBooks(books); }; getBooks(); }, [setBooks]); return { books }; };
This would be a hook that runs when the component mounts, and sets the state of books when the response comes back. It would look very similar to our current Books:
const Books = () => { let { books } = useBooksRequest(); if (!books) return null; return books.map(book => ( <div> <h3>{book.title}</h3> <p>{book.author}</p> </div> )); };
So what are we getting out of using Apollo Client? Some of this goes back to that first article. But there is a lot more I can talk about this time around.
Comparing Redux / REST
Imagine if we had a delete book REST call. And also imagine that we have the latest book title in the navbar at the top of our site. This means, if we are on our books page, and we delete the newest book, what needs to happen?
- Call delete book endpoint
- Remove the book from the books array in our state
- If that book was the first book, we need to update the nav bar, since it is now deleted
Now let’s imagine we also care about showing a loading spinner while the delete is taking place. If we are using something like Redux, we are probably firing off this stuff in order:
- START_LOADING
- DELETE_BOOK
- SET_NAVBAR_BOOK_TITLE
- STOP_LOADING
At this point, this is feeling pretty complicated. This is where Apollo client comes in, and makes a huge difference. All of this code that you’d be used to writing with redux is no longer needed. Let’s see how.
In
web/src let’s make a new file called
Nav.js.
import React from 'react'; import useBooksQuery from './pages/useBooksQuery'; const Nav = () => { let { data } = useBooksQuery(); if (!data || !data.books) return null; return ( <div> This is our amazing nav bar. The latest book in our collection is{' '} {data.books[0].title} </div> ); }; export default Nav;
Next, we can edit
src/App.js to render the nav at the top:
import React from 'react'; import Nav from './Nav'; import apolloClient from './apolloSetup'; import { ApolloProvider } from '@apollo/react-hooks'; import Books from './pages/Books'; const App = () => ( <ApolloProvider client={apolloClient}> <React.Fragment> <Nav /> <Books /> </React.Fragment> </ApolloProvider> ); export default App;
Now, if we load up the page, you’ll see that text at the top. Once again, I am trying to demonstate an area where some state is needed in two different places, in a global element (nav bar) and on a single screen (books edit page). This is very realistic for many apps. You could imagine an edit user screen and showing user info in the nav, for instance.
The Benefits of Redux, With Less Code
Let’s go step by step showing each piece of the puzzle.
Adding the delete mutation
First, we need to add a delete mutation to our server. We will make this really simple and fake deleting a book. What I mean by this is, our server will respond saying the book is deleted, but refreshing the page will bring it back.
Let’s open up
server/src/resolvers.js and add a mutation at the bottom:
Mutation: { //existing one addAuthor: (_, { input: { name, twitter } }) => { return { name, twitter, }; }, deleteBook: (_, { title }) => true, },
Next, open up
typeDefs and add this to the existing Mutation type:
type Mutation { ... deleteBook(title: String!): Boolean }
Now we can restart our server. What we just did will add a mutation that takes a book title, and returns true every time. Realistically, we would want a book
id to be sent, and then actually delete it from our database.
Adding the mutation to the frontend
Let’s make
useDeleteBookMutation.js inside
web/src/pages:
import gql from 'graphql-tag'; import { useMutation } from '@apollo/react-hooks'; export const mutation = gql` mutation DeleteBook($title: String!) { deleteBook(title: $title) } `; export default () => { let [deleteBook] = useMutation(mutation); return deleteBook; };
You’ll notice this time we are importing
useMutation instead of
useQuery. The rest is pretty straight forward. This hook will give us a
mutate function that we can call when we want to hit the server. Let’s open up
Books.js and import({ variables: { title: book.title } })}> Delete Book </button> </div> )); }; export default Books;
At this point, if we load up our app, and hit delete, it hits the server, but nothing happens. This is expected. Since we are returning only a boolean, it’s up to us to tell Apollo what to do next.
Updating the local store
We can update the local cache after any request for immediate UI updates. Here’s how it looks, if we edit the
button code in
Books:
// Update the import at the top for `useBooksQuery to be: import useBooksQuery, { query } from './useBooksQuery'; ... <button onClick={() => mutate({ variables: { title: book.title }, update: store => { const data = store.readQuery({ query, }); store.writeQuery({ query, data: { books: data.books.filter( currentBook => currentBook.title !== book.title, ), }, }); }, }) } > Delete Book </button>
Well… that looks like a lot doesn’t it! If we run it, and click delete, you will notice that the navbar and the list of books both automatically update with the correct data, having the book removed. Before I explain this further, let’s refactor it slightly. Change
Books to(book.title)}>Delete Book</button> </div> )); }; export default Books;
You’ll notice that
deleteBook is only taking a book title. We can update
useDeleteBookMutation to do all of the logic we previously had in our render. This keeps our
Books much more readable. Open
useDeleteBookMutation and replace it with:
import gql from 'graphql-tag'; import { useMutation } from '@apollo/react-hooks'; import { query as booksQuery } from './useBooksQuery'; export const mutation = gql` mutation DeleteBook($title: String!) { deleteBook(title: $title) } `; export default () => { let [deleteBook] = useMutation(mutation); return title => { return deleteBook({ variables: { title }, update: store => { const data = store.readQuery({ query: booksQuery, }); store.writeQuery({ query: booksQuery, data: { books: data.books.filter( currentBook => currentBook.title !== title, ), }, }); }, }); }; };
Ahh, this looks much better. Our GraphQL logic is now co-located with its query. As a developer it is much easier to come in and see what is going on by keeping this section together.
So, how does it work? Any mutation can have an optional
update function. It gets the store (local cache from all queries) and the result of the mutation as a second parameter, which we are not using. Since we want to update the UI after this mutation, we first read a query:
const data = store.readQuery({ query: booksQuery, });
This gives us the data just like running a query with
useQuery except it is grabbed from our
InMemoryCache that we setup. Next, all we do is tell apollo to write a new object to the store for that query:
store.writeQuery({ query: booksQuery, data: { books: data.books.filter(currentBook => currentBook.title !== title), }, });
In the code above, I am telling apollo to update the
booksQuery data to filter out any book that has the title we passed into the mutation.
Apollo also gives us a loading indicator built in to our queries. If you want to show a spinner or some other UI element when something is loading, it would look like this, inside
Books.js
const Books = () => { let { data, loading, refetch } = useBooksQuery(); let mutate = useDeleteBookMutation(); if (loading) return <div>loading...</div>; if (!data || !data.books) return null; return data.books.map(book => ( <div> <h3>{book.title}</h3> <p>{book.author}</p> <button onClick={() => refetch()}>Reload Books</button> <button onClick={() => mutate(book.title)}>Delete Book</button> </div> )); };
In this example, I even threw in the refetch function. By default, the first time this component loads, it will show a loading indicator. If we trigger a refetch, it will load it behind the scenes, and not show the loading div. This is something that can be changed if needed. You can pass
notifyOnNetworkStatusChange to our
deleteBook mutation if you wanted it to set loading to true during refetches:
return deleteBook({ variables: { title }, notifyOnNetworkStatusChange: true ... })
Apollo also gives us the flexibility to choose when to automatically refetch data. The default will load it once, then use the cache, until you manually call refetch, or in our case, reloading the page since we are using the in memory cache. You can specify other policies, as seen here. In a typical redux type application, or even the
fetch example I wrote above, you wouldn’t have this level of control without bringing in another library. You would always be fetching when the component mounted.
Automatic Updates
You may be wondering… can Apollo be smarter? With the GraphQL type system, it can. We just need to do a couple easy things before I can show you this in action.
Open
resolvers in the server folder, and let’s add some unique id’s to the books:
books: () => { return [ { id: 1, title: 'Harry Potter and the Chamber of Secrets', author: 'J.K. Rowling', }, { id: 2, title: 'Jurassic Park', author: 'Michael Crichton', }, ]; };
Open
typeDefs and add it to the book type:
type Book { id: Int! title: String! author: String! }
Restart the server, and open our
useBooksQuery in the web folder, and add it to the query:
books { id title author }
Add Change Book Mutation
Cool, we are all set. Apollo client is smart enough to see something of the same type (Book) and the same id (1, or 2 in our case) it will update the local cache. So if we make a new resolver, called
changeBookTitle in
server/resolvers.js we can see this in action. Replace the contents of
resolvers:
const books = [ { id: 1, title: 'Harry Potter and the Chamber of Secrets', author: 'J.K. Rowling', }, { id: 2, title: 'Jurassic Park', author: 'Michael Crichton', }, ]; export const resolvers = { Query: { books: () => { return books; }, authors: () => { return [ { name: 'Todd', twitter: 'toddmotto' }, { name: 'React', twitter: 'reactjs' }, ]; }, }, Mutation: { addAuthor: (_, { input: { name, twitter } }) => { return { name, twitter, }; }, deleteBook: (_, { title }) => true, changeBookTitle: (_, { input }) => { let { id, title } = input; let book = books.find(book => book.id === id); //Return the new book title return { ...book, title, }; }, }, };
I pulled out the books array so that we can mimic finding it in our database or elsewhere, and then returning the book object with the new title we pass in.
Now, we need to update our
typeDefs:
input ChangeBookInput { id: Int! title: String! } type Mutation { changeBookTitle(input: ChangeBookInput!): Book ... }
Create client side mutation
Our
changeBookTitle resolver will take a book id, and a title, and return a Book object. Now we can create the client side query. Let’s name it
src/pages/useChangeBookTitleMutation.js
import gql from 'graphql-tag'; import { useMutation } from '@apollo/react-hooks'; export const mutation = gql` mutation ChangeBookTitle($input: ChangeBookInput!) { changeBookTitle(input: $input) { id title } } `; export default () => { let [mutate] = useMutation(mutation); return ({ id, title }) => { return mutate({ variables: { input: { id, title } }, }); }; };
Hopefully you’re starting to see the pattern here.
- Create a typeDef for a resolver
- Make the resolver
- Create the client side query / mutation
- Use it!
Let’s make an input field called
ChangeTitle.js inside
src/pages:
import React, { useState } from 'react'; import useChangeBookTitleMutation from './useChangeBookTitleMutation'; const ChangeTitle = ({ book }) => { let changeTitle = useChangeBookTitleMutation(); let [title, setTitle] = useState(book.title); return ( <div> <input value={title} onChange={e => setTitle(e.target.value)} /> <button onClick={() => changeTitle({ id: book.id, title })}> Change it! </button> </div> ); }; export default ChangeTitle;
Now, let’s add it to
Books.js:
import React from 'react'; import useBooksQuery from './useBooksQuery'; import useDeleteBookMutation from './useDeleteBookMutation'; import ChangeTitle from './ChangeTitle'; const Books = () => { let { data } = useBooksQuery(); let mutate = useDeleteBookMutation(); if (!data || !data.books) return null; return data.books.map(book => ( <div> <h3>{book.title}</h3> <p>{book.author}</p> <button onClick={() => mutate(book.title)}>Delete Book</button> <ChangeTitle book={book} /> </div> )); }; export default Books;
Phew! We did it. If you change some text in the first input field, and hit save. You will see that the title is automatically updated in the navbar, and the book itself. We didn’t have to do any state updates ourself.
This is why I love Apollo / GraphQL. Before, I would need a state management system that was a very manual process for doing anything, and it was a lot of code. Now, I will use Apollo for everything, and React Context, or Apollo’s local resolvers for local state.
Conclusion
To recap, we covered setting up Apollo in a React frontend. We compared it to a REST endpoint now that we can see all of the benefits. We also learned that the intelligent updating due to the type system, and ease of manual store updates, cuts down on the UI logic we had to write in the past.
You can clone the course repo and
git checkout 3-client-integration-end to see the final code if you made a mistake along the way. | https://ultimatecourses.com/blog/graphql-client-side-integration-with-apollo-hooks | CC-MAIN-2019-47 | refinedweb | 2,956 | 65.73 |
I'm trying to use the ipython notebook running on my pi as a persistent testing ground for new code, whether I'm at work, home, my brothers house etc. I use dynamic DNS to point it back to my home network and port forward to my Pi. All this works fine, however I appear unable to run any code?
I've used ipython notebook may times on both linux and Windows, but for some reason I can't seem to get it running stable on my pi.
I'm tried the most simple statement possible:
I've also tried:
Code: Select all
print "Hello World!"
But whatever I run results in a message box which says:
Code: Select all
import this
I'm running up to date Raspbian, nothing custom at all on a 256Mb Model B.I'm running up to date Raspbian, nothing custom at all on a 256Mb Model B.Dead Kernel.
The kernel has died, would you like to restart it? If you do not restart the kernel, you will be able to save the notebook, but running code will not work until the notebook is reopened.
Has anyone else come across this issue yet?
Regards, | https://www.raspberrypi.org/forums/viewtopic.php?f=32&t=35050 | CC-MAIN-2020-34 | refinedweb | 201 | 77.47 |
Notes for transition from BIRD 1.6 to BIRD 2.0Notes for transition from BIRD 1.6 to BIRD 2.0
TablesTables
Instead of default table named master, there are two default tables named master4 and master6, for IPv4 and IPv6.
Table definitions now specify network type. Instead of
table xyz; it is now
ipv4 table xyz; or
ipv6 table xyz;. There are more network / table types than just ipv4 or ipv6, see documentation for more details.
ROA tables are no longer a special structure, but just a variant of routing table of network type roa4 or roa6. Therefore, they can be defined by
roa4 table xyz; or
roa6 table xyz;. The definition cannot preinitialize the table with ROA records, but it is possible to use static protocol for that purpose. It is no longer possible to use commands show/add/delete/flush roa, but can be examined by regular show route command.
ChannelsChannels
Protocols and tables are now connected by explicit channels, most related protocol options (table, import, export, ...) are now channel options. Most protocols need (implicit or explicit) channel definition. IPv4 and IPv6 channels are defined by ipv4 and ipv6 keywords in protocol sections. For simple protocol with default edit parameters, it is just:
protocol static { ipv4; route 10.10.0.0/16 via 10.1.1.1; ... }
In most cases some channel options are used. For example instead of:
table xyz; protocol ospf { table xyz; import all; export where source = RTS_STATIC; interface "eth*" { ... }; }
It is now:
ipv4 table xyz; protocol ospf { ipv4 { table xyz; import all; export where source = RTS_STATIC; }; interface "eth*" { ... }; }
Some protocols can use multiple channels, For example:
protocol direct { ipv4; ipv6; } protocol babel { ipv4 { export all; }; ipv6 { export all; }; interface "*" { ... }; }
Important: There must be at most one channel of each type in a protocol, independent channel definitions are not merged. Therefore, this is invalid:
protocol bgp { ipv4 { import all; }; ipv4 { export all; }; }
Since 2.0.1, channels inherited from template may be 'redefined' in protocol definition.
BGP Export filter processingBGP Export filter processing
In old BIRD, a route exported to a BGP protocol was first modified by the BGP code and then by the export filter. Now the order was reversed, so it is first processed by the export filter and then by integral BGP processing. Therefore, export filters see route attributes as they are in routing tables and expressions like
export where bgp_path.len < 5; work as expected, but with nontrivial filters it may cause some subtle changes in behavior.
MiscellaneousMiscellaneous
Global option listen bgp was removed. Use strict bind BGP option instead.
For BGP, many protocol options are now channel options, as they are limited to each AFI/SAFI represented by that channel. See documentation for details.
External BGP now requires explicit configuration of import and export policies (import and export filters in channels).
OSPF and RIP use ECMP and link detection by default. Also direct BGP uses link detection by default.
Babel protocol interface options hello interval and update interval now require time units (s, ms) and have sub-second precision.
Pipe protocol now by default propagates routes in both direction (like with
import all; export all;).
On Linux, default value for kernel option metric was changed to 32.
Kernel option device routes was removed. Device routes are handled as regular router.
Device protocol option primary was replaced by section interface with option preferred.
Long obsolete syntax for bgp masks (e.g. /1 2 3/) was removed. | https://gitlab.nic.cz/labs/bird/-/wikis/transition-notes-to-bird-2 | CC-MAIN-2020-34 | refinedweb | 576 | 58.69 |
root-config - Man Page
ROOT utility for your Makefiles
Synopsis
root-config [options]
Description
root-config is a tool that is used to configure and determine the compiler and linker flags that should be used to compile and link programs that use ROOT.
CPPFLAGS += $(shell root-config --cflags) LDLIBS += $(shell root-config --libs) LDFLAGS += $(shell root-config --ldflags) %Cint.cxx:Include.h LinkDef.h rootcint -f $@ -c $^
in your Makefile to use the built-in rules of GNU make. For GUIs, replace --libs by --glibs.
You may also find the automake(1), autoconf(1), and libtool(1) macro file /usr/share/aclocal/root.m4 useful. If that macro file isn't installed where aclocal will find it, copy the contents to your local acinclude.m4 file. In the directories you use ROOT libraries, have in your Makefile.am file:
lib_LTLIBRARIES = libFoo.la pkginclude_HEADERS = Foo.h noinst_HEADERS = FooCint.h libFoo_la_SOURCES = Foo.cxx FooCint.cxx libFoo_la_LDFLAGS = -version-info 1:0 -R @ROOTLIBDIR@ libFoo_la_LDADD = -lCore -lCint @ROOTAUXLIBS@ BUILT_SOURCES = FooCint.cxx FooCint.h AM_CPPFLAGS = -I@ROOTINCDIR@ AM_LDFLAGS = -L@ROOTLIBDIR@ CLEANFILES = *Cint.cxx *Cint.h *~ core %Cint.cxx %Cint.h:Include.h LinkDef.h @ROOTCINT@ -f $*Cint.cxx -c $(INCLUDES) $(AM_CPPFLAGS) $^
where you should substitute Foo with whatever, and list configure.in file
ROOT_PATH(, [ AC_DEFUN(HAVE_ROOT) have_root=yes ]) AM_CONDITIONAL(GOT_ROOT, test "x$have_root" = "xyes")
And then in some Makefile.am
EXTRA_SOURCES = root_dependent_source.cc if GOT_ROOT LIBFOOXTRA = root_dependent_source.cc else LIBFOOXTRA = endif lib_LTLIBRARIES = libFoo.la libFoo_la_SOURCES = Foo.cc $(LIBFOOXTRA)
The full list of substitution variables are:
- ROOTCONF
full path to root-config
- ROOTEXEC
full path to root
- ROOTCINT
full path to rootcint
- ROOTLIBDIR
Where the ROOT libraries are
- ROOTINCDIR
Where the ROOT headers are
- ROOTCFLAGS
Extra compiler flags
- ROOTLIBS
ROOT basic libraries
- ROOTGLIBS
ROOT basic + GUI libraries
- ROOTAUXLIBS
Auxiliary libraries and linker flags for ROOT
- ROOTAUXCFLAGS
Auxiliary compiler flags
- ROOTRPATH
Same as ROOTLIBDIR
Options
Give a short list of options available, and exit
- --version
Report the version number of installed ROOT, and exit.
- --prefix=<prefix>
If no arguments are given, reports where ROOT is installed. With an argument of =<prefix>, set the base of the subsequent options to <prefix>. If \--exec-prefix is passed with an argument, it overrides the argument given to \--prefix for the library path.
- --exec-prefix=<prefix>
If no argument is given, report where the libraries are installed. If an argument is given, use that as the installation base directory for the libraries. This option does not affect the include path.
- --libdir
Print the directory where the ROOT libraries are installed.
- --incdir
Print the directory where the ROOT headers are installed.
- --libs
Output a line suitable for linking a program against the ROOT libraries. No graphics libraries are included.
- --glibs
As above, but also output for the graphics (GUI) libraries.
- --evelibs
As above, but also output for the graphics libraries and Eve libraries.
- --cflags
Output a line suitable for compiling a source file against the ROOT header (class declararion) files.
- --new
Put the libNew.so library in the library lists. This option must be given before options --libs and --glibs.
- --nonew
Compatiblity option, does nothing.
- --auxlibs
Print auxiliary libraries and/or system linker flags.
- --noauxlibs
Do not print auxiliary libraries and/or system linker flags in the output of --libs and --glibs. --auxcflags Print auxiliary compiler flags.
- --noauxcflags
Do not print auxiliary compiler flags in the output of --cflags.
- --noldflags
Do not print library path link option in output of --libs, --evelibs and --glibs.
- --ldflags
Print additional linker flags (e.g. -m64).
- --arch
Print the architecture (compiler/OS)
- --platform
Print the platform (OS)
- --bindir
Print the binary directory of the root installation (location of the root executable)
- --etcdir
Print the configuration directory (place of system.rootrc, mime type, valgrind suppression files and .desktop files)
- --config
Print arguments used for ./configure as used when building root. These cannot be used for ./configure if root was built with CMake.
- --git-revision
Print the ROOT git revision number from which root was built.
- --has-<feature>
Test if <feature> has been enabled in the build process.
- --features
Print list of all supported features
- --ncpu
Print number of available (hyperthreaded) cores
- --python-version
Print the Python version used by ROOT
- --cc
Print alternative C compiler specified when ROOT was built
- --cxx
Print alternative C++ compiler specified when ROOT was built
- --f77
Print alternative Fortran compiler specified when ROOT was built
- --ld
Print alternative Linker specified when ROOT was built
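In practice the options above are spliced into compile and link lines via command substitution. A sketch (file names are illustrative; the guard keeps it runnable where ROOT is absent):

```shell
# Compose the compile/link line for a ROOT-based program.
if command -v root-config >/dev/null 2>&1; then
  cmd="g++ $(root-config --cflags) -o myapp myapp.cc $(root-config --glibs)"
else
  # ROOT is not installed here: show the shape of the command instead.
  cmd='g++ $(root-config --cflags) -o myapp myapp.cc $(root-config --glibs)'
fi
echo "$cmd"
```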
See Also
root(1), root-cint(1)
See also the ROOT web pages.
THANKS
Please send me your sample project about a Java animation program. Thanks.
Simple Basic Stroke Example
setStroke() sets the stroke settings for the Graphics2D context; they are used when you draw a shape. The stroke determines the stylistic representation of the outline for the specified shape.
Graphics2D
Java: Graphics2D
The java.awt.Graphics2D class of Java 2 supports many more graphics operations than the Graphics class. For example, you can cast the Graphics parameter passed to a paint method to Graphics2D to make the additional methods available.
Graphics2D in java - Java Beginners
Graphics2D in java: I am working on a GIS project. I want to represent a line in different formats (for railroad, airplane route, etc.). I know how to represent solid, dashed, and dotted lines, but I am not getting any idea of how to represent a railroad line.
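For custom line styles like the ones asked about, the dash array of java.awt.BasicStroke is the usual starting point. The sketch below is not from the thread (the class and method names are mine); it draws a 10px-on/5px-off dashed line into an offscreen image, and layering two such strokes is a common way to get railroad-style symbology:

```java
import java.awt.BasicStroke;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class DashedLineDemo {

    // Draw a horizontal dashed line across the middle of an offscreen image.
    public static BufferedImage drawDashedLine(int w, int h) {
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g2 = img.createGraphics();
        g2.setColor(Color.WHITE);
        g2.fillRect(0, 0, w, h);
        float[] dash = {10f, 5f};                       // 10 px on, 5 px off
        g2.setStroke(new BasicStroke(2f, BasicStroke.CAP_BUTT,
                BasicStroke.JOIN_MITER, 10f, dash, 0f));
        g2.setColor(Color.BLACK);
        g2.drawLine(0, h / 2, w, h / 2);
        g2.dispose();
        return img;
    }

    public static void main(String[] args) {
        BufferedImage img = drawDashedLine(100, 20);
        // A pixel inside the first dash is black; one inside the first gap is white.
        boolean dashIsDark = img.getRGB(5, 10) == Color.BLACK.getRGB();
        boolean gapIsLight = img.getRGB(12, 10) == Color.WHITE.getRGB();
        System.out.println(dashIsDark && gapIsLight);   // expected: true
    }
}
```

Varying the dash array (and drawing the same line twice, once thick and once with thin perpendicular ticks) gives the railroad look.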
Monday, Sep 06, 2010
Does this negative number re capacity utlisation explain IR policy?
H.M. Treasury: Inflation and the output gap in the UK, March 2010
this was part of a discussion with easybetman re tightness in capacity utilisation. i think it goes some way to explaining uk ir policy:
".. this paper demonstrates that the level of the output gap has an important role in
explaining inflation and suggests that the lagged effect of the large negative output gap will generate
significant downward pressure on inflation over the next few years. The analysis also finds
strong empirical evidence of the influence of import prices on inflation, with a one-off shock to
import prices taking around 1 year to fully feed through to inflation. .. The analysis has informed the
Treasury’s view on recent inflation developments and underpins judgements on the prospects
for...
page 20 of the report (26 of the pdf) has a nice chart.
What Does Output Gap Mean? The difference between an economy's actual output and the potential output it could produce at full capacity.
2. Crunchy said...
Does this negative number re capacity utlisation explain IR policy?
I think the negative number re capacity was -9/11 when Greenspan set his IR policy.
Carry on by all means.....
3. alan_540 said...
techie, this is my understanding of the above :
i) So low demand is keeping prices down basically (with regard to domestic production)
ii) And imported goods haven't caught up (yet) with a devalued pound
iii) A low interest rate will help stimulate the economy and will not lead to high inflation in the short term because of the above and will more importantly help prevent deflation
seems to me to be supply-demand pure and simple, but i'm probably missing the finer points!
4. mark wadsworth said...
Techie, keep up the good work. On the earlier thread the score is Deflation 3: Inflation 0.
5. paul said...
This of course completely ignores the fact that the Bank created the import inflation by lowering rates when inflation was rising.
Convenient omission.
6. easybetman said...
Thanks techieman for the paper. The same chart is interesting though, in 1975 - 85, the output gap was up to -8% and they managed to have double digit inflation - so that was 'output gap' that never was - more money were created than the ability of the economy to produce (perhaps due to labour factors etc) and the lot were happy to run -ve interest rates for years until Paul Volker lead the charge to stop it.
7. techieman said...
"This of course completely ignores the fact that the Bank created the import inflation by lowering rates when inflation was rising. Convenient omission." eh? i think you miss the point of the document and the post, paul.
the posting is to try to explain why the treasury (and not the BoE) are looking at the output gap to explain why they believe there is or isnt underlying in(de)flationary pressures in the economy. I suppose i am making a leap that the mpc will use this information in the same way, and predicate monetary policy on it.
Well spotted easybetman - i think though this addresses the point you made before about tight capacity in the economy. i am no expert on all this, i just thought it's something that "we" may have overlooked. if you like, your comment on capacity utilisation got me thinking, as i thought i had seen this somewhere before. of course things may have changed since march - i would like to see an update of this graph.
As part of the budget (FWIW) they give some forecasts of this - see - page 3 (7 pdf)
Re the 75-85 period i think we had the oil price shock and a wage/price inflationary spiral contributed to in part by the unions. Then i think IRs were ratcheted up to squeeze the inflation out, and perhaps that's why there was the gap then with little disinflation let alone deflation. However i could stand corrected on my short summary of historical movements.
the fact remains that if the treasury look at this and have correlated it currently to IR policy, then i think its one thing (but perhaps not the only thing) we need to be aware of when trying to determine the risks - both inflationary and deflationary - to the economy.or at least to determine the likely policy response to the perception of either.
of course these are all aggregate numbers and the devil is in the detail. one issue is the same as always that IRs are a pretty blunt instrument - in that respect we are like the Eurozone in minature. Just as some countries needed higher IRs to squeeze the inflation some didnt, some sectors probably need a IR stimulus and others dont.
i have read the summary a couple of times and its not the easiest thing in the world to get your head round... but perhaps like tesco - every little helps!
8. techieman said...
easybetman:
summary @ 1.2 "There has also been a big fall in output as a result of the recession and, while there are significant uncertainties surrounding current estimates of the output gap, it is likely that a large degree of spare capacity has opened up."
9. techieman said...
paul the paper deals with many issues you raise. for example it explains the effect of the trade weighted index fall in sterling and the extent and likely ongoing effect this will have offsetting deflationary pressures caused by the output gap.
it is an interesting - albeit difficult read. it seems like you are one of those people who have ingrained ideas. all i am doing is leading you to the water - its up to you whether you drink it. remember this is explaining WHY they are doing what they are doing. Of course people can have their own views as to whether what they are doing is right or wrong. i dont say they are 100% right in doing what they are doing or their conclusions, i am however interested in anything that helps explain why they are doing it. imo this paper deserves more than a cursory look for that reason.
10. Crunchy said...
9. techieman
They do what they want to do, regardless of numbers, past form being considered here. Save your time and energy is my view.
There is a much bigger picture and getting distracted is very easily done.
Have a cup of tea and think about where we are heading and how they are going to get us there.
This is about management through economics, not managing economics.
11. easybetman said...
@techieman,
Thanks for your feedback and I think such quality discussions allow all of us to learn and to find the truth.
Totally agree with you that it is not an easy piece of literature to read and I too spotted that interesting graph on first glance thorugh.
1. Contrary to HMT guesses in section 1.2 (yap, we are all guessing), in private moments, those BOE chaps thought differently:
"It was also noted that the level of spare capacity might be lower than many currently judged."
Source :
2. If one looks closely into the CPI components:
Actually, those 'imported' items are not contributing to inflation (though imported oil etc may indirectly do so). At least the sterling devaluation excuse cannot reasonably explain most of the effects. There are quite a bit of 'home brew' inflation here (so, those with pricing power are taking advantage of that power)
3. Agree IR is a pretty blunt instrument. It always is. My belief is that it is better for BoE to ensure price stability and that the commercial banks are safe AND HMT does the stimulus/economic policy bits.
4. Re 1975-85, inflation is always a monetary phenomenon, and while unemployment was high, money creation ran even faster, hence the double-digit inflation (until Paul Volcker etc stopped it). While the 2010 M4 increase is small (+2% ish) year on year, M4 growth between 2000-2010 (mainly 2000-2007) had been phenomenal. The 'extra' money is now coming home to roost. When there is more money around (mostly in the wrong hands, perhaps) than real economic productive capacity, I don't feel that 'spare capacity' is a reasonable term to use. Also, a good chunk of the 75-85 spare capacities were in the wrong place (coal mining etc), which weren't exactly spare capacities at all (if there is no plan to resume coal mining, surely calling those unemployed miners spare capacity is bordering on cheating).
As we are now, the businesses that have closed down so far tend to be those that were over-leveraged and perhaps those locked into expensive leases. Demand hasn't 'collapsed', but these 'weak' capacities have gone because they were weak (e.g. they needed 100 shoppers a week, and a drop to 95 was enough to cause them to go belly up; in absolute terms, demand had just been reduced slightly). The credit crunch prevents those capacities from coming back online (plenty of empty offices, shops and factories, but few people can or are willing to bring them online - perhaps prices aren't right, and landlords are hoarding those properties and demanding unrealistic rents/prices). So is it fair to call capacity that cannot be brought back online quickly spare capacity?
Then we have areas like rail transport, where bringing on new capacity is immensely costly and, oh yes, the train companies have been merrily raising prices. I think National Statistics haven't hedonically adjusted these rail prices; given how awful it is to travel on trains now, the hedonically adjusted price inflation must have been shocking. (Of course, NS happily does the hedonic thing when it suits them.)
12. easybetman said...
Just thought of something - is it reasonable to call unemployed mortgage advisors, CDO creators "spare capacities" as they are now? Or these people are not really spare capacities until they are retrained ? (In 1975-85 these were the 'coal miners').
13. techieman said...
easybetman - interesting comments, what is your background / job if you dont mind me asking?
just an observation - you seem to be arguing with yourself a bit about the 75-85 era, i.e. explaining away why there is a negative rate then. whereas before you seemed to be saying that like now there was a negative rate but we had double digit inflation then.
my basic thoughts on this are that we just need to understand what they are doing and the reasons for it. you have shown examples where this chart may just be plain wrong, but overall - as we have both alluded to - nobody really knows (even HMT) - i.e. i think it probably lags, and anecdotal evidence is probably as good as anything else. we can't really say "oh yes they are completely wrong" because we don't have the data they have.
having said all that yes m4 is still positive, but my point on it is that despite the stimulus it is still trending down and the main volatile component - i.e. m4 lending is contracting y-o-y. i have no problem with the inflationist or even hyper inflationist point - which yes is probably a matter of timing, but for it to be right we need to see the trend reverse imo. i know you are saying look where we were as well, but i dont agree with that - again you could be right and i wrong.
overall i am pleased you have engaged on this because its interesting to hear well reasoned views rather than "boE bad" "huge inflation round the corner" etc.
14. This comment has been removed as it was found to be in breach of our Blog Policies.
15. easybetman said...
Hi techieman,
Send me a PM and we can chat more. I can see a techieman on the main forum with just a few post - so not sure if it is you.
Re: 75-85, oh yes, the main reason for negative rates was fast monetary growth and probably velocity growth as well. Everything is relative, you see - if MV (over a period) is down but capacity is down faster, inflation. If MV (over a period) is up but productivity is up even faster, then deflation.
I don't see double-digit inflation now, but probably 5% or so based on a real index, not those in statistician wonderland. For the hyperinflationists, first they have to define hyper: if they are talking about 10^6%, I don't see that. There is only one cause of hyperinflation - velocity, which is directly affected by confidence. So, if Zimbabwe had printed 10 trillion but been nice to its people, and its people had been happy to use the money, then it probably would have settled at 10^3% rather than 10^10%.
As for M4 - I think we need to look at this over a period. These sorts of things do not have effects immediately (even 2-3 quarters of -ve M4 doesn't mean deflation, but 10 years of -ve M4 is of course deflation).
Finally thanks - nice discussions with you too. We are here to be student of the truth (and allocate portfolios accordingly).
16. easybetman said...
Just remember, when Volcker turned off the inflation pump in 1980, my guess (I don't have data) is that M4 lending growth must have been negative for several quarters, and yet inflation raged on (though less furiously). I think we are just going to pay for the past 10 years of monetary inflation - no free lunch... The question is of course who pays, and the answer tends to be the innocent...
This is C source code, written with GCC on Linux (Ubuntu), that finds the compilation time of any other source code (e.g. C, C++, Java and others). It also finds the execution time of any Ubuntu command, in three time units: nanoseconds, microseconds and milliseconds.
Note: If you want to find the compilation time of a piece of Java source code using this C source code, the Java compiler must be installed on the machine.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main()
{
    struct timespec ts1;
    struct timespec ts2;
    long long sec, nsec;
    char str[128]; /* room for a full command line */
    printf("\n\tCompile the code / execute the command: ");
    if (fgets(str, sizeof str, stdin) == NULL)
        return 1;
    clock_gettime(CLOCK_REALTIME, &ts1);
    system(str); /* run the compiler or the command */
    clock_gettime(CLOCK_REALTIME, &ts2);
    if (ts2.tv_nsec < ts1.tv_nsec) {
        /* borrow one second so the nanosecond difference stays non-negative */
        ts2.tv_nsec = ts2.tv_nsec + 1000000000L;
        ts2.tv_sec = ts2.tv_sec - 1;
    }
    sec = ts2.tv_sec - ts1.tv_sec;
    nsec = ts2.tv_nsec - ts1.tv_nsec;
    printf("Elapsed time: %lld ns = %lld us = %lld ms\n",
           sec * 1000000000LL + nsec,
           sec * 1000000LL + nsec / 1000,
           sec * 1000LL + nsec / 1000000);
    return 0;
}
On Fri, 24 Oct 2003 18:29:59 +0200
Tomas Szepe <szepe@xxxxxxxxxxxxxxx> wrote:
> /* tcp_input.c, line 1138 */
> static inline int tcp_head_timedout(struct sock *sk, struct tcp_opt *tp)
> {
> return tp->packets_out && tcp_skb_timedout(tp, skb_peek(&sk->write_queue));
> }
>
> The passed NULL (and yes, this is where we are getting one) is dereferenced
> immediately in:
>
> /* tcp_input.c, line 1133 */
> static inline int tcp_skb_timedout(struct tcp_opt *tp, struct sk_buff *skb)
> {
> return (tcp_time_stamp - TCP_SKB_CB(skb)->when > tp->rto);
> }
If tp->packets_out is non-zero (which by definition it is
in your case else the right hand side of the "&&" would not be
evaluated) then we _MUST_ have some packets in sk->write_queue.
Something is being fiercely corrupted. Probably some piece of
netfilter is freeing up an SKB one too many times thus corrupting
the TCP write queue list pointers. | http://oss.sgi.com/projects/netdev/archive/2003-10/msg01322.html | CC-MAIN-2014-52 | refinedweb | 134 | 56.18 |
Jeffrey Richter
Jeffrey Richter wrote Advanced Windows (Microsoft Press, 1995) and Windows 95: A Developer's Guide (M&T Books, 1995). Jeff is a consultant and teaches Win32-based programming seminars. He can be reached at v-jeffrr@microsoft.com.
Q: I'm developing a shell extension DLL. In order to help debugging, I decided to place a call to the DebugBreak function directly inside my code. This way, when I run the Explorer and it loads my shell extension DLL, the call to DebugBreak should cause a breakpoint exception message box to appear, allowing me to spawn the debugger to debug my code. But, the call to DebugBreak never causes the message box to appear and the remainder of my code in my shell extension DLL never executes. What am I doing wrong and how can I get the debugger to debug my shell extension when it is invoked?
Val Ludwig
A: Let me start off by saying that the system is working exactly the way it should. What you are seeing is the effect of Microsoft's shell team making the Explorer robust. When the Explorer calls a function in your shell extension DLL, it's afraid that you might have a bug in your code that would force the Explorer to terminate abnormally. Specifically, I'm talking about your code raising exceptions such as access violations, division by zero, or stack overflow exceptions.
To protect itself from your bugs, the Explorer calls your shell extension DLL functions like this:
__try {
    YourShellExtensionDllFunction(...);
}
__except (EXCEPTION_EXECUTE_HANDLER) {
    // YourShellExtensionDllFunction raised an exception.
    // Let's pretend we never called the function at all.
}
By wrapping the call to your shell extension function inside a structure exception handling (SEH) frame, the Explorer can trap any exceptions you raise and recover so the Explorer process itself continues running but without the benefit of your shell extension.
The call to DebugBreak inside your code raises an EXCEPTION_BREAKPOINT exception. Normally, raising this exception would display a message box, allowing you to spawn the debugger so you can debug your code. However, the Explorer is trapping this exception so it can recover from your error and continue execution. The only time you see the message box allowing you to debug an application is if an exception is raised that is not handled.
What you need to do is make the system think that your exception is not being handled so the message box will appear and you can debug your code. To do this, wrap your call to DebugBreak with your own SEH frame.
// Make sure that we only force the debugger to appear
// in debug builds of our shell extension DLL.
#ifdef _DEBUG
__try {
    DebugBreak();
}
__except (UnhandledExceptionFilter(GetExceptionInformation())) {
    // Nothing to do in here.
}
#endif // _DEBUG
Because your SEH frame is nested inside the Explorer's SEH frame, your frame's exception handler gets the first crack at handling the breakpoint exception. The call to UnhandledExceptionFilter fakes the system into believing that the exception was not handled and causes the message box to appear. Now you can connect the debugger.
UnhandledExceptionFilter is a documented Win32® function. The system usually calls this function when a thread does not handle an exception, but there is no reason why you cannot call this function explicitly in your own code just as I have here. It is this function that displays the message box and spawns the debugger.
By the way, this technique is useful when you are developing and debugging any DLL that is loaded into some executable's address space. I have used this technique when developing Performance Monitor Counter DLLs and ISAPI DLLs too.
Q: I hope you can help me find a solution to a big problem we have. We are porting our 16-bit Windows®-based application to Win32, but our application requires the use of several third-party 16-bit DLLs that do not have 32-bit versions available yet. I know that we can use the flat thunking mechanism supported on Windows 95 to do this but this mechanism is not supported on Windows NT™. I also know that we can't use the generic thunking mechanism (supported on both Windows 95 and Windows NT) because this only allows a 16-bit executable to call a 32-bit DLL; we need to go the other way.
I suppose we could write a 16-bit executable stub program that talks back and forth with our new 32-bit application, but I'm not sure how to do this. We need to send big buffers (several hundred kilobytes for images) between the processes. We need a large bandwidth!
Piergiorgio Grossi
A: Without a doubt, the absolute best way to solve this problem is to create a 16-bit executable stub application and communicate across the bit boundaries by using window messages. In particular, you want to use the new WM_COPYDATA message, which was added to Win32 by the Windows NT 3.1 team and therefore has been around for over three years. It is fully documented and supported on all versions of Windows NT and on Windows 95.
The original purpose of the message was to allow an easy way for one Win32 process to send data to another Win32 process. Internally, the message causes USER32.DLL to create a file-mapping object backed by the paging file and copies the data from the source process's address space into the file mapping. When the target process's thread receives the WM_COPYDATA message, it maps another view of the file-mapping object into its address space.
WM_COPYDATA exists as a convenience; it is not the most efficient way to transfer data across process boundaries. It's not super efficient because sending the message actually makes a copy of the data, so additional memory must be allocated and a block of bytes must be copied. When the target processes the WM_COPYDATA message, it is working with its own copy of the data; changes made to the data are not reflected back to the sending process. You'll want to use the memory-mapped file APIs directly if you're concerned about performance.
For 16-bit Windows-based applications, you must use the WM_COPYDATA message because the memory-mapped file APIs cannot be called from 16-bit applications. When trying to compile a 16-bit application that uses WM_COPYDATA you may run into a problem because some versions of the 16-bit Windows header files do not contain a #define for WM_COPYDATA. Don't let this scare you! The message is supported but it just wasn't defined. You can easily fix this problem yourself by adding the following line to your 16-bit source code module:
#define WM_COPYDATA 0x004A
By the way, when you use WM_COPYDATA to transfer data to and from 16-bit Windows applications, the maximum size of the block that you can transfer is 16MB. This gives you the bandwidth that you require. Win32-based applications are limited only by the amount of virtual memory available.
Q: I would prefer to embed linker options inside my source code modules rather than by setting options using the Visual C++ integrated environment. Is there an easy way to do this?
Chuck Bell
A: I explain how to do this in both my "Advanced Windows" and "Windows 95: A Developer's Guide" books, but I'll discuss the technique here. The following code fragment shows the proper way to embed a linker directive inside a source code module.
// Create a new data section for linker directives.
// The section MUST be called ".drectve".
#pragma data_seg(".drectve")

// Add string(s) into the ".drectve" data section.
static char szShared[] = "-section:Shared,rws";
static char szFixed[] = "-fixed";

// Stop adding strings into the ".drectve" data section.
#pragma data_seg()
When the compiler compiles this code, it creates a data section called ".drectve" inside the source module's associated object file. Any literal strings that appear after the first pragma data_seg directive will be placed into this section. The strings in this section must always be ANSI strings even if you are compiling for Unicode.
When the linker combines all of the OBJ modules together, it looks specifically for any ".drectve" sections. If it finds any, the linker parses the ".drectve" sections and pretends that the directives were passed as command-line arguments. The linker also removes the ".drectve" sections so that there is no sign of them in the resulting EXE or DLL file.
In both of my books, I used a different technique than the one described above. For example, to make a "Shared" section readable, writable, and shared, I used the following single line.
#pragma comment(lib, "kernel32 " "-section:Shared,rws")
I must admit that this is a hack! When the compiler sees the line above, it automatically creates a ".drective" section and adds the following string to the section.
-DEFAULTLIB:kernel32 -section:Shared,rws
If you were to pass this string as a command-line argument to the linker, the linker would see two options specified.
-DEFAULTLIB:kernel32
-section:Shared,rws
The linker that shipped with Visual C++ version 2.x interpreted my pragma comment directive perfectly, but the linker that ships with Visual C++ 4.x parses my pragma comment directive incorrectly. This, unfortunately, breaks many of my sample applications when compiled with Visual C++ 4.x. When I reported this problem to Microsoft's Visual C++ team, they responded with a very satisfactory explanation that I feel compelled to accept. The Visual C++ team broke my hack to support long filenames; specifically, they wanted the linker to support filenames with spaces in them. So, when the Visual C++ 4.x compiler sees my pragma comment line, it now creates a single string in a ".drective" section that makes the linker think I have a library file called "kernel32 -section:Shared,rws". This library file does not exist and the Visual C++ 4.x linker fails to link my sample applications.
To build my sample applications with Visual C++ 4.x, you must modify my code to use the technique I describe here. Why didn't I use this first technique to begin with instead of using the hacky pragma comment technique? I used the hack because the Visual C++ 2.0 linker didn't handle the pragma dataseg technique correctly. There is a bug in the Visual C++ 2.0 linker that made it recognize and parse only the first linker directive and ignore the remaining directives. So, if you have the following lines in your source file, the linker sees only the "-section" directive and ignores the "-fixed" directive completely.
#pragma data_seg(".drectve")static char szShared[] = "-section:Shared,rws";static char szFixed[] = "-fixed";#pragma data_seg()
Because of this bug, you have to use the pragma comment technique if you're still using Visual C++ 2.0. Because adding linker directives to source code has become so popular, the Visual C++ 4.0 team modified the #pragma comment directive to make this incredibly simple. With Visual C++ 4.0 you can now have the following line in your source code.
#pragma comment(linker, "-section:Shared,rws")
This technique is by far the simplest and is the solution of choice. If you are having trouble compiling any of my book's sample applications with Visual C++ 4.0, use this new directive and all will be fine.
Q: I am implementing a C++ class inside a DLL. When an instance of my class is created, I'd like to get the full pathname of the DLL and save it inside a member variable. How can I get the pathname of my DLL at run time?
Donna Murray
A: The quickest and easiest way to get the pathname of any module in your process's address space is to call the GetModuleFileName function.
DWORD GetModuleFileName(HINSTANCE hinst, LPTSTR szPathName, int cch);
The first parameter is the HINSTANCE or HMODULE of the EXE or DLL. Remember, in Win32 an HINSTANCE and an HMODULE are exactly the same. Also, remember that HINSTANCEs and HMODULEs represent the base address of where an EXE or DLL module is loaded in the process's address space.
In your class's constructor, all you need to do is determine the base address where your DLL loaded. The best way to do this is to save the hinst value that is passed to your DLL's DllMain function.
You can also get your DLL's base address by calling VirtualQuery. The following code fragment demonstrates this technique.
class CSomeClass {
    TCHAR m_szDllPathname[_MAX_PATH];
public:
    CSomeClass();
};

// ...

CSomeClass::CSomeClass() {
    MEMORY_BASIC_INFORMATION mbi;
    VirtualQuery(DllMain, &mbi, sizeof(mbi));
    GetModuleFileName(mbi.AllocationBase,
        m_szDllPathname, _MAX_PATH);
}
The first parameter to VirtualQuery is the address of something-anything that is contained inside the DLL's module. I chose to give the address of the DllMain function but you could give the address of any function, any global variable, or any static variable. You cannot pass the address of a local variable since this address identifies something on the calling thread's stack, which will definitely not be contained inside the DLL's module.
When VirtualQuery returns, the AllocationBase member of MEMORY_BASIC_INFORMATION contains the memory address where the DLL was loaded (this is also the DLL's HINSTANCE value). Passing this value to GetModuleFileName returns the full pathname of the DLL module.
Have a question about programming in Win32? You can mail it directly to Win32 Q&A, Microsoft Systems Journal, 825 Eighth Avenue, 18th Floor, New York, New York 10019, or send it to MSJ (re: Win32 Q&A) via:
Internet:
Jeffrey Richter: v-jeffrr@microsoft.com
Eric Maffei: ericm@microsoft.com
From the May 1996 issue of Microsoft Systems Journal. | http://www.microsoft.com/msj/archive/S1D4F.aspx | CC-MAIN-2016-44 | refinedweb | 2,271 | 54.22 |
How to: Add a Content Type to a SharePoint List
You can reference content types in the XML for a SharePoint list definition so that each time a user creates a list of that type, Microsoft SharePoint Foundation 2010 includes the content type on the list by default.
Last modified: May 27, 2011
Applies to: SharePoint Foundation 2010
Available in SharePoint Online
You can also add content types to an existing list by writing code that uses the SharePoint Foundation object model.
Before you can add a content type to a list definition, you must be sure that the list is configured to support content types. The first thing to check is the template from which the list instance is created. If the ListTemplate element has an attribute named DisallowContentTypes, and the attribute is set to TRUE, the list does not support content types. If the attribute is set to FALSE or is missing, the list does support content types.
The next thing to check is whether content types are enabled on the list instance. The Schema.xml file that defines the list instance has a List element. This element must include an attribute named EnableContentTypes, and the attribute must be set to TRUE. Setting the attribute to TRUE is equivalent to selecting Yes under Allow management of content types? in the Advanced Settings section of List Settings.
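Putting the two checks together, the attributes described above would appear roughly as follows. This is only a sketch; the other required attributes of these elements are omitted and represented by ellipses:

```xml
<!-- In the list template definition: content types are allowed -->
<ListTemplate Name="..." DisallowContentTypes="FALSE" ... />

<!-- In the list's Schema.xml: content type management is enabled -->
<List ... EnableContentTypes="TRUE">
  ...
</List>
```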
To add a content type to a list definition, you add a ContentTypes element to the list schema. The ContentTypes element contains a collection of ContentTypeRef elements. Each ContentTypeRef element specifies a site content type that SharePoint Foundation should copy locally to the list, as a list content type, whenever a user creates a new list of the specified type. The ContentTypeRef element contains a single attribute, ID, which you set to the content type ID.
The site content type that you reference must be in scope for the list—that is, it must be declared at the same site level or higher in the site hierarchy. For more information about content type scope, see Content Type Scope.
To add a content type to a SharePoint list definition
In the list definition XML, add a ContentTypeRef element to the ContentTypes element.
Set the ID attribute of the ContentTypeRef element to the content type ID of the content type that you want to include on the list.
The following example shows partial markup for a list definition schema that includes a ContentTypeRef element.
<List xmlns:
   <MetaData>
      <ContentTypes>
         <ContentTypeRef ID="0x01060062efcfca3f4d4036a0c54ed20108fa2e" />
      </ContentTypes>
      ...
   </MetaData>
</List>
For more information about the ContentTypes element in the list definition schema, see ContentTypes Element (List).
You can use the SharePoint Foundation object model to add content types to an existing list.
To add a content type to a SharePoint list
Use the Lists property to get a collection of lists for the site on which the list is located.
Declare a variable of type SPList, and set it equal to the object in the site list collection that represents the list.
Enable content types for the list by setting the value of the list’s ContentTypesEnabled property to true.
Use the AvailableContentTypes property to access the content types that are available for the site on which the list is located. This method returns an SPContentTypeCollection object.
Declare a variable of type SPContentType, and set it equal to the SPContentType object in the collection that represents the site content type you want to add to the list.
Verify that the list can accept the content type that you have selected by calling the IsContentTypeAllowed(SPContentType) method.
Use the ContentTypes property to access the collection of list content types on the specified list. This method returns an SPContentTypeCollection object.
Use the Add method to add the SPContentType object to the list content type collection.
When you add a site content type to a list using the object model, SharePoint Foundation automatically adds any columns that the content type contains that are not already on the list. This contrasts with referencing content types in a list schema, in which case you must explicitly add the columns to the list schema for SharePoint Foundation to include them in list instances.
Example of Console Application That Adds a Site Content Type to the Shared Documents List
The following example is a console application that adds a site content type to the Shared Documents list in a site. The content type that is used in this example is the same content type as is created by the example for How to: Add a Content Type to a Site.
Console applications are useful for experimenting in a development environment. In a production environment, the code for this example would more likely be included as part of the FeatureActivated method of an SPFeatureReceiver object.
using System;
using Microsoft.SharePoint;

namespace Test
{
    class Program
    {
        static void Main(string[] args)
        {
            using (SPSite siteCollection = new SPSite(""))
            {
                using (SPWeb site = siteCollection.OpenWeb())
                {
                    // Get a content type.
                    SPContentType ct = site.AvailableContentTypes["Financial Document"];

                    // The content type was found.
                    if (ct != null)
                    {
                        // Get a list.
                        try
                        {
                            SPList list = site.Lists["Shared Documents"]; // Throws exception if does not exist.

                            // Make sure the list accepts content types.
                            list.ContentTypesEnabled = true;

                            // Add the content type to the list.
                            if (!list.IsContentTypeAllowed(ct))
                                Console.WriteLine("The {0} content type is not allowed on the {1} list",
                                                  ct.Name, list.Title);
                            else if (list.ContentTypes[ct.Name] != null)
                                Console.WriteLine("The content type name {0} is already in use on the {1} list",
                                                  ct.Name, list.Title);
                            else
                                list.ContentTypes.Add(ct);
                        }
                        catch (ArgumentException ex) // No list is found.
                        {
                            Console.WriteLine("The list does not exist.");
                        }
                    }
                    else // No content type is found.
                    {
                        Console.WriteLine("The content type is not available in this site.");
                    }
                }
            }
            Console.Write("\nPress ENTER to continue...");
            Console.ReadLine();
        }
    }
}
Writing Custom Eto forms in Python
Using the Eto dialog framework to create custom dialogs.
Overview
Eto is an open source cross-platform user-interface framework available in Rhino 6. Eto can be used in Rhino plug-ins, Grasshopper components, and Python scripts to create dialog boxes and other user-interface features.
Rhino.Python comes with a series of pre-defined user interface dialogs, which can be used when a simple dialog box is needed. But if these pre-defined dialogs do not provide enough functionality, then creating a custom dialog box using Eto may be the right solution.
For example, here is a custom, collapsing dialog that uses many controls:
The Eto framework allows creation of the dialog box, the controls, and the actions and events required to make the form functional.
Eto is a powerful, full-featured user-interface toolkit. Understanding how best to write, organize, and use Eto dialogs will take some work.
This guide will cover the basics and best practices of creating Eto dialogs in Rhino.Python. Some of the syntax may seem a little onerous, but in practice the following methods allow Eto code to efficiently be managed in Rhino.Python scripts.
The Eto framework
Conceptually, an Eto dialog can be thought of as a set of layers:
Learning how to code each of these layers is key to learning Eto:
- Custom Dialog Class - Extending the Eto Dialog/Form class is the best way to create a dialog.
- The Dialog Form - The Dialog/Form is the base container.
- The Controls - Controls, such as labels, buttons and edit boxes, can be created and then placed in a layout.
- The Layout - Within each form, a layout is used to position the controls.
- Control delegates - Delegate actions are the methods that are executed when a control is clicked, edited or changed. Delegate actions must be bound to specific controls to specify which methods are run when control events occur.
Thinking about these dialog parts as layers can help keep the code organized. As an example of the layered approach, here is a simple Eto dialog with a few controls.
The rest of this guide will cover the sections of the code in much more detail:
# Imports
import Rhino
import scriptcontext
import System
import Rhino.UI
import Eto.Drawing as drawing
import Eto.Forms as forms

# SampleEtoRoomNumber dialog class
class SampleEtoRoomNumberDialog(forms.Dialog[bool]):

    # Dialog box Class initializer
    def __init__(self):
        # Initialize dialog box
        self.Title = 'Sample Eto: Room Number'
        self.Padding = drawing.Padding(10)
        self.Resizable = False

        # Create controls for the dialog
        self.m_label = forms.Label(Text = 'Enter the Room Number:')
        self.m_textbox = forms.TextBox(Text = None)

        # Create the default button
        self.DefaultButton = forms.Button(Text = 'OK')
        self.DefaultButton.Click += self.OnOKButtonClick

        # Create the abort button
        self.AbortButton = forms.Button(Text = 'Cancel')
        self.AbortButton.Click += self.OnCloseButtonClick

        # Create a table layout and add all the controls
        layout = forms.DynamicLayout()
        layout.Spacing = drawing.Size(5, 5)
        layout.AddRow(self.m_label, self.m_textbox)
        layout.AddRow(None) # spacer
        layout.AddRow(self.DefaultButton, self.AbortButton)

        # Set the dialog content
        self.Content = layout

    # Start of the class functions

    # Get the value of the textbox
    def GetText(self):
        return self.m_textbox.Text

    # Close button click handler
    def OnCloseButtonClick(self, sender, e):
        self.m_textbox.Text = ""
        self.Close(False)

    # OK button click handler
    def OnOKButtonClick(self, sender, e):
        if self.m_textbox.Text == "":
            self.Close(False)
        else:
            self.Close(True)

## End of Dialog Class ##

# The script that will be using the dialog.
def RequestRoomNumber():
    dialog = SampleEtoRoomNumberDialog()
    rc = dialog.ShowModal(Rhino.UI.RhinoEtoApp.MainWindow)
    if (rc):
        print dialog.GetText() # Print the Room Number from the dialog control

##########################################################################
# Check to see if this file is being executed as the "main" python
# script instead of being used as a module by some other python script.
# This allows us to use the module whichever way we want.
if __name__ == "__main__":
    RequestRoomNumber()
This script is split into 3 main sections:

- The import section, which includes all the assemblies needed for the script.
- The dialog class definition, SampleEtoRoomNumberDialog()
- The script itself, RequestRoomNumber()
Imports for Eto
Eto is a large assembly. For readability's sake, you need only import the most important portions:
import Rhino.UI
import Eto.Drawing as drawing
import Eto.Forms as forms
The Rhino.UI assembly is used to interface between Rhino and Eto. When using the dialog.ShowModal method, passing the Rhino.UI.RhinoEtoApp.MainWindow class allows the dialog to show as a child of the Rhino application.
Eto is a large namespace. The next two import lines access the most referenced portions of Eto: the Eto.Drawing namespace and Eto.Forms. The Eto.Drawing namespace contains specific classes that help with the graphic properties of objects. The Eto.Forms namespace contains the dialogs, layouts, and controls for a dialog. Using Python's renaming feature, the namespaces are shortened to drawing and forms.
Along the left column of the Python editor, the methods within the Eto assembly are listed. A detailed view of all the methods in Eto can be found in the Eto.Forms API Documentation.
Custom Dialog Class
The next section of the code creates a new class definition that extends the Dialog(T) class. Creating classes in Python requires some very specific syntax. While it may seem a little more complicated to create a class, the ability to reuse, import and interact with a class-based dialog in Python scripts is well worth the practice. The class contains the default layouts and actions of the dialog controls, and it also stores all the values of the controls while the script is running.
A dialog class is started with these lines:
class SampleEtoRoomNumberDialog(forms.Dialog[bool]):
In this case the new class is named SampleEtoRoomNumberDialog and inherits from the Eto class Eto.Forms.Dialog[bool]. The bool argument shows that a Boolean value is expected back from the dialog. This Boolean value can be used to tell whether the OK or Cancel button was hit when the dialog was exited. If more return values than True/False are needed back from a dialog, then a Dialog[int] or Dialog[string] might be needed.
Note, this guide will only cover the creation of modal dialogs, which require the user to respond before continuing the program. The other dialog types, semi-modal and modeless, are beyond the scope of this guide, but may be useful in future projects.
The Dialog Form
Once the new class is declared, the __init__ method is used to assign defaults to the new dialog object when it is created. Python uses the self variable in class declarations to reference the class members in __init__. Think of self as a placeholder for the class name once the class is actually created in the script.
class SampleEtoRoomNumberDialog(forms.Dialog[bool]):

    def __init__(self):
        # Initialize dialog box
        self.Title = 'Sample Eto: Room Number'
        self.Padding = drawing.Padding(10)
        self.Resizable = False
The first section of the init is a few common properties that all dialogs have:
- self.Title - Sets the title of the dialog. This is a standard string.
- self.Padding - Sets a blank border area within which any child content will be placed. This requires the creation of an Eto.Drawing.Padding structure.
- self.Resizable - Whether the dialog box is resizable by dragging with the mouse. This is a True/False Boolean.
There are a few Padding formats that are accepted by Eto.Drawing.Padding. These match the margin and padding formats of standard CSS styling:
- Eto.Drawing.Padding(10) - 10 pixels of padding around all 4 sides.
- Eto.Drawing.Padding(10, 20) - A padding of 10 on the left and right and a padding of 20 on the top and bottom.
- Eto.Drawing.Padding(10, 20, 30, 40) - A padding of 10 on the left, 20 on the top, 30 on the right and 40 on the bottom.
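The overload rules mirror CSS shorthand. As a quick illustration of how the three forms expand to (left, top, right, bottom), here is a plain-Python sketch (not Eto code, just the expansion rule):

```python
def expand_padding(*args):
    """Expand CSS-style padding shorthand to (left, top, right, bottom)."""
    if len(args) == 1:            # Padding(10): same value on all 4 sides
        left = top = right = bottom = args[0]
    elif len(args) == 2:          # Padding(10, 20): (left/right, top/bottom)
        left = right = args[0]
        top = bottom = args[1]
    elif len(args) == 4:          # Padding(10, 20, 30, 40): each side explicit
        left, top, right, bottom = args
    else:
        raise TypeError("expected 1, 2 or 4 values")
    return (left, top, right, bottom)

print(expand_padding(10))              # (10, 10, 10, 10)
print(expand_padding(10, 20))          # (10, 20, 10, 20)
print(expand_padding(10, 20, 30, 40))  # (10, 20, 30, 40)
```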
By default the dialog will automatically adjust its size to the contents it contains, but an additional line can be added to set an initial size for the dialog using self.ClientSize:
self.ClientSize = drawing.Size(300, 400) #sets the (Width, Height)
The ClientSize property takes an Eto.Drawing.Size structure.
After the controls and a layout are created, the contents can be placed within the dialog by assigning the self.Content property, as is done near the end of __init__:
self.Content = layout
A dialog class will show up on the screen as modal. To close the dialog, a button is pressed. To close a dialog, use the self.Close method. It is common to do a little data checking before closing the dialog:
# Close button click handler
def OnCloseButtonClick(self, sender, e):
    self.m_textbox.Text = ""
    self.Close(False)
The self.Close method also returns False here, because the Cancel button was pressed to cause this event. The script will continue on based on the return value of the dialog.
Also, because we are using a class object to create the dialog, the dialog will still be in memory even after it is closed. This means the methods and values within the dialog continue to be available within the scope of the script, which may need to reference those values.
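This lifetime rule holds for any Python object: "closing" a dialog is just a method call, and the instance and its data stay reachable afterwards. A minimal plain-Python sketch (no Eto involved, names are invented for illustration):

```python
class MiniDialog(object):
    """Stand-in for a dialog: holds a value and a closed flag."""
    def __init__(self):
        self.text = ""
        self.closed = False

    def show(self, text):
        # Pretend the user typed something, then closed the dialog.
        self.text = text
        self.closed = True
        return True  # like self.Close(True)

dlg = MiniDialog()
rc = dlg.show("101")
# The 'dialog' is closed, but the object and its values are still in memory.
print(rc, dlg.closed, dlg.text)  # True True 101
```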
After creating the dialog framework, we will start to create some controls for the dialog.
The Controls
The business end of a dialog is the user-interface controls it displays. Controls include Labels, Buttons, Edit boxes and Sliders. In Eto, there are more than 35 different controls that can be created. For detailed information on these controls, see the Eto Controls in Python guide.
Controls normally need to be setup properly in a layout before they are added to a dialog.
Label Control
The simplest control is the Label control. It is simply a piece of text that is normally used to create a prompt or label, but a label also has many more properties in addition to Text. Additional properties include VerticalAlignment, HorizontalAlignment, TextAlignment, Wrap, TextColor, and Font. Properties can be added after the Text property by using a comma (,):
self.m_label = forms.Label(Text = 'Enter the Room Number:', VerticalAlignment = forms.VerticalAlignment.Center)
For a complete list of properties and events of the Label class, see the Eto Label Class documentation.
TextBox Control
Button Controls
Buttons are placed on almost every dialog. Buttons are created, then bound through their .Click event to run a method when the button is clicked.
A button can be assigned any name. Along with the name, the Text property can be set to the text displayed on the button:
# Create the default button
self.DefaultButton = forms.Button(Text = 'OK')
Once created, the button can then be bound to an event method (OnOKButtonClick) through the .Click event using the += syntax as follows:
# Create the default button
self.DefaultButton = forms.Button(Text = 'OK')
self.DefaultButton.Click += self.OnOKButtonClick
The bound method is run if the button is clicked on. The bound method is declared in the methods section later in the class:
# OK button click handler
def OnOKButtonClick(self, sender, e):
    if self.m_textbox.Text == "":
        self.Close(False)
    else:
        self.Close(True)
In this case, the handler checks the textbox before closing: if the textbox is empty the dialog closes returning False, otherwise it closes returning True.
This guide reviews only the most basic controls. To understand how to create and manage more controls with Python, see the Eto Controls in Python guide (TODO).
Once all the controls for the dialog are created then they can be placed in a layout to be positioned on a dialog.
The Layout
Layouts are used to size and place controls in a logical way in a dialog. They can generally be thought of as grid controls that adjust based on their contents. In the sample code, a new layout is created in this section of the code:
# Create a table layout and add all the controls
layout = forms.DynamicLayout()
layout.Spacing = drawing.Size(5, 5)
layout.AddRow(self.m_label, self.m_textbox)
layout.AddRow(self.DefaultButton, self.AbortButton)

# Set the dialog content
self.Content = layout
The code for the layout comes further down in the class definition, because it seems to make sense to create the controls first before placing them in a layout.
In this case a new dynamic layout object is created:
layout = forms.DynamicLayout()
The DynamicLayout is one of 5 layout types supported by Eto. The Dynamic layout is a virtual grid that can organize controls both vertically and horizontally. For a detailed look at layouts, go to the Eto Layouts in Python guide.
The spacing between controls in the layout is set by layout.Spacing on the line:
layout.Spacing = drawing.Size(5, 5)
The Eto.Drawing.Size(5, 5) sets the horizontal and vertical spacing between the controls to 5 pixels.
Placing Rows in Layouts
Once the layout type has been set up, controls can be placed. Controls are placed into Rows in the layout.
layout.AddRow(self.m_label, self.m_textbox)
layout.AddRow(self.DefaultButton, self.AbortButton)
Each row is added to the newly created Eto.Forms.Layout object using the .AddRow method. Each control that is added to a row is given a cell on that row. So if two controls are added, the row will contain two cells that control the placement of the controls. The controls will stretch to fill up the cells.
The Eto.Forms.DynamicLayout can position controls vertically and horizontally. Each vertical set of controls can be aligned with controls in previous horizontal sections, giving a very easy way to build forms. For more information see the Eto DynamicLayout documentation.
Using None in a Layout
Sometimes blank spacers are needed within a layout to help controls align properly, or to match the number of cells in the row above. A blank row may also be needed to allow the height of the layout to fill up the vertical space of the dialog. In Eto, using the None value allows for spacers in dialogs.
In the sample above a blank row is added between the controls:
layout.AddRow(None) # spacer
If the dialog box gets vertically taller, then the None row will expand to fill up the needed space.
None can also be used in a row as a horizontal spacer. For instance, the buttons could be dynamically justified to the right of the row by adding a None spacer at the start of the row:
layout.AddRow(None, self.DefaultButton, self.AbortButton)
The None cell will expand and contract to justify the buttons to the right.
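Conceptually, a None cell is an elastic element: fixed-size cells keep their size, and the spacers share whatever space is left over. A rough pure-Python sketch of that rule (not actual Eto layout code; the function name and the even-share policy are simplifying assumptions):

```python
def layout_row(total_width, cells):
    """cells: list of fixed pixel widths, or None for an elastic spacer.
    Returns the width assigned to each cell."""
    fixed = sum(c for c in cells if c is not None)
    spacers = cells.count(None)
    leftover = max(total_width - fixed, 0)
    share = leftover // spacers if spacers else 0
    return [share if c is None else c for c in cells]

# Two 80 px buttons pushed to the right edge of a 300 px row:
print(layout_row(300, [None, 80, 80]))  # [140, 80, 80]
```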
There are many options when using Layout, Rows and Cells with Eto to place controls. For more information on the details of using Layouts see the Eto Layout advanced Options with Python (TODO)
Control Delegates and Events
The last section of the Dialog class, in the example, is a series of class methods:
- Methods used to access the class members
- Method actions for binding to control events.
A common practice is to create a function that returns the value of a control you might want to get or set:
# Get the value of the textbox
def GetText(self):
    return self.m_textbox.Text
There is an unusual syntax here in the method declaration: the inclusion of (self) as an argument of the function. This is done in functions that are members of a class. As we learned before, the self variable is like a placeholder for the class that this method is a member of.
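This is ordinary Python, not something Eto-specific. A minimal class (unrelated to dialogs, names invented for illustration) shows the same pattern:

```python
class Room(object):
    def __init__(self, number):
        self.number = number   # stored on the instance via self

    def get_number(self):      # self is supplied automatically on calls
        return self.number

r = Room("101")
print(r.get_number())  # 101 -- r is passed in as self
```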
To use this method in a script the following syntax is used:
dialog = SampleEtoRoomNumberDialog()
rc = dialog.ShowModal(Rhino.UI.RhinoEtoApp.MainWindow)
if (rc):
    print dialog.GetText() # Print the Room Number from the dialog control
In this case, a new dialog is created with the name dialog. The dialog is shown and the return value is assigned to the rc variable. Then, based on the result of rc, the dialog.GetText() method is used to get the value of the textbox in the dialog, even though the dialog has already been closed.
Class methods also need to be created to handle events that may happen with controls in the dialog. Here is a function that will be used for the OK button:
# OK button click handler
def OnOKButtonClick(self, sender, e):
    if self.m_textbox.Text == "":
        self.Close(False)
    else:
        self.Close(True)
Again, here is an unusual syntax for the method declaration: (self, sender, e). This is the standard argument declaration for any function that will be bound to a control action. This OnOKButtonClick() method is bound to the OK button click with this code:
self.DefaultButton.Click += self.OnOKButtonClick
So now on every click the method will be called.
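The += here is .NET-style event subscription: the control keeps a list of handlers and calls each one when the event fires. To see how that mechanism works conceptually, here is a pure-Python sketch (not the real Eto implementation; the Event and Button classes are invented stand-ins):

```python
class Event(object):
    """Tiny stand-in for a .NET event: supports += and firing."""
    def __init__(self):
        self._handlers = []

    def __iadd__(self, handler):
        # += appends a handler and returns the event itself
        self._handlers.append(handler)
        return self

    def fire(self, sender, e=None):
        for handler in self._handlers:
            handler(sender, e)

class Button(object):
    def __init__(self):
        self.Click = Event()

clicks = []
button = Button()
button.Click += lambda sender, e: clicks.append("clicked")
button.Click.fire(button)   # simulate a user click
print(clicks)               # ['clicked']
```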
There are many more events that methods may be bound to on controls, such as TextChanged, CheckedChanged, ValueChanged, etc. Review the Eto API documentation for the specific events supported by each control.
Using Eto Dialogs in Scripts
Once the class definition is set, the dialog is ready to be used in a script:
# The script that will be using the dialog.
def RequestRoomNumber():
    dialog = SampleEtoRoomNumberDialog()
    rc = dialog.ShowModal(Rhino.UI.RhinoEtoApp.MainWindow)
    if (rc):
        print dialog.GetText() # Print the Room Number from the dialog control
First a new class instance of the dialog is created:
dialog = SampleEtoRoomNumberDialog()
Once created, the dialog needs to be shown as a child of the Rhino application:
Because the dialog is modal, the script will continue to the next lines only after the dialog is closed. When dialog.Close is called, the dialog also returns a value that is assigned to rc.
The script continues along, checking the returned rc value and also referencing the dialog.GetText() value. Remember, even if the dialog is closed, the values of the dialog controls are still available.
Sample Dialogs
Now with some understanding of Eto Dialogs in Python, take a look at some of the Sample dialogs in the Python Developer Samples Repo: | https://developer.rhino3d.com/guides/rhinopython/eto-forms-python/ | CC-MAIN-2020-40 | refinedweb | 2,995 | 56.96 |
Hi all,
I have a project which requires a cpp library to link against a local libtorch dependency for inference. This cpp library is wrapped using pybind11. The same project also uses PyTorch (for training), which interacts with the cpp library.
LibTorch is linked like so in the CMakeLists.txt
target_link_libraries(${CPP_Library} PUBLIC ${TORCH_LIBRARIES})
I also set up a python virtual environment where I install PyTorch like so:
pip install torch
In Python, when I call
import torch, I get the linking error:
from torch._C import * # noqa: F403
ImportError: /home/<path-to>/venv/lib/python3.8/site-packages/torch-1.10.2-py3.8-linux-x86_64.egg/torch/lib/libtorch_python.so: undefined symbol: _ZNK5torch3jit5Graph8toStringEb
Both pytorch and libtorch are the same version 10.2 (CUDA).
Is there a recommended way of using both libtorch and pytorch in the same project?
Thank you! | https://discuss.pytorch.org/t/linking-error-due-to-using-libtorch-and-pytorch-in-same-project/143406 | CC-MAIN-2022-33 | refinedweb | 143 | 56.45 |
In ABAP we can define a static attribute for a class via the keyword CLASS-DATA, whose validity is not associated with instances of a class but with the class itself. In order to prove this fact I use the following simple Point class for demonstration:
class ZCL_POINT definition
  public
  final
  create public .

  public section.

    data X type I .

    methods CONSTRUCTOR
      importing
        !IV_X type I
        !IV_Y type I .

  private section.

    data Y type I .
    class-data COUNT type I .
ENDCLASS.

CLASS ZCL_POINT IMPLEMENTATION.

  method CONSTRUCTOR.
    me->x = iv_x.
    me->y = iv_y.
    count = count + 1.
  endmethod.
ENDCLASS.
In this class, static attribute count is responsible to maintain the number of created Point instances.
Then create four point instances:
data(a) = new zcl_point( iv_x = 1 iv_y = 1 ).
data(b) = new zcl_point( iv_x = 1 iv_y = 2 ).
data(c) = new zcl_point( iv_x = 1 iv_y = 3 ).
data(d) = new zcl_point( iv_x = 1 iv_y = 4 ).
Via any variable of a, b, c or d, we can monitor the value of count in debugger.
Can we access the static attribute of a class without an object instance in the debugger?
Since in theory the static attribute belongs to the class instead of any dedicated object instance, a question comes up: is there an approach to monitor the static attribute value in the ABAP debugger directly from the class instead? Yes, it is possible.
1. Type the text "{C:ZCL_POINT}" in the debugger and press the Enter key.
2. double click, and you can see the attribute value is directly maintained in class ZCL_POINT, without any object instance created on top of it.
I also try to change its visibility dynamically via the class descriptor with the following code, and it is actually not possible:
data(lo) = CAST cl_abap_objectdescr( cl_abap_classdescr=>describe_by_name( 'ZCL_POINT' ) ).

read TABLE lo->attributes ASSIGNING FIELD-SYMBOL(<count>) WITH KEY name = 'COUNT'.
CHECK SY-SUBRC = 0.

<count>-visibility = 'U'.
This is because the structure is read-only and not editable outside cl_abap_objectdescr. This makes sense, as otherwise the encapsulation would be violated. Just check the many other attributes marked as read-only in the class/object descriptor classes.
Reflection in Java
Check the following code which demonstrates how to access private static attribute value in code via Reflection.
import java.lang.reflect.Field;

public class Point {
    private int x;
    private int y;
    static private int count = 0;

    public Point(int x, int y){
        this.x = x;
        this.y = y;
        count++;
    }

    private static void accessStaticPrivate(Point point){
        Class classObject = point.getClass();
        try {
            Field countField = classObject.getDeclaredField("count");
            System.out.println("count: " + countField.get(point));
        } catch (NoSuchFieldException | SecurityException |
                 IllegalArgumentException | IllegalAccessException e1) {
            e1.printStackTrace();
        }
    }

    public static void main(String[] arg){
        Point a = new Point(1,2);
        accessStaticPrivate(a);
        Point b = new Point(1,3);
        accessStaticPrivate(b);
        Point c = new Point(1,4);
        accessStaticPrivate(c);
        Point d = new Point(1,5);
        accessStaticPrivate(d);
    }
}
For ABAPer it is easy to understand the usage of Class object in Java by just comparing it with CL_ABAP_CLASSDESCR in ABAP.
When running this small program locally, you will get output in console:
count: 1
count: 2
count: 3
count: 4
Unlike RTTI in ABAP, Java reflection can sometimes lead to security issues; see one example of how a Java singleton can be bypassed in the blog Singleton bypass – ABAP and Java.
Hi Jerry,
the ABAP part where you attempt to change the visibility of the private field is a bit confusing: You seem to expect that changing the visibility of an attribute in the class descriptor would have the effect of making the attribute public, but the class descriptors returned by the RTTI services have no such relation to the actual classes.
What you’re running into is simply that CL_ABAP_OBJECTDESCR declares its attibute ATTRIBUTES as READ-ONLY. Almost all the public attributes of the RTTI classes are read-only, simply because changing them does not make sense: They are descriptions of certain types as they exist in the system, and changing these descriptions would only mean that you have messed up the description, not that you have actually changed anything about the object. | https://blogs.sap.com/2017/04/04/try-to-access-static-private-attribute-via-abap-rtti-and-java-reflection/ | CC-MAIN-2019-35 | refinedweb | 659 | 55.24 |
Hi,
I have made a design at work using the EMX module, but thought that I wanted to do a private project.
I wanted to play with some of the modules available using Gadgeteer. I got a FEZ Cobra II NET and upgraded my system to the NETMF and Gadgeteer Package 2013 R2.
I tried to make my first project and added a couple of the modules I bought, but am running into some problems. Adding the light sensor to socket 4 is no problem, but when adding the temperature and humidity sensor I get the following error:
This module installer was not correctly set up, it does not provide a compatible assembly for .NET Micro Framework version 4.2
And under errors:
Error 1 The type or namespace name ‘Seeed’ does not exist in the namespace ‘Gadgeteer.Modules’ (are you missing an assembly reference?)
Program.generated.cs 25 35 GadgeteerApp1
This is in the auto-generated file, so something is missing from my system. Any tips or ideas would be appreciated. I have tried to search for Seeed but can't find any module or references to it.
-Thomas | https://forums.ghielectronics.com/t/fez-cobra-ii-and-seeed-problem/14616 | CC-MAIN-2020-50 | refinedweb | 185 | 63.29 |
SQL Server 2005 Compact Edition Data Access with the SqlCeResultSet and Visual Basic.NET
Microsoft Corporation
January 2007
Applies to:
Microsoft Visual Studio 2005
Microsoft SQL Server 2005 Compact Edition
Estimated time to complete: 30 minutes
Tutorial Objective

In this tutorial, you will complete the following exercise:
- Using the SqlCeResultSet to access and update SQL Server 2005 Compact Edition data
Using the SqlCeResultSet to Access and Update SQL Server 2005 Compact Edition Data
In this exercise, you will learn how to use the SqlCeResultSet, a powerful cursor-based data-access implementation, to access data from SQL Server 2005 Compact Edition.

To begin, create a new project in Visual Studio, name it ResultSetDemo, and then click OK, as shown in Figure 1.
To set up the project form for data binding
- In Visual Studio, click View | Toolbox to open the form designer toolbox.
- In the toolbox, expand Data if it is not already expanded.
The data-related controls are displayed, as shown in Figure 2. You will be using the BindingSource and DataGridView controls.
Figure 2. The Data controls in the toolbox
- Double-click the BindingSource control to add BindingSource1 to the form designer's component tray, as shown in Figure 3. This control will be bound to a cursor over a SQL Server Compact Edition database.
Figure 3. The BindingSource1 control in the component tray
- In the toolbox, double-click the DataGridView control to add DataGridView1 to the form. Figure 4 shows the newly added DataGridView control. This control will display the results of the data-bound cursor.
Figure 4. The DataGridView1 control
The DataGridView control allows you to choose your data source in a number of ways. One of these ways is to use the smart-tags menu.
- If the smart-tags menu is not already visible, click the smart-tags arrow button at the top right of the DataGridView1 control to display the smart-tags menu.
- In the smart-tags menu, click the Choose Data Source drop-down button, and then click BindingSource1, as shown in Figure 5. This sets the data source of the DataGridView control to the BindingSource control that you just created.
Figure 5. The DataGridView1 control's smart-tags menu, showing the Choose Data Source options
- Still in the smart-tags menu, click Dock in parent container to cause the control to fill the entire form, even when the form is resized at run time, as shown in Figure 6.
Figure 6. The docked DataGridView1 control
Now, you are ready to add the data to the form. You will use the Northwind sample database that ships with SQL Server Compact Edition.
To set up the form for data binding
- In the Visual Studio Solution Explorer, right-click the ResultSetDemo project, and then click Add | Existing Item.
- In the Add Existing Item - ResultSetDemo dialog box, under Files of type, select All Files (*.*).
- Navigate to C:\Program Files\Microsoft Visual Studio 8\SmartDevices\SDK\SQL Server\Mobile\v3.0, select Northwind.sdf, and then click Add, as shown in Figure 7.
Note As an alternative to the above.
- Because you will be using the SqlCeResultSet instead of a typed DataSet, click Cancel to close the Data Source Configuration Wizard.
Now, you must add a reference to the ADO.NET provider for SQL Server Compact Edition.
Note If you had chosen to use the Data Source Configuration Wizard and typed DataSets, Visual Studio would have added this reference for you.
- In the Visual Studio Solution Explorer, right-click the ResultSetDemo project, and then click Add Reference.
- On the .NET tab of the Add Reference dialog box, locate and click System.Data.SqlServerCe, and then click OK, as shown in Figure 8.
Figure 8. Adding the System.Data.SqlServerCe project reference
Visual Studio adds a reference to the System.Data.SqlServerCe assembly. Now, you will add the code necessary to connect to and load the data from the SQL Server Compact Edition database.
To add data binding code to the form
- In Visual Studio, in the Form1 form designer, double-click the form's title bar to open the Form1 class's code view and create the Form1_Load event handler, as shown in Figure 9.
- At the top of the file, before the Public Class Form1 declaration, add an Imports statement for the System.Data.SqlServerCe namespace whose assembly you referenced in the last task, as shown in the following code example.
Imports System.Data.SqlServerCe
This allows you to reference members of the namespace in your code without fully qualifying them with the prefix System.Data.SqlServerCe.
- Inside the Form1 class declaration, but outside any other members, declare a private variable of type SqlCeConnection named _conn, as shown in the following code example.
Private _conn As SqlCeConnection
- Now, you must create a constructor for the Form1 class, which is very simple in Visual Basic.NET. After the _conn variable you just declared, on a new line, type Public Sub New, and then press ENTER.
Visual Studio fills in the details of the constructor for you, which should look like the following code example.
Public Sub New()
    ' This call is required by the Windows Form Designer.
    InitializeComponent()

    ' Add any initialization after the InitializeComponent() call.
End Sub
- Inside the new constructor, in place of the comment that follows the InitializeComponent call that Visual Studio has put in place, create a new instance of the SqlCeConnection class and assign it to the _conn variable, as shown in the following code example.
_conn = New SqlCeConnection()

To make application deployment easier, SQL Server Compact Edition provides the DataDirectory substitution string that automatically provides the appropriate data directory for your application.
- Inside the parentheses of the SqlCeConnection instance that you just assigned to the _conn variable, type the connection string "Data Source = |DataDirectory|\Northwind.sdf", so that the line of code that you just typed now looks like the following code example.
_conn = New SqlCeConnection("Data Source = |DataDirectory|\Northwind.sdf")
- After the line you just added and modified that assigns to the _conn variable, set the DataGridView1 control's AutoGenerateColumns property to True, as shown in the following code example.

DataGridView1.AutoGenerateColumns = True
- Locate the private Form1_Load event handler. It is currently empty.
- Inside the Form1_Load event handler, create a New instance of the SqlCeCommand class and assign it to a variable named cmd, as shown in the following code example.
Dim cmd As New SqlCeCommand()
- Next, assign the _conn variable that you defined above to the Connection property of the cmd object, as shown in the following code example.

cmd.Connection = _conn
- Before launching the Query Designer, prepare a line of code for the SELECT statement by assigning an empty string to the cmd object's CommandText property, as shown in the following code example.
cmd.CommandText = ""
Now, you will use the Query Designer to generate the SELECT statement that you will paste into the empty quotation marks in the code you just typed.
- In Solution Explorer, double-click Northwind.sdf.
Visual Studio adds a database connection and displays it in the Server Explorer, showing the Northwind.sdf database and its contents, as shown in Figure 10.
Figure 10. The Northwind.sdf SQL Server Compact Edition database in the Visual Studio Solution Explorer
- In Server Explorer, expand Northwind.sdf | Tables | Employees to view the Employees table's schema.
- Now, right-click on Employees, and then click New Query to open the Query Designer.
- Add the Employees table to the Query Designer by clicking Employees in the Add Table dialog box, and then clicking Add, as shown in Figure 11.
Figure 11. Adding the Employees table to the query
- Click Close.
In order to make the result set updateable, you must include the table's primary key column, Employee ID, in the query.
- Check the box next to Employee ID, as shown in Figure 12.
Figure 12. Adding the Employee ID column to the query (Click on the picture for a larger image)
The [Employee ID] column is added to the criteria pane and the SQL pane.
- Now, add the Last Name, First Name, and Photo columns by checking their boxes in the same way that you did in the previous step for the Employee ID column. You might need to scroll down in the Employees table to locate the Photo column.
- To execute and test the query, click Query Designer | Execute SQL on the Visual Studio menu.
The Results pane displays the selected data, as shown in Figure 13.
Figure 13. Executing the query (Click on the picture for a larger image)
- Highlight the SQL statement by right-clicking anywhere in the SQL pane of the Query Designer and then clicking Select All on the pop-up context menu, as shown in Figure 14.
Figure 14. Selecting the query text (Click on the picture for a larger image)
- Right-click the text that is now highlighted, and then click Copy on the pop-up context menu to copy the entire SELECT statement to the clipboard.
- Click File | Close to close the Query Designer and return to the Form1 class code view.
- Back in the Form1_Load event handler, on the last line of code that you typed to assign the cmd object's CommandText property an empty string, right-click directly between the two quotation marks, and then click Paste in the pop-up context menu that appears.
- Delete any extra white space, including the line break, which was pasted from the Query Designer so the SELECT statement appears all on one line, as shown in the following code example.
cmd.CommandText = "SELECT [Employee ID], [Last Name], [First Name], Photo FROM Employees"
Next, you must execute the command so you can get the results of the query. Before you can execute a SqlCeCommand object, you must open its connection.
- Open the _conn object, as shown in the following code example.
_conn.Open()
In order to open a direct result set over the database, you must create an instance of the SqlCeResultSet class.
- Declare a variable of type SqlCeResultSet named resultSet, as shown in the following code example.
Dim resultSet As SqlCeResultSet
- Assign to the resultSet variable the results of the query you designed by calling the cmd object's ExecuteResultSet method. In order to allow movement back and forth in the result set and to allow updates to the data, use the Or operator to pass the ResultSetOptions.Scrollable and ResultSetOptions.Updatable options to the ExecuteResultSet method, as shown in the following code example.
resultSet = cmd.ExecuteResultSet(ResultSetOptions.Scrollable Or ResultSetOptions.Updatable)
- Finally, bind the data to the grid by assigning the resultSet to the BindingSource1 control's DataSource property, as shown in the following code example.

BindingSource1.DataSource = resultSet
To test the application
- In Visual Studio, click Debug | Start Debugging to run your application.
- Click in the Last Name column of row 2 on the text Fuller.
- Change the last name by typing a new value in the cell, and then press TAB to move to the next cell.
The change is saved to the result set row buffer and will be saved to the database when you move to another row or close the form.
- Close the form to stop the application.
- To see that the data has been saved to the database, once again, click Debug | Start Debugging to run your application.
Notice that the application appears with the data that you changed, as shown in Figure 16, demonstrating that the data was saved to the SQL Server Compact Edition database file.
Figure 16. The modified SQL Server Compact Edition data
- Close the form to stop the application and the Visual Studio debugger.
To understand where the modified data is stored
- At the top of the Visual Studio Solution Explorer, click the Show All Files button to reveal all of the folders and files in your project directory.
Note You might need to hover your mouse pointer over the buttons to see their names. The button is outlined in Figure 17.
- In the Visual Studio Solution Explorer, locate the ResultSetDemo project node. Under the ResultSetDemo project node, expand the bin and Debug folders to reveal a copy of the Northwind.sdf file, among other files, as shown in Figure 17.
Figure 17. The Northwind.sdf file copy for the Debug build
- In Solution Explorer, locate and double-click the original copy of Northwind.sdf to open it in Server Explorer. Be sure to double-click the original copy, and not the hidden one that you just revealed.
- In Server Explorer, expand Data Connections | Northwind.sdf | Tables, if necessary.
- Right-click the Employees table, and then click Open, as shown in Figure 18.
Figure 18. Opening the original Employees table
- Notice that the data has not changed; it still contains the original values, as shown in Figure 19.
Figure 19. The original Employees data
- Click File | Close to close the table.
- In Solution Explorer, now locate and double-click the previously hidden copy of Northwind.sdf to open it in Server Explorer. Be sure to double-click the copy under the Debug folder, and not the original one.
- In Server Explorer, expand Data Connections | Northwind.sdf1 | Tables, if necessary.
- Right-click the Employees table, and then click Open.
- Notice that this is the data that you changed, as shown in Figure 20.
Figure 20. The modified Employees data
- Click File | Close to close the table.
- In Solution Explorer, right-click the original Northwind.sdf file that you added to the project, and then click Properties to view the file's properties, as shown in Figure 21.
Conclusion
In this tutorial, you performed the following exercise:
- Using the SqlCeResultSet to access and update SQL Server 2005 Compact Edition data
Name | Synopsis | Description | Return Values | Errors | Attributes | See Also | Notes
#include <unistd.h>

pid_t vfork(void);

The vfork() function creates a new process just as fork(2) does, except that the child process borrows the parent's address space: the parent is suspended until the child calls exec(2) or _exit(2), and memory is shared between the two processes in the meantime. Because the address space is not copied, vfork() can be faster than fork() when the new process will immediately exec a new program. It does not work, however, to return while running in the child's context from the procedure that called vfork(), since the eventual return from vfork() would then return to a no-longer-existent stack frame.
Upon successful completion, vfork() returns 0 to the child process and returns the process ID of the child process to the parent process. Otherwise, -1 is returned to the parent process, no child process is created, and errno is set to indicate the error.
The vfork() function will fail if:
EAGAIN
The system-imposed limit on the total number of processes under execution (either system-wide or by a single user) would be exceeded. This limit is determined when the system is generated.
ENOMEM
There is insufficient swap space for the new process.
package org.mr.core.groups;

/**
 * Holds the information about a multicast group and serves as a key in the groups map.
 *
 * @author Amir Shevat
 */
public class GroupKey {
    // the multicast address
    private String groupIP;
    // the multicast port
    private int groupPort;

    public GroupKey(String groupIP, int groupPort) {
        this.groupIP = groupIP;
        this.groupPort = groupPort;
    }

    /**
     * @return Returns the groupIP.
     */
    public String getGroupIP() {
        return groupIP;
    }

    /**
     * @return Returns the groupPort.
     */
    public int getGroupPort() {
        return groupPort;
    }

    public int hashCode() {
        return groupIP.hashCode() + groupPort;
    }

    public boolean equals(Object obj) {
        if (obj instanceof GroupKey) {
            return ((GroupKey) obj).groupIP.equals(this.groupIP)
                && ((GroupKey) obj).groupPort == this.groupPort;
        } else {
            return false;
        }
    }
}
Assertions and exceptions have similar purposes; they both notify you that something is not right in your application. In general, you’ll want to reserve assertions for design-time debugging issues and use exceptions for runtime issues that might have an impact on users. Nevertheless, you should know how to use both tools to catch problems early.
When you use an assertion, you assert that a particular Boolean condition is true in your code and that you want the Microsoft .NET runtime to tell you if you’re wrong about that. For example, in the DownloadEngine component of Download Tracker, it doesn’t make sense to try to download a file from an empty URL. Ideally, I’ll never write code that attempts this; there should be validation of user input to prevent this situation. But if it does happen, through some mistake on my own part (or as a result of some undue sneakiness on the part of the user), I’d rather know immediately.
To use assertions in .NET, you import the System.Diagnostics namespace. Then you can use the Debug.Assert or the Trace.Assert method. For example, here's a snippet of code to verify the presence of some data in the SourceUrl property:
// This method should never be called without
// a Source URL
Debug.Assert(((d.SourceUrl != string.Empty) && (d.SourceUrl != null)),
    "Empty SourceUrl",
    "Can't download a file unless a Source URL is supplied");
The first argument to the Assert method is a Boolean expression that the Common Language Runtime (CLR) will evaluate at runtime. If it evaluates to true, then nothing happens, and the code keeps running with the next line in the application.
However, if the Boolean expression evaluates to false, the CLR halts execution and uses the other two arguments to construct a message to the developer. I can test this with the aid of a quick little harness application. Figure 4.1 shows the result of calling into this method with an empty SourceUrl property.
Figure 4.1: Assertion at runtime
Notice the truly horrid caption of the message box that displays the assertion, which tells you that the buttons do something other than what their captions might lead you to believe. Apparently Microsoft decided to overload the existing system message box API rather than design some custom user interface for assertions. Don’t do this in your own code!
As I mentioned earlier, you can use either the Trace.Assert or the Debug.Assert method in your code. These classes are identical, except that the Debug class is active only if you define the DEBUG symbol in your project, and the Trace class is active only if you define the TRACE symbol in your project. These constants are commonly defined in the Build section of the project properties (available by right-clicking on the project node in Solution Explorer and selecting Properties), as shown in Figure 4.2.
Figure 4.2: Defining build constants
By default, the Debug configuration defines both the TRACE and DEBUG constants, and the Release configuration defines the TRACE constant. Thus, Trace.Assert is active in both default configurations, whereas Debug.Assert is active only in the Debug configuration. I tend to use Debug.Assert because I usually reserve assertions for design-time debugging rather than end-user error logging. Because assertions are an aid for the developer rather than the end user, I don’t believe they have a place in release code. Instead, code you release to other users should implement an error-handling and logging strategy that precludes the necessity for any assertions.
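The same constants can also be defined per source file or on the compiler command line rather than through the project properties dialog. A small illustrative sketch (the class name is mine, not from the book):

```csharp
#define DEBUG   // must appear before any other code in the file
#define TRACE

using System;
using System.Diagnostics;

class ConstantsDemo
{
    static void Main()
    {
        // Active because DEBUG is defined at the top of this file
        Debug.Assert(2 + 2 == 4, "Arithmetic is broken");
        Console.WriteLine("DEBUG and TRACE are both defined in this file.");
    }
}
```

The command-line equivalent is csc /define:DEBUG;TRACE ConstantsDemo.cs.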
The Trace and Debug classes can send their output to more than one place. There are special classes within the System.Diagnostics namespace called Listener classes. These classes are responsible for forwarding, recording, or displaying the messages generated by the Trace and Debug classes. The Trace and Debug classes have a Listeners property, which is a collection capable of holding objects of any type derived from the TraceListener class. The TraceListener class is an abstract class that belongs to the System.Diagnostics namespace, and it has three implementations:
DefaultTraceListener: An object of this class is automatically added to the Listeners collection of the Trace and Debug classes. Its behavior is to write messages to the Output window or to message boxes, as you saw in Figure 4.1.
TextWriterTraceListener: An object of this class writes messages to any class that derives from the Stream class. You can use a TextWriterTraceListener object to write messages to the console or to a file.
EventLogTraceListener: An object of this class writes messages to the Windows event log.
You can also create your own class that inherits from the TraceListener class if you want custom behavior. You could, for example, export all assertion results to an XML file. When doing so, you must at least implement the Write and WriteLine methods.
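A minimal sketch of such a listener, writing to a plain-text log (the class name and target path here are illustrative; an XML exporter would format each message before writing it out):

```csharp
using System.Diagnostics;
using System.IO;

public class FileTraceListener : TraceListener
{
    private readonly string _path;

    public FileTraceListener(string path)
    {
        _path = path;
    }

    // TraceListener requires at least these two overrides
    public override void Write(string message)
    {
        File.AppendAllText(_path, message);
    }

    public override void WriteLine(string message)
    {
        File.AppendAllText(_path, message + System.Environment.NewLine);
    }
}

// Route Debug output (including assertion failures) to the file:
// Debug.Listeners.Add(new FileTraceListener(@"C:\logs\asserts.log"));
```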
Most modern computer languages have some built-in assertion facility similar to the Trace and Debug support in .NET. Even if your language of choice doesn’t support assertions, it’s easy enough to add them by writing a small bit of code. In general, you’d use a structure something like this:
Public Sub Assert(Assertion As Boolean, Message As String)
    If Not Assertion Then
        MsgBox Message, , "Assertion Failed"
        Stop
    End If
End Sub
That is, if the asserted condition is false, you want to display the message and halt the program. Otherwise, the procedure will return to the calling code.
Generally, you don’t want end users to see the results of assertions. Ideally, you’ll fix any problems that lead to assertions firing before you ship the code, but even if you don’t manage to do that, you should develop a friendlier strategy for end users than just dumping stack traces on their screen. That’s why I suggest using Debug.Assert rather than Trace.Assert in most cases. Assuming that you’re shipping the release configuration to your users (and you should), debug assertions will simply vanish when you compile the code.
That leads to another guideline for the good use of assertions: Make sure assertions don’t have side effects. Assertions should check that something is true, not execute any other code. For example, here’s a bad use of an assertion:
// Make sure path is OK
Debug.Assert(
    (newPath = Path.Combine(foldername, filename)) != string.Empty,
    "Bad path to download");
See the problem? If you take out the assertion, then newPath will never get initialized. Putting executable statements into assertions is a good way to introduce such mysterious errors when you switch from debug to release builds.
Another mistake is to use assertions to verify that your compiler and language are working properly:
int[] intSizes = new int[3];
Debug.Assert(intSizes.GetUpperBound(0) == 2,
    "Failed to initialize array");
If you can’t trust your compiler to allocate three array members when you tell it to declare an array with three members, you might as well just give up now. No amount of assertions will make your applications any more reliable.
So what should you use assertions for? One excellent use for assertions is to verify that input data is reasonable. If you refer back to the first code snippet in this chapter, you’ll see that is what I’m doing with the SourceUrl argument. Note that I’m checking in the method that uses the value, not in the method that sets the value. By placing the assertion here, I can make sure that any code that calls this method is using reasonable values.
Another good use for assertions is to check your assumptions after a particularly involved or tricky piece of code executes. For example, you might make some calculations that are designed to yield a percentage based on a complex set of inputs. When the calculations have finished, you could use an assertion to verify that the final result is between zero and one.
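A sketch of such a post-condition check (CalculatePercentComplete is a hypothetical method, not part of Download Tracker):

```csharp
using System.Diagnostics;

public class ProgressCalculator
{
    // Hypothetical calculation whose result should be a fraction
    public static double CalculatePercentComplete(long received, long total)
    {
        double percent = (total == 0) ? 0.0 : (double)received / total;

        // Assert the assumption: the result must lie between zero and one
        Debug.Assert(percent >= 0.0 && percent <= 1.0,
            "Percentage out of range",
            "CalculatePercentComplete returned " + percent);

        return percent;
    }
}
```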
Exceptions represent another way to handle problems with your code. As the name implies, exceptions are for exceptional situations—primarily errors that cannot be avoided at runtime.
For example, consider the DownloadEngine component within Download Tracker. I’m developing this as a general-purpose library that any application can use. In particular, the application can pass in a URL to be downloaded. But there’s no way for the DownloadEngine component to prevent the calling application from passing in “Mary had a little lamb” instead of a valid URL. So what should the component do when that happens? That’s where exceptions come into play. An exception signals an exceptional problem in your code that will be handled in another part of your code. As an example, the DownloadEngine component uses this code structure to handle problems with the actual downloading process:
public void GetDownload(Download d)
{
    string filename = string.Empty;
    string foldername = string.Empty;
    WebClient wc = new WebClient();
    try
    {
        // Code to perform the actual download goes here
    }
    catch (Exception e)
    {
        // Bubble any exception up to caller, with custom info
        throw new DownloadException("Unable to download", d, e);
    }
    finally
    {
        wc.Dispose();
    }
}
You should already be familiar with the coding mechanics of exceptions in C#, but here’s a quick review just in case. The following four statements together make up the exception-handling mechanism:
The try statement indicates the start of an exception-handling block. If any code within the block raises an exception, that exception is handled by the associated catch statement.
The catch statement indicates the start of the actual exception handling. The CLR transfers execution here if an exception is raised in the associated exception-handling block.
The finally statement indicates the start of code that will always run, whether or not an exception occurs. Typically, the finally block is used for cleanup.
The throw statement is used to generate an exception. If you throw an exception in a catch block, it will be handled by the parent routine’s exception-handling block (if any).
So, the DownloadEngine component takes a relatively straightforward approach to exception handling. It catches any exception in its own operations, wraps this exception in a new custom exception (more on that in a moment), and throws it to the parent code. This is a typical pattern for library code, which doesn’t have a user interface to present errors directly to the end user. Ideally, every user interface component will validate the input so that such errors never happen. But as a library writer, you can never be quite sure that’s going to happen (unless the library is purely for internal use, and you have control over every single application that calls the library).
You’ll want to create custom exception classes in two situations:
There is no existing exception class that correctly represents the exceptional condition.
You want to pass additional information to the parent code with the exception.
In the case of the DownloadEngine component, I want to pass the actual failing Download object back to the caller so that it can determine which download failed. Thus, I’ve created a custom DownloadException class in my code.
Here are some guidelines that you should follow when creating a custom exception class:
Derive your exception classes from the System.Exception class.
End the name of custom exception class with the word Exception.
Implement three constructors with the signatures shown in the following code:
public class MyOwnException : System.Exception
{
    public MyOwnException() : base()
    {
    }

    public MyOwnException(string message) : base(message)
    {
    }

    public MyOwnException(string message, System.Exception e)
        : base(message, e)
    {
    }
}
Here’s the code for the DownloadException class. In addition to implementing the required constructors, it supplies two other constructors that accept a Download object.
/// <summary>
/// Custom exception class to handle download errors
/// </summary>
/// <remarks>
/// Additional fields:
/// d: The failing download to pass with the exception
/// </remarks>
public class DownloadException : System.Exception
{
    // Define the additional fields
    private Download _d;

    // Define read-only properties for the additional fields
    /// <summary>
    /// The associated Download object
    /// </summary>
    public Download d
    {
        get { return _d; }
    }

    /// <summary>
    /// Parameterless (default) constructor
    /// </summary>
    public DownloadException() : base()
    {
    }

    /// <summary>
    /// Constructor for an exception with text message
    /// </summary>
    public DownloadException(string message) : base(message)
    {
    }

    /// <summary>
    /// Constructor for an exception with text
    /// message and inner exception
    /// </summary>
    public DownloadException(string message, System.Exception e)
        : base(message, e)
    {
    }

    /// <summary>
    /// Constructor for an exception with text
    /// message and Download object
    /// </summary>
    public DownloadException(string message, Download d)
        : this(message)
    {
        _d = d;
    }

    /// <summary>
    /// Constructor for an exception with text
    /// message, inner exception and Download object
    /// </summary>
    public DownloadException(string message, Download d, System.Exception e)
        : this(message, e)
    {
        _d = d;
    }
}
In addition to the rules for creating exception classes, other guidelines you should keep in mind when using exceptions in your code include the following:
Exceptions are for exceptional situations. Don’t use exceptions as an alternative to other control-of-flow statements. As with other software tools, many developers have a tendency to overuse exceptions when they’re first exposed to the concept. Don’t, for example, use an exception to signal to the calling code that a download was completed successfully. That’s what the return value from your method is for. Alternatively, your method could raise an event for the calling code to handle.
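For instance, the success of a download should surface as a return value or an event rather than an exception. A sketch (this TryGetDownload variant and the DownloadComplete event are illustrative, not part of the chapter's DownloadEngine):

```csharp
using System;

public class DownloadEngineSketch
{
    // Hypothetical completion event for interested callers
    public event EventHandler DownloadComplete;

    public bool TryGetDownload(Download d)
    {
        bool succeeded = true;
        // ... perform the actual download, setting succeeded ...

        if (succeeded && DownloadComplete != null)
        {
            DownloadComplete(this, EventArgs.Empty);
        }

        return succeeded;   // normal outcomes flow back as return values
    }
}
```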
In general, you should use the exception constructors that include a string value for additional information. This approach lets the calling application display the string to the end user if it can’t figure out what else to do with the exception.
If you’re throwing an exception to pass error information from lower-level code to higher-level code, use one of the constructors that accepts an Exception object. This lets you wrap the exception that you received. The calling code can drill into that exception, if necessary, by using the InnerException property of your exception class.
Don’t create custom exceptions when the built-in ones will do. For example, you should use the build-in InvalidOperationException if calling code attempts to set a property before your class is completely initialized, or ArgumentException if inputs are out of the allowed range. | https://flylib.com/books/en/1.67.1.29/1/ | CC-MAIN-2018-39 | refinedweb | 2,357 | 53.51 |
No Nonsense XML Web Development with PHP
Posted by samzenpus on Wed Mar 15, 2006 12:46 PM
from the serious-development dept.
Alex Moskalyuk writes "PHP and XML seem like a marriage made in heaven. Powerful manipulation functions and support at the core language level in PHP5, combined with the universal extensibility of XML, make it a technology of choice for quite a few Web enthusiasts and companies out there. However, anyone inspired by PHP's ease of use will probably find a good cure for insomnia when faced with the XML specs. With all the DTDs, XML Schemas, XSLT, and XPath queries, it's easy to get the impression that the world is changing under you, and that perhaps sticking to hard-coded HTML with PHP statements combined with SQL statements for data retrieval would be within the zone of comfort." Read the rest of Alex's review.
Thomas Myer's No Nonsense XML Web Development with PHP is an XML primer for those who have been exposed to PHP, but are yet waiting to appreciate the elegance of PHP+XML solutions. Throughout 10 chapters and 2 appendices Myer is introducing the reader to different aspects of XML, their best-practice implementations in LAMP (where last P stands for PHP) environment, and their relevance to the real world. For the real-world example Myer decides to guide the reader through writing a custom content management system - complete with publishing/admin interface, templating/presentation layer, search engine, RSS feeds and other commonly expected features.
The book is not an introduction to PHP; it assumes the Web developer knows what XML is but has never dealt with it. So the first chapter just talks about properly parsing XML with IE and Firefox, validating an XML document, and the differences between a well-formed and a valid XML document. Overall, it provides a very good introduction to XML for someone who has never dealt with it, and could probably be skipped by developers with XML exposure.
Chapter 2, XML in Practice, goes into the nitty-gritty details of XML, and 26 pages later the reader knows how to create an XML file to display in the browser, declare proper namespaces, attach a CSS file to an existing XML file, and display the resulting XML+CSS file (look, Ma, no <html>!) in the browser. The author earns instant geek credibility by showing Firefox screenshots, with the exception of IE screenshots whenever IE itself is discussed. At the end of the chapter the author takes us through basic XSLT.
DTDs, XSLT, and writing a practical PHP app take up the next three chapters, followed by XML manipulation chapters. JavaScript enthusiasts will probably find Chapter 6 quite useful, as it discusses manipulating XML on the client side, working with XSLT, and creating dynamic site navigation based on the XML source. Chapter 7 is what one would expect from a book with the words PHP and XML in the title - a discussion of the SAX, DOM, and SimpleXML parsers, examples of their implementation, and the proper use cases for each of the technologies. The SimpleXML subchapter also contains a good primer on XPath - a query language that lets the developer give the parser a query to navigate down the XML document.
Chapter 8 takes the reader through RDF and RSS, discusses the ways the syndication feeds are used on the Web nowadays. Since throughout all these chapters we're building a content management system, this is the right time to add the RSS headlines functionality to the site. The next chapter discusses another practical implementation of XML on the Web - XML-RPC calls between the sites and proper ways of exchanging data via XML Web services. The chapter discusses SOAP, although not a whole lot, and just mentions REST as another way to implement Web Services. As a practical exercise, the author takes readers on a tour of building an XML-RPC client, server and connecting those two together.
The last chapter talks about using XML with databases. Native XML databases are discussed, but let's face it - most of the PHP development is done with relational databases anyway. Myer talks about exporting MySQL database contents into XML with phpMyAdmin and mysqldump. The first appendix includes function reference for SAX, DOM and SimpleXML parsing in PHP, while the second one completes the CMS project by providing the rest of the necessary files.
I found the author's style very easy to follow and approachable. The code samples are succinct and to the point, and there are no generic discussions such as "Why PHP?" The project chosen for the practical implementation is a bit boring, but at the same time quite real-world. The screenshots are clear, and code examples are nicely highlighted. Errata are provided on the book's Web site, and the code archive is available as a single-file download as well. The book site also provides a 100% money-back guarantee (less shipping and handling fees) to anyone who bought the title and didn't feel like they were getting their money's worth.
However, there are a few drawbacks that I noticed as well. With topics like XSLT and XPath broken into several chapters and discussed in smaller chunks, it's hard to use the book as a reference later on. Appendix A with PHP function reference for XML parsing hardly seems like a worthy addition, since PHP manual page on the subject contains equivalent information with more real-life examples contributed by users.
With all that, the book is quite informative, educational and useful. The author manages to tackle quite a few difficult topics in the 260 pages provided to him (the count excludes preface and appendices), and kudos to him for writing chapters on XML without sounding boring, redundant or too academic. I would highly recommend this book to anyone interested in developing PHP-driven Web sites that provide or consume Web services, work with XML data or generate XML for others to use.
You can purchase No Nonsense XML Web Development with PHP from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
Better matched, perhaps, than Perl & XSLT (Score:3, Insightful)
Re:Better matched, perhaps, than Perl & XSLT (Score:4, Informative)
Any recent install of Java will almost certainly have an XSLT processor on it; you just have to remember the magic incantation: java org.apache.xalan.xslt.Process -XSL [template] -IN [file]
Can't please everyone, can you? (Score:5, Interesting)
Michael, a web programmer, February 7, 2006,
Almost worthless.
Based on the title, one might presume that Myer and Marini wrote the book for people who are already familiar with PHP and XML and want to learn some advanced techniques for combining them. What he gets instead are long (relative to the book itself), superficial introductions to PHP and XML and tiny, trivial examples of their combination. Everything in the book is common sense to someone who already knows PHP and XML. What the book teaches to beginners, however, is effectively useless for its superficiality, so I'd discourage anyone, especially beginners, from reading this book, even if he receives it for free. Time also is too valuable to waste on this book. Read 'PHP and MySQL Web Development' by Luke Welling and Laura Thompson and 'XML 1.1 Bible' by Elliotte Rusty Harold. One can visit SitePoint's web site to find a list of their titles and then return to a vendor site to read product reviews. SitePoint books are generally sub-par. This book is no exception.
Somewhere, someone at bn.com is shaking their head, wondering if this "reader reviews" thing is all that good a deal after all.
(FWIW: I think the book looks like just what I need, with my n00b level of knowledge of PHP and XML but with hopes to put them together myself [magicnumbers.org], if I can just find the right feed.)
XML/XSLT is often more work than it's worth (Score:5, Insightful)
This isn't intended to be me bashing XML/XSLT, but more of a warning. If you plan to use these two, ensure you fully understand them and how they will tie into your site. I've found with OmniNerd that XML/XSLT solutions are very nice for the more static or semi-static content and that using PHP to generate XHTML directly from the database is better suited for dynamic content.
Whatever you choose to use though, good luck!
Re:XML/XSLT is often more work than it's worth (Score:5, Insightful)
XML is good for transferring data between systems. It is not good for storing data, which is what databases are for, or presenting data, which is what applications are for.
Re:XML/XSLT is often more work than it's worth (Score:4, Insightful)
Well, in the classroom you may be correct, but when you're looking for solutions, XML is often a better place to store static data than a database. A perfect example is on OmniNerd: when one of our articles gets Slashdotted, or we think it's going to be, we bypass the database and create a static copy of our article in XML. It's faster since no "thought" is required to query specific data, as it's all just there. The result has been that our server doesn't flinch when the massive wave of HTTP requests hits our site.
I also use it to store data for parts of the site that remain static. Why insert my FAQ into my database if it's not structured in a dynamic manner? It's far easier for me to go edit an XML file than run a bunch of queries, and we already mentioned the removed burden from the database.
Consider the alternative of storing it in an XHTML file. If I change the style of my site, then I have to update the XHTML file too as it's static. I can quickly translate the XML via XSLT with PHP, ASP, etc. There's no need to touch the data when I make a structural change. So given the static nature not requiring a database, the desire for easy updates, and the need to remove data from structure, I still choose XML.
So, yes, from a purist perspective it's for describing data. But from the perspective of someone trying to run a functional and effective site, it can be useful for storing certain data as well.
Re:XML/XSLT is often more work than it's worth (Score:4, Insightful)
XML is a key technology, and much underused by my profession, which still relies too much on FrameMaker, Word, and (God help us!) plain old HTML. But it's not the solution to every content management problem.
A compromise? (Score:5, Insightful)
Some quick advantages:
So even if you don't want to get into XML, XSLT, etc. then using the DOM for page generation is a much better solution than the traditional mixing HTML into PHP into files. The only qualifier to that I can think of is very small sites and when you don't have said libraries and such built up.
When else would hard coding HTML be preferred? I'm drawing a complete blank.
Re:A compromise? (Score:5, Insightful)
The downside to using the DOM as you describe is that you need to generate the whole document before you start sending it. For example, imagine if Slashdot used your approach - on a page with hundreds of comments, you'd have to wait for every last comment to be added to the DOM before you even started to send the headline to the user.
It's getting boring... (Score:2, Insightful)
- PHP saves granny from death
- How to build nuclear reactor using PHP and components from radioshack
- Reliable extraterrestial exploration using php.net functions reference comments
- PHP programmer cured from cancer, aids and herpes (acquired while trying to understand any basic computer science topic)...
- PHP Saves! Better than Jesus!
- PHP - a quick guide to shopping.
C'mon nerds - trying to manipulate XML with common PHP functions is like trying to hang a picture on your wall using McDonald's fries, an average-sized elephant and a twenty-year-old issue of Playboy magazine. OK, I have no problem using PHP for what it's intended for - quick, dirty and unmaintainable HTML generators occasionally attempting to simulate the functionality of even the most basic OO languages - but please, everything has its limits.
P.S. I occasionally do XML for living. And XSLs are simple.
XML data and HTML middleware (Score:3, Interesting)
A lot of the middleware that converts data to HTML and back can go away when you use the right XML tools. XSLT does a good job of presenting static pages, and it can be fast if you cache the results as well.
But for dynamic pages (and forms) XML to XSLT to HTML leaves some big gaps:
These are some of the reasons we updated the W3C HTML forms module [w3.org] to take account of XML data directly.
How does it fix the above problems?
Nice work if you can get it, you say? Well, as everyone knows Microsoft hasn't yet implemented XForms. (Heck, they haven't even implemented CSS, though we hear they do have it as a goal now.)
So what can you do today:
Here's a quick example:
Let's suppose you have a book list you want to view, available at [example.com]. If you want to display this data
This book is useless... (Score:1)
I was disappointed to find that the author barely used PHP5 DOM functions to support the CMS (if you can call it that) that you are being tutored through, because that is definitely one of PHP5's strong points. I was looking for a book with function references and some code examples; they were there, however, only for the SimpleXML class.
In reality, if you wanna build a PHP/XML CMS, you're going to need PHP 4.4+ and either write or use one of the many classes out there for actually parsing/writing XML.
Match made in heaven (Score:3, Funny)
Is that another way of saying that they deserve each other? *ducks*
Joy and Sorrow (Score:5, Insightful)
PHP is loosely typed, full of hacks (excellent hacks that make coding easier) and is great exactly because it allows the coder to be pretty careless and have the language look out for him as far as possible.
XML, on the other hand, is strict and harsh on the coder. Forgot to close a tag? Wrong character somewhere? Not got the tag order correct? Sorry, your entire tree fails parsing.
They just don't mix well, and it shows everywhere. I'm currently coding a PHP app using XML-RPC, and gosh is it convoluted. You've gotta cast practically everything into the special XML-RPC values and back out again. You'd expect the libraries to have functions doing that for you, but you'd be mistaken. On the average line stuffing together an XML-RPC call, the whole "new XML_RPC_VALUE" stuff takes up twice the space of the actual variables.
Doesn't mix well. Sorry, I like PHP a lot and XML is an excellent thing. But they just don't mix well.
So You Want To Learn PHP5 and XML... (Score:1)
if you know some cool php5 / DOM / XML / XPath / XQuery / XSLT scripts, be sure to visit and share for all the newbs.
i have set up a few example scripts to get you started. others are more knowledgeable than i and can answer more specific questions, too.
i've found learning the DOM to be a royal pita. php4 is very very different than php5. different versions of php5 behave very differently. my recommendation - get the latest and greatest php5 version. examples in the php online manual often don't work. there are undocumented capabilities.
as a trick, you will likely find more information about xml when googling "javascript xml tutorial" or "javascript DOM tutorial" - and then you can convert the code to php5 class style. it will often work.
it is pretty cool, but it is a pita if you are swinging in the dark. too much missing or bad information. not very many examples that actually work - or examples that work with php4 or earlier versions of php5 but won't work with php5. or examples that just plain don't work!
good luck!
Signed up just to say DON'T BUY IT! (Score:2, Informative)
Ok, so it's no good for beginner and too basic for anyone else...
One positive note though -- the introduction to designing and using DTDs is quite good, especially for newbies, but that alone is not worth the price of the book.
Purchasing this book also resulted in a regular flow of Sitepoint spam until I wrote an abusive email to their marketing department saying that I had most certainly never ticked any of their "Spam me to death" checkboxes and that I couldn't be any less interested in discounts on their books. Surprisingly, their response thanked me for the honest feedback.
At least it ended well
Read my own review of the book... (Score:1)
Here [raquedan.com]
It's brief but I hope it helps. It's a good book by the way.
Re:wut (Score:3, Insightful)
XML stands for Xtremely Media-hyped Language and PHP stands for Perl-Hater's Platform. They are both very overused and should be ignored from this point on. Oh crap. I guess I get a free downmod for going against Slashdot culture. Oh well.
No, you should get the downmod for posting a moronic comment that contains flamebait only with no facts or even anecdotes to back it up. You rightly deserve at least a -3 for such a comment.
Malice (Score:5, Funny)
Dude, calm down! Hating Perl is not something developers do out of malice. It's a bit more like the obvious conclusion a child draws about fire after getting burned for the first time. Of course there are also some people, like you for example, who enjoy pain....
Re:wut (Score:1)
I'm not saying XML isn't useful; I'm saying it's not that impressive as a "technology". It falls under, say, the Dewey Decimal system in terms of excessive usefulness to society.
PHP on the other hand ain't bad. It's a bit simpler than Perl and IMHO meant to be a bit more lightweight. In the grand scheme of things there is no reason why you couldn't SSI a perl script [hint: I did it with a CGI script that would parse my own special brew of perl/html].
Tom
It's not just key/value pairs. (Score:4, Informative)
In an XML file, I can throw in extra attributes or elements that won't be read by an old version of an app that wasn't looking for them. In a simple comma-separated-values layout, if I add something to the format, it's completely incompatible with previous versions.
The most complicated tools you have for comma-separated values are along the lines of cut and sed. When you have an XML document, you can convert it to *any* other XML format with a simple XSLT stylesheet (or, for that matter, into non-XML formats). SQL-SELECT-like statements can be represented with XPath, letting you select various fields of nodes which contain a certain attribute, act on them in a certain way, etc.
And anyway, would you look at an HTML document and say "it's just key-value pairs"? No! The order of elements, the hierarchy of data, etc., all make up the page as a whole. HTML was an application of SGML, which XML was derived from. Use XHTML if that last bit confuses you - it's not key/value pairs.
People have thrown the buzzwords at you because they're either really impressed with the technology, or because they're the kind of people that like buzzwords. Ignore the latter group of people, and try to focus on why those of us in the first group are singing its praises.
Re:wut (Score:4, Informative)
And BTW, XML is a tree format, not strictly key/value. And when you parse an XML file, you're never having to do direct text manipulation (which is error-prone). You're either receiving the information stored in the XML file as a series of events (SAX) or you're manipulating it via an object model (DOM).
Re:wut (Score:3, Insightful)
That said, as any Lisp programmer will tell you, tree-structured data is a Good Thing(TM). There's a reason why reading in input like: is complicated and fragile, whereas reading in input like: is so trivial that, well, I just typed this into DrScheme: and copy-pasted the second one into the input box, and DrScheme understood it perfectly.
Regexps are basically a hack to deal with data, like the first log file (which is what it actually looks like on my system), where the structure has been compressed/eliminated. In a perfect world, everything would be tree-structured, and none of those hacks would be necessary.
But wait... that's XML! It's harder to read than the parenthetical version, and slightly harder to parse (especially if there are attributes inside the XML tags), but the two are basically equivalent.
In Scheme, at least, you can build a generic XML-to-s-expression parser that will allow you to deal with any XML data that comes at you as easily as if it were parenthetical. And by generic, I mean that it can deal with any (well-formed) XML data ever. By contrast, regexps are fragile by definition. Even splitting along whitespace isn't always safe.
As far as PHP goes, I couldn't care less... it's both slower and less flexible than Scheme. What a combo! (Of course, Perl is too...
Re:wut (Score:4, Informative)
I think the parenthetical version and the XML version are about equal in terms of readability once you remember that any decent editor will have syntax highlighting to emphasise the text over the tags and that both versions will typically be split over multiple lines. Linebreaks don't really aid readability when you have short ending delimiters, but they do when you have longer ending delimiters.
The idea that XML is just a reinvention of s-expressions is quite popular, but this article [prescod.net] does a decent job of explaining how they differ.
Re:XML Web Services with PHP? (Score:2)
"Web services" is just a buzzword. There's no specific API you need to use. Your use would be called "REST" or "POX" by many web services people, so long as you use the right HTTP verbs (e.g. use DELETE not GET to delete things).
You aren't going to get significantly leaner than PHP if all you are doing is outputting XML. As for the choice of XML over JSON, CSV, YAML, etc, it really depends on what's going to be consuming your data. XML isn't the only solution, but it's probably the best supported and has the most mindshare. | http://books.slashdot.org/books/06/03/15/1432236.shtml | crawl-002 | refinedweb | 4,096 | 61.87 |
AIV GRB building import/Import plan
Contents
- 1 Goals
- 2 Schedule
- 3 Import Plan Outline
- 4 Import data
- 5 Data preparation
- 5.1 About the data
- 5.2 Data processing
- 5.3 Decision tree for building types
- 5.4 Source tags
- 6 Data merge workflow
- 7 QA
Goals
In Flanders, we have an approved on-going import of addresses from an open data government database called the CRAB (Central Reference Address Database). However, that database does not contain building outlines. The GRB import is now also approved.
The goal of this project is to complete all building outlines together with their addresses in Flanders in a high-quality fashion. To do this, we will use another open data government database alongside the CRAB, namely the GRB (Large scale Reference Database). You can read more about the GRB at AIV_GRB_building_import/Background. We will import outlines and addresses from the GRB, and then check addresses with CRAB. We will not simply import all data in large chunks but drip feed data, assuring maximum OSM object history retention and combining the GRB data with other sources and our own judgement.
So this is a manually verified, "assisted mapping" import, with a strong focus on data quality.
If you want to join, be sure to read about the whole project, then go to AIV_GRB_building_import/Instructions to see how it works.
Schedule
This import aims to be a model import: the objective is not to load the data into OSM as fast as possible, but with as high quality as possible. As such there can be no end date; we will take as long as we need to. This may be until the end of 2018, 2019, or even later. Once all buildings have been added, we will update them when the GRB is updated.
Data preparation
Data import preparation is complete and can be re-run in an automated manner to sync with both OSM and government datasets.
Communication with the local community has been done through the Riot Chat channel, and been summarized on the Belgian subsection of the OSM forums:
The data can be previewed here: , by clicking 'enable info window' on the top left and hovering over buildings to see the underlying data taken into account.
As a 'proof of concept' showing what the end result could look like, this link gives a slide-over comparison: moving the slider at the top from left to right shows the difference between a dataset with or without the buildings.
Community buy-in
The import has been discussed very extensively on the Belgian Riot chat channel and on the talk-be mailing list. The import has also been discussed on the imports mailing list [1]. Most of the criticism there has been at least partially addressed.
Data import
Data import is ongoing and will be for a long time.
Data validation
Given that this step is not automated, it's more about creating a 'how to' as a guideline. This work can be continued in parallel with the data import step.
Import Plan Outline
The import happens in three main steps:
- Data preparation
- Data insertion
- Data validation
This means: take all the building data from the GRB dataset (layers 'GBG' and 'GBA'), add height data from the 3D GRB LOD1 dataset, compare data with the current OpenStreetMap data (building on current tagging in OSM, and the landuse that's currently mapped for the location), and make a suggestion on what tags the building should get. This step is done 'behind the scenes' for the entirety of Flanders, so end users will have access to this data. It will not be recalculated on the fly, but will be periodically updated.
A front-end tool allows the user performing the import to select a (limited) area for which they wish to add the missing buildings. The data will be pulled from the pre-processed data from the first step, and inserted into OSM.
While the data is carefully prepared, it is still the importing party's responsibility to VERIFY the data. This implies that, to ensure proper data validation, the data insertion part needs to be limited to chunks small enough to be validated properly by humans.
Import data
Background
Data sources: GRB, CRAB and 3D GRB LOD1 DHMVII
Data licence: Flemish Gratis Open Data Licentie. GRB
Type of license (if applicable): custom, requires attribution (see #Licence below)
OSM attribution: Contributors#AIV, source on each changeset
ODbL Compliance verified: yes (done for separate CRAB import, this is the same licence)
The subsets to be imported from the GRB (the GRB calls them entities) are Gbg (buildings), Knw (man-made objects like bridges) and Gba (building attachments). A description of the GRB data is also available on this wiki page [2]
Licence
The GRB and CRAB are released under the Flemish Open Data Licence. It's a public domain with attribution style licence that is designed to be compatible with the UK Open Government Licence, the French Licence Ouverte, CC BY 3.0 and the Open Data Commons Attribution licence 1.0.
For the CRAB import it was established that this licence is compatible with the ODbL.
GRB
Licence grant on the old AGIV site (archived)
Licence grant on the Flemish government's website (archived) Unofficial translation by us:
Every natural person, each legal person and each group can use the GRB free of charge under the license “Free of charge open data license Flanders v1.02” (Gratis open data licentie Vlaanderen v1.02).
The sole condition is a requirement of attribution to the data set and its owner upon sharing or distributing (publishing etc.) the data. In concreto this means you should mention ‘Source: Large Scale Reference Database Flanders, AGIV[sic]’. You can read all conditions in the “Free of charge open data license Flanders”.
CRAB
Licence grant on the Flemish government's website (archived) Unofficial translation by us:
Every natural person, each legal person and each group can use the CRAB free of charge under the license “Free of charge open data license Flanders v1.0” (Gratis open data licentie Vlaanderen v1.0).
The sole condition is a requirement of attribution to the data set and its owner upon sharing or distributing (publishing etc.) the data. In concreto this means you should mention ‘Source: Informatie Vlaanderen’. You can read all conditions in the “Free of charge open data license Flanders”.
OSM data
The OSM data is created on the fly by the import tool built by Belgian community member Glenn Plas.
An example of the data returned by this tool can be obtained in GeoJSON by executing this command:
curl '' --compressed
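Assuming the endpoint returns a standard GeoJSON FeatureCollection (the property names below are illustrative of the tags described on this page, not a guaranteed schema of the tool), a consumer could summarize such a reply like this:

```python
import json

# A minimal GeoJSON FeatureCollection in the shape the export endpoint is
# assumed to return; property names follow the tagging plan on this page.
sample = '''
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "properties": {
        "building": "house",
        "source:geometry:ref": "Gbg/996978",
        "source:geometry:date": "2014-12-03"
      },
      "geometry": {
        "type": "Polygon",
        "coordinates": [[[4.70, 50.87], [4.71, 50.87], [4.71, 50.88], [4.70, 50.87]]]
      }
    }
  ]
}
'''

def summarize(geojson_text):
    """Return (feature count, set of building=* values) for a FeatureCollection."""
    data = json.loads(geojson_text)
    kinds = {f["properties"].get("building") for f in data["features"]}
    return len(data["features"]), kinds

count, kinds = summarize(sample)
print(count, kinds)  # 1 {'house'}
```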
Import type
This is a semi-automated import, we want the data to be verified by a human and merged with existing data.
JOSM and the Replace Geometry tool from its UtilsPlugin2 play a crucial role in our workflow.
The data will be updateable through the use of IDs that are added to each imported building.
Using the web tool we exclude buildings that are already imported (using the IDs added to imported objects) and take diffs in the web browser by combining JOSM features with Overpass API.
Data preparation
A member of the Belgian community, Glenn (Glenn Plas on osm), created a dedicated web platform to prepare GRB data. See the mapper instructions GRBimport/Instructions.
The toolset is split between a data parser and front-ends. There is a dev front-end (coded without a framework) and an ongoing production front-end (based on the Laravel framework). The data-handling side stands on its own and currently uses Terraform to launch a data cruncher on Google Cloud. The repository is here: . All repos will eventually be migrated to the OSM-BE GitHub account. This tool has two different branches: one does the actual data conversion; the other sets up a tile-serving PostGIS database. The latter removes all buildings in OSM and replaces them with GRB buildings, so it can generate tiles for a map showing what the final result would look like. The idea is not that you have to run this yourself, but the code is open for all the good reasons to open it.
Other parts are: the JSON API side to export the data from PostGIS, the addressing tool to apply the .dbf address data directly in the database using update queries, the dev site and the prod site interface.
The source code of the GRB tools is available at gplv2/grb2pgsql and gplv2/grbtool (deployed at). Code for the CRAB tool is at gplv2/aptum.github.io (deployed at).
About the data
To do: Refer to Background
Data processing
The data processing combines data from three sources:
- OpenStreetMap (all building contours, landuses and their attributes)
- GRB data (from layers Gbg, Gba and Knw + the OIDN fields and the source date)
- 3D GRB data (fields H_DTM_MIN, H_DTM_GEM, H_DSM_MAX, H_DSM_P99, HN_MAX and HN_P99)
In the context of the 3D GRB data, DTM stands for "Digital Terrain Model" and DSM for "Digital Surface Model". The difference between the DSM and the DTM yields the building height. The data source lists both the maximum value and a value beneath which 99% of the points are located. In turn those two allow distinguishing 'flat' from 'pointy' roof structures.
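As a sketch of that derivation (the exact field semantics are assumed from the description above, not taken from the GRB specification), height and roof shape could be computed like this:

```python
def building_height(h_dtm_gem, h_dsm_p99):
    """Height above the average terrain level (sketch).

    Using the 99th-percentile surface value instead of the maximum
    ignores outliers such as antennas or birds in the point cloud.
    """
    return round(h_dsm_p99 - h_dtm_gem, 1)

def roof_shape(hn_max, hn_p99, flat_tolerance=0.5):
    """Classify a roof as 'flat' or 'pointy' (sketch).

    If nearly all points reach the same height as the highest point,
    the roof is roughly flat; a large gap between the maximum and the
    99th percentile suggests a pointed or gabled roof. The tolerance
    value is an illustrative assumption.
    """
    return "flat" if (hn_max - hn_p99) <= flat_tolerance else "pointy"

print(building_height(h_dtm_gem=10.0, h_dsm_p99=16.3))  # 6.3
print(roof_shape(hn_max=6.2, hn_p99=6.0))               # flat
print(roof_shape(hn_max=9.8, hn_p99=6.5))               # pointy
```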
Data reduction and simplification
In the text below we will use the term "export" to mean loading data from the GRB tool in JOSM for integrating with existing OSM data. "The tool removes" will mean that the tool doesn't export certain data.
Overnoding in the GRB source is tackled in the web based tool. Importers can do further simplification with the Simplify Area plugin for JOSM.
Objects from different GRB layers are automatically "glued" together with common nodes when appropriate when they are exported.
Decision tree for building types
Building on the 'extended dataset' created by combining the data from three sources, there is a decision tree in place to figure out which tag should be suggested for building=*
For reference, the decision tree is as follows (in pseudocode):
To do: INSERT DECISION TREE
The result of that analysis can be previewed by toggling 'Enable Info Window' in the top bar and hovering over buildings.
Tagging plan
The import platform removes objects that clash with their OSM counterparts. As an example, GRB uses just two classifications of buildings: main buildings and non-main buildings. The tool will use a heuristic to guess the building type using the existing landuse it is in.
To do: elaborate on which building tag is exported by default. The importing mapper is expected to check and correct the building type using common sense, aerial imagery and street-level imagery [3] [4].
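Purely as an illustration of such a heuristic (this is NOT the tool's actual decision tree, and the landuse-to-tag mapping below is an assumption), it could have this general shape:

```python
# Illustrative landuse-to-building mapping; the real tool's table may differ.
LANDUSE_TO_BUILDING = {
    "residential": "house",
    "industrial": "industrial",
    "retail": "retail",
}

def guess_building_tag(grb_layer, landuse):
    """Suggest a building=* value (sketch, not the tool's real logic).

    grb_layer: 'Gbg' (main building) or 'Gba' (building attachment).
    landuse:   value of the surrounding OSM landuse polygon, or None.
    """
    if grb_layer == "Gba":
        return "shed"  # attachments are usually minor structures
    # Fall back to building=yes so the importing mapper decides.
    return LANDUSE_TO_BUILDING.get(landuse, "yes")

print(guess_building_tag("Gbg", "residential"))  # house
print(guess_building_tag("Gba", None))           # shed
print(guess_building_tag("Gbg", "military"))     # yes
```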
The web tool exports objects with the following tags:
- building=*
- man_made=bridge, man_made=silo, man_made=storage_tank, man_made=mast, man_made=chimney, man_made=water_tower, man_made=tower, tower:type=cooling, man_made=works
- highway=steps
- Source tags, explained below
Importing mappers add other tags manually where needed. In particular, building passages have to be handled manually, using the tag tunnel=building_passage on the way.
On the changesets, mappers are required to include the item GRB in the source tag.
We decided not to put a source=* tag on each individual object. It doesn't honour the history of the object, and when the importing mapper has done their job well, they have combined GRB with other sources, so source=GRB would not be correct.
The tool does however export some tags to allow coupling the GRB and OSM objects. This is absolutely necessary to maintain a stable link between the two. Using them we can track which buildings have been imported. Additionally, when the GRB is updated, the web tool can easily see whether the corresponding OSM objects have been updated.
After careful consideration and many debates, the tags were chosen with a source:geometry namespace:
- source:geometry:date=2014-12-03
- source:geometry:ref=Gbg/996978
All of those values are required to uniquely identify a version of an object in the GRB. What they mean in the GRB is explained in the sections below.
source:geometry:date
As far as we could see, this date represents the last time an object was updated. We don't know whether this is the date of measurement or of the actual update itself. What we do know for sure is that this will change if buildings are being re-measured and/or the structure is being expanded or modified. GRB has regular updates so we can use this to distil a list of buildings that have been updated. The good thing about this tag is that it's human readable.
It helps future mappers that look at the object to get an idea of how recent the data is. It can help to avoid edits based on older aerial pictures.
source:geometry:ref
This ref uniquely identifies an object in GRB. It's a concatenation of the GRB entity (the GIS layer), and the OIDN. The entity is essential as the OIDN is only unique per layer in the GRB data set.
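A minimal sketch of splitting such a ref back into its GRB entity and OIDN (assuming the `entity/oidn` format described above):

```python
def split_ref(ref):
    """Split a source:geometry:ref value into (entity, oidn)."""
    entity, oidn = ref.split("/", 1)
    return entity, int(oidn)

print(split_ref("Gbg/996978"))  # ('Gbg', 996978)
```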
Automated edits to retag old work to new data model
After feedback from the imports mailing list, we decided to simplify the data model of the tags referring back to the GRB data. Before:
source:geometry:date=2009-12-07
source:geometry:entity=Gbg
source:geometry:oidn=2155715
source:geometry:uidn=2440819
After:
source:geometry:date=2009-12-07
source:geometry:ref=Gbg/2155715
This is being done in a series of corrections based on this script. In accordance with the automated edits guidelines, this was discussed on talk-be and on the GRB Matrix channel. First real edit. The automated edit will also remove any source=GRB tags on objects, because this is implied by the other tags and is often factually incorrect: often not ALL the data on the building is GRB-sourced!
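The before/after mapping above is mechanical, so the automated edit boils down to a small tag rewrite. A minimal sketch of that rule (the class and method names here are illustrative; the real edit ran as a script against the OSM API, not this code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class GrbRetag {
    // Sketch of the retagging rule: merge entity + oidn into
    // source:geometry:ref, drop uidn, and drop source=GRB (implied).
    static Map<String, String> retag(Map<String, String> tags) {
        Map<String, String> out = new LinkedHashMap<>(tags);
        String entity = out.remove("source:geometry:entity");
        String oidn = out.remove("source:geometry:oidn");
        out.remove("source:geometry:uidn");
        if ("GRB".equals(out.get("source"))) {
            // implied by the source:geometry:* tags, and often inaccurate
            out.remove("source");
        }
        if (entity != null && oidn != null) {
            out.put("source:geometry:ref", entity + "/" + oidn);
        }
        return out;
    }
}
```

For the example above, the rewrite yields source:geometry:ref=Gbg/2155715 and leaves the date tag untouched.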
CRAB
To do: move to mapper instructions or QA
After using the GRB tool, the mapper should check the imported addresses using the CRAB tool. That tool uses Overpass to retrieve existing OSM addresses and displays missing or wrongly positioned ones according to CRAB, which contains higher-quality addresses; these are, however, point data and not linked to the GRB.
The CRAB tool is open and available at. This import was previously approved.
Data merge workflow
Team approach
All experienced mappers can join the import effort. They will be monitored, see the section #Revert plans.
We will organize meetups to guide users in real life to do importing right.
References
To do: what do they mean with List all factors that will be evaluated in the import?
Workflow
See the AIV_GRB_building_import/Instructions page.
Revert plans
A designated team of experienced mappers will be monitoring each mapper's first 32 GRB import changesets, and doing spot checks on later ones. They will intervene promptly: reverting immediately when the import's rules aren't followed, and banning people who don't follow them from the tool.
Bad changesets will be reverted as soon as they are detected, using Frederik Ramm's revert scripts.
Conflation
Using the JOSM plugin Replace Geometry, the way history of the existing buildings will be preserved. Importers receive clear instructions on how to deliver quality work, and are also instructed to use JOSM's validator before upload.
The GRB web tool applies rate limiting.
To do: add specifics. People failing to comply with the import's rules will be banned and their work reverted, as outlined in #Revert plans.
The end result will be better than both GRB and current OSM data.
When mistakes in source data are found, the AIV provides tools to notify them of those mistakes, so the mistakes can get corrected, and in the next data update, the differences between OSM and CRAB/GRB will be gone. The reaction time is dependent on the municipalities, but it's usually a few weeks. | https://wiki.openstreetmap.org/wiki/AIV_GRB_building_import/Import_plan | CC-MAIN-2020-05 | refinedweb | 2,673 | 59.74 |
I'm trying to do something very simple. I have to draw a box with a border on it both on the screen and also onto the printer graphics surface.
But the printer graphics object appears to behave differently. To test this out, I am using a PictureBox and a PrintDocument to draw exactly the same thing in exactly the same way:
private void printDocument1_PrintPage(object sender, System.Drawing.Printing.PrintPageEventArgs e)
{
    this.DrawSquare(e.Graphics);
    e.HasMorePages = false;
}

private void pictureBox1_Paint(object sender, PaintEventArgs e)
{
    this.DrawSquare(e.Graphics);
}

private void DrawSquare(Graphics graphics)
{
    int top = 100;
Creating Animation with Java
- Animating a Sequence of Images
- Sending Parameters to the Applet
- Workshop: Follow the Bouncing Ball
- Summary
- Q&A
- Quiz
- Activities
Whether you are reading this book in 24 one-hour sessions or in a single 24-hour-long-bring-me-more-coffee-can't-feel-my-hand-are-you-going-to-finish-that-donut marathon, you deserve something for making it all this way. Unfortunately, Sams Publishing declined my request to buy you a pony, so the best I can offer as a reward is the most entertaining subject in the book: animation.
At this point, you have learned how to use text, fonts, color, lines, polygons, and sound in your Java programs. For the last hour on Java's multimedia capabilities, and the last hour of the book, you will learn how to display image files in GIF and JPEG formats in your programs and present them in animated sequences. The following topics will be covered:
Using Image objects to hold image files
Putting a series of images into an array
Cycling through an image array to produce animation
Using the update() method to reduce flickering problems
Using the drawImage() command
Establishing rules for the movement of an image
Animating a Sequence of Images
Computer animation at its most basic consists of drawing an image at a specific place, moving the location of the image, and telling the computer to redraw the image at its new location. Many animations on Web pages are a series of image files, usually .GIF or .JPG files that are displayed in the same place in a certain order. You can do this to simulate motion or to create some other effect.
The first program you will be writing today uses a series of image files to create an animated picture of the Anastasia Island Lighthouse in St. Augustine, Florida. Several details about the animation will be customizable with parameters, so you can replace any images of your own for those provided for this example.
Create a new file in your word processor called Animate.java. Enter Listing 24.1 into the file, and remember to save the file when you're done entering the text.
Listing 24.1 The Full Text of Animate.java
 1: import java.awt.*;
 2:
 3: public class Animate extends javax.swing.JApplet
 4:     implements Runnable {
 5:
 6:     Image[] picture = new Image[6];
 7:     int totalPictures = 0;
 8:     int current = 0;
 9:     Thread runner;
10:     int pause = 500;
11:
12:     public void init() {
13:         for (int i = 0; i < 6; i++) {
14:             String imageText = null;
15:             imageText = getParameter("image"+i);
16:             if (imageText != null) {
17:                 totalPictures++;
18:                 picture[i] = getImage(getCodeBase(), imageText);
19:             } else
20:                 break;
21:         }
22:         String pauseText = null;
23:         pauseText = getParameter("pause");
24:         if (pauseText != null) {
25:             pause = Integer.parseInt(pauseText);
26:         }
27:     }
28:
29:     public void paint(Graphics screen) {
30:         super.paint(screen);
31:         Graphics2D screen2D = (Graphics2D) screen;
32:         if (picture[current] != null)
33:             screen2D.drawImage(picture[current], 0, 0, this);
34:     }
35:
36:     public void start() {
37:         if (runner == null) {
38:             runner = new Thread(this);
39:             runner.start();
40:         }
41:     }
42:
43:     public void run() {
44:         Thread thisThread = Thread.currentThread();
45:         while (runner == thisThread) {
46:             repaint();
47:             current++;
48:             if (current >= totalPictures)
49:                 current = 0;
50:             try {
51:                 Thread.sleep(pause);
52:             } catch (InterruptedException e) { }
53:         }
54:     }
55:
56:     public void stop() {
57:         if (runner != null) {
58:             runner = null;
59:         }
60:     }
61: }
Because animation is usually a process that continues over a period of time, the portion of the program that manipulates and animates images should be designed to run in its own thread. This becomes especially important in a Swing program that must be able to respond to user input while an animation is taking place. Without threads, animation often takes up so much of the Java interpreter's time that the rest of a program's graphical user interface is sluggish to respond.
The Animate program uses the same threaded applet structure that you used during Hour 19, "Creating a Threaded Program." Threads are also useful in animation programming because they give you the ability to control the timing of the animation. The Thread.sleep() method is an effective way to determine how long each image should be displayed before the next image is shown.
The Animate applet retrieves images as parameters on a Web page. The parameters should have names starting at "image0" and ending at the last image of the animation, such as "image3" in this hour's example. The maximum number of images that can be displayed by this applet is six, but you could raise this number by making changes to Lines 6 and 13.
The totalPictures integer variable determines how many different images will be displayed in an animation. If fewer than six image files have been specified by parameters, the Animate applet will determine this during the init() method, when imageText equals null after Line 15.
The speed of the animation is specified by a "pause" parameter. Because all parameters from a Web page are received as strings, the Integer.parseInt() method is needed to convert the text into an integer. The pause variable keeps track of the number of milliseconds to pause after displaying each image in an animation.
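Note that Integer.parseInt() throws a NumberFormatException if the parameter text is malformed, which would abort init(). A defensive variant (a sketch, not part of the book's listing; the helper name is made up) falls back to the default instead:

```java
public class ParamParser {
    // Returns the parsed value, or fallback when text is null or malformed.
    static int parseOrDefault(String text, int fallback) {
        if (text == null) {
            return fallback;
        }
        try {
            return Integer.parseInt(text.trim());
        } catch (NumberFormatException e) {
            return fallback;
        }
    }
}
```

With this helper, Line 25 could become pause = parseOrDefault(pauseText, pause);.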
As with most threaded programs, the run() method contains the main part of the program. A while (runner == thisThread) statement in Line 45 causes Lines 46-52 to loop until something causes these two Thread objects to have different values.
The first thing that happens in the while loop is a call to the applet's repaint() method. This statement requests that the applet's paint() method be called so that the screen can be updated. Use repaint() any time you know something has changed and the display needs to be changed to bring it up to date. In this case, every time the Animate loop goes around once, a different image should be shown.
NOTE
In Java, you can never be sure that calling repaint() will result in the component or applet window being repainted. The interpreter will ignore calls to repaint() if it can't process them as quickly as they are being called, or if some other task is taking up most of its time.
The paint() method in Lines 29-34 contains the following statements:

Graphics2D screen2D = (Graphics2D) screen;
if (picture[current] != null)
    screen2D.drawImage(picture[current], 0, 0, this);
First, a Graphics2D object is cast so that it can be used when drawing to the applet window. Next, an if statement determines whether the Image object stored in picture[current] has a null value. When it does not equal null, this indicates that an image is ready to be displayed. The drawImage() method of the screen2D object displays the current Image object at the (x,y) position specified.
NOTE
The paint() method of this applet does not call the paint() method of its superclass, unlike some of the other graphical programs in the book, because it makes the animated sequence look terrible. The applet's paint() method clears the window each time it is called, which is OK when you're drawing a graphical user interface or some other graphics that don't change. However, clearing it again and again in a short time causes an animation to flicker.
The this statement sent as the fourth argument to drawImage() enables the program to use a class called ImageObserver. This class tracks when an image is being loaded and when it is finished. The JApplet class contains behavior that works behind the scenes to take care of this process, so all you have to do is specify this as an argument to drawImage() and some other methods related to image display. The rest is taken care of for you.
An Image object must be created and loaded with a valid image before you can use the drawImage() method. The way to load an image in an applet is to use the getImage() method. This method takes two arguments: the Web address or folder that contains the image file and the file name of the image.
The first argument is taken care of with the getCodeBase() method, which is part of the JApplet class. This method returns the location of the applet itself, so if you put your images in the same folder as the applet's class file, you can use getCodeBase(). The second argument should be a .GIF file or .JPG file to load. In the following example, a turtlePicture object is created and an image file called Mertle.gif is loaded into it:
Image turtlePicture = getImage(getCodeBase(), "Mertle.gif");
NOTE
As you look over the source code to the Animate applet, you might wonder why the test for a null value in Line 31 is necessary. This check is required because the paint() method may be called before an image file has been fully loaded into a picture[] element. Calling getImage() begins the process of loading an image. To prevent a slowdown, the Java interpreter continues to run the rest of the program while images are being loaded.
Storing a Group of Related Images
In the Animate applet, images are loaded into an array of Image objects called picture. The picture array is set up to handle six elements in Line 6 of the program, so you can have Image objects ranging from picture[0] to picture[5]. The following statement in the applet's paint() method displays the current image:
screen2D.drawImage(picture[current], 0, 0, this);
The current variable is used in the applet to keep track of which image to display in the paint() method. It has an initial value of 0, so the first image to be displayed is the one stored in picture[0]. After each call to the repaint() statement in Line 46 of the run() method, the current variable is incremented by one in Line 47.
The totalPictures variable is an integer that keeps track of how many images should be displayed. It is set when images are loaded from parameters off the Web page. When current equals totalPictures, it is set back to 0. As a result, current cycles through each image of the animation, and then begins again at the first image. | http://www.informit.com/articles/article.aspx?p=30419&seqNum=6 | CC-MAIN-2017-30 | refinedweb | 1,713 | 60.45 |
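As an aside, the increment-and-reset in Lines 47-49 of the listing can be written more compactly with Java's remainder operator. A small sketch (the class name is invented for illustration):

```java
public class Cycler {
    // Wrap-around successor: equivalent to incrementing current and
    // resetting it to 0 once it reaches totalPictures.
    static int next(int current, int totalPictures) {
        return (current + 1) % totalPictures;
    }
}
```

Cycler.next(3, 4) returns 0, matching the wrap-around behavior, provided totalPictures is greater than zero.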
Hello to everyone. I have a problem to which I cannot find any answer. I tried to Google it but I didn't find anything which could help me.
So I have this small sequence of code:
class class1 {
    private int value;

    public class1() {
        this.value = 10;
    }

    public int getValue() {
        return this.value;
    }

    public int getAnotherValue(auxiliar aux) {
        return aux.visit(this);
    }

    public void setValue(int value) {
        this.value = value;
    }
}

class class2 extends class1 {
    public class2() {
        this.setValue(20);
    }
}

class auxiliar {
    public int visit(class1 c1) {
        System.out.println("not here");
        return c1.getValue();
    }

    public int visit(class2 c2) {
        System.out.println("here");
        return c2.getValue();
    }
}

public class Main {
    public static void main(String[] args) {
        class2 c2 = new class2();
        auxiliar aux = new auxiliar();
        System.out.println(c2.getAnotherValue(aux));
    }
}
the output is the following:
not here
20
I would like it to be:
here
20
I noticed that when I copy getAnotherValue(...) to class2, everything works as I want. But I don't think that's a smart approach. Also, I think the problem comes from using 'this' in class1's getAnotherValue(...), but I don't know how best to solve it.
What amazes me the most is the fact that if I add System.out.println(this.getClass().getName()); in the getAnotherValue(...) method, the output is class2.
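What the poster has run into is that Java chooses among overloaded methods at compile time, based on the declared (static) type of the argument, while getClass() reports the run-time class. Inside class1, this has declared type class1, so aux.visit(this) always binds to visit(class1), even when the object is really a class2. A stripped-down sketch of the same effect (names invented for illustration):

```java
public class OverloadDemo {
    static class Base { }
    static class Derived extends Base { }

    // The compiler picks an overload from the DECLARED type of the
    // argument, not from the object's run-time class.
    static String which(Base b) { return "Base overload"; }
    static String which(Derived d) { return "Derived overload"; }
}
```

Here which(b) returns "Base overload" whenever b is declared as Base, no matter what it actually holds. Copying getAnotherValue(...) into class2 "works" because this is then declared as class2; the usual clean alternative is double dispatch, i.e. overriding a method in each subclass so the call site sees the precise type.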
Is there a plugin or an easy way to create a new area which is surrounded by other existing areas in JOSM?
Often there are areas with landuse and white areas between them. I'm looking for an easy way to fill this space with a new area with proper tags.
I know the plugins 'Contour Merge' and 'ContourOverlappingMerge', which can help with this task, but it still takes a lot of clicks.
I just want to double-click on the empty area between other areas and have a new area generated from the surrounding area sections.
asked
22 Sep '17, 13:38
juhanjuku
21●1●1●2
accept rate:
0%
edited
22 Sep '17, 13:44
OpenStreetMap is not a colouring book. We do not have, and do not want, tags for every possible land use. Therefore it is completely ok and normal for there to be "white areas" between explicitly tagged landuse. Please do not start tagging everything as "grass" or "earth" or whatever just because you dislike seeing white areas on the map.
answered
22 Sep '17, 14:00
Frederik Ramm ♦
71.3k●84●645●1113
accept rate:
24%
I'm using OSM to prepare my hiking trips. For me, and I'm sure for other hikers, it's very useful to know what type of terrain surrounds my route.
So I'm not a painter who doesn't like white space.
I hope someone will answer my question.
I often create a gpx route on a map for distance estimation (add 10%, as no hikes or rides are straight), then display it over an aerial image to do a virtual recce to get an idea of the surface: road, dirt, grass or sand. I agree with Frederik: white is good, the map is easier to read, and you use less ink, or toner, if you print it.
It isn't as easy as you want, but the "Follow" tool can help with the task. To start it, make a way with 2 nodes connected to an existing way. Then press "F" to connect the new way to the next node along the existing way.
It doesn't know which way to go at an intersection, so it is necessary to manually connect the new way to the first node on the next existing way.
answered
23 Sep '17, 13:40
maxerickson
11.0k●10●74●154
accept rate:
30%
Thank you, not perfect but better than nothing :)
Can you point to examples of the white areas in question? If you can, I could perhaps give you a few tips.
Simple example: white area
must be natural=scrub
Agenda
See also: IRC log
<PhilA> scribe: PhilA
<scribe> scribeNick: PhilA
paper
<bschloss> John Sheridan from the National Archives begins talking - move from 'ephemeral, temporary' world to a world where there is more confidence in our data
JohnS: Talking philosophically
about the need for longevity
... how do I discover data that I can trust and rely on, use etc.
<bschloss> How to discover open data I can trust and rely on
JohnS: How do we firm up our open data so we can begin to use it and re-use it confidently and well
... Sustaining our open data. How do we do that.
... our budgets are declining. how do we sustain our publishing activity
... Share the responsibility of supporting and curating open data
... open data community is good at coming up with the rock to build on
... I work for a reputable institution. You'll trust the data if you trust the institution
... Adds solidity
... Extreme end is legislation. e.g. INSPIRE regulations that demand certain data norms
... How can we know if policies like INSPIRE will work? Should we be asking for more of that or going to people like the National Archives and asking them for commitments
... There's a lot to do to build our data on rock
... The ODI certificate may be one of the most important things for the community to work on this year
<yvesr>
JohnS: it would be good to
discuss here what role things like the ODI certificate can
have
... Talking about the Gazettes (London Belfast etc.)
... This is about putting things on the public record, where data is available, provenance and authenticity supported and availability guaranteed
... service will be completed by September
... how do we see more services like this come into existence
... it's about devising tracks
... the way forward to make all this happen with a solid basis, that we can build on
... No one organisation can do this on its own, we need to act as a community to solidify our efforts
<lottebelice> The ODI certificate is really interesting, something to consider for things like and
paper
Millie: UNDP spends about $5bn a
year that generates a lot of data. Have we improved things?
What effect have we had?
... we also generate procurement data
... we use that data mostly for accountability purposes
... we've been wondering what other insights might be accessible from that data
... can we work out which projects will be most effective
... what about the companies we pay, who is most effective, who do they employ etc
... We started a series of events called Data Dives where we worked with people we don't normally work with
... data analysts, programmers etc
... are there questions that we're not asking that we should be asking
... We'll be opening a new challenge prize shortly for the best algorithm
... We took data from the World Bank on major contracts in 2007. We were interested in the suppliers and the relationships between those companies
PhilA: As an aside - must introduce Millie to Chris Taggart this evening
Millie: Certain companies tend to
win contracts in particular sectors
... two companies dominate this sub-network of projects. What happens to the subcontractors if something goes wrong with the main contractor - few points of failure
... are there certain clusters of companies that tend to bid together?
... we see clusters. Are these people really good or is there something else going on?
... do contracts go to home-country firms or to firms from the more developed world
Millie: A few hours' work
produced these insights
... the World Bank folks had the data but not the insights, which actually didn't take a huge amount of time to create
<cjg> This analysis might be interesting (and easy) to apply to ...
<edsu> is a 404 for me btw
Millie: shows visualisation of projects and performance
<danbri> edsu, i think the whole paper is in the 'abstract'
PhilA: Thanks edsu - I'll fix that when I'm done scribing
Millie: It's not big data, it's lots of little data scattered around
<edsu> danbri: thanks I found now :)
Millie: global challenges coming up. We need help, people in orgs who can help open more data sets and help us get more insights out of that data
<ldodds> danbri: would make interesting reading, although I've not seen any open data on that?
<danbri> re eu, I think you'd need a temporal view... some partners sorta dominate, then EU notice that and punish them in later rounds
<cerealtom> this was the link from the final slide of the talk:
paper
TimD: Poses questions - why
people are interested in open data - transparency, innovation,
inclusion and empowerment
... the way we do open data can make it easier to realise these different aspects
... Talking about the launch (tomorrow) of ODDC
... Web Foundation and OGP are behind it
... Slides are expressive and contain the gist of the talk
... Draw out some key points
... As we've seen, supply needs to be built on solid foundations
... Are we building platforms that reply on always on high capacity systems in rural areas of the developing world
... are the standards right/ We articulate standards but are the right people in the room
... loads of standards being specifed - but do they work in all contexts? Does a London-based system work in Kenya?
... Are the licensing arrangements, correct/ Are first movers keeping others out?
... We have opendataresearch.org and more - see slides
paper
Hayo: Talking about the Dutch
linked data project in NL
... We started out open data programme 2 years ago
... want to help government depts open their data
<cerealtom> good collection of questions there
<cerealtom> ...
<cerealtom> what problem are we solving?
hayo: now 6K data sets from national and local administrations.
<cerealtom> why spend money on opening data?
hayo: some great apps but not really solving real problems
<cerealtom> why is nobody using our data?
<cerealtom> why dont they build an app like...?
hayo: what actual problem does it solve? Where are the apps that do clever stuff?
<cerealtom> hayo: we've reached a kind of impasse; governments are losing enthusiasm
Hayo: We need to look at how OD is being used to solve real problems?
<cerealtom> hayo: our approach: focus on real-life problems
Hayo: Purple areas on shown map are where population is declining, orange it's growing
<cerealtom> e.g. disadvantaged and depopulated areas
Hayo: we want to help those
people with the real problems, disadvantaged areas etc.
... trying to companies together, working on the problem
... There's a problem of continuity. data is opened once and not updated
... produced for one hackathon and then stopped
... we're tackling that with linked data
... NL has a lot of open data around legislation, case law etc. Gov not using it, they're buying it from people who put wrapper around our data and sell it back
... can we reduce the amount of money we spend on getting our own data and maybe we can profit from it ourselves
... We notice that policy makers often say "I base my policy on law x" - people make comments or annotations - we can use those in linked data and make the data more useful
... shows nice labelled directed graph
... We're allowing people to make real links between laws, policies, their text or whatever
... what marketeers call deep linking
... we reward people for linking to laws. We contact people and say, Ok you link to the law, how about linking to this policy
... we can notify people that link to a law as it's clearly important to them
... laws have versions
... need to be able to point to a lw as it was in 2010 etc.
... System will be available in September - getting government people enthusiastic about using their open data. This is a good example of showing govs how they can use their data
... of course others can use it too.
paper
slides (already!)
BobS: We put this together when considering what we think might be missing
... I think it's great when we get lots of open PSI
... we need it in educational, arts and business worlds too
... we need to get a virtuous circle where value is created
... looking at an Irish linked data front end
... we started in Oct 2011 with 4 Irish authorities (Dublin + 3)
... Looked at the cost/benefits of uploading open data
... this issue that the people who publish trust that their effort will deliver a return
... people have to want your data and they want it in their format
... (not yours)
... you need to be able to state how complete is the data, when and where does it cover etc.
... whole cluster of ideas
... you can synthesise this open data with yours and do good stuff
... The three principles
... (see slide 6)
... Slide 7 for the second principle
... talking about things like showing logos for limited time, potentially contacting data users
... need to be able to log if there's a new version of the data
<JeniT> disturbed a bit about the additional limitations bschloss is suggesting for "open" data
<JeniT> seems to be stretching what "open" means beyond the usual definitions
<cjg> Bingo!
<StevenPemberton> +1
<cjg> "What if terrorists use our data" is on my bingo card:
<cjg> (but to be fair, hazardous materials is actually a reasonable dataset to keep limited access. )
<StevenPemberton> Except if you want to see if there is hazardous material stored near your school. #west
<edsu> cjg: puts a new spin on the JISC's ‘The coolest thing to do with your data will be thought of by someone else.’
JohnS: We make institutional commitments
Hayo: Our governments trust third
parties more than our open data
... we're trying to educate them
TimD: We're trying to talk about purposes and use of data more than you need to publish in a given format etc.
Millie: This is a room full of evangelists, the shift in thinking needed is enormous, don't underestimate that
TomHeath: I like John's quotes. I
don't like "if you agree with me you're wise if not you're a
fool"
... How do we convince others of the wisdom
BobS: What we're doing in Dublin
- we capture the identity of the app, program and org that
downloads everything and there's an offline process for
assessing the value of that
... then go back to the data publisher and tell them what's going on, what people are doing with your data
<edsu> aside: best way to convince people is to show them the utility of it, not appealing to their better (wiser) nature imho
<edsu> &coffee;
<cjg_> edsu: I swear that we've had people suggest that if terrorists got access to the live bus times they could use it… there's a wear and tear on my desk from banging my head on it.
PhilA: grrr dropped off IRC, sorry, missed some comments and questions
<cjg_> yeah, I've got a talk at IWMW this year about how open data can get better value for money -- seems a good way to think about it in these tightened times.
BobS: IBM has been looking at
specific cities. We don't push up hill - we find the people
that want to do open data
... We also need to find the person in the street
... we don't have 'how open data can improve your life' days
Hayo: Yes, talk about problems, not open data
TimD: Yes, we want data you can
build upon in gov and society
... Lots of great examples from places like Sao Paulo
... talking about accountability and capacity not open data
<cjg> We have a policy of always putting a front end on our open data; even if it's as simple as a basic HTML page. 99% of the users are just using that and not the underlying data, but that's OK.
TimD: so the new research project will include lots of case studies from Brazil.
BobS: In Africa, the knowledge of prices for their farming goods is transforming farming
<edsu> cjg: :-D
BobS: So we've been working on projects for people who can't read - working on spoken web in India
Millie: In the Balkans we have an
issue of forest fires and consequent air quality
... I want to know if my child can go out on the street
... we have kids building air quality monitors
... we move to solutions too quickly
<edsu> cjg: it's hard to develop all the apps/visualizations people want ; giving them the data and empowering them to do it seems like a no brainer -- except to people who don't want new interesting visualizations of their data :)
JeniT: For Bob - you spoke about the need for collecting data about people using the data and restricting terrorists' access - that's not the usual definition of open data
BobS: I see a spectrum, not a point
<cjg> I generally tell people that "open" means removing as many barriers as possible
BobS: we're going to have rock solid stuff - it will be there and accurate for 9 years. Then there's softer and softer - we need to cover the spectrum
<cjg> the barriers can be technical, social or legal.
<cjg> "as open as possible" can still be used to describe data which is confidential.
<HadleyBeeman> For reference, I think JeniT is referring to the Open Definition
<HadleyBeeman> Great question… I've been wondering as well if we're still having the same discussions (as we were a year or two ago).
bhyland: Yes. we're all evangelists but we're not working in a vacuum. There are people in gov who are not minded to hand data over to a bunch of smart people they don't trust
Hayo: It takes patience. We have to change contracts occasionally. We changed our legislation publishing contractor 5 years ago - that made a big difference
<StevenPemberton> I think he said that it took 5 years to change the contract
<StevenPemberton> and only then could they use their own data
Millie: [scribe note: sorry, I missed Millie's comment about Pulse]
BillR: My experience as a private sector person working for gov - I see that some of the bigger players are only just picking up the potential for open data. Some early birds are winning
Last thoughts...
JohnS: Spend more time talking to people not involved with open data about fixing problems
BobS: OD is a means, not an end. Talk about the ends
Hayo: OD will take time and money. Maybe 5 years +
BillR: +1
Millie: UNDP uses taxpayers' money to change people's lives - we need help
TimD: Think about who's in the room when we define standards
<StevenPemberton> scribenick: markbirbeck
<StevenPemberton> Scribe: Mark Birbeck
<StevenPemberton> Paper:
<bhyland> Concluding remark from first session: "Open data is a means, not an end. Come at it from what real world problems it will solve."
Paul Davidson introducing James King — senior principal scientist at Adobe — to talk about how PDF is more open than we all think it is.
<edsu> BibS++ concur
Structure of talk: the open data paradigm, PDF itself, and then its role in open data.
Organisations taking data, shaping it and presenting it.
…but others — the "processors" — would prefer to deal with the raw data...
…they might present that too, but also use the data to draw new conclusions, or use it for advocacy.
…A further group is that of the tool providers, who will help us process this data.
…About 30% of the room are providers...
…80% are processors...
…most are consumers, and some are tool providers.
…PDF will be 20 years old this June.
…PDF and Acrobat are different beasts.
…The internals of PDF have always been published, and it became an ISO Standard in 2008.
<PhilA> Nice approach to backwards compatibility from Adobe for PDF
…A PDF 1.0 doc is also a 1.7 doc — always backwards compatible.
<bhyland> Jim King: PDF will be 20 years old this June. PDF 1.7 became an ISO Standard in July 2008. ISO work on PDF is ongoing.
<edsu> hopefully mozilla's pdf.js will get a mention ...
…To make the PDF spec into a 'proper' ISO Standard the team at Adobe had to go through the entire document…very thoroughly…
…PDFs are abundant, containing lots of useful information.
<cjg> I had surprisingly good results converting our student union committee minutes from PDF to RDF: -- just looking at where on the page text appears gives more semantics than the naive pdf2utf8 (or 2html) approach.
…It's a format that distinguishes between text and graphics, and can be used to produce good looking documents.
…But it's not a data format.
<edsu> cjg: i think that's roughly what google scholar does when it scrapes pdfs
…Billions of documents out there, but difficult to extract any data that's in there.
<edsu> cjg: grabbing the largest text at the top of the first page as the title
…If pages *contain* graphics then extract that with something like Illustrator.
…If pages are text then there's a bunch of software that can process the text.
…(A big list is on Wikipedia.)
<bschloss> There is a 'spectrum of open data' -- totally free, available forever, no recording of downloader is one end of that spectrum, but airlines, investment markets, sports leagues, available job listing websites, retailers are all doing open data on a slightly different point on the spectrum.
…And if the pages are images (i.e., rather than *containing* images) then need to go the OCR route.
<cjg> We found a nice command line tool which converts PDF to an XML representation of the internal data structure, and that gets it into our 'hacking comfort zone'
<ivan> wikipedia list for pdf tools
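The layout heuristic cjg and edsu describe above can be sketched in a few lines: given text spans already extracted from a PDF page (by any PDF-to-XML tool), guess the title as the largest text nearest the top of the first page. The span format `(x, y, font_size, text)` is an assumption for illustration, not the output format of any particular tool.

```python
def guess_title(spans):
    """spans: list of (x, y, font_size, text), with y measured from the top of the page."""
    # Prefer big fonts; break ties by vertical position (closer to the top wins).
    return max(spans, key=lambda s: (s[2], -s[1]))[3]

# Hypothetical spans for the first page of a document.
page_one = [
    (72, 150, 10, "Department of Examples"),
    (72, 40, 24, "Open Data on the Web"),   # big text near the top
    (72, 300, 10, "1. Introduction"),
]
print(guess_title(page_one))  # "Open Data on the Web"
```

The same idea extends to headers, footnotes, and reading order: position and size carry structure that a naive text dump throws away.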
…If you're making PDFs, here's what you could do to make things easier.
…Making files that both contain raw data and look good is difficult.
…There *is* software around that can embed metadata to provide structural information.
<bschloss> Seems to me that any producer of a PDF who wants it to be available to people with no sight is hopefully providing a table or textual alternative rendering in the PDF for any diagram or image in the PDF, yes?
…The structural information would be stuff like reading order, tags such as headers, footnotes, figures, maths, and so on.
…Tools can make use of this extra data which will make the extraction process much more reliable.
<markbirbeck1> scribenick: markbirbeck1
…A second thing to do is make use of the attachment facility.
…Raw data on its own is probably insufficient for doing something useful.
…For example, what's the currency? the data format? the semantics of the fields? provenance?
<alex> the attachments-in-PDFs thing might actually be useful for scholarly publications, so that the data doesn't get divorced from the paper
…So we create a PDF file that contains raw data with a schema, giving the end-user everything they need.
<alex> bhyland: yeah, presumably there's not the tools support beyond what Adobe sells
…Can then make use of all the nice PDF features that have evolved over the last 20 years, such as digital signing.
…There are some examples in the slides.
<edsu> bhyland: same could be said of most metadata on the web
Peter Murray-Rust: Spent years hacking PDFs in the wild.
…Trying to write software that will process them, but they are generally pretty bad.
…If anyone else is trying to hack on this then please talk to me; there's hundreds of billions of dollars worth of information out there that is simply unusable at the moment.
<cjg> I had a bit of a rant about PDFs as a way of communicating data to a reporter from the register, which resulted in them publishing this: (I'm quite proud of that)
Dan Brickley: Is this thing loud enough?
…PDF can be used well and powerfully, and of course it's clear that some people aren't using it well.
<edsu> heh, re: billions of dollars worth of information that's unusable, you have to wonder if that's by design, not by accident ...
…You didn't mention XMP, though, which includes RDF.
…You also didn't mention accessibility.
<bhyland> Peter Murray-Rust - Scientific publishers are paid $10B/yr worldwide to lock up scholarly publishing, that is after governments spend $100B/yr globally on scientific funding for R&D in the first place. He is looking for people to help him in his mission to unlock the enormous value locked in PDFs.
James: The accessibility aspects are quite mature in PDF, and the structured aspects help that.
<StevenPemberton> PDF is a page description language, so not in a reading order necessarily
…We don't have much control over what people produce, although things have improved in the last 5 years.
<bhyland> @edsu - perhaps re: your comment above. My experience suggests that we're more thoughtful publishing structured data about data sets (metadata) because they are fewer in quantity whereas PDF are like water, they are everywhere and almost "too easy" to create by the mere click of "Print —> PDF" …
speaker: For many people PDF data is closed data.
hadleybeeman: You've outlined many things I didn't know were possible, so why is there not the uptake on these features?
<bhyland> @hadleybeeman - because the tools are proprietary, complex to use … at least harder than clicking "Print —> PDF" and well let's face it, people are lazy and hand entered metadata has been proven to be *very* challenging and highly inconsistent.
James: Not sure if it's our fault. In some areas there have been successes, perhaps where there's industry interest or our sales people have promoted a feature.
<alex> If they want stuff like metadata to be adopted, then surely they need to encourage support in tools other than their own (OpenOffice; Word)
<inserted> scribenick: bhyland
<inserted> scribe: Bernadette Hyland
Jeni: sets the tone around different formats for tabular data, advantages & disadvantages of various approaches.
… NB: Special allowance for Rufus, who has been known from time to time to go on ...
Rufus: Intro on OKFN and their mission to liberate data
… Proposed "Our Mission": to make it radically easier for data to be made used & useful
<ivan> scribenick: bhyland
Rufus: Stated problem of data on the Web in many different formats & issues that poses.
… propose 3 minor innovations involving " borrowing" approaches others have used before us.
In this model, there are the usual suspects … data creators & packagers, consumers and the effort in the middle to do "data packaging"
… Linked Data effort has been knowledge APIs and has been successful [to varying degrees]
… Packaging has to be done as a distinct step: a minor packaging effort, agnostic about the data, with the packaging designed specifically ...
… Today, there is a huge amount of friction in getting & using data on the Web. We want to build for the Web. Rufus said RDF is not Web native … he has been laughed at when he proposed its use ...
Proposal: 1 - One (small) part of the data chain; 2 - Build for the Web; 3 - 4 - 5 [too fast to record]
Concluding remark: Package data more effectively and produce one killer tool to make data more accessible.
Speaker: Omar Benjelloun, Google, GPDE
Omar highlighted Google public search feature, Knowledge Graph capability and origins.
… Highlighted the Public Data Explorer, using the Data Cube representation. Anyone can upload & share data using RDF.
DSPL = Dataset Publishing Language, describes tabular data + semantic description including concepts describing re-useable data types. All packaged in a zip file. Visualizations can be shared.
Omar's Propositions: Datasets need good Web pages with a stable, official, up-to-date canonical location. Also, add good markup for reasonable SEO.
… Let tables be tables. [Let it be… ] Relational data & schema are well understood. Better than triples: tables naturally capture relations. Better than APIs: no access patterns, scalability issues.
… Add semantic annotations to tables. Leverage EXISTING approaches (RDF, schema.org) [emphasis is scribes :-)]
… Better to follow this approach than create custom data models (SDMX, DSPL).
Next speaker: Stuart Williams, Epimorphics
Overview of Epimorphics, doing services and LD design work. Working with data.gov.uk. Helped to lay down some of the sand that John Sheridan previously described.
Working to publish bathing water quality, now expanding to the river network in the UK.
… Thinking about getting 'beyond the data', we feel that we need to get beyond the 4 & 5 Star Data attribute, and evolve the message to solving a real world problem.
… Works with the UK Environmental Agency to make publication of valuable data … easy!
… Think about how to allow publishers to add simple bits of markup.
… Think about how to contribute to the virtuous circle of making it easy to contribute something valuable & receiving something valuable.
Next speaker: John Snelson from MarkLogic
Describes himself as an XML-guy and actively involved in W3C around those recommendations.
… MarkLogic helps its customers use data effectively using XML.
… John is a data pragmatist. We must look beyond those formats.
Next Speaker: Tyng-Ruey Chuang, from Academia Sinica in Taiwan
Involved in Taiwan's culture heritage efforts.
<PhilA> Tyng-Ruey Chuang, Academia Sinica (Taipei) see
<cjg> I can't ask our catering department to provide menus in a well structured RDF format :-)
<cjg> (much as I wish they would)
… dealing with heterogeneous collections of content including media files, documentation. His focus is on sharing & making cultural heritage content usable for the long term.
… Putting data on the Web itself does not guarantee longevity.
… We can & should learn from the Free Software Foundation. Supports giving people the ability to make copies of content. Highlighted the importance of content being portable to many other computer systems, both on & off the Web, for it to be considered truly open.
Panel Convener is Jeni … She puts the following question to Rufus. Q) There is debate on how to manage metadata, to embed or not ...
Rufus: Regarding embedding, it almost becomes an AI project to figure out metadata that is embedded. It can be a nightmare. The beauty of keeping it separate is it is easier on tools & therefore treatment by tools. He is supportive of graceful degradation.
Tyng-Ruey Chuang: Prefers to have structured schema as part of the data (?)
Omar: Mainly, the important thing is to get agreement on format, then all kinds of good things can happen. Linking tables & metadata to Web pages (authoritative) is really important.
Stuart: We've been using this word "metadata", which leads us to schema information. In the RDF world, we can click through to it & immediately see it.
… Using RDF model, you don't have to scramble all over the Web, rather, you get bits of schema info back because it is carried *with* the data.
… Highlighted the peril of carrying so much provenance information that it drowns out the important data itself.
<cjg> Quite simply, tabular data requires a lower cognitive load to work with. Most people can't be bothered to learn to think in graphs. So tabular is more open because it's easier to comprehend.
<edsu> aside: embedded metadata (facebook opengraph, schema.org) is getting published because it is getting used
<HadleyBeeman> cjg I wonder how much of that is because our computer science training wasn't very graph-focused. Next generation might be different?
<edsu> i don't buy the argument that it needs to be separate ...
Questions from the audience ...
Ivan: When we speak of metadata, my biggest issue is what vocabularies to use. It is the biggest problem we have to solve, even more important than the data format/model … if we had widely available vocabularies, it would solve many problems.
<cjg> HadleyBeeman: I'm talking about the people who maintain my data. They are *not* computer scientists… they are in finance, buildings & estates, catering...
Rufus: If you meet most developers, and start talking about vocabularies, "they'll run for the hills." Been part of countless long fights on what vocab. Suggested a new site as a joint project of cygri and Rufus ;-)
<HadleyBeeman> cjg: Ah, I see. Yes, different user base there.
<cjg> I went to see what they already had, tidied it all up in excel and moved it to google spreadsheets so it was easy to grab automatically.
… What is the minimum to make CSV files useful? Just give me the basics: string, integer. This is *our* problem, not the publishers'. I'm all about 'reducing the time' … open vs. closed data.
<edsu> problem hasn't been schemas per se, as much as it has been schemas divorced from their actual use
… Licensing is a lower priority for many.
… Ease of publishing is king
<cjg> Also, I want to create a collection of SPARQL queries which produce useful spreadsheet downloads for humans to consume. Secretaries are a whizz with Excel, but only if the file loads first time. Telling them TSV can be "easily imported" is already outside their comfort zone.
… Our mission is to reduce the cost & RDF, at the moment, is not doing that.
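Rufus's "just give me the basics: string, integer" point can be made concrete with a tiny type-inference pass over a CSV file. This is only an illustration of the kind of minimal metadata being discussed, not any particular data-package specification; the three-type palette (string, integer, number) is an assumption.

```python
import csv, io

def infer_types(csv_text):
    """Infer a minimal string/integer/number type for each CSV column."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    types = []
    for col in range(len(header)):
        values = [r[col] for r in data]
        if all(v.lstrip("-").isdigit() for v in values):
            types.append("integer")
        else:
            try:
                for v in values:
                    float(v)       # every value parses as a number?
                types.append("number")
            except ValueError:
                types.append("string")
    return dict(zip(header, types))

sample = "name,year,score\nalice,2012,3.5\nbob,2013,4.0\n"
print(infer_types(sample))  # {'name': 'string', 'year': 'integer', 'score': 'number'}
```

A schema this small is cheap enough to sit next to the CSV file without asking publishers to do anything new.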
Omar: If we want to bring data together, we have to harmonize into a common model. I don't know whether developers should have to be encumbered with that responsibility. But it is a real problem to solve.
Bhyland notes (not in a comment) that there is a wide spectrum of opinions in the room & it is good to stimulate that discussion. Deepening understanding is key to all of this.
Stuart: Finding the stuff in the first place, with schematic markup answering provenance information, is critical to solving the hurdles we face with better use of open data on the Web.
<alex> cjg: I played with SharePoint/Excel integration yesterday, and it looks like you can get Excel to live-update from SharePoint lists; I suppose something similar could be done with s/SharePoint/SPARQL endpoint/
John Snelson: Vocabularies have their place, but search is a great way to find data that is not expressed perhaps as nicely as we'd like...
<alex> then SPARQL would be truly Enterprise™
<alex> it would also be possible to embed the metadata for a table in a second sheet of an XLSX/ODS file, instead of prepending it to a CSV file
Questions from the mob: You've got to help represent/model data, but that is not the entire story. It is a "horses for courses" kind of thing. Please be careful not to reinvent RDF with JSON glasses on.
IBM guy - Dealing with data is hard. It is harder than process. We won't solve problems with data exchange standards alone. One thing we haven't heard about today is Best Practices and Architectural processes. We need to rise above data formats and really focus on data patterns, best practices.
<cjg> I have this horrific image of people creating n-triples documents in Excel...
<yvesr> cjg, i saw that being done *a lot* at the bbc
<cjg> gah!
<pieterc> cjg: why would that be horrific?
<cjg> for one thing, excel plays silly buggers with certain values.
Bhyland to 'IBM guy' - let's talk real soon — there is Best Practices work, albeit nascent, underway within the W3C Gov't Linked Data working group & we'd welcome your input.
<cjg> We have real trouble getting people to enter phone numbers without it getting muddled. 079671234567 gets converted to an integer as does +44....
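The mangling cjg describes fits in one line: coerce a phone number to a number and the leading zero is gone for good. The fix is simply to keep such columns as strings.

```python
raw = "07967123456"          # a UK-style phone number with a leading zero
as_number = int(raw)         # spreadsheet-style coercion to a numeric type
print(as_number)             # 7967123456 -- the leading zero is lost
print(str(as_number) == raw) # False: the round trip does not restore it
print(raw)                   # kept as a string, the value survives intact
```

(Values starting with "+44" fare even worse, since they aren't valid integers at all.)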
Rufus: Described a state of the world that is very fragmented, messy & dirty, and urged us to not look for a [utopian data model] that everyone is required to use.
Tyng-Ruey: RE: Best Practices, validators would be helpful to check data representation is correct. Need: Better validators (Note to PhilA).
<cjg> Shout out for - for converting live google spreadsheets into XML or RDF/XML etc.
<HadleyBeeman> Oo, fun, cjg. Thanks.
Omar: "I think we've been spoiled by the Web" because search engines have done a good job. The question is, can we make this Web of Data thing work such that we publish our metadata & data and have it easily found. This is the question.
<pieterc> cjg: spreadsheets are for calculations, not data. CSV is a format which people use with spreadsheet programs, thus not suited for the job. Got your point?
Peter Murray-Rust: To Omar - what do you do with things that are labelled as tables but really are not tables?
<cjg> yeah, maybe we need a nice "CSV" editor?
<cjg> Or even a "table" editor, using PMR's description.
Omar: Smart people are working on it … it's complicated.
<pieterc> cjg: thought of it as well already
<cjg> basically a cut-down google docs.
<pieterc> cjg: open refine? ;)
John Snelson: Need to be able to break out & work with data in a schema-less fashion.
<cjg> with a magic table heading
John Sheridan asked, in the world of tables & CSVs and [screw the metadata], how are you prepared to deal with the license matter?
Rufus: I didn't say, 'screw the metadata'. Rather, we need simplicity and innovation about process. He suggested having multiple parties be part of the "packaging process".
… Clearly a license has to come from an authoritative source. Gave example about data from the Bank of England. Two important points: we need minimal metadata and … [someone else augment please, scribe missed second point]
<cjg> *if* the source of the metadata is the same website as the data then that's probably good enough for me.
Wrap up from panelists - 'wear your schemas on the outside, use HTTP URIs to describe things if putting on the Web.'
John: Great opportunity for tool developers to liberate data.
<StevenPemberton> Scribe: Steven Pemberton
End of panel facilitated by Jeni. Thanks all.
<StevenPemberton> scribenick: StevenPemberton
<pieterc> I have a problem with the fact that the data can be processed through quick bash scripts, or other low-barrier scripting languages, but the meta-data needs a json parser
<bhyland> Someone else able to scribe, please? Pretty please??
markbirbeck1: I am from a semweb background
... software developer for decades
... [lists examples of RDF-based software projects he has worked on]
... also involved with RDFa at W3C
... you can tell I'm setting things up to have a good moan
... usually data not available, or in inconvenient formats
<alex> markbirbeck1: ooh, a jobs ontology. we wrote our own having found nothing in the wild ()
markbirbeck1: or not linked
... Lessons -
... - need a big cultural change to get open data
... - spreadsheets aren't that bad, don't need to wait for RDF
<edsu> alex:
markbirbeck1: but the timeframes were a big issue
<alex> edsu: ah, cool; thanks :-)
markbirbeck1: - Join question. Linked data would be great, but consistent code would be enough
<cjg> hmm, is there a schema.org->RDF mapping? there must be...
<yvesr> cjg,
<edsu> cjg: there is, but really who cares?
markbirbeck1: Big data is relevant; lessons learned from that are useful.
... Open data doesn't need to be RDF, use context
... only when you cross (company) boundaries do things like schemas become important
<cjg> edsu: me, as we've just started publishing vacancy data last week! Making it Linked Data is useful as it can cross-reference to our URIs for various departments & faculties.
timbl: when you mention experience you've had, please say who you are/were working for, was it a big or small project, public or private, etc.
<bhyland> There goes TimBL again about context, context, context! ;-)
<HadleyBeeman> Metadata for our conversations. :)
<PhilA> who'd have thought context mattered for data eh bhyland?
markbirbeck: There was a layered approach to it in my case, people who had bought in but didn't know enough, which was worse
... but NHS in my case was an example, timing was bad because of looming cuts
... but I was naive too about the issues involved about publishing certain types of data and aggregation
<bhyland> TimBL: Context is important. Users in intelligence community won't consider using data without provenance, won't even start the conversation or analysis.
Raphael: Most are tool builders here, but we need more than tools
... this a report of what we have done at a "datalift data Camp" last year
... lifting data to 5 star status
... It worked a bit, but was a good learning experience
... varied data source types
... and varied companies, with different needs
... Datalift is a package with single click download
... cross-platform
... [shows workflow]
... converts to RDF
... and then the interlinking
... used for two large data collections in France
... Difficulties are how to choose the right vocabulary
... rdf conversion, URI schemes to adopt
... automatic detection of datasets to link to
... LOV initiative, 260+ vocabs
... now open source!
...
<bhyland> I love how a French speaker says "LOV bot" as love boat.
Raphael: Conclusion - multilingual vocabs important
... hide complexity of sparql
... eg QAKIS
... Shape files are important
... INSPIRE directive and W3C GLD vocabs need to be covered
<bschloss> Since Open Data is a means to several valuable ends, IBM is talking to our clients about thoughts of "becoming a Contextual Enterprise" and we emphasize the critical need to dynamically assemble context for every key input and output of their work, including the context of external data they import. See for very high-level summary of our recently released Global Technology Outlook.
Raphael: GTFS/DSPL formats
Tristan: We work with cultural heritage. Will talk about the Science Museum now
... also a plea for help
... Science Museum is august and venerable, with loads of internal systems, we are trying to consolidate them
... we extract, and convert to linked data
... triple store
<pieterc> rtroncy: how active is the development of Datalift? I haven't seen a lot of activity on the SCM
Tristan: built a data model, in cooperation with the British Library, British Museum [others], see the paper
... use that to drive the website
... my plea for help is what should be the next steps
... how can we make it more open?
... Publication strategies, stable URIs, dereferencable etc
... Is the data model interoperable
Madi: I am new to W3C, and an open linked data devotee
... Pearson is a publishing company, owns Financial Times and some Penguin books.
... I think we are the first W3C publisher member
[applause]
<edsu> that says a lot
Madi: There is a new Community Group at W3C with 23 members
-> Open Linked Education Community Group
<HadleyBeeman>
Irina: Raphael, what were the outcomes?
<HadleyBeeman> Eek, sorry. Try this:
<bhyland> Madi: Data + education is a natural fit. Whatever we can do to make it easy for students + instructors + open data advocates to get together make the world a better place.
<bhyland> +10
Raphael: It was part one of a two part process. We wanted clean data; the next step will happen later this year, to reuse the data to build apps.
... Some of data sets are just data dumps
q1: is there automatic linking between data possible?
MarkBirbeck: It is not just topics
... do you mean just numerics?
q1: Not necessarily.
MarkBirbeck: This is what I was referring to earlier, for instance trying to identify a company from different versions of its name
... URIs are a great goal, but you can get there earlier
[SESSION ENDS]
<HadleyBeeman> scribenick: hadleybeeman
<scribe> Chair: LeighDodds
Kal Ahmed: Intro to talk on OData
… OData is a standardised protocol for consuming and creating data APIs - odata.org
… originally conceived by Microsoft, this is bringing it into being a common protocol.
… OData is entity-centric. Comes from .NET developers with tables of data. Standard itself defines how you publish your metadata: service metadata and schema.
<ivan> scribenick: HadleyBeeman
… OData has a URL-based syntax for access.
… Includes inline expansion between entities
… POST a representation to an entity set's URL. PUT, PATCH, MERGE, or DELETE.
<cjg> I've never heard of MERGE or PATCH before…
<alex> PATCH is only just a Thing, isn't it?
… Other nice features: combines metadata properties with a special media source URL. Named streams. Ability to embed your own custom actions and functions and expose them as URLs
<JeniT> PATCH is a proper thing, haven't heard of MERGE
<alex> only just> March 2010, according to
… There are a lot of reasons to like OData. You can reliably discover the schema. Clients are all linked. Easy to experiment using those URLs.
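The URL-based syntax Kal mentions uses OData's standard system query options ($filter, $expand, $top). A small sketch of building such a URL, with a hypothetical service root and entity set:

```python
from urllib.parse import urlencode

def odata_url(root, entity_set, **options):
    """Build an OData query URL from system query options."""
    # OData options are prefixed with "$", which Python keyword arguments
    # can't contain, so the prefix is added here.
    query = urlencode({"$" + k: v for k, v in options.items()})
    return f"{root}/{entity_set}?{query}"

url = odata_url(
    "https://example.org/odata", "Films",   # hypothetical endpoint
    filter="ReleaseYear gt 2000",
    expand="Director",
    top="10",
)
print(url)
```

Note that `urlencode` percent-encodes "$" as "%24", which is an equally valid spelling of the same URL; this is exactly the "easy to experiment" property: queries are just URLs you can paste into a browser.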
<pieterc> alex: The DataTank supports PATCH
… There is a javascript serialization format
<pieterc> alex: (tdt is a RESTful data adapter project in PHP)
… There is a growing set of OData consumers. GUI controls and libraries.
<pieterc> alex: (it sounds worse than it is)
<alex> "The remainder of this section defines a custom HTTP MERGE method"
… Criticisms of OData: Service definitions tend to be siloed. Links don't tend to go outside the data service. Don't use any shared ontologies.
… Another slight criticism: because of its history as being pushed by Microsoft, it's seen as being vendor specific. Not true; standardisation is now under OASIS, with other contributors
… Why do developers use it? We love the features and the flexibility of RDF/SPARQL. We were disappointed with the Linked Data Platform proposals and the flexibility it would give.
… We wanted it to be a declarative configuration only, ultimately to do that config automatically.
… Previous attempt: LINQ-to-SPARQL, hand-crafted C#
… This implementation: Proxy service for a SPARQL endpoint.
… Key part of this: the annotations. They're in the OData spec. Defined for: URI namespace for entity primary keys, URIs for entity types, properties and directionality of links
… Annotations are visible to the consumer, mappings done against the SPARQL endpoint are visible
… Allows you to reconstruct the source triples you've just queried, if you'd ever want to.
… Implementation issues: Our naive approach: if you ask for an entity, a DESCRIBE will give you what you want. It was too unspecified, so you have to use CONSTRUCT, which led to sorting and identification issues.
… OData allows the server to do paging. If there's been a server-side limit imposed, you don't know that.
… Biggest implementation issue: because we're turning primary keys into URI identifiers, every entity in the entity set has to have the same base URI. Not a problem in most cases, but potentially.
… [Example query to select a simple film]
… [Example query to enumerate films]
… [example query to show property navigation]
… That's all leading up to a bunch of questions. First, and the one I'm most interested in discussing here: What is this group's sense of the importance of standards in interoperability? Do standards need to interoperate? Do different standards bodies' standards need to interoperate? Whose responsibility is it?
… More questions: what could the W3C LDP WG learn from OData and vice versa? OData changed in response to feedback/requirements. Now on third iteration… Should these requirements and use cases be shared between groups?
…
… Finally, is there a shared meta-model for entity-oriented view of data resources between the two?
LeighDodds: Do you have a sense of uptake?
<JeniT> (uptake of OData)
Kal: hard to tell because search discovery of OData endpoints is hard. Probably more not visible to the Web than those that are.
<bschloss> [I think the SAP ERP platform, recent version, has APIs to get information as OData]
ivan: There have been several attempts to get these groups together. For all kinds of personal reasons, it did not work out. There is a community group at W3C on OData vs RDF; the group is silent, empty.
Kal: It shouldn't be "OData vs RDF". They should be coexistant and work together.
<bhyland> My question is (and I'm not being snarky or flip), why OData? Isn't this MS trying to redo RDF? RDF has matured and is well-documented. It is not perfect & use is far from ubiquitous however, why fragment?
subtopic: Neil Benn, Fujitsu. LOD approach to engineering health-sensory datasets
Neil: I'll focus more specifically on health and health sensor data. I've recently joined this group, and this is one of the projects we're working on.
… We're working on a cloud platform for large-scale graph storage. Public and private data. That seems to be a tension that is coming across throughout today. Therefore, Linked (Open|Closed) (Big) Data
<bschloss> Mentions Linked (Open|Close) (Big) Data and mentions Fijitsu and DERI Collaboration on Linked Data Global Repository
… We've been working with DERI on a CKAN-like Linked Data Global Repository. Faster and more searchable.
… We're also involved in the W3C LDP WG
… With the University of Singapore, we've been working on health care sensors. Temperature monitor, heart rate monitor, establish patient history. Challenge: how to combine sensor data with patient specific data from their health record, which might be different to medical best practice, clinical recommendations, etc?
… We're making this sensor data linkable - 10m triples per person per week, for example - standardise, and link to data about effective drugs.
… Announced in Nov, just working out how to do this. Open, closed and anonymisable data involved.
… We are handling temporal data and binary data. Do we want to convert binary sensory data, with an established community of tools, into RDF? Maybe not. If not, how to work with the binary and the (other) linked data?
… These things keep me… well, not awake at night, but certainly busy during the coffee break.
… Non-technical challenges: main motivator for this paper: most open health data is on hospital numbers, costs of services, etc. But these are questions for policy makers; not as much emphasis on medical research.
… Found data on ECG and HBR stuff… but not as much emphasis on having a "broad church" of open medical health care data to generate further epidemiological and clinical research.
… Generating these datasets is labour-intensive. One researcher said teams of researchers working on a dataset would be useful… How to do on the Web?
… Could be that we have more administrative hospital data than clinical data because it's easier to lobby governments than universities and researchers?
… There still isn't much best practice on this. Vocabularies, dataset engineering patterns. We have patterns for building modular software… is there an equivalent here?
… Ex: There is an ECG ontology I came across… should I use it?
Questions
BillR: You should look at Linked Data Patterns, LeighDodds is one of the authors
Discussion with panel, including Albert Meroño-Peñuela
<CaptSolo>
Albert: We work with historical censuses, encoded in thousands of .xls spreadsheets. We would like to uniformly query them, but they are extremely messy. We'd like to transform them into RDF Data Cube and other vocabularies using SPARQL queries?
Question: Bob Schloss: The value we seem to be talking about is mashups between datasets with unexpected results. Mapping was one of the first join points. What other join points do you see and do you agree this is critical?
Kal: Yes, I agree. Increasingly, I see a lot of time-series value type data, sets combined in a way to expose latent knowledge. Biggest problem is vocabulary interoperability. Odata doesn't have them so we can't do conceptual joins with data tagged with different systems.
Bob: Let's reuse the requirements gathered from XBRL in the Financial industry. They do have publicly listed busineses.
Neil: Open data is administrative, government-driven. People want to answer local questions, so that has driven a lot of the applications. But in that healthcare example, it's not geographically-specific. New disease patterns may not be tied to parts of a city.
… With regard to the vocabularies question… I don't want to learn about all the vocabs out there. In the same way I can modularly take a bit of a software library to see what's in it, I'd like to do the same with a vocabulary. I want to conceptualise my data first, and modularly pick a vocabulary.
Kal: The individual is an interesting join-point. For governments and otherwise.
Albert: In some domains, historical data is so badly degraded… and it may not have been intended to be comparable.
TomHeath: Re data engineering patterns: we do need to go further than Leigh's book. Hack-y stuff (download, GREP, etc), ad-hoc processes. Things going on in the Hadoop community to describe these processes
Neil: The term dataset engineering patterns… [coining a new phrase]
Michael (from the EC): to Neil: re the link between closed/sensitive/open data… Are you looking at aggregated personal data that then can be opened? As in other areas of sensitive public data
Neil: we don't quite have a generic process for anonymising sensitive data. Some organisations do that… I'm just in the early stages of learning the issues around that.
questionasker?: concerned about applying the label of "open data" to data that's locked behind a query API. Do you share my concerns?
Kal: OData entity set that conforms to the standard is enumerable… It's an ATOM feed with Next links in it. You can download it. Also, a data dump isn't any better — you're relying on the server's capacity to provide the data and the data being up to date.
… I can see your point but I think it applies to all open data.
questionasker?: If I were going to mortgage my house to fund a startup on this data, I would see this as a problem.
Kal: Of course, there are different applications.
[Closing session]
<rtroncy> scribenick: rtroncy
<scribe> scribe: rtroncy
<scribe> Chair:Alex Coley
Alex introducing the session, composed of three talks
on Jay le Grange - GeoKnow: Leveraging Geospatial Data in the Web of Data
[ paper]
EU Project GeoKnow:
scribe: inspired by earlier work
on transforming Open Street Map into Linked Data
... 3 major sources of open geospatial data
... spatial data infrastructures (compatible with almost all GIS), open data catalogue (SHP, KML files), crowdsourced geospatial data
... ontologies: basic geo vocabulary, GeoOWL ... and GeoSPARQL
... efficient geosparql RDF querying, fusion and aggregation of geospatial RDF data, visualization and authoring, public-private geo-spatial data (sync workflows)
... aim to provide a suite of GeoKnow Generator tools
... two use case scenarios: e-commerce and supply chain
... the GeoKnow generator is expected by December 2013
RRSAgent: draft minutes
scribe: see also:
Michael Lutz - Interoperability of (open) geospatial data – INSPIRE and beyond
[ paper]
Michael: INSPIRE in a
nutshell
... legal framework for establishing an infrastructure for spatial information in Europe
... 34 spatial themes
... implementation 2009-2020
... there is a growing interest in creating innovative products and services based on INSPIRE and other data
... we realize that with INSPIRE we cover a lot of topics of this workshop
... key issues with INSPIRE: enriching INSPIRE data models with application specific business data
... example: urban planning, waste management plans, environmental impact assessment, risk management on top of geo data
... beyond INSPIRE, traditionnally link with GIS format and XML ... how we move towards RDF
... how to create and manage persistent identifiers
... implications of opening up data for the organisations: governance, long term commitments, etc.
... how to address those issues? ISA = Interoperability Solutions for European Public Administrations program
... see also: ARe3NA (INSPIRE reference platform), EULF (EU Location Framework)
... W3C LOCADD community group
... advertisement for the INSPIRE conference in Florence 23-27 June 2013
... ISA program
Mark Herringer - Open Data on the Web and how to publish it within the context of Primary health care
[ paper]
Panel opened
<scribe> unknown: question about identifiers, can we expect a better framework, e.g. URI in INSPIRE ?
Michael: in INSPIRE, there are 2
types of identifiers
... for data objects and for real-world things
... we relax recently how to write those identifiers and enable http identifiers
<PhilA> Thank you Michael Lutz on URIs
Raphael: there are a number of
initiatives that try to take part of UML diagrams of INSPIRE
and build RDF schema, see e.g. efforts from Laurent Lefort and
others
... are there plans to have an official schema in RDF for INSPIRE ?
Michael: yes, we will organize a workshop where everybody presents its modeling ... and we wish to have an agreed upon model
RRSAgent: generate minutes
<yaso> Lotte Belice about Open Culture Data
<JeniT> Scribe: yaso
<HadleyBeeman> scribenick: hadleybeeman
Johnlsheridan: It's 2020 and we've seen the failure of the world's first multibillion dollar open data corporation. How did this happen?
<yaso> Yes, I'm with connection problems
Conor Riffle: We've been looking at lots of business models. Sponsorship would be hard to scale to that level.
… Also look at people like Google who make tons of apps and sell ads on that.
JohnLsheridan: which of the eight business models Michele has identified could scale to that level?
Michele: Usually, all the four actors are able to manage a huge amount of data. We have some enablers - usually they are scalable - but they do not serve end users. They're in a wholesale position in the value chain. Examples: Microsoft, Socrata.
… Many of them have other business lines, even outside the boundary of public sector information.
Irina: I think you'd want lots and lots of smaller companies, not one big one. As small music app companies are threatening the big distributors, a big company doesn't fit.
Bart: The Fire Department wants to be the authoritative source of information. They won't make a business out of it, but they will engage to have usable data.
Miguel: Risk to opening up data… fear of losing control. But benefit: they will be seen as the authoritative source. We see both.
Lotte: open data can bring big benefits to companies.
questionasker?: Do we all agree that we should build public infrastructure, basic datasets to build business models on top of… If we don't do it fast, a big multi-billion company maybe wants to become a public infrastructure provider? Or the market will collapse and transform in another way. We, as a community, need to identify the basic datasets which will be the "streets" of open data.
JohnLsheridan: What are the basic datasets of interest for fire services?
Bart: Address data. Real streets. We don't have "highways" for open data yet; we have "rural roads."
… Large companies taking over scares the Fire departments as well. "What if a company over in America is holding our data?" An important discussion to have.
Johnlsheridan: Do you see CDP becoming that sort of infrastructure provider?
Conor: I think we are. Especially where companies are contributing pollutants to that atmosphere, it impacts all of us. But we see it's useful where people can make money out of it. Investors will use it. But there's more to do with it. We need a hybrid model: some monitisable, some open.
Bernadette: I'd recast the question: It would give me great joy if, next year, there are 20 companies 10-100 people with $2-20m in gross revenues who are using this technology to share information, for-profits (not grant-funded). We don't need yet another social network or cow-tipping site.
… If they are venture-funded, it would be with a social enterprise angle.
Chris Metcalf: In the US, I feel like we're seeing the steam come out of pure open data. We need to show the benefits, which are often business. We work with small businesses to do that. We need to focus on that in the community.
Bob: Infrastructure isn't always provided by regulators, grant makers and hackers/coders. It's sometimes created by lawyers and judges. I think some orgs and agencies are hesitating to publish open data because they're afraid of inaccurate records and resulting harm and subsequent lawsuits. We may need some case law to determine this.
… To Conor: because your data can impact stock price, do you have T&Cs to cover that?
Conor: We do have cleverly-written T&Cs. Many many companies to agree to them. Other orgs can learn from our lessons: we don't own the data submitted to us.
… To Chris: Yes, we need to crate value from things built on public data, but also as a provider: how can we increase the value all along the chain?
?? What we see: one the benefits is people correcting data and pushing it back to the publisher. Enhancing it, geotagging, improving our metadata.
… There was a company who wanted to make money out of the data, and we want them to succeed. But this is a public sector answer, I realise.
Lotte: Do not forget SMEs like ours: manufacturers, consulting services, pharmacies… they are the ones who will recreate the value in the data.
<scribe> … New standards, new protocols, new releases, new things.
phil tetlow: This isn't a level playing field. In the development of the Web, it's a case of survival of the fittest, driven by quality, quantity and cost.
… Chances are high that whoever that company is in the future, they are here today. I'm hearing that open data should be a communal type where everyone has a chance. Those at the front will probably stay there; this is a call to them to maintain the lead.
Thijs: Can we learn from the open source business models?
Michele: Yes, one of our models is called "open source like".
… where reusers do not pay. As with Open Corporates, Licenses allowing non-commercial reuse.
Conor: Ask: How did the open source software people monetise it? A lot of them got burned.
Thijs: Training, consultancy,
Bart: In the Netherlands, the interesting datasets are often 3GB downloads. They will pay someone to maintain it in a usable form for them. That's the added value.
<bhyland> Bart: Services model similar to what RedHat does — good packaging and great support for enterprises.
Irina: CKAN is both open source and open data. How do you make it sustainable for businesses who publish data? Isn't that only an issue for businesses who only sell data? If it's a by-product of something else, it may drive more traffic
John: final thoughts
Lotte: We're seeing a shift from the fear of publishing to the network of data and content. Besides data, I look forward to opening more videos and content.
Michele: The first enabler is the government itself. Gov has to build the governmental infrastructure. Inspiring motto from Federal CIO of USA: Everything should be an API.
… 1st step: publish open data, 2nd step: bring gov into the business model.
… data reuse. A shared data model across agencies.
Miguel: SMEs need data to create value and generate new business lines.
Bart: Fire fighting data work is 20% technology and 80% people and politics. I'd like to see this reversed.
Conor: We need to get the business model right both for the providers and users.
[Session ends]
<StevenPemberton> Scribe: Deirdre Lee
<StevenPemberton> scribenick: DeirdreLee
<HadleyBeeman> Scribenick: hadleybeeman
<scribe> Chair: Julian Tate
<ivan> scribe: ivan
Opening up the BBC's data to the Web, Olivier Thereaux, Sofia Angeletou, Jeremy Tarling and Michael Smethurst
Sofia: The problem with the older approaches was that the material was not ours:
… we have only certain freedom to use it for some purposes
… another thing we were doing is to use MusicBrainz for the music website
… we do the same thing for the weather website
…. we use a lot of reuse from open datasets
… also from wikipedia for nature and wild life
… we reuse the wikipedia id-s
… because the uris are not static, then the service breaks
… this is a big deal for the BBC
… we cannot blindly rely on dataset and we need editorial control
… these were the first efforts with using LOD
… all of these experiences convinced BBC to invest more into the SW stuff
…. eg for the olymic web site
sofia: the sport web site uses about 4 million user a day
<scribe> scribe: DeirdreLee
Sofia: next steps for BBC is to
roll-out aproach beyond sport
... currently working on linking content together on news site
... trial from birmingham and black country will be rolled out nationwide in coming months
... will annotate news items with other pieces of related content
... would like to roll this out with archival content also
<bhyland> Appreciate Sofia's choice of headline at Google London office, "Google boss defends UK tax record to BBC" with byline "Eric Schmidt defends Google just paying 6M GBP in UK corporation taxes"
Sofia: diagram from presentation
shows content from archives, BBC hope to use Linked Data to
expose their data in interesting ways
... BBC have identified some challenges with publishing Linked Data (listed in presentation)
... what are the drivers for opening up their LD datasets, how to select good quality datasets, and how to meaure success
Alvaro Graves from RPI up next on Democratizing Open Data
Alvaro: Good news, there is
millions of Open Datasets on the Web, billions of triples in
the LOD cloud
... Bad news, there is a lot of inconsistent noisy data out there
... but this can be solved with standards, etc
... other bad news is that much of the datasets out there is boring!
... for example, stale data
... there is also 'unusable' data, that the majority of the general public can use
... how can those without access to technical skills & expertise make use of Open Data?
... small-scale communities or journalists?
... If we look at the Web, in the beginning there was a need for a webmaster to develop web-pages, but then tools like wikis, blogs came along that helped everyone to create web-content
... this should be possible with Open Data too, to encourage use
... visualisations are an easy win to get people to make use of Open Data
... Visualbox, a tool for creating visuallisations based on LD, used in workshop
... feedback was positive, and people learned quickly. however SPARQL was deemed difficult by workshop participants
... another complaint was about the quality of the data
... Call to arms: we need better tools - libraries and APIs for geeks are not enough
... general public usually have better needs. citizens need to be empowered to use Open Data, so they don't need a PhD in Semantic Web to get started!
... visualisations are a good way to start
<JeniT> seems like is relevant re tools
subtopic: Andreas Koller from Royal College of Art, talking about Opening Open Data
Andreas: background in graphic
design
... wants to discuss graphic design and coding, and tools that allow ordinary people use Open Data
<alex> JeniT: I once asked them whether they had documentation for an API for whatever software Oxford had bought, and they pointed me back at our own people
<alex> (they didn't seem to do 'open' at that time)
<alex> but stuff like is cool
<JeniT> alex: you still have to upload your data to them, I think, to use it, so not for everyone, but in terms of interface it's something to look at
Andreas: designers could help with data ownership and data ethics
<bhyland> RE: reference to the saying, "Data is the new oil!", see
<StevenPemberton> s;[link here to CG please];-> Open Linked Education Community Group;
Andreas: When teaching students to code, they may have a fear of tools
<bhyland> Jer Thorp, "Any kind of data reserve that exists has not been lying in wait beneath the surface; data are being created, in vast quantities, every day. Finding value from data is much more a process of cultivation than it is one of extraction or refinement."
Andreas: having libraries for
existing designers' tools would enable easy access to Open
Data
... as would low-level examples and list of data catalogues
... This is an example of how Open Data could be opened up to another community
... small effort for Open Data practitioners, but would be of great benefit to other communities
... easy access to Open Data would enable designers (and other communities) to see the value within the data and enable them to use it and extract knowledge from it
subtopic: Benedikt Groß, Royal College of Art, Large Scale Data & Speculative Maps
Benedikt shows Data Viz Pipeline
Benedikt: most of what we have
been talking about today focuses on the left side of the
pipeline
... will show some projects that use Open Data
<bhyland> The HBR article by Jer Throp nicely supports the thoughts of the speakers, (I think), "As we proceed towards profit and progress with data, let us encourage artists, novelists, performers and poets to take an active role in the conversation. In doing so we may avoid some of the mistakes that we made with the old oil."
Benedikt: Metrology, visualises the London tube map with Open Street Map data as a mental map, by mapping actual locations to tube map, using mathematical models
<StevenPemberton_> He showed the mapping from true life to the tube map, and then reversed the process to make a real map with the same distortions
Benedikt: Speculative Sea Level
Explorer project Combines NASA data on sea level with map
visualisations to show effects of sea levels rising and
falling
... sneak preview to m3ta.js, a visual programming language with metaphor to lego-blocks
<bschloss> Fascinating to see what Royal Academy of Art people can do for visualizations. Can less skilled people do something nearly as good. My IBM colleagues are experimenting with a site called Many Eyes 2.0 (beta) at
subtopic: panel discussion
julian: do you see yourself creating a toolbox for visulaising open data?
benedikt: great to release tools, but you can't just release source-code but need documentation and examples too, which is time-consuming
Alvaro: you can't just release code/tools/projects, but you are responsible for maintaining it (like kids :) )
<yvesr> had very good experiences with for data visualisation - very powerful toolkit
Question from audience
<bhyland> @Alvaro, Interesting analogy, Open Source is like a marriage, 'it comes back and you have to answer questions… it is also like children, you cannot let them out into the wild [without guidance]' ;-)
Ivan: if you have to convince CNN in an elevator pitch to use the approach as BBC, how would you do it?
yvesr: ) (BBC, from audience): focus on your own data, and use Open Data where possible to fill the gaps
TimBL: Who publishes data about
their own products?
... if people publish data about their own products, there won't be a need for CNN to publish data
<bhyland> I invite everyone to publish information about their organization, project, product and/or service on the Web today using.
<bhyland> If you care, it is a entirely Linked Data app. If you don't care, just fill out the form, publish the dir.ttl file produced for you automagically (like FOAF-a-Matic) on the public Web and submit it for harvesting.
sofia: so much in archives, not just about publishing data, but reusing data
Comment from audience: metadata is advertising for your data
<bhyland> RE: dir.w3.org, if you want to read an FAQ, see
Neil Benn (Fujitsu): in 2020, what have the political arguements been to convince governments to publish Open Data
Alvara: it's socially beneficial
for everyone, Open Data enables people to solve more
problems
... in chile, a lot of money is being invested in start-ups and entrepreneur programmes; is is not fair to ask for similar spend on democratising data?
Benedikt: in the future, there mightn't be an open data debate, it will just be the standard
Bschloss: TimBL alluded to a key
thing, CNN will have to put out metadata on related
content
... uses the example of airlines. putting out ticket information because they wanted to be listed
Andreas: key is that entry level for using Open Data is very low
bhyland: there is now a community directory online dir.w3.org/
<timbl> logger, pointer?
<bschloss> CNN will have to put out metadata or risk losing sales or eyeballs. Let's learn from history where first movers got value (like Airlines that listed their schedules and prices on GDS', then other Airlines followed rapidly to not be at a disadvantage)
to list Linked Data products, services and projects
<alex> bhyland: the "Create an entry" link at doesn't work, and there's a missing stylesheet error when one goes where you'd think it should have pointed
<bhyland> On behalf of the W3C Gov't Linked Data Working Group, I encourage everyone attending this workshop to add their organization to dir.w3.org today or tomorrow.
<alex> (ah; I'd missed the '?view' off the end of my guessed URL)
sofia: important to show the value to publishers of opening up data
<StevenPemberton>
<bhyland> It is simple to do, fast and gets more valuable Linked Data on the public web … plus it builds community & helps us all help one another.
<StevenPemberton> Best Buy reports a 30% increase in page views, and 15% increase in click throughs
Alvaro: if a major part of the population cannot access the data, the technical discussions are irrelevant. general public needs to be empowered to access and use Open Data
<bhyland> @alex, what browser are you using? I see it ok on FF & Chrome
Andreas: agrees, general public should realise Open Data is THEIR data
<rjw> bhyland: the Create an entry link on fails :-(
Benedikt: things are looking positive, lets hope to implment even 30% of what we have been discussing here tday
<bhyland> Ah Alex, I see the problem, try this
<bhyland> Thanks for pointing out that incorrect link, will fix now. | http://www.w3.org/2013/04/23-odw-minutes.html | CC-MAIN-2016-30 | refinedweb | 11,618 | 63.8 |
The next version of SBA SDK is coming (under a different product name) and it's time to start looking at some of the changes in the Add-ins area.
Referencing SBAAPI.dll
First of all, we found out that in our 1.0 SDK sample for UIAddIn we were referencing SBAAPI (when we are always telling you not to do it). So now that 2.0 is coming, I can show you why this is bad. The problem is that the sample references the v1 sbaapi.dll, which is not going to be there for 2.0, so if you try to install this add-in in our 2.0 product you will get the following error when you click on the "Invoices" button.
If you don't want your add-in to work with Office Accounting 2007 then this is ok, but in case you do, here is what you need to do to remove the reference to SBAAPI.

1. In the UIAddIn project, remove the reference to SBAAPI.
2. You'll get the following build error:
CustomerInvoices.cs(63): The type or namespace name 'CustomerAccount' could not be found (are you missing a using directive or an assembly reference?)
The problem goes away if you use the ICustomerAccount interface instead of the CustomerAccount class that exists in SBAAPI. Here is the corrected line of code:
this
This problem has been fixed in our SBA SDK 2.0. You can find the SBA SDK 2.0 beta here.
Business Logic Add-ins
If you tried using my sample code from my ISdkAddInDriver Sample that I posted in C# or in VB.NET, there are changes that you need to make if you want to remove the reference to SBAAPI.dll. If you want to do validation, you need to throw an ApplicationException instead of a SmallBusinessException, for example the new code will be:
C# -> throw new ApplicationException("EMAIL MISSING");
VB.NET -> Throw New ApplicationException("EMAIL MISSING")
One of the improvements we've made for Office Accounting 2007 in validation is the error message that users will see when the business logic add-in throws an ApplicationException.
In Small Business Accounting 2006 you will see this:
In Office Accounting 2007 you will see this:
As you can see, the error message now is shorter and more helpful. First, we show the Add-in name and we show whatever Message the Add-in specifies in the ApplicationException. You can find Office Accounting 2007 beta here.
UI Add-ins
One of the improvements we made here is the ability to have cascading menus (up to 3 levels), so now you can organize your menus better. We have updated the UI Add-in sample in SBA SDK 2.0 to show cascading menus. You can find the SBA SDK 2.0 beta here. | http://blogs.msdn.com/b/martha/archive/2006/10/18/moving-sba-sdk-1-0-add-ins-to-sba-sdk-2-0-office-accounting-2007.aspx | CC-MAIN-2014-52 | refinedweb | 485 | 73.27 |
Hello All👋 I hope you all are doing well. In this very short article I'll be writing about how you can shut down your system using Runtime class in Java.
Let's begin...
What is Runtime class?
The Java Runtime class is used to interact with the Java runtime environment. It provides methods to execute a process, invoke GC, get total and free memory, etc. Only one instance of the java.lang.Runtime class is available per Java application.
Let's not get much into Runtime class and see how to shut down your pc.
For doing so we must know a few things first:-
getRuntime() method
This method returns the runtime object associated with the current Java application. It is a static method of the Runtime class (to know more about static methods, read my last article, static keyword in Java), so we will call it without creating an object of the class.
Runtime.getRuntime();
exec() method
To execute a system command we pass the command string to the exec() method of the Runtime class. The exec() method returns a Process object that represents the separate process executing the command. Through the Process object we can read output from and send input to the command. Since getRuntime() and exec() belong to the same class, we can call them using method().method() chaining syntax.
Runtime.getRuntime().exec();
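For instance, here is a small sketch of reading a command's output through the Process object returned by exec(). The hostname command is used here only as a stand-in, because it exists as a standalone executable on both Windows and most Unix-like systems; any command would do.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class ExecOutput {
    // Run a command and return the first line it prints on standard output.
    static String firstLineOf(String... command) throws IOException {
        Process process = Runtime.getRuntime().exec(command);
        try (BufferedReader reader =
                new BufferedReader(new InputStreamReader(process.getInputStream()))) {
            return reader.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        // "hostname" stands in for any external command here
        System.out.println("This machine is: " + firstLineOf("hostname"));
    }
}
```

The same pattern in the other direction — process.getOutputStream() — is how you send input to the command.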
Let's see code to shutdown your pc
To shut down we will use the command shutdown -s. We can also specify the time after which the computer should shut down.
import java.lang.Runtime;
import java.util.Scanner;

public class ShutDown {
    public static void main(String[] args) throws Exception {
        Scanner sc = new Scanner(System.in);
        Runtime run = Runtime.getRuntime();
        System.out.println("Enter Password");
        String pass = sc.next();
        if (pass.equals("xyz")) {
            System.out.println("Welcome, you entered correct password");
        } else {
            System.out.println("You entered wrong password, soon your machine will shut down automatically");
            run.exec("shutdown -s");
        }
    }
}
On compiling above code
On entering correct password
On entering incorrect password
Several other tasks you can perform instead of shut down
- On replacing line number 14 with the code below you can shut down your PC after a specified time (given in seconds; 60 here is just an example value)

run.exec("shutdown -s -t 60");
- On replacing line number 14 with below code you can restart your pc
run.exec("shutdown -r");
- On replacing line number 14 with the code below you can restart your PC after a specified time (given in seconds; 60 here is just an example value)

run.exec("shutdown -r -t 60");
- On replacing line number 14 with below code you can log out of your pc
run.exec("shutdown -l");
- On replacing line number 14 with below code you can log out of your pc after specified time
run.exec("shutdown -l -t");
- On replacing line number 14 with below code you can lock your pc
run.exec("c:/windows/system32/rundll32.exe user32.dll, LockWorkStation");
Apart from this, you can run system applications (.exe) using it
import java.io.IOException;

public class SystemApp {
    public static void main(String[] args) {
        try {
            Runtime run = Runtime.getRuntime();
            String path = "C:\\Users\\hp\\AppData\\Local\\Programs\\Microsoft VS Code\\Code.exe";
            Process process = run.exec(path);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
It seems that there are problems finding symbols which require searching in paths other than the one where the current CPP file resides. For example, I am developing Windows apps using the Qt libraries. So I have all of my sources and headers in one single directory divided up into two group folders: "include" and "src". This works fine ... UEStudio can find anything in any header in that directory.
But I would also like to look up symbols in the Qt source files. So I install the sources and add the main Qt "include" directory to the main environment variables (PATH and INCLUDE), in "Compiler Options" I set "Additional Include Directories" to the Qt include directory, and I also add it to the project. I have tried adding it both as a group (i.e. virtual folder) and as a real folder.
In one of my header files, which extends a built-in Qt class, I have an #include statement and a class definition as follows:
#include <QtGui/QSpinBox>
class AlphaSpinBox : public QSpinBox
{
Q_OBJECT
public:
AlphaSpinBox(QWidget *parent = 0);
int valueFromText(const QString &text) const;
QString textFromValue(int value) const;
};
This code works fine when I compile it from the command line (I'm not even talking about compilation here, just finding symbols ... so it should also work from within UEStudio).
When I try to find the definition for "QSpinBox", it can't find it. However, now I get status messages "Parsing xxx of 2500 files..." whenever I open the program...
I tried the following (after waiting for the parsing orgy to stop):
1. changing the forward slash to a backslash;
2. inserting the full path and file name of QSpinBox (which is a dummy include for other header files);
3. inserting the full path and file name of the "real" include file inside quotes;
4. reparsing the active CPP file after every step.
Still, I always get the "symbol QSpinBox was not found!" status message. I'm sorry, but every compiler in the world can resolve include paths better than UEStudio! This can't be rocket science, can it?? | http://www.ultraedit.com/forums/viewtopic.php?p=17687 | CC-MAIN-2013-48 | refinedweb | 353 | 64.1 |
18 September 2008 07:59 [Source: ICIS news]
SINGAPORE (ICIS news)--South Korean high density polyethylene (HDPE) producer Daelim Industrial is considering the option of cutting production at its No 1 and No 2 plants in Yeosu by the end of September due to weak demand, a source close to the company said on Thursday.
“It may cut production if export demand doesn’t improve by end [of] September but this is not confirmed yet,” the source said.
Daelim’s No 1 HDPE line with a 130,000 tonne/year capacity was producing blow moulding grade for large containers and its No 2 HDPE unit with the same capacity was producing blow moulding grade for small containers, he said.
Its No 3 HDPE line with a 140,000 tonne/year capacity was producing pipe grade, he added.
The program segment given below accepts marks in a single subject and uses a nested if statement to check that the marks are valid and, if so, determine the result. The same logic can also be written in a more readable form with an if-else-if ladder.
#include <stdio.h>
int main(void)
{
int marks;
printf("Enter Marks of single subject :\n");
scanf ("%d", &marks);
if (marks >= 0)
{
if (marks <= 100)
{
if (marks >= 35)
printf("Result: Pass\n");
else printf("Result: Fail\n");
}
else printf("Error: Marks can't exceed 100\n");
}
else printf("Error: Marks can't be negative\n");
}
getsockname - get the socket name
#include <sys/socket.h>
int getsockname(int socket, struct sockaddr *restrict address,
socklen_t *restrict address_len);
The getsockname() function shall retrieve the locally-bound name of the specified socket, store this address in the sockaddr structure pointed to by the address argument, and store the length of this address in the object pointed to by the address_len argument. If the actual length of the address is greater than the length of the supplied sockaddr structure, the stored address shall be truncated. If the socket has not been bound to a local name, the value stored in the object pointed to by address is unspecified.
Upon successful completion, 0 shall be returned, the address argument shall point to the address of the socket, and the address_len argument shall point to the length of the address. Otherwise, -1 shall be returned and errno set to indicate the error.
The getsockname() function shall fail if:
- [EBADF]
- The socket argument is not a valid file descriptor.
- [ENOTSOCK]
- The socket argument does not refer to a socket.
- [EOPNOTSUPP]
- The operation is not supported for this socket's protocol.
The getsockname() function may fail if:
- [EINVAL]
- The socket has been shut down.
- [ENOBUFS]
- Insufficient resources were available in the system to complete the function.
EXAMPLES
None.
APPLICATION USAGE
None.
RATIONALE
None.
FUTURE DIRECTIONS
None.
accept(), bind(), getpeername(), socket(), the Base Definitions volume of IEEE Std 1003.1-2001, <sys/socket.h>
First released in Issue 6. Derived from the XNS, Issue 5.2 specification.
The restrict keyword is added to the getsockname() prototype for alignment with the ISO/IEC 9899:1999 standard. | http://pubs.opengroup.org/onlinepubs/000095399/functions/getsockname.html | CC-MAIN-2016-44 | refinedweb | 161 | 54.52 |
I didn't realize I've stopped blogging for a year. What a shame! Fortunately I didn't waste the time: we shipped Whidbey Beta 1 and Beta 2 in the past year! Now with Beta 2 out the door, I have more spare time for blogging. 🙂
Today I want to talk about some interesting facts about Timer in CLR. There is an example for how to use timer in MSDN:
This sample starts a timer and does certain things when the timer fires for certain times, like killing the timer. However, this sample has a bug which will cause trouble in stress scenario. To demonstrate the problem, I made a little change to the code:
using System;
using System.Threading;
class TimerExample
{
static void Main()
{
AutoResetEvent autoEvent = new AutoResetEvent(false);
StatusChecker statusChecker = new StatusChecker(100);
// Create the delegate that invokes methods for the timer.
TimerCallback timerDelegate =
new TimerCallback(statusChecker.CheckStatus);
Console.WriteLine("{0} Creating timer.\n",
DateTime.Now.ToString("h:mm:ss.fff"));
Timer stateTimer =
new Timer(timerDelegate, autoEvent, 0, 10);
// start another thread to post work items to thread pool
Thread t = new Thread (new ThreadStart (PostWorkItem));
t.Start ();
// When autoEvent signals, dispose of
// the timer.
autoEvent.WaitOne();
stateTimer.Dispose();
Console.WriteLine("\nDestroying timer.");
}
// a Thread proc which keeps posting work items to thread pool
static void PostWorkItem ()
{
// Post some user work items to thread pool
for (int i = 0; i < 1000; i++)
{
ThreadPool.QueueUserWorkItem (new WaitCallback (WorkItem));
Thread.Sleep (10);
}
}
// A no-op work item for the thread pool
static void WorkItem (object o)
{
Thread.Sleep (500);
}
}
class StatusChecker
{
int invokeCount, maxCount;
public StatusChecker(int count)
{
invokeCount = 0;
maxCount = count;
}
// This method is called by the timer delegate.
public void CheckStatus(Object stateInfo)
{
Console.WriteLine("Checking status " + (++invokeCount));
if(invokeCount == maxCount)
{
//signal Main.
AutoResetEvent autoEvent = (AutoResetEvent)stateInfo;
autoEvent.Set();
}
}
}
Basically I added another thread to keep posting work items to the thread pool, but the rest is still expected to behave the same: when the timer fires for the 100th time, it should set an event so the main thread will stop the timer.
In one of 5 runs in my machine, I got such output:
5:48:07.625 Creating timer.
Checking status 1
Checking status 2
Checking status 3
Checking status 4
…
Checking Status 93
Checking Status 94
Checking Status 95
Checking Status 96
Checking Status 97
Checking Status 98
Checking Status 102
Checking Status 99
Checking Status 103
Checking Status 104
Checking Status 105
…
Checking Status 698
Checking Status 700
Checking Status 701
Checking Status 703
Checking Status 703
Checking Status 704
Checking Status 705
…
^C
It seems that invokeCount never hits 100, so the program doesn't stop, and some sequences in the output are out of order.
How does this happen? First we need to understand how timer is implemented in CLR, who is executing the timer callbacks?
One simple idea would be putting all timers in a queue and having a dedicated thread doing something like this (pseudo code):
while (true)
{
foreach (Timer t in timer queue)
{
if (t.TimeToFire ())
{
t.InvokeCallback ();
}
}
Sleep(MinimumInterval);
}
However, with this logic one lengthy timer callback would block all other timers. In the CLR, we do have a timer queue and a dedicated timer thread. However, the only job of the timer thread is to maintain the timer queue; when a timer needs to fire, the timer thread queues a work item to the thread pool, and then one of the thread pool's worker threads picks up the work item and invokes the timer callback.
In Rotor's source, the timer thread's logic is in vm/Win32threadpool.cpp, the thread proc is ThreadpoolMgr::TimerThreadStart and ThreadpoolMgr::FireTimers does most of interesting work. The pseudo code looks like:
while (true)
{
foreach (Timer t in timer queue)
{
if (t.TimeToFire ())
{
// put a work item to thread pool
// to call timer cal back on t once
WorkItem work = CallTimerCallbackOnce (t);
ThreadPool.QueueWorkItem (work);
}
}
//MinimumInterval is the minimum of the next firing
// intervals for all timers in the queue
Sleep(MinimumInterval);
}
The timer thread only guarantees to put timer callback requests into a queue in the thread pool (ThreadpoolMgr::QueueUserWorkItem) in the order of timer firing. But timer callbacks are not called in a serialized way. If a timer fires twice and there is more than one worker thread in the thread pool, there is no guarantee that the first callback will finish before the next callback starts. Therefore, it's not thread safe for timer callbacks to access shared data without locking. That's why the example in MSDN breaks: when CheckStatus is executed on multiple threads, it's possible that "if(invokeCount == maxCount)" will never be satisfied. Changing the code to this would make it more robust:
public void CheckStatus(Object stateInfo)
{
int count = Interlocked.Increment (ref invokeCount);
Console.WriteLine("Checking status " + count);
if(count == maxCount)
…
Another interesting thing about the timer implementation is that when a client thread creates a new timer, it doesn't insert the timer into the timer queue directly. Instead, it queues a user APC to the timer thread (see ThreadpoolMgr::CreateTimerQueueTimer and InsertNewTimer). Similar things are done for updating (ThreadpoolMgr::ChangeTimerQueueTimer and UpdateTimer) and deleting timers (ThreadpoolMgr::DeleteTimerQueueTimer and DeregisterTimer). That way, client threads don't need to synchronize access to the shared timer queue. After all, the timer thread is sleeping (alertably) for most of the time.
PS: to make the race happen more easily, I did more tweaks to the MSDN sample than the threadpool workitems, it should be obvious to you. 😉
This posting is provided "AS IS" with no warranties, and confers no rights.
1.6 The apply Method
In Scala, it is common to use a syntax that looks like a function call. For example, if s is a string, then s(i) is the ith character of the string. (In C++, you would write s[i]; in Java, s.charAt(i).) Try it out in the REPL:
"Hello"(4) // Yields 'o'
You can think of this as an overloaded form of the () operator. It is implemented as a method with the name apply. For example, in the documentation of the StringOps class, you will find a method
def apply(n: Int): Char
That is, "Hello"(4) is a shortcut for
"Hello".apply(4)
When you look at the documentation for the BigInt companion object, you will see apply methods that let you convert strings or numbers to BigInt objects. For example, the call
BigInt("1234567890")
is a shortcut for
BigInt.apply("1234567890")
It yields a new BigInt object, without having to use new. For example:
BigInt("1234567890") * BigInt("112358111321")
Using the apply method of a companion object is a common Scala idiom for constructing objects. For example, Array(1, 4, 9, 16) returns an array, thanks to the apply method of the Array companion object. | http://www.informit.com/articles/article.aspx?p=1849235&seqNum=6 | CC-MAIN-2019-51 | refinedweb | 203 | 71.14 |
CAPM APT and DDM
Use of the dividend growth model, CAPM, and APT: how accurate are these three models, and how realistic are their assumptions? Which is the best one to estimate the discount rate for Target Corp.?
Solution Preview
The dividend growth model relates stock price, dividend, discount rate, and growth rate. The model states that price = next dividend / (discount rate - growth rate). This one is very easy to use and fairly accurate in the long run (the model calculates intrinsic value in the long run; it says nothing about how the stock moves in the short run).
CAPM determines a theoretically appropriate required rate of return of an asset, given the market return and ...
Project Help and Ideas » Water Curtain
I am going to make this a blog about my Water Curtain project:
So a lot of this is going to be my wondering out-loud.
It is definitely open to "Help and Ideas".
This started with my question about switching a solenoid valve.
Well, actually this started when I saw this YouTube video of a fantastic water curtain at a mall in Japan (Canal City).
So I have some solenoid valves and transistors or solid state relays. So I can build ten column concept.
I would like to expand this to at least 64 columns but that would take a financial investment that had best wait until I can see if I can make it work.
So this is all at the initial thinking stage nothing has been thought through besides purchasing the initial 10 solenoids, transistors and power supply.
I am going to start with just 8 columns, thinking that I might use registers and then be able to left shift and right shift to do the math. Timing is going to be critical, so I do not know if using the built-in timers would be best or if using interrupts would even be feasible, as there will need to be near-simultaneous switching of the solenoids.
What do you think? How should I approach timing the solenoid on/off?
I suppose I will have to figure out the dimensions of my trial curtain so that I can time how long it takes for a drip to fall. I am starting with a guess of 500 ms, so everything will need to be done in that time frame.
so if I wanted to make the letter H:
(Assuming a | is a drop of water)
||||||||
| |||| |
| |||| |
| |
| |||| |
| |||| |
||||||||
So I am picturing this as a binary representation, is that possible?
Could I put that in a register? Is that a good way or would just using a variable work just as well?
Timing (starting from the bottom) would be.
onononononononon
onoffononononoffon
onoffononononoffon
onoffoffoffoffoffoffon
onoffononononoffon
onoffononononoffon
onononononononon
using a 500ms window:
column one on 500ms
column two on 75ms off 357ms on 75ms
column three on 214ms off 75ms on 214ms
column four on 214ms off 75ms on 214ms
column five on 214ms off 75ms on 214ms
column six on 214ms off 75ms on 214ms
column seven on 75ms off 357ms on 75ms
column eight on 500ms
Is there a better way to be thinking of doing this?
Thanks so much for all of the help, I am really going to need it.
Ralph
Sounds like an amazing project Ralph. Once you complete this you'll have something to really show off and be proud of!
If you're going to be driving 8 solenoids, why not use the shift register that the guys used in the robotic xylophone? You could just shift in the bits you need at the timing that makes things work right. For instance, for the H you show, using your 75ms-per-row timing: you'd shift in 11111111, 75ms later 10111101, 150ms later 10000001, 75ms later 10111101, 150ms later 11111111.
You could even program in a library with all the letters, numbers, and characters you'd want to use. Then you could build a function that will parse whatever string you enter and feed in the proper bits. It would be a lot of coding but well worth it in my opinion. Imagine having guests over and setting up your water curtain to say "Welcome John and Jane" or whatever. That would be sweet!
Ok, while I was trying to go to sleep last night I had some interesting thoughts about how to do this, or more specifically how to not do column addressing (individual solenoid) as I illustrated above, but to do it like this:
(remember this was thought up while trying to go to sleep so besides the fact that I am just learning all of this as I go it might be a little fuzzy).
What if I did some thing like this?
PORTB = 0b11111111; or PORTB = 0xFF;
PORTB = 0b10000001; or PORTB = 0x81;
PORTB = 0b11111111; or PORTB = 0xFF;
Now why would I need a shift register or BCD-to-7Segment Decoder? Why couldn't I just drive the transistors directly?
I could see if I were to use relays I might need more current than the mcu should/could put out. But I am leaning towards using transistors.
I suppose the reason to use an external driver component would be to save pin count, so that I could drive more solenoids from one mcu.
But what about using multiple mcu's using SPI(?)? Or even TWI. What are the advantages one way or the other?
I am picturing having a "library" stored on an EEPROM. That will require a new thread, as I know absolutely nothing about using/programming an EEPROM.
Thanks rkuykendall for the pointer to the xylophone project.
Any thoughts and comments are welcome and Thank You!
That xylophone tutorial is really great. It pretty much explains what I am going to be doing.
It looked like they were only playing one note at a time. I will need to simultaneously trigger multiple solenoids (eight to start with). It looks like the shift register should be able to drive multiple pins high, triggering the solenoids (well, actually the transistors, which trigger the solenoids).
I will definitely be looking at the shift register but for now I think I will just drive the transistors directly from the mcu.
I think the nice thing about starting out with the shift registers is it will easily scale to however large you want. I'm thinking specifically about how they talk about chaining them together. Plus they only cost about a quarter from Jameco so the cost is not prohibitive. You've inspired me Ralph, I'm gonna work on one of these myself.
That's great rkuykendall, thank you. It's so amazing (to me at least) the resources we have available because of the Nerdkits.
I went looking to buy some shift registers and found some on AVNET.com for 13¢. I am sure they have a minimum order so I'll have to wait. The same I am sure for JAMECO.
I am really excited about this; it doesn't really appear to be so hard. At least the physical setup and the initial coding to turn solenoids on and off appear to be very straightforward, especially since, thanks to Humberto and Mike, I have source code from similar projects like the xylophone.
Once I get the physical setup I need to figure out how to time a water drop.
If I turn the solenoid on for 75ms how much water will I get and how fast will it fall 3' or 6'?
I am picturing setting this up in the back of my pickup truck so that it will be mobile and I can possibly use it for advertising.
It certainly should get people's attention.
I was wondering: if I used an accelerometer to turn it off while moving, I could turn it on when I came to a stop.
Then I was thinking about lighting it up, maybe using some laser pointers and
Laser Line Generator Lenses
Of course I am too cheap for that kind of money, but I think I can cut a wine glass stem with my dremel and a diamond wheel and possibly make my own lens. (I didn't think up the idea of using a wine glass stem; I saw it quickly referenced in a forum somewhere. It was just a throw-off comment, but it sure stuck with me.)
If I get some fair weather I'll go out to my shed and start putting the framework together.
I still need to figure how I am going to plumb it all up (cheap) also.
Stay tuned, comments please.
I agree it is amazing what you can learn around here. Great job to Humberto and Mike for putting together the kit, the tutorials, as well as this community!
As far as how much water you'll get in 75ms I'm not sure; it'll be a function of the pressure as well as the size of your valve. How far the water will travel is a little easier question to answer. Any object accelerates at about 32 feet per second per second (32 ft/s²) under the effect of Earth's gravity, and the equation for how far an object falls in a given time is d = Vi*T + (1/2)*A*T², where d = distance traveled, Vi = initial velocity, A = acceleration, T = time.
If you have a pressurized tube at the top then there will be an initial velocity component, and it may be significant. If you have an unpressurized feed at the top then the water will be under only the influence of gravity. I find it easier to think in terms of the length of the column, and in the case of 75ms you will have a column approximately one inch long. It would be easy to calculate the timing once you figure out how large you want your letters or patterns. Of course there will be some fine tuning after everything is set up.
Have you emailed Mike or Humberto to get their take on this? I'd be interested to hear what they think of the feasibility, and of course any suggestions they had towards method. Again awesome project idea.
Ralph,
This is the coolest thing i have seen all day! very awesome. i have a few things bouncing around in my head.
I LOVE your idea of bit bashing the solenoids. the only problem i see with that is the granularity. what i mean by this is when you want to make a curve. if you have a 1 at 75ms ON (numbers from your example above) then when you try to make an 'O' i think it will look squared off. you will probably have to go small. of course i have no idea what the timing will end up being but instead of making a single character 7 bits tall make it 14 or even 21 and use 25ms on each "squirt". this way you could also make designs and not just letters.
another thing that popped into my head while reading this is lighting. i was thing you could run some LED's down the sides and, when viewing at night, you could strobe the lights at different times to make different designs. think of this, you send a design in water that looks like a clock BUT only strobe the lights for say 20ms about half way thru the "drop". move the clock a little and do the same thing. it would look like the water was stationary. granted you would not get a high "frame rate" out of it but it was just a thought to throw out there.
thank you for letting us all join in on your thought process Ralph!
bryan
Thanks everyone for the feedback and comments, rkuykendall thanks for the formulas, I was asked once to do the water flow calculations for a very extravagant fountain. I actually used
the fountain came out beautifully.
Today I am thinking about the water flow and nozzle, as 6ofhalfdozen had mentioned in an email.
I have no idea how much water will pass through these particular solenoids.
The manufacturer lists the orifice as 1.7mm.
So here is a question: how much water @ 5 psi passes through a 1.7mm orifice in 75ms? And then of course @ 1 psi and possibly 0.5 psi?
The 5psi is a random number I grabbed as is the 75ms.
I was looking for battery powered pumps and they are expensive, relatively speaking.
I have a 12 volt pump from a RV (recreation vehicle) that I picked up at a yard sale so I have no idea if it even works.
SHURFlo Model # 200-21C-39
Voltage 12 VDC, AMPS 7.0, OPEN FLOW 2.8 and 10.6 GPM (gallons per minute).
I know these RV pumps have a limited head (vertical distance a pump can pump).
I ordered these to use as a nozzle.
I figure I can drill them out to get more water. I also have 1/8" NPT (National Pipe Thread) plugs which I could drill.
I think I am going to modify the body of the solenoid in order to pack them as tightly together as possible:
I need to remove the lug on each side. It does not appear to be functional, as the valve seat is beyond the lugs.
Without the lugs removed, the nozzle center lines would be about 17 mm apart, which is probably too far apart to make it appear as a solid curtain, so I might need to make two rows with staggered center lines. That would get me to about 13 mm.
What do you think about those coils being stacked so close together? Think that might cause a problem? The outside ones would probably be on constantly. In fact, on second thought, I probably do not want solenoids on the outside; just pipe them up for continuous flow. Once I have a larger count than just 8, I should be able to have maybe 4 columns on continuously on each side.
Of course if I wanted to do a bar all the way across then I would need solenoids on the end columns.
As far as making shapes and objects, luckily I can vary the time of each row so I'll be able to squish a row(s) to make shapes. There would be more rows than the 7 I have used for the H illustration.
This is reminiscent of the discussions going on (a long time ago) about making fonts for dot matrix printers; of course there we had fixed rows and columns to work with, and every graphics program developer has to deal with boxy pixels. I will have a fixed number of columns but a variable row count.
They should be fine, stacked up. The bigger problem is getting all the plumbing together in such a tight space.
A little trick I used in bakery equipment where water was used to slice into the top of the bread before baking was to put a little check valve just ahead of the nozzle. It gives you a positive stop when the flow stops instead of a dribble from the feed line.
||
||
\
V
Mongo, that is interesting; probably not easy for me to implement unless I found the check valves somewhere.
I am not sure I have a good picture of what you are saying?
We used water streams to slit the top of the dough for breads and rolls. At about 70 PSI, a stream of water about 1/32" does a nice job. You can control the depth of the slit by increasing and decreasing the pressure.
Instead of letting the jets run constantly, we put them on solenoids. Although the solenoids were pretty quick acting, the nozzles would dribble when they shut off, causing water marks on the top of the bread that really showed up once it was baked. It became a quality issue.
I got some little check valves that fit right in front of the nozzles and it cured the problem. it takes a tiny bit of pressure to get past them and as the pressure drops, they basically slam shut. Being right at the nozzle, there is no extra water to dribble.
They are called "spring-ball and cone" type. It has a little ball that fits into a cone-shaped indent inside the valve. A spring holds it in position. The water enters at the tip of the cone, like a funnel backwards. It overcomes the spring tension and flows through. When the water stops, the ball seats back into the funnel and positively stops any further flow. Capillary action does the rest over the very short distance left.
A good way to picture it is a rubber ball in a funnel, only really little.
Ah the check valve goes after the nozzle that makes more sense.
∏
V nozzle
^
● check valve
§ spring
∩
⋅
⋄ squirts
⋄
Thanks again Mongo. The logic of this, make a defined pulse without any drips, is starting to sink in.
I can see where this would, as you said, make the pulse more defined.
Now to find some "spring-ball and cone" type 1/8" NPT check valves.
I just Googled "spring-ball and cone check valve" and searched ebay. Those are expensive items; the cheapest I found were some 3/8" plastic ones from Grainger for $13.00. Well, if anyone comes across some 1/8" spring-ball and cone check valves, aka "poppet checks,"
or you have other ideas, please let me know.
Ralph
Yep, not cheap at all... There is an alternative that can be made from junk around the house. Basically, a rubber flapper across an orifice. The rubber acts as the spring and the valve component together. I am sure that a little creativity and brainstorming can come up with all sorts of ideas.
Ok, time for an update on my thoughts on the Water Curtain.
I should be able to make up a framework to hold everything in the next two weeks, that is, if the weather stays above freezing and it doesn't rain.
I can not decide between 5' or 6', what do you think?
I think I will have the two outside columns on each side be a steady stream. That will give me 8 columns for the animation, for a total of 12 columns.
Speaking of the animation, is there (could there) be a POV (persistence of vision) aspect to this? I wonder about something like Keyster had suggested doing with lighting. What is the POV frequency? It seems like it is 60 Hz, maybe less, so let's say 40 Hz or 40 frames per second. Would that be asking too much of the mcu? Probably the slowest component will be the solenoid, with a 5-8 ms response time. I could just put each frame into a loop.
I have been experimenting with a water dropper. Drips are certainly more visible than squirts, so it looks like I will need to use low pressure on the pump and possibly nothing as a nozzle, or at least something larger than the 1.7mm solenoid valve orifice, so that I get more of a drip than a squirt.
I will have to have a solenoid hooked up to see the effects. I have also tried colored backgrounds; so far a light yellow seems to be the best. While doing the background test I noticed that the shadow of the drip is as noticeable as the drip itself, possibly even more noticeable. I wonder if having a screen in front of the display might make the animations more visible? Of course that would probably be a detriment to the whole effect of the Water Curtain. This certainly is interesting.
So here is what I can see that I need at the moment. If anyone sees something and thinks "hey, I could do that" or "hey, that would be interesting to do," please jump in; there is plenty for everybody, no matter your skill level.
On a PC I need a script to process a file of the designs (or take direct keystrokes).
The file would look like my H in binary (if this is the best way to go)
11111111,75
00111100,75
00111100,75
00111100,75
00000000,75
00111100,75
00111100,75
00111100,75
11111111,75
11111111,75
11100111,35
11011011,30
10111101,35
10111101,35
01111110,30
01111110,35
10111101,35
10111101,30
11011011,35
11100111,30
11111111,750
11111111,750
So we have a binary number and time factor.
Now are binary numbers the best to use processing wise?
Would:
FF,75 or FF,4B or 255,75 be better than 11111111,75?
I like the binary; besides, you could overlay a transparency of an object you want to have on the curtain over a sheet of ||||||||| and be able to get your numbers easily. That is doing it by hand; of course that's not complicated or challenging enough. I want to be able to scan an image taken from a tablet and send the output to the mcu. I had been deeply involved in the PDA (personal digital assistant) field, you know, like the Palm Pilot, and signature capture was always the most challenging application to accomplish, at least in the early days. Capturing a signature on a handheld device evolved down to very simple coding (but it took a lot of hard work to get there). Essentially you know the size of the signature box, and then you would just scan that area for changes and record each change position as a dot in a graphics file. Then you would scan on; chances are once you got the first dot you would have more immediately following on the same line scan, so depending on the stroke thickness you might have 5-10 dots. You would finish scanning that line and then scan the next. On and on you would go, scanning and essentially building the signature in the graphics file. Easy, right? Well, that is what I want. Anybody up for it? All you need to do is scan for a change; you are not concerned with color or anything else, you just see that it is no longer white at 35,79 36,79 37,79 38,79 etc., and then the next line.
Then on the mcu I need to take the UART download of the processed image and send it into EEPROM. I have absolutely no knowledge of EEPROM programming, so I'll be looking for specific help on that in the Support and Programming forums.
Of course then I need to read the EEPROM and turn my solenoids on and off.
Sounds like fun.
Like I have said if anyone would like to take a piece of this I sure would appreciate it.
I was fascinated by your water project idea and I had a thought. Can you treat the outputs like a dot matrix printer? That is how they do the smoke signs you see coming from planes advertising at sports events.
I kinda had a thought on this. What if instead of pumping the water through the valves, you built a collection trough that you filled with water. Then you could pipe the valves from the bottom of the trough and let gravity feed them. You would have a lot less issue with water pressure causing a 'squirt'.
What do you think?
I'm working on a trough idea like you're suggesting, Rick; unfortunately most of the solenoid valves require a certain amount of pressure to activate, and the gravity-fed ones seem to be more expensive. After talking with an ME at work, though, I think I have a working idea but need to do some more testing. It will involve homemade solenoids like the NK guys did, nylon rods, and needle-valve-like setups.
Doing a homemade needle valve solenoid would be so cool; it definitely would be doable, especially if you had a lathe available.
I do not even have a drill press so a lot of my machining work is done with my dremel by hand and eyeball. Definitely not precision work.
I should be able to get water flowing within a week or two at most. I do not know when the transistors I ordered from ebay might arrive; they definitely are a Chinese issue, but for $1.29 for 50 I figured hey, go for it; if they ever show up, that will be good. I have some 24 volt relays I got from SparkFun to use in the meantime, but I think I prefer the transistors.
Using what I have around the house, I am going to build the framework out of 1 1/2" Sch 40 PVC pipe. I will put a 3" PVC accumulator at the top and run the feed from the pump to the accumulator, with a relief piped back down to the pan through a ball valve to control pressure. It would be fun to add a stepper motor to the ball valve to automate pressure control; I wonder what size stepper I would need to generate the necessary torque. Technically I should use a globe valve instead of a ball valve, but that would require multiple turns instead of a quarter turn, which would be easier to automate with a stepper motor.
I have no idea what the solenoid valves I have will require. I doubt I will have even a half pound of pressure on the accumulator. I just happen to have a U-tube Manometer from another life so I'll be able to monitor pressure at the accumulator (solenoids) and control it with the relief ball valve. I could vent the accumulator and essentially have Rick's trough but I think having a closed system with the relief valve might give me more options, especially if I could have the relief valve under the control of the mcu.
Well, I should have my Water Curtain flowing this weekend. I am still missing the relay to turn the pump on and off, my water piping fittings should be in tomorrow, and my shift registers have not arrived yet. Technically I'll be able to fill the accumulator with water by hand, so I do not need the pump and piping completed to see if this is going to work. Well, I am sure it will "work", but we'll see how well it works. I still do not know if I will need a nozzle, or if just letting the solenoid drip without a nozzle would work, or possibly just adding a nipple. I'll set up a variety of configurations.

Now I still have to finish my coding for the test. I pretty much have inputting the letter H into the onboard EEPROM and then reading the EEPROM handled. It would be nice to have more figures added. I still need to code a "Playlist" of the shapes I want to display. I would like to do a double-buffering routine so that I have the next figure stored in RAM instead of reading it from EEPROM while trying to display. Of course this is all speculation, as I have no idea what reality will bring. Luckily I have the Nerdkits Tutorials as references; I definitely will be using them.

I will start with the letter H and just loop it over and over. I'll try to use Rick's great button code to be able to vary the timing delays. The H I have pictured so far uses the same delay in its seven steps, so just changing a delay variable should change the figure. The water will be falling 5 feet, so it will really be interesting to see what happens.
Sounds like things are progressing well, I hope you'll be posting youtube video of it once you have it running. I'm waiting on some materials to see if my homemade stuff will work.
Well, I only have a cell phone .AVI video source, so it would be a huge download, but I certainly will have it posted. Things keep coming up, so it might be a challenge to make it by this weekend, but I should see water flow and then see if I can make an object appear. It has to work, to flow and stop flow, so it should "easily" produce horizontal bars, and then I'll try a forward and back slash and then my letter H. If that all works and is perceived, then an O would be next. Timing is going to be the thing, that and water flow. I wonder if I will need to get into water chemistry in order to get the correct drip.
Would soft water be better than hard water? I am picturing this as an accumulation and absence of drips; I have no idea if that theory is even valid.
I'll do a full writeup with links and prices for the whole build. It turns out I am going to have a bit invested in this, so I sure hope it works.
Ralph
For the first time I powered on one of the solenoid valves and I think I may have a problem.
Switching on power to the valve does not energize it instantaneously, there is a very slight delay until I hear the solenoid pull open.
The spec on the solenoid valve called for 5 - 8ms actuation time which I think I can confirm but nothing was said about de-activating.
Then on power off there is a half second (500ms) delay between power off and the closing of the valve.
This is all done by ear with no instruments so all figures are just my judgement (guess).
When we are talking about flipping the valve on and off in 75ms cycles that gives about 500ms for the whole object composition.
So with a 500ms delay I will only get one drip per frame.
I think there is just going to be too much of a lag to build the object. But of course the timing cycles were just guesses about what would be required.
I can see where building horizontal bars will probably work, as that would just be an on-off cycle, but I just do not know how to speed up the closing of the valve.
rkuykendall spoke of building his own needle valves; that seems to be what it would take. If I built my own, then I could have a spring return speeding up the close.
Well I will proceed building the Water Curtain framework since I have all of the materials.
It would be interesting just to see it flowing and being able to turn it on and off. Possibly if this were a pressurized system the water pressure would help close the valve, but at the moment I am picturing this with maybe 6" of head, so the pressure would be insignificant.
I do not know how I would find "fast acting" solenoid valves.
Possibly servo motors mounted on ball valves would have the snap action, which I believe is what is needed, but of course this is all speculation.
Well lesson learned, I hope but I doubt it.
Take a look at this article ... says he's doing 7ms to 30ms drops. Lots of good technical info on making drops. Cool photos too.
That's a bummer, Ralph, but I was afraid that you'd find it to be that way. Have you tried it with the water under pressure? If it will be fast acting in that configuration, then you'll just have to use pressure. I was thinking about you saying that drips look better than squirts, and how you can get a drip from a pressure nozzle. What if you used some sort of "funnel" below each nozzle to collect and drip the water out? Even if it overflowed, the water would lose its velocity and drip off, right? Just a thought.
I'm waiting on some materials but should be able to try out my homemade "needle valve" idea this weekend. If it works I should be able to build valves for about $2. I'll let you know how things turn out.
Noter, that is a great link, thank you. How did you find that? I sure found it hard to read; is he cycling the solenoid to get the individual drops? The timing information is priceless. I would have liked more information on the solenoid valve; it appears his has a 1.5mm orifice, I have a 1.7mm.
I'll read the article a few more times. I would love more details on the timing sensor; I have some thoughts on making one.
Man, rkuykendall if you can make up valves for $2.00 that would be amazing.
I have been thinking about how one might make a valve it starts out simple and then gets complicated.
I searched on "solenoid valve timing water drop" via google and it was the 1st one up. There are a few more in the list, including a DIY Arduino project with schematics and source code that may save you some development time.
I believe he is cycling the solenoid for individual drops and has to change the nipple size as the drops get larger to produce good quality drops. Perhaps more quality than you need for your water curtain but good information to consider.
I just had a thought. Dunno if it's any good or not, but figured I would pass it along. Most of the solenoids I have worked on are all AC powered, so working on DC solenoids is new to me. But in theory, solenoids "hold power" in their coils, so is it possible that you need to give the solenoid a "high speed" discharge path when the power is off? I don't remember you having anything like this in your setup, but I could be mistaken. You will also need to worry about reverse emf and all that if you do provide a "high speed" discharge path, but I wonder if it would greatly speed up the solenoid shutoff beyond just cutting off the power. The faster the magnetic field fades, the faster the valve should close.
just a thought
I love this idea! It got me thinking about ways to control the water flow. What if you gravity fed the water through flexible rubber hoses and used a solenoid to squeeze the hole shut. I think this would work... This might work... Have you ever put your finger over your drink straw and pulled it out of the glass. To me it always seemed like a nice controlled stream coming out the bottom of the straw when I let my finger off.
I sketched this up in about 2 minutes using paint and my pen tablet, so bear with me and I'll decipher the chicken scratchings. The yellow lines are hoses submerged in water. The little black things are the rods from the solenoids (the squiggly above is an inductor/solenoid symbol). When the solenoids are energized, the rod comes out and squeezes the hose shut. The vacuum in the hose wouldn't let the water fall out the end. Also, your solenoids wouldn't need to be close together. They could go anywhere as long as the hoses are in a nice tight line.
Squeezing hoses might not give you good hose life but I'm sure a flap or something could be rigged in there.
Using the math and pictures from Noter's link, there might be lots of time. He has pictures labeled 7ms to 30ms with the drips 2-3cm apart. If 60" = 152.4 cm, that gives me a good window, it seems. Of course it will depend on the solenoid closing.
What would I have to do "to give the solenoid a "high speed" discharge path at power off?" As 6ofhalfdozen has suggested?
Or at least I think that is what I saw. I suppose now that I have the formulas I should do the calculations, and then I would not have to be guessing, which everything I have said to this point has been. I'll try to get something going this weekend; right now I am trying to wire up the transistors and seem to have over-amped my breadboard and blown an mcu and possibly an LCD.
I have never wired a transistor or used one in a project, so this is all new. I made the transistor turn an LED on and off, so it seemed I should be able to turn a solenoid on and off, but I must have gotten the 24 volts misplaced, as I definitely fried an mcu, and the LCD has a whole bunch of black squares, though the entire LCD is not covered. The mcu no longer has the correct Signature, so it is lost unless someone knows how to restore the Signature; the device still powers up.
Yeah, BStory, I was wondering about making a squeeze valve. I would need some squishy tubing, and then of course a lathe would be nice, but I have done some amazing things using my 3/8" drill and Dremel. I could put a magnet on the chuck and, using a Hall Effect sensor, count the revolutions so I could wind some coils with an accurate count. I have seen a guy online winding a coil by hand; I would think that 32 ga mag wire would start cutting after 300 turns.
Well here's hoping,
Here is my Solenoid Control wiring, does it look OK?
Thanks,
I think you may need a resistor between the mcu and the base pin on your transistor to keep it from frying. Also, you may need more current than that little transistor can supply. Look at the schematic from the DIY ardunio project. Notice that he is using a TIP121 which can drive up to 5 amps according to the datasheet.
Thanks Noter, I was wondering about having a resistor mcu to base. What Ω would that need to be? Also, Mike has a 100k(?) Ω resistor from Base to ground in his motors Tutorial; would this help? The current on mcu to Base is limited by the mcu; if anything, I would think there might not be enough ampacity to trip 8 transistors. I will use a shift register, but they have not arrived yet, so I was thinking of driving the transistor directly from the mcu. I can easily add a resistor in line if needed but wonder about the size.
The solenoid is rated @ 6 watt (250ma) the transistor is rated @ 500ma so it seems I should be ok, but I have never done this.
I don't know why I would need 5 amps to drive a 6 watt coil.
Everything I have done and am doing is completely based on speculation and guesses on my part, so possibly I blew the transistor when testing and that is what shorted the 24 volts to the rails of the breadboard and blew the mcu. Could that be likely? Something certainly blew the mcu and probably the LCD, though I suspect a wiring error caused the burnout. I ripped wires out when I felt the voltage regulator start to sizzle. I didn't see any magic smoke, but I just passed my hand by the regulator and felt the heat two inches away, so it was hot.
I am building the physical framework today. I would like to get water flowing this weekend even if it is not controlled yet. Just making a drip (turning the piped solenoid on and off) will be a big step in seeing what I am capable of doing.
I am going to use my modified tempsensor project code, with a potentiometer substituted in place of the LM34 temp sensor, to control the timing of the solenoid on-off cycle instead of hard-coding a value. That way I will be able to try different scenarios.
Thanks again Noter, anybody else see anything or have words of advice, it's much appreciated (and needed).
A 1k ohm should keep things from burning up but if you want to limit the collector current to around 400ma something more like 2k ohm should work. From the datasheet I think you want a base current of about 1.5ma but a bit more will for sure turn the solenoid all the way on so I'd start with a 2k ohm resistor.
I agree with your assessment of what happened. I've fried a few transistors along the way too by forgetting to put the resistor on the base, but I didn't have higher voltage anywhere, so my mcu was ok.
Well, I have most of the framework made up and assembled. I also wired up another solenoid using the above schematic and a 2.2k Ω resistor in line from the mcu to BASE. Now when testing I no longer hear the delay that I was hearing when I just had the solenoid wired up directly, without the transistor and diode, so maybe these solenoids will work. Maybe the diode causes a rapid discharge like 6ofhalfdozen had suggested. It sounds instantaneous, so I am excited this might work.
Movin on,
How are things coming Ralph? My first try at homemade needle valves failed but I haven't given up yet. I think I was using too small a diameter magnet wire to create the magnetic field I need.
I have the eight solenoids wired up and tested .
That was more work than I expected. In 20/20 hindsight I see that I should not have put the transistor and resistor in the solenoid connector box. I got the idea seeing the manufacturer has an on/off LED option and there was space. Everything should be on the PCB with just two wires to the solenoid; then it would have been simple to wire up and the testing would not have been so tough.
I was getting an occasional short when jamming the cover on.
So now I am learning how to use a shift register and the internal mcu EEPROM then I need to get the I2C external EEPROM working.
I should have water flowing this weekend if the weather is good. I don't know if I will have the programming done but I'll finish building the framework and get my pump setup and tested.
Of course I do not know if my pump will work. I powered the motor from my car battery and the motor turns, but I have not pumped any water. Plus I might have an issue with the power supply I got off ebay. I powered it up and it is putting out 12 volts, but when I powered a motor, the motor ran in pulses. It would run for 5 seconds and stop for 5 seconds, then run for 5 seconds and stop; it did this continuously. Now I have no idea if the motor is any good or if it is the power supply. The power supply is from a CCTV (Closed Circuit Television) system, which I have never worked with, so maybe they use pulsed power.
Could it be thermal protection in the power supply causing it to shut down when it overheats?
That could be. The power supply is supposed to be rated at 10 amps. The motor is not big; I'll put an amp meter on to see how much current is being drawn. The motor also does not seem to be spinning very fast or with much power; I'll see if I can stall it.
Well the power supply I got off ebay does not power the pump. So I need to get another one.
I made up the solenoid valves into a solid clump (all 8 together)
I used the "Amazing Goop" to cement them together. They really look very nice in place on the framework.
They are not precisely aligned but I do not know if that will be detrimental or not. I suppose each drip will have a different angle.
I will try those 1/8" NPT by 200 tube series adapters as nozzles (from the first picture).
Well I am back on this project finally!!
I had actual water flow today.
I got the header plumbed up with the pump and actually pumped some water through.
So the 12 volt RV pump I got at a yard sale works!
I was just dumping the water through the nozzles so I got to see the effect of varying the pressure.
I definitely need some back pressure as with no pressure (just gravity) some of the nozzles did not even drip.
Now trying to do some safe wiring, water and electricity do not mix.
Geesch, now I can not find my code that I built last winter. I hope it is on my other computer.
I'll be posting some pictures and movies of my progress.
I have already seen that you will have to be within five feet to distinguish anything. It definitely is a matter of scale.
So let me have your comments please,
Did any one else make any progress on the various components you were working on?
Also in looking at the Canal City video again
I see how they used lights to highlight the flow!
This opens a whole new arena for me to play in and also look for ideas from you all.
I wonder what I might do with a laser pointer (they are amazingly cheap).
What do you think a laser pointer with one of these lenses from Edmund Optics might do?
Of course I am too cheap to pay their price but I do have my Dremel with a diamond blade and some wine glasses.
I actually tried making a laser line lens once and it sorta worked; it changed the laser beam into a flat line,
not very well defined or sharp, but the effect was there.
So what do you think I should do about lighting?
I have a 24volt power supply which I do not "think" is heavily loaded.
It looks like the Canal City lights might be high intensity, which would make sense, and which is why I was thinking about using a laser pointer.
Please let me know your thoughts and questions.
I Really need some help in determining the best way to proceed with generating letters and symbols.
Currently I am using PORTD with 8 leds for testing.
Here is my current working code, thanks to Noter, populating an array and lighting the leds 1 - 255 in binary.
int16_t ON, OFF, i;
uint16_t line_display[256];        // allocate the pattern array

for (i = 0; i < 256; i++) {
    line_display[i] = i;           // initialize: pattern n is just n in binary
}

ON = 20;                           // initialize delays (ms)
OFF = 20;

while (1) {
    for (i = 0; i < 256; i++) {
        PORTD = line_display[i];   // light the LEDs on PORTD
        delay_ms(ON);
        PORTD = 0;                 // turn 'em off
        delay_ms(OFF);
    }
}
This takes up about 7500 bits of ram so I extended it by hardcoding some symbols and letters.
I still have lots of space (ram) on my ATmega328p so I actually could use this but it is ugly and long and there has to be a better way.
For example, here is a display I call slantLeft. I am just placing it in line in place of the second for statement.
PORTD |= (line_display[3]); // ON
delay_ms(ON);
PORTD = 0; // OFF
delay_ms(shortOFF);
PORTD |= (line_display[6]); // ON
delay_ms(ON);
PORTD = 0; // OFF
delay_ms(shortOFF);
PORTD |= (line_display[12]); // ON
delay_ms(ON);
PORTD = 0; // OFF
delay_ms(shortOFF);
PORTD |= (line_display[24]); // ON
delay_ms(ON);
PORTD = 0; // OFF
delay_ms(shortOFF);
PORTD |= (line_display[48]); // ON
delay_ms(ON);
PORTD = 0; // OFF
delay_ms(shortOFF);
PORTD |= (line_display[96]); // ON
delay_ms(ON);
PORTD = 0; // OFF
delay_ms(shortOFF);
PORTD |= (line_display[192]); // ON
delay_ms(ON);
PORTD = 0; // OFF
delay_ms(shortOFF);
That produces this:
      ||
     ||
    ||
   ||
  ||
 ||
||
Now I can put this into an array.
uint8_t slantLeft[7];
slantLeft[0] = 3;
slantLeft[1] = 6;
slantLeft[2] = 12;
slantLeft[3] = 24;
slantLeft[4] = 48;
slantLeft[5] = 96;
slantLeft[6] = 192;
But how would I use it? And there might be a hundred arrays like this.
It needs to go here somehow:
PORTD |= (line_display[192]); // ON
Or is using an array not what I want to do?
Maybe I should just do as I originally speculated and just turn PORTD on and off directly?
PORTD =3; // ON
delay_ms(ON);
PORTD = 0; // OFF
delay_ms(shortOFF);
PORTD =6; // ON
delay_ms(ON);
PORTD = 0; // OFF
delay_ms(shortOFF);
PORTD =12; // ON
delay_ms(ON);
PORTD = 0; // OFF
delay_ms(shortOFF);
PORTD = 24; // ON
delay_ms(ON);
PORTD = 0; // OFF
delay_ms(shortOFF);
PORTD = 48; // ON
delay_ms(ON);
PORTD = 0; // OFF
delay_ms(shortOFF);
PORTD =96; // ON
delay_ms(ON);
PORTD = 0; // OFF
delay_ms(shortOFF);
PORTD = 192; // ON
delay_ms(ON);
PORTD = 0; // OFF
delay_ms(shortOFF);
Using this method would probably make for fewer mcu cycles, as it would not have to do the math.
Right now I am using the mcu PORTD. I had planned on using a shift register or I2C port expander, but the concept of using the mcu as an intelligent shift register or intelligent port expander has a certain appeal.
Of course I will eventually put something into an EEPROM for permanent storage if I do not hard code it as I am doing currently.
Thanks for the help, this is really getting close.
Instead of storing bit patterns as is, you could store 16-bit opcodes. Something along these lines:
If bits 14 and 15 are both "0", then bits 0-7 represent a pattern, bits 8-10 give you the on time (short, medium, long, extra long) and bits 11-13 give you the off time.
If bit 14 is "1" and bit 15 is "0", the reader should save the current position on a stack and then jump to the array position indicated by bits 0-13, a signed value relative to the current position.
If bit 14 and 15 are both "1", the reader should pop a saved position from the stack and return to that location in the array. The remaining bits are ignored.
If bit 14 is "0" and bit 15 is "1", do something else fun with the other bits, like generate them randomly or according to some other rules defined by the other bits.
That way you can define "subroutine" patterns and loops and conditionals and whatever else you want.
Or you could treat it like an LED display and build a font file that you read to turn on/off each pixel/solenoid at a given time.
Thanks bretm and Rick.
Ok, now promise you will not laugh.
Here is my latest "working" code. It is kinda scary in that it works, as I have never used a pointer.
//
unsigned char *arrayName = 0;
int16_t ON, OFF = 0, i;
//uint16_t line_display[256];
//*
unsigned char _array[100];
_array[0] = "slantLeft";
_array[1] = "slantRight";
int8_t slantLeft[7]; // allocate array
slantLeft[0] = 3;
slantLeft[1] = 6;
slantLeft[2] = 12;
slantLeft[3] = 24;
slantLeft[4] = 48;
slantLeft[5] = 96;
slantLeft[6] = 192;
int8_t slantRight[7];
slantRight[0] = 192;
slantRight[1] = 96;
slantRight[2] = 48;
slantRight[3] = 24;
slantRight[4] = 12;
slantRight[5] = 6;
slantRight[6] = 3;
ON = 200;
OFF = 200;
while (1)
{
for (i=0; i < 3; i++) {
arrayName = _array[i];
for (i=0; i < 11; i++) {
PORTD = arrayName[i];
delay_ms(ON);
PORTD = 0; // turn'em off
delay_ms(OFF);
}
}
}
}
bretm you lose me on step 2. I really like step 1 that makes sense I can almost see how to implement it.
The whole concept really seems elegant.
Rick I have looked at font.h and was completely lost.
And Noter just so you will not think that I am not thinking of you, here are my warnings, which I have not been able to fix yet:
WC.c:23: warning: assignment makes integer from pointer without a cast
WC.c:24: warning: assignment makes integer from pointer without a cast
WC.c:48: warning: assignment makes pointer from integer without a cast
_array[0] = "slantLeft";
_array[1] = "slantRight";
arrayName = _array[i];
I really am over my head with this.
Oh when I say it works that just means something is happening and I am not getting errors on the compile.
I am testing now to see if it is actually doing what it is supposed to do.
That's the idea behind pointers, but that's not really pointers. Does it even compile?
To use that technique, _array needs to be "*int8_t" instead of "unsigned char". And then instead of the values "slantLeft" and "slantRight" you would use &slantLeft and &slantRight to get the addresses of those arrays. They would have to be declared before _array. Then it would probably work, except that you go from 0 to 10 instead of 0 to 6 in the nested for loop.
To use my opcode idea, take a look at the actual bit patterns used in the AVR opcodes. These aren't in the regular Atmega168 manual, you need the AVR instruction set manual. See if that gives you any ideas.
Another problem is that you're filling the SRAM arrays at run-time. The code and values to do that ends up in program memory, and then the values are written to SRAM. Then you read them from SRAM. You're limited by program memory anyway so you might as well just use PROGMEM with array initializers (and make sure the make file includes the .data section in the hex file).
This problem goes away with external EEPROM.
Yep, you'll need to fix those warnings. Maybe you would make better progress if you use the nerdkit with the lcd so you can display things as you figure out arrays and pointers. It will be much easier to look at the data that way instead of the single byte at a time you can show with 8 leds. It will take some practice to master arrays and pointers so be prepared to spend some time on it.
Some pseudo-code behind the opcode idea:
// delay lengths
#define NONE 0
#define SHORT 1
#define MEDIUM 2
#define LONG 3
#define EXTRALONG 4
#define EXTRAMEDIUM 5
// instructions
#define SHOW(bits, onTime, offTime) pattern[offset++] = bits | (onTime << 8) | (offTime << 11)
#define GOTO(address) pattern[offset++] = 0x8000 | address
#define CALL(address) pattern[offset++] = 0xC000 | address
#define RETURN pattern[offset++] = 0x4000
// define the program
uint16_t pattern[4000];
int offset = 0; // instruction pointer
int start = offset; // a label
int slant; // a label to be defined later
CALL(slant); // create and store "call" instruction
GOTO(start); // create and store "goto" instruction
slant = offset; // another label
SHOW(0b11000000, MEDIUM, SHORT);
SHOW(0b01100000, MEDIUM, SHORT);
SHOW(0b00110000, MEDIUM, SHORT);
RETURN; // create and store "return" instruction
// the stack--allows 8 levels of subroutine nesting
#define STACK 8
int stack[STACK];
int sp = 0; // stack pointer
offset = 0; // start at first instruction
while (1)
{
uint16_t opcode = pattern[offset++]; // get opcode and increment IP
uint16_t topBits = opcode & 0xC000;
if (topBits == 0x8000) // goto instruction
{
offset = opcode & 0x3FFF; // set new instruction pointer
}
else if (topBits == 0xC000) // call
{
stack[sp++] = offset; // save current instruction
offset = opcode & 0x3FFF; // and jump to new location
}
else if (topBits == 0x4000) // return
{
offset = stack[--sp]; // go back to where we came from
}
else // bits
{
PORTD = opcode; // lower 8 bits only
delay_ms(50 * ((opcode >> 8) & 7));
PORTD = 0;
delay_ms(50 * ((opcode >> 11) & 7));
}
}
This raises the question of why not just write the commands as a "C" program to begin with? The answer is that the Atmega168 can't execute code out of EEPROM, but it can do this.
This is fun. I should code more often.
Yes the code compiled and ran with the noted warnings.
Of course the code did not run correctly :-(
I fixed the warnings!
line 23 and 24 became unsigned char _array[100] = {"slantLeft, slantRight"};
line 48 became arrayName = &_array[i];
The code compiles without warnings or errors but the code does not run correctly.
Oh well back to the drawing board.
My fixes were just guesses.
Now bretm where is the opcode documentation?
Yeah Paul, I will put the I2C LCD on so I can do some debugging.
Now bretm, [quote]And then instead of the values "slantLeft" and "slantRight" you would use &slantLeft and &slantRight to get the addresses of those arrays.[/quote]
I was passing "slantLeft" and "slantRight" as literal text strings in order to name the array on line 48.
I'll have to work on understanding holding the address to an array in an array.
At the moment I do not know how to use it but again an interesting concept.
Moving on. Thanks everyone; with your help I'magonna make it.
Actually just slantLeft and slantRight without the quotes and without the "&" would be the addresses of those arrays. If _array is *int8_t instead of unsigned char, that should be about all you need to do (and fix up the for loops so they don't go past the ends of the arrays).
Oh yeah, the AVR instruction set documentation can be found here.
bretm, the compiler does not like *int8_t.
I get this error:
WC.c:28: error: expected expression before 'int8_t'
Line 28 reads:
*int8_t _array[100];
Hey Ralph, watch some of these and see if all this array and pointer stuff makes more sense -
array tutorials
pointer tutorials
Sorry, int8_t*
Hey Ralph,
Here's another use for your water curtain setup -
levitating water
Thanks Paul, that is fascinating.
I picked up a strobe light at a yard sale to see what effect there might be.
I have been thinking of incorporating your zero cross code and of course Rick's code from my Reflow Oven.
In order to synchronize the strobe with the mcu, I "should" be able to time the strobe exactly to the drops of water; that is, the mcu will be firing the solenoids, so it should be able to fire the strobe at the same time.
I have never done anything with optics, so I picked up a beginner's optics kit from Edmunds Scientific to begin learning at the simplest level. I also picked up a beginner fiber optics kit and some laser modules to see what effects I can build/have.
Of course I am completely stalled trying to get the I2C LCD working so that I can debug/learn how to produce my images.
I have my above code "almost" working.
I suppose I could use both PORTB and PORTC to get my 8 output pins but using one PORT is much cleaner and simpler.
I have never been able to use PB0 in order to use PORTB so using PORTD just makes sense.
I'll post my latest code but right now I am trying to get the I2C LCD working.
I just realized I can use the serial output and see the debug messages on my PC and not worry about the LCD.
Of course I have to get the LCD working.
Oh well it is an absolutely beautiful day and I get to spend it inside at my desk working on my Nerdkit.
I think I'll just go and mow the lawn or do some other outdoor project, it is a waste to be inside.
Duh, of course I can not use serial output (USART) for debugging; the USART pins (PD0/PD1) are part of PORTD, which I am using for the solenoids. I have to get the I2C LCD working for any output beyond the LEDs
which actually tells me quite a bit.
bretm, I really like the opcode concept but just could not get my mind wrapped around it to make something useful.
Now thanks to Cliff (clawson) over at AVRFreaks I have a great "Pattern Generator".
Now I'll ask over on Noter's I2C EEPROM how to get the pattern into EEPROM.
Of course then EEPROM to Shift Register.
I should be able to prove within the week if this will even work; it better, as I have a lot of time and probably $500.00 invested in it so far.
So if anyone would like to work on a section of the long list of things to do please let me know.
I need the EEPROM code.
I need the Shift Register code.
I need a button routine to select the patterns to display.
I need a PC interface to generate a pattern on a pc and send it to the EEPROM or mcu.
I need a texting interface so that someone could enter a text message on their cell phone and have it display on the water curtain.
I need a pc image scanner routine to scan a image drawn on a tablet to a pattern to be displayed on the water curtain.
Sounds like fun, right?
Sounds like you have your work cut out for you to me!
Rick
The opcode idea would only be useful if you run out of room to store the patterns by storing repeated patterns only once, or if you wanted to store "code" to generate patterns on-the-fly, e.g. showing the current temperature or showing randomized patterns, etc.
One advantage of using structures is having only one pointer for all your elements instead of each element having its own pointer. This reduces the number of variables in your program and saves a little time, since only one pointer value needs to be calculated for an array or, even better, passed to a function. Imagine you have a function that needs the values of 25 variables. You could pass the address of each in the call, or, if using a structure, only a single address to the structure containing the data.
You have a lot on your plate with your water curtain project. I think you should save the eeprom part of your application for last. After you get everything else working using PROGMEM/RAM to store your sample patterns, moving to and further populating eeprom would be the next/final logical step.
How did your feasability test using patterns from PROGMEM tables/structures come out?
By the way, did you get your I2C display going again? I think you're going to need it.
I am having problems picturing exactly how I am going to set up a sequence of patterns to display.
Most likely there will be times when I want to repeat the same patterns over and over but I might want random patterns of a certain group.
Or ...?
I know I want to press a button and bring up a list of all patterns and then step through the list selecting the patterns I want to display.
So I can see three buttons.
Button 1 bring up list and step through.
Button 2 select pattern, and enter in to a display list.
Button 3 save display list and run display.
I am definitely doing the PROGMEM method first, but I wanted to start thinking about how I would use the EEPROM.
No I can not get the I2C LCD to work.
Actually I do not need the I2C LCD as I was using PORTD only for a simplified testing method, having all 8 pins available helped me picture what I needed to do for the pattern generator.
I will be using a shift register (eventually two or three serially chained) so I can use the Nerdkits LCD code.
I would like to use the I2C LCD code but I am really getting pressed for time.
I need the LCD for my button selection routines, I will probable be able to modify Rick's Date/Time button routine from his I2C Real Time Clock project, he positions a list on the lcd for selection.
I need at least two more buttons Button 4 to set the ON delay and Button 5 to set the OFF delay.
I will do button 4 and 5 first I need to get that working for my initial testing.
I have had to completely rework my framework but I should have that done today or tomorrow then I need to wire everything up.
I have had water flowing but with out any control now I need to put my solenoids in line and set up the Nerdkit to generate the patterns to be displayed.
If I have the buttons to vary ON and OFF delay than I can make changes on the spot instead of hardcoding the timings as I have been doing.
So I will do something like:
while (1) {
d_ptr = (uint8_t *)pgm_read_word(&patterns[current_pat].data);
for ( i=0;
i < pgm_read_byte(&patterns[current_pat].len);
i++) {
PORTD = pgm_read_byte(d_ptr + i);
delay_ms(ON);
PORTD = 0;
delay_ms(OFF);
}
}
With ON and OFF coming from Button 4 and 5 (0 - 255).
So far all of my timing values are just guesses.
So yes I have a lot to do so if anyone has any pointers to some clean simplified button routine code that would would help a lot as I really want to be able change my timings without have to recompile my code.
I will not have a PC at the initial testing water curtain display so it would be a bother to have to use inline hardcoded ON/OFF.
Having a problem driving my transistors (BC337).
I was able to test each solenoid in the past but have forgotten how I did it and now I can not see if the transistor is firing to trigger the solenoid.
Here is link to the BC337 transistor spec sheet:.
Here is my wiring diagram:
with a 2.2kΩ resister to the mcu.
My Base voltage is <=4.2v could this be the problem? The spec sheet calls for:
| Vebo | Emitter-Base Voltage | 5 volts | there isn't a + - tolerance given.
I "think" when I was testing each solenoid was wired direct to Vcc at 4.89 volts.
I'll try to set that up again to test.
Any other ideas?
The solenoids have just been sitting in a box since I tested them at my desk three or four months ago so nothing" should" have happened to them.
Only the wiring is different and I am now going through the mcu.
Could I not be supplying enough current?
Currently I have LEDs in series which I'll remove.
Thanks for the help and any suggestions.
Found my problem.
I needed a Common Ground between the mcu 5 volt power supply and the 24 volt power supply for the solenoids!!
I probable have a note somewhere about needing a common ground.
I will have water flowing later on this afternoon so I'll be able to see what will work.
Timing will be a issue, it's to bad right now I have to hard code all of of the timings but once I get the button routines worked out I'll be able to make dynamic changes.
I have 6 mcus so I can hard code different test scenarios but keeping track is a pain.
I am using little stickers on the mcus with the timings noted.
Well here we go, please stand by.
Hi Ralph,
How High are you going to mount the water valves? I suppose you would have to bring in that consideration for timing to get a correct picture with the water?
-missle3944
See my thoughts were to let gravity feed the water rather than pressure. That way, since gravity is a constant, you should be able to calculate the rate of movement thus locking in a timing factor. No matter what you do, the water "display" will stretch the farther it travels if moving downward.
Rick,
That is a great idea. So are Ralph's valves would be not pressurized then right? I looked at this water curtain project and it looks like theirs too is gravity fed except for the pump circluating the water from the bottom container back to the top reservoir.
It's about 5 feet.
I can vary the pressure.
If I fully open the relief there will be minimal back pressure.
Of course once I solved the transistor problem all of a sudden one of my pins no longer gets energized.
This is using the exact same code as yesterday but now one pin no longer lights the led or switches the solenoid.
I am also "trying" to get a shift register working I'll probable post a new thread for that.
The preferred method is to use the shift register so that is what I am currently trying to get working!
I am using Rick's code from the Photography Club thread.
I have had the code working now it doesn't I am not sure of the pin outs.
Ralph
Ralph and Rick
What if you used relays instead of transistors. But you just put a tranistor in series with the relay and the mcu pin connected to the tranistor to drive the relay? I've done that and it seems to be fine. But relays aren't cheap sometimes. I know radioshack has some 120v ac ones for around $2 or $3 but I cant remember the current load. I don't know whats better but you might have less trouble with a relay. Just a suggestion,
Mosfets are fully capable of providing the current needed by the coils of the water valves. Relays would be an un-needed and slow addition. A relay being a mechanical device is too slow to provide the quick switching that would be required for the water to appear as pixels in a display.
Honestly I don't know if standard solenoid controlled water valves would be quick enough themselves to provide a good display. I guess Ralph will find out.
@Rick,
Thanks for the response cleared up my confusion on using them for different applications.
missle3944
The solenoids I have have a 5ms response time (I believe).
You're telling me those can cycle on/off 100 times per second!! (5ms on+5ms off) If that's the case, you shouldn't have a hangup there.
Here is the spec sheet cut:
It list the "response time" as 5 - 8ms.
I got the solenoids off e-bay apparently they were made for a third party spec job as there is no labeling or direct specifications.
The seller said they were 24volt so that is what I have been going by.
They appear to be very high quality, made in Italy in fact.
As soon as I bought them at $5.00 the seller bumped the price to $8.00.
I will need 64 more of them once I have proof of concept. Another $500.00 is going to be tight.
I hope they will take a offer. I would really like to get more of the same valve.
Sweet, I would think those should work fine. I will say though, you have your work cut out for you
It is so unbelievable that every time I make a break through "something" will happen.
I really get depressed.
Usually and it did it again today well it started two days ago but today all of my mcu's or breadboards stopped working!
I had two breadboards, one I was using PORTD and driving the LEDs (in place of the solenoids for testing and development)
All of a sudden one led would no longer light running the same code I have been using for the past week. I switched mcus but with all of them the one LED failed to light which previously was working. The pin was not getting energized.
So I switched back to using the Shift Register. I once had the shift register working but this week I was having problems with remembering the pinout, so I tried re-wiring it and it just plain stopped working. I could no longer get any of the leds to light.
Then I sometimes could not program any mcu. This only happened occasionally, but more often I got the "Programmer Not Responding" error until today it totally stopped working.
So today I built a new breadboard and when I moved the USB cable from the breadboard that had been flaky and then stopped
I discovered that I had a bad wire on the USB cable there probable was only one strand of wire still connected.
The USB wire was acting so flaky that when I removed it from the breadboard it would send my mini MAC (OS X) into zombie land and I would have to reboot this also happened a couple of times today.
AND the bootloader got messed up on the mcus that were in place when the MAC got zapped so I had to fire up my pc to reload the bootloader.
I still have not recovered all of my mcu's nor do I have any running using PORTD or the Shift Register.
I would really like to get the shift register running.
I might not be able to work on this much as my first grandchild is coming to visit over the weekend, I would really like to show off my Water Curtain.
Oh well tomorrow will be a better day.
Well I got a breadboard to work for a couple of days and actually have water flow and my solenoid valves working.
Now I am at the hard part, I knew the timing and pressure were going to be a chalenge.
My daughter says she can perceive some of the patterns in the water flow.
If I use 5ms on 2000ms off with 4" of water column pressure I can start to see a bar effect with PORTD = 0b11111111
on and off.
I definitely have my work cut out for me.
I have a U-tube manometer so I am able to precisely measure the pressure as long as I stay under 8inch water column.
Which is about a 1/4# psi.
If I use more pressure everything looks like a steady fluctuating water stream.
Now for more complaining about this darn ATmega AVR programing environment.
As I mentioned earlier working code stopped working or stopped working correctly.
Well now I had my Windows XP Pro tablet near where I had the water curtain setup "thinking" I could save some time with reprograming the mcu.
I haven't gotten a button routine running yet so I am having to hardcode all timings. Anyways when I tried to change and compile the working code only one pin was energised even thought the the code said PORTD = 0b11111111 it would come out as 0b00010000.
So I would have to go back to my house and compile the exact same code to get 0b11111111 output. This was code compiled on my Mac would run correctly code compiled on my tablet would not, the exact same code.
Yesterday the mcu/breadboard decided that it no longer wanted to be programmed so currently I can not change the code.
I have had to do a lot of dismantling and moving the breadboard back and forth so it is not surprising that it stopped working
but it is a pain. I really need a button routine so I could make timing changes in place.
Now as if I didn't have enough to do it appears I will have to make up my own valving system. I started a new thread asking for help in building a coil.
The original supplier of the solenoid valves only has 11 left and everything I am finding starts at $25.00 so I can not afford to
purchase many more valves.
Did anybody make any progress with the needle valves you were trying to make?
I sure would like any feedback on what you found out.
I believe I can make up something along the lines of what Rick had described.
I now know I do not want a pressurized system so that simplifies the process.
I'll make up a drawing of what I am picturing so far.
The problem is I am not a precision kinda guy.
Close enough is normally fine by me, but if I am going to make up a valve assemble I am going to need a lot of precision.
I think I took Machine Shop 1 about 50 years ago so I have that experience to draw from.
Right now I am picturing a bunch of tapered pins in some tapered holes in a waterproof tray.
The tapered holes I can make with a ream it's the tapered pins I would have to turn on the lathe.
So anybody have any recommendations for a lathe, I have looked at the combo lathe and mill but I just do not know what I should get. Or if I should just get the lathe now and hopefully come up with some more money to buy the mill later on.
Then how would I make the coils?
I am in fact on a very limited budget.
I swear this was working!
Definitely there is more grey in my hair.
Here is the "working" code:
#define F_CPU 14745600
#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>
#include <avr/io.h>
#include <avr/pgmspace.h>
#include "../libnerdkits/io_328p.h"
#include "../libnerdkits/delay.h"
#include "../libnerdkits/uart.h"
int main(void)
{
uint16_t ON, OFF = 0;
DDRD = 0xFF;
UCSR0B = 0; //Turn off UART
ON = 1250;
OFF = 1000;
while (1)
{
PORTD = 0b11111111;
delay_ms(ON);
PORTD = 0;
delay_ms(OFF);
}
}
This code produces this:
Notice LED1 (first on the right) PD0 is not lit!!
PORTD = 0b00000111;
Produces this:
PORTD = 0b00000011;
Produces:
and finally:
PORTD = 0b00000001;
Only with PORTD = 0b00000001; is PD0 high!!
Please someone tell what I am doing wrong!
Or what I need to do to restore (it used to work) the code.
I am turning off the UART with UCSR0B = 0; so that should not be getting in the way.
Is there something else I need to do to enable PD0 when other pins are high?
Thanks for the help, I need it.
Does is help if you turn off the uart before you mess with DDRD? The uart still controls part of port D at the point you're first touching it.
There's only one current limiting resistor? Am i seeing the picture right? Can you measure the current going through it with 0b11111111?
Try sinking your output from the micro. Turn your LED's around and put a current limiting resistor on each led to VCC. See what happens. With all going through one resistor, you as you light each LED, I believe the current gets divided to between them. So if your resistor limits the current to 20ma, when you light one led it gets 20, when you light 2 they get 10 ea, 4 would be 5 ea, and 8 would only be 2.5 ea. That is why it is always best to put current limiting resistors on each LED. Secondly the max current output when sourcing current for a port is (I'm at work so don't quote me) I think 100ma / port. So you should never drive 8 LED's at 20ma each on one port.
Apparently it was my attempt to use just the single current resistor as with 8 individual led resistors it works!!
I had been using the breadboard outdoors with the transistors also working off the pins so the Leds were just barely visible.
It was only when I brought it inside that I noticed the led was not lit, probable the solenoid also was not firing but I did not count water streams.
So once again I am indebted to you all, thanks a bunch.
Now I have to get the shift registers working.
Using PORTD was just because I could not get a shift register to work.
I have had Rick's code working but now it just will not show patterns.
I think I might have a problem with the pinout.
Oh I saw interesting things in measuring the current just lighting LED1 (PD0) pulls 49ma.
Light 8 leds pulls 99ma this is with 500ms ON and 1000ms OFF using the individual led resister.
Attempting to light the 8 leds with the single current resister pulled only 56ma.
On to the shift register enough time spent on this.
Thanks again,
Please log in to post a reply. | http://www.nerdkits.com/forum/thread/1369/ | CC-MAIN-2018-09 | refinedweb | 13,315 | 79.5 |
What the Singleton Pattern Costs You
Singletons increase coupling, lower testability, and reduce your ability to reason about the code. All of this conspires to cement unbreakable assumptions into your codebase.
Do you use the singleton pattern? If not, I’m assuming that you either don’t know what it is or that you deliberately avoid it. If you do use it, you’re probably sick of people judging you for it. The pattern is something of a lightning rod in the world of object-oriented programming.
You can always use Stack Overflow as a litmus test for programming controversy. Someone asked “what was so bad” about singletons, and voting, responding, and commenting went off the charts. Most of those responses fell into the negative category. Outside of Stack Overflow, people call singletons evil, pathological liars, and anti-patterns. People really seem to hate this design pattern — at least some of them do, anyway.
NDepend takes a stance on the singleton pattern as well, both in its rules and on the blog. Specifically, it encourages you to avoid it.
But I’m not going to take a stance today, exactly. I understand the cathartic temptation to call something evil in the world of programming. If some (anti) pattern, framework, or language approach has caused you monumental pain in the past, you come to view it as the tool of the devil. I’ve experienced this and turned it into a blog post, myself.
Instead of going that route here, however, I’m going to speak instead about what it can cost you when you decide to use the singleton pattern — what it can cost the future you, that is. I have a consulting practice assessing third-party codebases, so I’ve learned to view patterns and practices through a dispassionate lens. Everything is just trade-offs.
What Is the Singleton Pattern, Anyway?
Before I go any further, I should probably explain briefly what this thing is, in case you aren’t familiar. I won’t belabor the point, but here’s a quick example. (I’m keeping this to a bare minimum and not worrying about non-core concepts like thread safety.)
public class ASingleton
{
    private static ASingleton _instance;

    private ASingleton() { }

    public static ASingleton Instance
    {
        get
        {
            if (_instance == null)
                _instance = new ASingleton();
            return _instance;
        }
    }
}
Alright, so why do this? Well, the pattern authors define its charter as serving two principal purposes:
- Ensure that only one instance of the class ever exists.
- Provide global access to that single instance.
So let’s consider implications and usage. When you use this pattern, you define an object that will exist across all application scope and that you can easily access from anywhere in the code at any time. A logger is, perhaps, the most iconic example of a singleton use case. You need to manage access to a resource (file), you only want one to exist, and you’ll need to use it in a lot of places. A marriage made in heaven, right?
Well, in principle, yes, I suppose so. But in practice, things tend to get a lot messier. Let’s take a look at what can happen when you introduce the singleton pattern into your codebase. Let’s look at what it can cost you.
Side Effects Hurt the Ability to Reason About Your Code
Think for a moment about how you would invoke an instance method on a singleton class. For instance, let’s say that you’d implemented your own logger. Using it might look something like this:
public Order BuildSimpleOrder(int orderId)
{
    var order = new Order() { Id = orderId };
    Logger.Instance.Log($"Built order {orderId}");
    return order;
}
The method constructs an order in memory based on the order’s ID and then returns that order, stopping along the way to log. It seems simple enough, but the invocation of the logging presents a simple conundrum. If I’m looking at just the signature of BuildSimpleOrder, say, via IntelliSense, the logging is utterly opaque to me. It’s a hidden side effect.
When I instantiate the class containing BuildSimpleOrder, I don’t inject a reference of a logger into a constructor. I don’t pass the logger in via a setter, and I don’t give it to this method as a parameter. Instead, the method quietly reaches out into the ether and invokes the logger which, in turn, triggers file I/O (presumably).
When you have singletons in your codebase, you lose any insight into what methods are doing. You can take nothing for granted, and you must inspect each and every line of your code to understand its behavior, making it generally harder to reason about. This was why Misko Hevery called singletons “liars.” BuildSimpleOrder now does something completely unrelated to building a simple order — it writes to a file.
You Give Up Testability
Consuming singletons doesn’t just make your code harder to reason about. It also makes your code harder to unit test. Think of writing unit tests for the method above. Without the logger instance, that would be a piece of cake. It’d look like this:
[TestMethod]
public void BuildSimpleOrder_Sets_OrderId_To_Passed_In_Value()
{
    const int id = 8;
    var processor = new OrderProcessor();
    var order = processor.BuildSimpleOrder(id);
    Assert.AreEqual<int>(id, order.Id);
}
Then, if building the order demanded a bit more complexity, you could easily add more tests. This is an eminently testable method…without the call to the logger.
With the call to the logger, however, things get ugly. That call starts to trigger file I/O, which will give the test runner fits. Depending on the runner and machine in question, it might throw exceptions because it lacks permissions to write the file. It may actually write the file and just take a while. It may throw exceptions because the file already exists. Who knows? And worst of all, it may behave entirely differently on your machine, on the machine of Bob in the next cubicle, and on the build server.
Normally when you have external dependencies like this, you can use dependency injection and mock them. But singletons don’t work that way. They’re inline calls that you, as a consumer, are powerless to interpose on.
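The article's examples are C#, but the refactoring it implies is language-agnostic. As a sketch in Python (the class and method names here are invented for illustration, not taken from the article), injecting the logger turns the hidden dependency into an explicit, swappable one:

```python
class OrderProcessor:
    """Takes its logger as an explicit dependency instead of
    reaching out to a global singleton."""

    def __init__(self, logger):
        self._logger = logger  # injected, so tests can substitute a fake

    def build_simple_order(self, order_id):
        order = {"id": order_id}
        self._logger.log(f"Built order {order_id}")
        return order


class FakeLogger:
    """Test double: records messages instead of touching the filesystem."""

    def __init__(self):
        self.messages = []

    def log(self, message):
        self.messages.append(message)


processor = OrderProcessor(FakeLogger())
order = processor.build_simple_order(8)
```

Because the logger arrives through the constructor, a unit test can pass in a FakeLogger and assert on both the returned order and the logged message, with no file I/O involved.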
You Incur Incredible Fan-In and Promote Hidden Coupling
Singletons tend to invite use and to invite abuse. You’d be surprised how handy it can be to have a deus ex machina at your disposal when you suddenly need two objects at completely opposite ends of your object graph to collaborate.
Man, we could solve this really quickly and with little rework if the login screen could just talk directly to the admin screen. Hey, I know! They both have access to the logger. We could just add a flag on the logger. It’d only be temporary, of course…
Yeah, we both know that flag will NOT be temporary. And this sort of thing happens in codebases that rely on the singleton pattern. More and more classes take direct dependencies on the singletons, creating a high degree of fan-in (afferent coupling) for the singleton types. This makes them simultaneously the riskiest types to change, the hardest types to test, and the most likely types to change, all of which adds up to maintenance headaches.
But it even gets worse. Singletons don’t just invite types to couple themselves to the singleton. They also invite types to couple their own logic through the singleton. Think of the login and admin screens communicating through the singleton. It acts as a vessel for facilitating hidden couplings as well.
And while I will concede that use of the pattern does not automatically constitute abuse, I will say that inviting it is problematic. This holds doubly true when your only method of preventing abuse is vigilant, manual oversight of the entire codebase.
You Lose the Ability to Revisit Your Assumptions
I’ll close by offering a more philosophical cost of the singleton pattern that builds on the last three. You have code with side effects that you have a hard time reasoning about. Likewise, you can’t easily test it, and you create an outsized dependency on the singleton instances. What does all of that wind up meaning?
It means that changing your assumptions about the singleton itself becomes nearly impossible. These things cast themselves in iron in your codebase and insert tentacles into every area of your code. Imagine the horror if a business analyst came along and said something like, “Why can’t we just have two simultaneous log files?” or, “Why can’t we have sessions that stop and then restart later?”
In answer to these seemingly reasonable requests, the architect will turn red and splutter, “Are you CRAZY?! Do you know how much rework that would mean?” When you use singletons, you had better make sure that there could really only ever be one log file, print spooler, session, etc., and that your single instance will live for the lifetime of the program. Because if that assumption ever needs to change, you’ll find yourself in a world of pain, to the tune of a total rewrite.
I won’t call the singleton pattern evil or rail against it. But I will tell you that it extracts a pretty high toll from you in the long run.
Published at DZone with permission of Erik Dietrich, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
How to interact with the Ethereum blockchain and create a database with Python and SQL
Introductory workshops about blockchain often start with the easy-to-digest story of peer-to-peer networks and bank ledgers, and then jump straight to coding smart contracts, which is quite abrupt. So instead imagine yourself walking into a jungle and think of the Ethereum blockchain as a strange creature you are just about to study. Today we’ll observe the creature, interact with it and gather all the data about it into centralized storage for your own use.
Setting up for the first encounter
First, you will need to install web3py. Web3py is a Python library for connecting with the Ethereum blockchain. What you need to know beforehand is that there is no central administrative system where the data can be downloaded from. The interconnected nodes (“peers”), which share resources amongst each other, store a verified copy of the data (or its part). The network executes the Ethereum protocol, which defines the rules of interaction of nodes with each other and/or smart contracts over that network.
If you want to access information about transactions, balances, blocks or whatever else is written into the blockchain you don’t know of yet, the protocol requires you to connect to the nodes. The nodes continuously share new data with each other and verify the data, so in this way you are sure you get 1) data which has not been tampered with and 2) which is most up-to-date.
There are two basic categories of nodes you could use in your first approach to the creature: local or hosted. A local node can run on your machine, which means you first need to download a client like geth that will sync the blockchain to your device, occupying storage and taking time to complete. For the first encounter, a hosted node is a better choice — it is controlled by somebody else but you can easily connect to it and play around with the blockchain on your own.
Go to Infura and make your own free account to access such a hosted node. When you’re done, you’ll see a list of networks you could connect to: the mainnet (the main Ethereum blockchain), and a bunch of testnets, which are there to basically test your smart contracts, so that you can make mistakes on them and correct them before you deploy costly code to the mainnet.
Time for the first approach. Import the Web3 object and establish an HTTP connection.
from web3 import Web3
web3 = Web3(Web3.HTTPProvider(""))
And you’re all set! Now you can explore the data structure with the web3 API.
Retrieving info about specific blocks…
#current block number
>>> web3.eth.blockNumber
5658173
#get the content of the most recently mined block
>>> web3.eth.getBlock('latest')
This command returns the AttributeDict data structure, which is a dictionary of key-value pairs that looks like this:
Not all of these variables will be immediately useful to you, as some are quite technical and their meaning will only make sense once you have a deeper understanding of how blockchain actually works. You can read more about them in the so-called ‘Yellow Paper’ or skip them for the time being and work with the easily understandable ones.
In short, a block contains the block header, a list of verified transactions written to it and a list of uncles (block identifiers of miners who were slightly too slow with their blocks to make it to the main blockchain but still got rewarded with Ether for their computational effort). Below you can read what the meaning is of each variable, which I divided into subcategories.
General
Mining-related
Uncles
Technical
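The variable tables for these subcategories were images in the original post and did not survive extraction. To illustrate how you would read a few of the general fields, here is a sketch using a plain dict standing in for web3py's AttributeDict (a real call needs a live connection, in which case you would write block = web3.eth.getBlock('latest'); the field names and sample values below are illustrative):

```python
from datetime import datetime, timezone

# Stand-in for the AttributeDict returned by web3.eth.getBlock('latest')
block = {
    "number": 5658173,
    "timestamp": 1527693516,  # Unix time at which the block was mined
    "miner": "0x829BD824B016326A401d083B33D092293333A830",
    "gasUsed": 7997974,
    "gasLimit": 8000029,
    "transactions": ["0xaaa...", "0xbbb..."],  # list of transaction hashes
}

# Convert the Unix timestamp into a readable UTC datetime
mined_at = datetime.fromtimestamp(block["timestamp"], tz=timezone.utc)
tx_count = len(block["transactions"])
```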
…transactions and their receipts
Now we can also look up single transactions in a block by their unique identifiers, i.e. transaction hashes.
As previously, web3py returns us an attribute dictionary. The table below summarizes what each key stands for.
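Again using a plain dict in place of the returned AttributeDict (the field names follow web3py; the values are made up), a typical first step with a transaction is converting its value and fee from wei:

```python
tx = {
    "blockNumber": 5658173,
    "from": "0xabc...",            # sender address
    "to": "0xdef...",              # recipient address
    "value": 1500000000000000000,  # amount transferred, in wei
    "gas": 21000,                  # gas limit set by the sender
    "gasPrice": 4000000000,        # price per gas unit, in wei
}

WEI_PER_ETHER = 10**18

value_eth = tx["value"] / WEI_PER_ETHER   # float is fine for display purposes
max_fee_wei = tx["gas"] * tx["gasPrice"]  # upper bound on the transaction fee
max_fee_eth = max_fee_wei / WEI_PER_ETHER
```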
Finally, we can also look into transaction receipts:
A transaction receipt contains a few repeated and new entries; the new ones are explained below.
For reference, I included various additional resources besides the Yellow Paper to compile these tables [2, 3, 4, 5].
As you can see, with just a few simple commands you can already connect to the network and get basic info about the transactions, blocks, or states in the raw format. This opens a new window to what can be done with such data!
Database management system
When planning to write your data to a proper database, you probably realize that there are many solutions for management systems out there for Python enthusiasts, such as serverless SQLite, or server-based MySQL, PostgreSQL, or Hadoop. Depending on what you are intending to do, you will have to determine which option is the best for your project. In general, I’ve found these points to be helpful:
- What is the intended size of the database (i.e. can it be processed on a single machine system)?
- Are the entries going to be frequently edited or will they remain fixed?
- Is the database supposed to be accessed and edited by multiple parties/apps simultaneously?
The Ethereum blockchain is growing steadily over time, getting close to 1 TB as of June 2018, which is small, hence not optimal for a distributed processing system like Hadoop. The blockchain database will be written once and then only expanded with new entries, leaving old entries unchanged. The intended use case of this database is to be written by one channel and accessed read-only by the other channels, so we do not really need to run it on a server. Keeping the database locally on your machine will result in a quick read-out, which is desirable and achievable with a serverless management system like SQLite. And Python has a built-in library, sqlite3, thus we don’t even need to install new packages.
Database design
The next step is designing your database. Keep in mind which data fields are the most relevant for your analysis, and aim to optimize both search and storage. For example, if you do not plan to use stateRoot, you may want to completely skip it or keep it in a separate table. A table with fewer columns can be searched through faster, and if you later on realize that you actually have a use-case for the stateRoot, you will still be able to access it. You may also want to separate block information from the transaction information; if you don’t, block properties like timestamp will be repeated N times for all transactions in the block, wasting lots of space. Matching a transaction with its block properties will be easy with the JOIN operation later on.
The database I designed consists of 3 tables:
- Quick: most relevant transaction info for quick access & analysis,
- TX: all remainder transaction info,
- Block: block-specific info.
The naming convention of variables has been slightly altered with respect to the original web3py to get rid of ambiguities, such as calling both block hash and transaction hash the “hash”, or using “from”/”to” as column names, which in SQL have a different meaning and would crash the program.
Transaction values, balances and other big numbers need to be stored in the database as strings. The reason is that SQLite can handle only signed integers stored in up to 8 bytes, with a maximum value of 2⁶³-1 = 9223372036854775807. This is often much lower than the transaction values in wei (e.g. only 1 ETH= 10¹⁸ wei).
Building your mini database
The full code can be found on GitHub. It will organize the blockchain info according to the upper schema and output a blockchain.db file containing data of a pre-specified number of blocks. To test it, go to
database.py file and pick a reasonable number for the number of blocks to be written, e.g.
Nblocks = 10000
By default, you should point the web3 object to your Infura endpoint. You can also switch to the IPC Provider if you have one (i.e. your local node), just uncomment the line
# or connection via node on the VM
#web3 = Web3(Web3.IPCProvider('/path-to-geth.ipc/'))
and fix the path. Then simply run in your command line
python database.py. The code will dump the number of the last written block into the file
lastblock.txt, in case you need to restart where you left off.
How to use the database
Once you’ve written the first entries to the database, you can start communicating with it via ipython shell. For example, to print the first 5 rows of the table “Quick”, you can run the code below.
Local node vs. Infura
If you want to build a big database, you should download geth and sync a node. The synchronization can be done in 3 basic modes:
If you do not need past account states, you can sync your node in fast mode [6].
Below is a plot showing you the speed at which this code writes to a database, communicating with the fully synced node locally (IPC) vs. an address on Infura (Infura). As you can see, it pays off to run this code on a local node, as you get the speed boost of nearly 2 orders of magnitude (aka 100x)!
Summary
Now that you have your own local database of what happened and happens on blockchain, you can start exploring it. For example, you can count the number of transactions since its genesis, see how many addresses are generated as a function of time — the sky is the limit for what you can learn about your creature. We just set the stage for your data science playground. So go ahead and explore it, or check the next posts for potential applications.
Contact analytics@validitylabs.org if you are in interested in blockchain analytics services of Validity Labs. | https://medium.com/validitylabs/how-to-interact-with-the-ethereum-blockchain-and-create-a-database-with-python-and-sql-3dcbd579b3c0 | CC-MAIN-2019-04 | refinedweb | 1,663 | 68.2 |
Sadly.
There's a core conflict:
- The advantages of spreadsheets-as-database are numerous.
- The disadvantage is the lack of any strict, formal control over the schema.
It goes downhill rapidly from that ideal.
Let's look at some scenarios. And. How to cope. And. Python to the Rescue.
Outliers, Special Cases, Anomalies, and other IrregularitiesThe whole point of a "normalized" view of the data is to identify a pattern, assign the lofty title of "Schema" to the pattern, and assure all of the data fits the schema. In rare cases, all of the data fits a simple schema. These cases are so rare they only exist in examples of SQL code in tutorials.
A far more common case is to have several subtypes which are so similar that optional attributes (or "nullable columns" in SQL parlance) allow one schema description to encompass all of the cases. If you're a JSON Schema person, this is the "OneOf" or "AnyOf" type definition.
Some folks will try argue that optional attributes don't always mean that there are several subtypes. They'll ramble on for a while and eventually land on "state change" as a reason for optional attributes. The distinct states are distinct subtypes. Read up on the State design pattern for OO programming. Optional attributes is the definition of subtype.
The hoped-for simple case is a superclass extended by subclasses used to add new attributes. In this case, they're all polymorphic with respect to the superclass. In a spreadsheet page, the column names reflect the union of all of the various attributes. There are two minor variants in the way people use this:
- An attribute value is a discriminator among the subtypes. We like this in SQL processing because it's fast. It also allows for some validation of the discriminator value and the pattern of attributes present vs. attributes omitted. Of course, the pattern of empty cells may disagree with the discriminator value provided.
- The pattern of attributes provided and omitted is used to identify the subtype. This is a more reliable way to detect subtypes. There can, of course, be problems here with values provided accidentally, or omitted accidentally.
The less desirable case disjoint classes with a few common attributes. Worse, the common attributes are not part of the problem domain, but are thinks that feel databasey, like made-up surrogate keys. There's an "ID" in column A or some other such implementation detail. Some of the rows use column A and columns B to G. The other rows use column A and columns H to L. The only common attributes are the surrogate keys, perhaps mixed with foreign key references to rows in other spreadsheet tables or pages.)
This is a collection of disjoint types, slapped together for no good reason. SQL folks like to call multiple master-detail relationships. The master record has children of multiple types. In some cases, the only thing the children have in common is the foreign key relationship with the parent. If you want a concrete example, think of customer contact information: multiple email addresses, multiple phone numbers. The two contacts have nothing in common except belonging to one customer.
These don't belong in a single spreadsheet table. But. There they are. Our code must disentangle the subtypes.
Arrays
A lot of spreadsheet data is a two-dimensional grid. Budgets, for example, might have categories down the page and months across the page.
This is handy for visualization. But. It's not the right way to process the data at all.
This extends, of course, to higher orders. Each tab of a spreadsheet may be a dimension of visualization. There may be groups of tabs with a complex naming convention to include multiple dimensions into tab names. Rows may have multiple-part names, or use bullets and indentation to show a hierarchy.
All of these techniques are ways to provide a number of dimensions around a fact that's crammed into a cell. The budget amount is the fact. The category and the month information are the two dimensions of that cell. In many cases, Star-Schema techniques are helpful for understanding the underlying data, separate from the visualization as a spreadsheet.
Our code must disentangle the dimensions of the meaningful facts.
NormalizationThere are tiers of normalization. The normalization described above is part of First Normal Form (1NF): all rows are the same and all data items are atomic. Pragmatically, it's rare that all spreadsheet rows are the same, because it's common to bundle multiple subtypes into a single table.
Sidebar Rant. Yes, the presence of nullable columns in a SQL table *is* a normalization error. There, I said it. Error. We can always partition the rows of table into a number of separate tables; in each of those tables, all columns are required. We can rebuild the original table (with optional fields) via a union of the various decompositions (none of which have optional fields). The SQL folks prefer nullable columns and 1NF violations over unions and 1NF absolutism. I'm a fan of 1NF absolutism to understand each and every nullable attribute because casual abuse of nulls is a common design error.The other part of 1NF is each value is atomic: there's no internal structure to the value. In manually-prepared spreadsheet data, this is difficult to insist on. Stuff gets combined into a single cell because -- well -- it seemed helpful to the people entering it. They put all the lines of an address into a single cell because they like to see it that way.
Third Normal Form (3NF) forbids derived data (and transitive dependencies). In a spreadsheet, we might have a row-level computation. It helps the person confirm the data is correct. It's not "essential". It breaks the 3NF rule because the computed attribute depends on other field values; a change to one attribute will also change the derived attribute.
When we first encounter spreadsheet data, this isn't always obvious. In some cases, the derived data is computed "off-line" -- i.e., manually -- and entered into the spreadsheet. Really. People pull up a calculator app (or whip out their phone), compute a value, and type it in. In other cases, they look something up manually and enter it.
These kinds of data entry weirdnesses require code to normalize the manually-prepared data. We'll have to decompose non-atomic fields. And we'll have to handle derived data gracefully. (Reject it? Fix it? Warn them about it? Handle it as an exception?)
RelationshipsLet's talk about Second Normal Form (2NF). We really want to have a row in a table represent a single thing. The SQL folks require all of the attributes to be dependent on the row's key. In spreadsheet world, we may have a jumble of attributes with a jumble of dependencies. We may have multiple relationships in a single row. Look at the Second Normal Form page on Wikipedia for examples of multiple relationships mashed together into a single row.
When a spreadsheet has 2NF problems, there will be situations were some collection of attributes is repeated -- verbatim -- in multiple places. The most common example in US-based data is City-State-ZIP Code. These three *always* form a consistent triple of data, and should be repeated as part of an address. In SQL terms, City and State have a functional dependency on the ZIP Code. In an Object-Oriented database, we might have a separate City-State-Zip class definition. In a document datastore, we might combine these items into a sub-document.
In any 2NF problem area, we're forced to write code which normalizes this internal relationship.
And. When we do that we'll find the kinds of problems we find with derived data: The ZIP code 22102 might be McLean or Tysons Corner. One of them is "right" and the other is "wrong", Or perhaps there needs to be an exception to handle this. Or perhaps a correction applied to coerce the wrong values to be right.
The "Association" Table
There's a SQL design pattern called an association table. This is used to handle a many-to-many relationship between two entities. Consider Boats and Owners. A boat will have multiple owners. An owner may have multiple boats. In SQL world, this requires a special table with two foreign keys. In the degenerate case, there are no other attributes. In the boat-owner relationship case, however, there's often a range of dates that specifies when an owner was associated with a boat. The range of dates applies to the relationship itself, not to boat nor to owner.
In a spreadsheet there are numerous ways to represent this. Numerous. A list of boat rows after each owner. A list of owner rows after each boat. A number of owner columns for each boat. A block of text with a list of owner names in a single cell. Creative people will create many creative solutions to this data representation problem.
Note that the association table is a SQL hack. It's an implementation detail, not an essential feature of the problem domain. In Python, for example, we'll need to use weakref objects to handle this cleanly.
When Owner O1 refers to Vessel V1 it's easy to have a list of vessel references under the owner. When the Owner O1 object is no longer needed, it can be removed from memory. This decrements the references count for Vessel V1 to zero, and it will also be removed from memory, too.
When we have mutual references, we have a problem, solved by weakrefs.
If Owner O1 refers to Vessel V1 and we also have Vessel V1 referring to Owner O1, we have mutual references. O1 has a list that includes V1. V1 also has a list that includes O1. This means there are two strong references to O1: some variable, owner, and Vessel V1 also refers to O1. When the variable owner is no longer needed, then the reference count to O1 is decremented from two to one. And the object can't be deleted yet.
If V1 has a weak reference to O1, then the strong reference count -- based on the variable owner -- is only one. The weak reference from V1 doesn't count for memory management purposes. O1 can be removed from memory, references to V1 will be decremented, and it, too, can be removed.
Our code will have to parse and populate the relationships. And we'll need to use weakref to be sure we can cleanly remove objects.
Coping StrategiesAs noted above, we have to cope with manually-prepared spreadsheet data. It looks like this:
- Figure out what the likely data structure is. This isn't simple. We'll look at Pythonic techniques below. When starting, it helps to draw UML class diagrams (or ER diagrams) over and over again to try and depict the data. I'm a fan of using to draw the pictures because they have a super-handy text notation for the relationships and attributes.
- Leverage the Extract-Transform-Load design pattern.
- The "extract" reads the source spreadsheet data. A first version will be trivial use of xlrd or csv module. Or any of the modules listed here:.
- The "transform" should be implemented as a function to transform source to the target model. Pragmatically, this single function will leverage a number of other functions to validate, cleanse, convert, and normalize the data.
- The "load" may not be anything more than creating instances of the underlying model classes. In some cases, the instances of the model classes may wind up in an in-memory dictionary. In other cases, the "load" might be a simple use of pickle or shelve to persist the useful data.
Python To The RescueData modeling must be done slowly and reluctantly. Don't overfit the model to the first spreadsheet.
Here's the place to start
from typing import SimpleNamespace class Model(SimpleNamespace ): pass
This is *enough* modeling to get started. Don't over-engineer the model. We can then do things like this.
class Owner(Model): pass
This defines the class Owner as an instance of some abstract Model class. The SimpleNamespace allows us to have any attributes we think we need.
owner = Owner(vessel=some_id, name=row['name'])
We can leverage the SimpleNamespace to build useful objects with minimal code. This can be replaced with a typing.NamedTuple or a @dataclass class definition when the definition is more mature.
The "extract" code needs to gather row-like objects. Ideally, this is a generator function. Because normalization and dereferencing may require multiple passes through the data, a list can be slightly easier to deal with. We'll come back to normalization and dereferencing below.
For some background in the classes used here, see. (Yes, this is old; I'm thinking of moving it to GitHub and updating it to Python 3.7.)
def load_live_rows(workbook, sheet_name): sheet1 = sheet.EmbeddedSchemaSheet(workbook, sheet_name, schema.loader.HeadingRowSchemaLoader) dict_rows = sheet1.schema.rows_as_dict_iter(sheet1) clean_data = filter(lambda row:not row['Hull No.'].is_empty(), dict_rows) initial_data = take_until(lambda row:row['Hull No.'].to_str() == 'Definitely WB Owners:', clean_data) return list(initial_data)
Step-by-step.
- We're working with a sheet that has the schema embedded in it. That means using the heading rows as column information. The HeadingRowSchemaLoader will be grabbing the first few rows from the EmbeddedSchemaSheet. Sometimes we need more complex loaders to read multiple rows. If the schema is separate from the sheet, then the loader doesn't interact with the source of data.
- Each row is modeled as a simple dictionary in this example code.
- A filter locates rows that have hull numbers. Other rows are quietly discarded.
- The take_until() function reads rows until the matching row is found, then stops. This chops off the bottom of the spreadsheet where manual notes were kept.
The resulting list of rows can be validated, cleansed, and normalized to create the useful instances of the various Model subclasses.
Here's the "transform" portion.
def make_owner_1(row: Dict[str, Cell]) -> Owner: return Owner( last_name=null_strip(row["Owner's Last Name"].to_str()), first_name=null_strip(row["Owner's First Name"].to_str()), display_name=null_strip(row["Display Name"].to_str()), website=null_strip(row["Website"].to_str()), owner_vessel=[], )
We've built an instance of the Owner subclass of Model by extracting a number of attributes from the row. There are other columns not extracted; they are part of various normalizations and dereferencing.().
Multiple PassesWe often touch the source more than once.
- There's a "validate and load" pass to get rows that are sensible to process. A generator might make sense here.
- There may be a "cleanse and convert" pass to reformat the source data, perhaps parsing complex cells into components or combining multiple source rows into a single entity description. This, too, might involve a generator to restructure the spreadsheet rows into something sensible.
- There will be multiple "normalization" passes. Any 2NF relationships need to be extracted to create model objects. Any restructuring of complex dimensions should be handled via restructuring source data from grid to rows, or from multiple sheets to a single, long, sequence of rows with the various dimensions as explicit attributes of each row.
- There may be multiple "load" passes to build final objects from the source rows. This will often lead to including the built objects as part of the source data.
- There will be some final "dereferencing" passes where foreign key relationships are turned into proper references among the objects. These should be weakref references to permit proper garbage collection.
At this point, the application will have tidy collections of Python objects that can be used for the real work.. | http://slott-softwarearchitect.blogspot.com/ | CC-MAIN-2018-26 | refinedweb | 2,628 | 67.04 |
Created on 2011-03-22 10:37 by techtonik, last changed 2012-01-14 05:07 by python-dev. This issue is now closed.
print() function for some reason buffers output on Windows if end!='\n'.
In attached examples "Processing.. " string is shown in Python 3.2 only after the actual processing is finished, while in Python 2.6 it is shown before.
printtest2.py displays directly "Processing.. " on Windows, but not on Linux. It looks like stdout is not buffered on Windows, which looks like a bug to bug :-) I think that it is safer to always call sys.stdout.flush() to ensure that your message is directly displayed. With Python 2, you can use -u flag (unbuffered output) to avoid the explicit flush, but this is very inefficient (slow).
Python 3 uses line buffers, even with python3 -u, for better performances. If you want to see directly "Processing.. ", as Python 2, call sys.stdout.flush().
It is not a regression, it is a choice to be efficient.
From my perspective it is a regression on Windows and a bug in Linux version of Python 2.x, which unfortunately can not be fixed, because of 2.x release process.
If the fact that print statement doesn't output anything when called is not a bug - then should be documented long ago.
> From my perspective it is a regression on Windows and a bug in Linux
> version of Python 2.x, which unfortunately can not be fixed,
> because of 2.x release process.
Line buffering is used by default on most operating systems (ok, maybe not Windows, which looks strange to me) and is not specific to Python.
Yes, Python has subtle differences on different operating systems, but at least it looks like Python 3 has the same behaviour on Linux and Windows ;-)
> If the fact that print statement doesn't output anything when called
> is not a bug - then should be documented long ago.
Can you please write a patch for the doc? Reopen the issue if you have a patch.
How about making print() user-friendly with flushing after every call, and if you want explicitly want speed - use buffered sys.stdout.write/flush()?
> How about making print() user-friendly with flushing after every call,
> and if you want explicitly want speed - use buffered
> sys.stdout.write/flush()?
This is exactly the -u option of Python 2: use it if you would like a completly unbuffered sys.stdout in a portable way.
In Python 3, it is only useful to flush at each line, even if the output is not a console, but a pipe (e.g. output redirected to a file). But nobody asked yet to have a fully unbuffered sys.stdout in Python 3.
You must realize that the most common use case for print(..., end!='\n') is when you want to notify user about intermediate progress of a very long operation.
Making documentation for simple print() statement overloaded with low level buffering details makes language seem overly complicated for new users.
We are talking about different things here:
- When python is run from a console, sys.stdout is line buffered. sys.stdout.write() flushes if there is a carriage return. No need to change anything here.
- print() could call file.flush() if file.isatty(), *after* the multiple calls to file.write()..
The attached patch calls "if file.isatty(): file.flush()" at the end of the print function:
- only when an "end" argument was specified
- errors in file.isatty() are ignored (and then no flush occurs)
> Python 3.2, WinXP, IDLE edit window, F5 Run:
> 'Processing ...' appears immediately, 'Done' 3 sec later.
Terry, IDLE is completely different, its sys.stdout completely bypasses the new io stack, and there is no buffering...
amaury> When python is run from a console, sys.stdout is line buffered.
amaury> sys.stdout.write() flushes if there is a carriage return.
amaury> No need to change anything here.
Anatoly would like a flush after all calls to print().
> print() could call file.flush() if file.isatty(), *after* the multiple
> calls to file.write().
I vote +0 to change print(), call sys.stdout.flush(), if:
- file option is not used (and so, sys.stdout is used)
- sys.stdout is a TTY
- end option is used (fast heuristic to check if print will write a newline or not, a better one whould be to check if end contains a newline character or not, but we had to check for \n and/or \r, for a little gain)
But I don't want to change print() for print(text, file=file), because it would make Python slower and print(... file=file) is not used to an interactive prompt or to display informations to the user.
> Behavior is same when pasting into interactive interpreter ...
> I presume interpreter flushes before or after printing next prompt.
Did you wrote all commands on the same line? Python does change stdout buffer in interactive mode:
------------
if (Py_UnbufferedStdioFlag) {
#ifdef HAVE_SETVBUF
setvbuf(stdin, (char *)NULL, _IONBF, BUFSIZ);
setvbuf(stdout, (char *)NULL, _IONBF, BUFSIZ);
setvbuf(stderr, (char *)NULL, _IONBF, BUFSIZ);
#else /* !HAVE_SETVBUF */
setbuf(stdin, (char *)NULL);
setbuf(stdout, (char *)NULL);
setbuf(stderr, (char *)NULL);
#endif /* !HAVE_SETVBUF */
}
else if (Py_InteractiveFlag) {
#ifdef MS_WINDOWS
/* Doesn't have to have line-buffered -- use unbuffered */
/* Any set[v]buf(stdin, ...) screws up Tkinter :-( */
setvbuf(stdout, (char *)NULL, _IONBF, BUFSIZ);
#else /* !MS_WINDOWS */
#ifdef HAVE_SETVBUF
setvbuf(stdin, (char *)NULL, _IOLBF, BUFSIZ);
setvbuf(stdout, (char *)NULL, _IOLBF, BUFSIZ);
#endif /* HAVE_SETVBUF */
#endif /* !MS_WINDOWS */
/* Leave stderr alone - it should be unbuffered anyway. */
}
#ifdef __VMS
else {
setvbuf (stdout, (char *)NULL, _IOLBF, BUFSIZ);
}
#endif /* __VMS */
------------
(it doesn't check if stdout is a TTY or not, but I don't think that it is very useful to use the interactive mode outside a TTY)
> I have always experienced and expected Python's print to screen
> to be immediately visible. I thought that was pretty standard
> in other languages with a print-to-screen separate from
> general file-write.
Did you try Perl, Ruby, bash and other languages? I know that at least the C language requires an explicit call to fflush(stdout). I always used that.
> Terry, IDLE is completely different, its sys.stdout completely
> bypasses the new io stack, and there is no buffering...
As I wrote: "unbuffered mode" is not implemented for TextIOWrapper. So even with python3 -u, sys.stdout.write("abc") doesn't flush immediatly into the underlying FileIO.
I completely agree that file/socket output should be left alone. Flushing char by char to either is a bit insane. The two interactive to screen use cases I can think of are text progress meters, mentioned by Anatoly, such as :
Working .... (1 dot printed at intervals)
and timed text like
import time
for c in 'Similated 10 cps teletype output':
print(c,end='')
time.sleep(.1)
print()
which works fine from IDLE and whose non-functioning when started otherwise would puzzle any beginner and many beyond.
I've tried to switch to Python 3 once more and stumbled upon this problem once more.
Seems like this regression got stale. Last Victor's proposal seems reasonable for me. Should we open a new, more clear bug report and close this one?
No, I don't think so. Another issue will not magically create more time for anyone.
On Mon, Jan 9, 2012 at 2:03 PM, Terry J. Reedy <report@bugs.python.org>wrote:
>
> Terry J. Reedy <tjreedy@udel.edu> added the comment:
>
> No, I don't think so. Another issue will not magically create more time
> for anyone.
>
But anyone will waste less time to get to the outcome of discussion.
--
anatoly t.
> You must realize that the most common use case for print(..., end!='\n') is when you want
> to notify user about intermediate progress of a very long operation.
References needed.
> Making documentation for simple print() statement overloaded with low level buffering details makes language seem overly complicated for new users.
Why don't anybody require references for that?
The current doc says
."
(The bit about None, said twice, could be factored out and said once after the second sentence.)
This is exactly what print does and Guido today (Python ideas) said that is what it should do and that "Apps that need flushing should call flush()." So a code change is rejected.
The issue title was incorrect. The print function does not do any buffering. The file object it writes to may. Even sys.stdout may or may not.
We could add at the end a sentence or two something like
"Output buffering is determined by *file*. Call file.flush() to ensure, for instance, immediate appearance on a screen."
New changeset bc043cef94f2 by Terry Jan Reedy in branch '3.2':
Closes #11633 Clarify print buffering.
New changeset fb0d61fd1753 by Terry Jan Reedy in branch 'default':
Merge with 3.2
#13761 proposes to add flush=False param with option for True.
I agree with the python-ideas message that ``sys.stdout.flush()`` is surprising / possibly misleading and should be ``file.flush()``. If the other bug report about adding a flush argument is rejected, please consider this. Thanks :)
New changeset 4a767054551b by Terry Jan Reedy in branch '3.2':
#11633 At least 2 people prefer earlier revision.
New changeset 22688f5f9d0f by Terry Jan Reedy in branch 'default':
Merge #11633 At least 2 people prefer earlier revision.
Thank you sir. Should the doc edit be backported to the 2.7 docs, with a mention that it’s only on unix?
Putting the wording into 2.7 might be nice, but I thought it was in bugfix only mode.
Regarding UNIX only, I'd avoid it; any file may be buffered in almost any way on any platform. Saying an explicit flush call may be necessary for immediate output is _not_ UNIX only and would be very misleading. Remembering that ~UNIX != Windows.
Telling users to explicitly call flush to ensure immediate output where that is necessary ensures portable coding (or ought to, user pigheadedness discounted:-)
Bug fixes include doc improvements, so 2.7 is fair game.
Thanks for your suggestion to not mention specific platforms. Let’s just backport the 3.2 text.
New changeset 8935a33773b9 by Terry Jan Reedy in branch '2.7':
#11633 about buffering of print | https://bugs.python.org/issue11633 | CC-MAIN-2018-17 | refinedweb | 1,706 | 76.72 |
On Mon, 2009-01-19 at 22:31 +0100, Oleg Nesterov wrote:
> On 01/19, Serge E. Hallyn wrote:
> >
> > Quoting Oleg Nesterov (oleg@redhat.com):
> > >
> > > This is the next patch. This one does
> > >
> > > --- CUR/fs/autofs/inode.c~1_AUTOFS 2009-01-12 23:07:46.000000000 +0100
> > > +++ CUR/fs/autofs/inode.c 2009-01-18 06:18:49.000000000 +0100
> > > @@ -78,7 +78,7 @@ static int parse_options(char *options,
> > >
> > >     *uid = current_uid();
> > >     *gid = current_gid();
> > > -   *pgrp = task_pgrp_nr(current);
> > > +   *pgrp = task_pgrp_vnr(current);
> >
> > Ok, that was the one I had looked at earlier (though now I can't find
> > it). But that just seems wrong to me. We should certainly not be
> > caching a pid_vnr in the kernel. That is imo incomparably worse than
> > storing a pid_nr.
>
> We do not cache it. We use this pgrp as an argument for find_pid()
> right after return from parse_options(). And find_pid() uses
> current->nsproxy->pid_ns. That is why this is a bugfix.
>
> > Can we just jump straight to caching the struct pid?
>
> Of course it is ugly to store pid_t and then call find_pid(),
> I don't understand why the code was written this way. But I
> am not going to cleanup this code ;)
>
> (note also that the 2nd patch I sent for autofs4 does not use
> pid_t at all).
>
> > > passing pid_t's in from userspace uses current namespace, with
> > > or without the patch.
> >
> > Which makes sense on the one hand, but OTOH could be confusing
> > if as I requested we print out init_pid_ns values. (sigh)
>
> But it is not possible to pass the global pid_t from within
> the subnamespace via "pgrp=" option, automount (or whatever)
> just can't know it when it runs in the subnamespace.
>
> > Yes... would it be overkill to just print both?
>
> perhaps, I don't know...
>
> But this is imho a bit off-topic, we can change the debugging
> output later any way we like.

The use of pid_ts in debug statements could be confusing, but generally,
the debug output will only be used when the person gathering it is aware
of the environment it is collected in. We can change these as needed as
time passes.

We also send the pid_t via autofs request packets and it is used for
user space debug prints. It is open to the same confusion and I'm still
not sure how to deal with that, but it isn't important to sort that out
now either, for the same reason as above.

Ian
Fabrikam decides to use BizTalk Services to set up a hybrid integration scenario where the integration layer is hosted on Azure and the SAP Server is within the organization’s firewall. Fabrikam uses BizTalk Services in the following ways to enable this hybrid integration scenario:
Fabrikam uses the Microsoft Azure BizTalk Services SDK to create a BizTalk Service project. The project includes an XML One-Way Bridge to send messages to a relay endpoint, which in turn sends the message to the on-premises SAP system.
Fabrikam uses the BizTalk Adapter Service component available with BizTalk Services to expose the Send operation on the ORDERS05 IDOC as an operation on a Service Bus relay endpoint. The XML One-Way Bridge sends messages to this relay endpoint. Fabrikam also creates the schema for the Send operation using BizTalk Adapter Service and includes the schema as part of the BizTalk Service project.
Fabrikam uses the Transform component available with BizTalk Services to create a map to transform the PO message in X12 format into the schema required by the SAP Server to invoke the Send operation on the ORDERS05 IDOC.
Fabrikam uses the Microsoft Azure BizTalk Services Portal available with BizTalk Services to create and deploy an EDI agreement under the BizTalk Services subscription that processes the X12 850 PO message. As part of the message processing, the agreement also does the following:
Receives an X12 850 PO message over FTP.
Transforms the X12 PO message into the schema required by the SAP Server using the transform created earlier.
Routes the transformed message to the XML One-Way Bridge that eventually routes the message to a relay endpoint created for sending a PO message to an SAP Server. Fabrikam earlier exposed (as explained in bullet 1 above) the Send operation on ORDERS05 IDOC as a relay endpoint, to enable partners to send PO messages using BizTalk Adapter Service.
Once this is set up, Contoso drops an X12 850 PO message to the FTP location. This message is consumed by the EDI receive pipeline, which processes the message, transforms it to an ORDERS05 IDOC, and routes it to the intermediary XML bridge. The bridge then routes the message to the relay endpoint on Service Bus, which is then sent to the on-premises SAP Server. The following illustration represents the same scenario.
This tutorial is written around the SAPIntegration sample available from the MSDN Code Gallery (SAPIntegration.zip). You could either use the SAPIntegration sample and go through this tutorial to understand how the sample was built, or just use this tutorial to create your own application. This tutorial targets the second approach so that you understand how the application was built. Also, to be consistent with the sample, the names of artifacts (e.g. schemas, transforms, etc.) used in this tutorial are the same as those in the sample.
The sample available from the MSDN code gallery contains only half the solution: the part that can be developed at design time on your computer. The sample cannot include the configuration that you must do on the Azure BizTalk Services Portal. For that, you must follow the steps in this tutorial to set up your EDI bridge. Microsoft recommends that you follow the tutorial to best understand the concepts and procedures. However, if you wish to use the sample, this is what you should do:
Download the SAPIntegration.zip package, extract the SAPIntegration sample, and make relevant changes like adding your service namespace, issuer name, issuer key, SAP Server details, and so on. After changing the sample, deploy the application to get the endpoint URL at which the XML One-Way Bridge is deployed.
Use the BizTalk Services Portal to configure the Receive settings as described at Step 5: Create and Deploy the EDI Receive Pipeline and follow the procedures to route messages from the EDI Receive bridge to the XML One-Way Bridge you already deployed.
Drop a test message at the FTP location configured as part of the agreement and verify that the application works as expected.
If the message is successfully processed, it is routed to the SAP Server and you can verify the ORDERS IDOC using the SAP GUI.
If the EDI agreement fails to process the message, the failure/error messages are routed to a relay endpoint on Service Bus. To receive such messages, you must set up a relay receiver service that receives any message that comes to that specific relay endpoint. More details on why you need this service and how to use it are available at Step 6: Test the Solution. | https://msdn.microsoft.com/nl-nl/library/hh859742 | CC-MAIN-2017-51 | refinedweb | 764 | 58.11 |
The first section focuses on configuring the application and its components so that the application can be deployed. The second section develops a real-world application using the steps described in the article on developing the MVC components and in this article. That sets the agenda for this discussion.
Using Spring MVC – Configuring the Application
There are four main steps in configuring of the application. They are:
- Configure the DispatcherServlet
- Configure the Controller
- Configure the View
- Configure the Build Script
The first step will be same for any application that is built using Spring MVC. The other three steps change according to the components that have been developed for the application. Here are the details.
Configure the DispatcherServlet
The first step is to tell the Application Server that all the requests for this (Spring MVC based) application need to be routed to Spring MVC. This is done by setting up the DispatcherServlet, which acts as the entry point to Spring MVC and thus to the application. Because the DispatcherServlet interacts with the application as a whole (instead of with individual components), it is configured at the application level. Any setup that needs to be done at the application level is done by making the required entries in web.xml.
The entries required in the web.xml can be divided into the following:
- Servlet mapping
- URL mapping
The former specifies the details of the servlet and the latter specifies how the servlet is related to a URL. Here are the details.
Servlet mapping
Servlet mapping is akin to declaring a variable. It is through servlet mapping that the Application Server knows which servlets of the application it needs to support. Servlet mapping, in essence, assigns a name to a servlet class that can be referenced throughout web.xml. To set up the DispatcherServlet, it first has to be mapped to a name. That can be done using the <servlet-name> and <servlet-class> tags, which are child nodes of the <servlet> tag. The following statement maps the DispatcherServlet to the name "dispatcher".
<servlet>
<servlet-name>
dispatcher
</servlet-name>
<servlet-class>
org.springframework.web.servlet.DispatcherServlet
</servlet-class>
</servlet>
Since the DispatcherServlet needs to be loaded at Application Server startup instead of when the first request arrives, the optional node <load-on-startup> with a value of 1 is also required. The modified <servlet> tag will be:
<servlet>
<servlet-name>
dispatcher
</servlet-name>
<servlet-class>
org.springframework.web.servlet.DispatcherServlet
</servlet-class>
<load-on-startup>1</load-on-startup>
</servlet>
Next step is to map the URL to the servlet name so that the requests can be routed to the DispatcherServlet.
URL mapping
Once the servlet has been mapped, the next step is to map the servlet name with a URL so that the requests for that particular URL can be passed on to the application via the DispatcherServlet. That can be done using the <servlet-name> and <url-pattern> nodes of the <servlet-mapping> node. The <servlet-name> is used to refer the name that was mapped with the DispatcherServlet class. The <url-pattern> is used to map a URL pattern with a servlet name so that when a request arrives matching the URL pattern, Application Server can redirect it to the mapped servlet. To map the DispatcherServlet with a URL pattern the <servlet-mapping> tag will be:
<servlet-mapping>
<servlet-name>dispatcher</servlet-name>
<url-pattern>*.html</url-pattern>
</servlet-mapping>
With this, the configuration of the DispatcherServlet is complete. One point to keep in mind is that the URL pattern can be any pattern of one's choice. However, it's a common practice to use *.html for the DispatcherServlet and *.do for the ActionServlet (Struts 1.x). The next step is to configure the View and Controller components of the application.
Mapping the Controller
By setting up the DispatcherServlet, the routing of requests to the application is taken care of by the Application Server. However, unless the individual controllers of the application are configured, the Framework would not know which controller to call once the DispatcherServlet receives the request. The configuration of the Controller as well as the View components is done in the Spring MVC configuration file. The name of the configuration file depends on the name of the DispatcherServlet in web.xml, and is of the form <DispatcherServlet_name>-servlet.xml. So if the DispatcherServlet is mapped to the name dispatcher, then the name of the configuration file will be dispatcher-servlet.xml. The file resides in the WEB-INF folder of the application.
Everything in Spring Framework is a bean. Controllers are no exception. Controllers are configured as beans using the <bean> child tag of the <beans> tag. A Controller is mapped by providing the request path as the name attribute and the fully qualified name of the Controller class as the value of the class attribute. For example, if the request path is /hello.html, then the name attribute will have /hello.html and the class attribute will have the fully qualified class name, say, org.me.HelloWorldController. The following statement depicts the same:
<bean name="/hello.html" class="org.me.HelloWorldController"/>
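Conceptually, each such bean entry adds a row to a URL-to-controller lookup table inside the DispatcherServlet. The following toy sketch is not Spring code, and the class and names in it are invented for illustration only; it merely models the lookup that the bean entry configures:

```java
import java.util.HashMap;
import java.util.Map;

// A single-method handler, standing in for a Spring Controller.
interface ToyController {
    String handle(); // returns a logical view name
}

// Toy stand-in for the DispatcherServlet's handler mapping (illustration only).
public class ToyDispatcher {
    private final Map<String, ToyController> handlerMap = new HashMap<>();

    // Corresponds to a <bean name="/hello.html" class="..."/> entry.
    public void register(String path, ToyController controller) {
        handlerMap.put(path, controller);
    }

    // Look up the controller for the request path and invoke it.
    public String dispatch(String path) {
        ToyController controller = handlerMap.get(path);
        return (controller == null) ? "404" : controller.handle();
    }

    public static void main(String[] args) {
        ToyDispatcher dispatcher = new ToyDispatcher();
        dispatcher.register("/hello.html", () -> "HelloWorld");
        System.out.println(dispatcher.dispatch("/hello.html")); // prints "HelloWorld"
    }
}
```

The real DispatcherServlet does much more (pattern matching, interceptors, view resolution), but the name attribute playing the role of the lookup key is the essential idea.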
One point to keep in mind is that the "/" in the bean name represents a path relative to the application root. In other words, /hello.html means that hello.html is directly under the application root. If hello.html were under another directory, say jsp, which in turn was directly under the application, then the name attribute would be /jsp/hello.html. Let us move on to configuring the Views.
Configuring the Views
A Controller always returns the logical View name and not its physical file name or address. The mapping of the logical View to the physical file name or URL is done by providing the required entries in the Spring MVC configuration file. Spring MVC uses Resolvers to match the logical name to the physical file, process the file, and provide the output to the user in the desired format (HTML, PDF, etc.). The mapping is done in the following steps.
Declaring the Resolver
A Resolver is declared using the <bean> tag. The id attribute gives the Resolver a name with which it can be referenced. The type of Resolver to be used is mapped to the name using the class attribute. The most commonly used Resolvers are:
- BeanNameViewResolver – resolves the bean class to the provided name
- JasperReportsViewResolver – It can be used when the logical view name maps to a JasperReports file
- InternalResourceViewResolver- It can be used for JSP and JSTL based views
For example, to map the View created in the last part of this article, InternalResourceViewResolver can be used, as the View is JSTL based. The statement will be:
<bean id="viewResolver"
class="org.springframework.web.servlet.view.InternalResourceViewResolver">
</bean>
Specifying the Path
The path to the file is specified using the name and value attributes of the <property> child tag of the <bean> tag. The path is passed in as a prefix and a suffix. The prefix specifies the path (or URL relative to the application) that is prepended to the file name, and the suffix is the extension of the file that forms the View. For example, if the path to the file (including the filename) is /jsps/HelloWorld.jsp, the prefix and suffix will be declared as follows:
<bean id="viewResolver"
class="org.springframework.web.servlet.view.InternalResourceViewResolver">
<property name="prefix" value="/jsps/"/>
<property name="suffix" value=".jsp"/>
</bean>
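The prefix/suffix mechanism is essentially string concatenation: a logical view name goes in, a physical resource path comes out. The following minimal sketch illustrates only that behavior; it is not the real InternalResourceViewResolver, which does considerably more (request dispatching, caching, content negotiation):

```java
// Toy illustration of prefix + logical view name + suffix resolution.
public class ToyViewResolver {
    private final String prefix;
    private final String suffix;

    public ToyViewResolver(String prefix, String suffix) {
        this.prefix = prefix;
        this.suffix = suffix;
    }

    // Map a logical view name to a physical resource path.
    public String resolve(String logicalViewName) {
        return prefix + logicalViewName + suffix;
    }

    public static void main(String[] args) {
        ToyViewResolver resolver = new ToyViewResolver("/jsps/", ".jsp");
        System.out.println(resolver.resolve("HelloWorld")); // prints "/jsps/HelloWorld.jsp"
    }
}
```

This is also why the suffix must include the dot (".jsp" rather than "jsp"); otherwise the resolved path would be /jsps/HelloWorldjsp.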
With that, the configuration of the components comes to an end. The next step is configuring the build script.
Configuring the build script
To configure the build script and build the application, two files need to be added or modified. They are:
- build.properties
- build.xml
The former specifies details such as the location of server-specific libraries, including the servlet implementation. The latter uses the former to substitute the location-related details. The following section is not an exhaustive guide to Ant or build files; it covers just what this application needs. Here are the details.
build.properties
The main entries in that go into this file are the home directory of the application server and the second is the lib folder within the application server directory. The following statements do the same, the application server being tomcat and the OS being *nix (Unix/Linux).
appserver.home=/usr/share/tomcat5.5
appserver.lib=${appserver.home}/common/lib
build.xml
Apart from the other entries, the ones that are important for compilation are the <property> tag specifying the properties file name, the <include> tag that specifies the libraries to be included, and the <fileset> tag specifying the path containing the included files. It is in the value of the dir attribute of <fileset> that the entries from build.properties come into play. The following statements tell the build environment which files from the app server need to be included.
<path id="build.classpath">
<fileset dir="lib">
<include name="*.jar"/>
</fileset>
<fileset dir="${appserver.lib}"> <!-- servlet API classes: -->
<include name="servlet*.jar"/>
</fileset>
<pathelement path="${build.dir}"/>
</path>
That completes this section about configuring the components. Next section is about a real world example.
Spring MVC – In Real World
Now let us develop a simple application making use of the steps enumerated in the first discussion as well as this one. The functionalities of the application will be:
- Retrieve the list of logged-in users
- Display the list
In the first version of the implementation, we will populate the logged-in user list statically. That means the names in the list will be hard-coded. The files that will be part of the application are:
- LoggedInUsersController.java - it will retrieve the list of logged in users, create the model and pass it to the View.
- LoggedInUsers.jsp – Will display the model passed by the controller.
- web.xml – for setting up the DispatcherServlet
- dispatcher-servlet.xml – Spring MVC configuration for the Controller and View
- build.properties – properties for the build file
- build.xml – defines the build configuration for the application
Let us get started. First is the controller.
package org.me;

import java.util.ArrayList;
import java.util.List;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.mvc.AbstractController;

public class LoggedInUsersController extends AbstractController {
    protected ModelAndView handleRequestInternal(HttpServletRequest request,
            HttpServletResponse response) throws Exception {
        // First version: the list of logged-in users is hard-coded
        List<String> strList = new ArrayList<String>();
        strList.add("user1");
        strList.add("user2");

        // "LoggedInUsers" is the logical view name resolved by the ViewResolver
        ModelAndView modelAndView = new ModelAndView("LoggedInUsers");
        modelAndView.addObject("list", strList);
        return modelAndView;
    }
}
Next comes the View.
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<html>
<body>
<c:forEach var="user" items="${list}" varStatus="status">
User #<c:out value="${status.count}"/> is
<c:out value="${user}"/><br />
</c:forEach>
</body>
</html>
Next is the web.xml. Only the relevant part is displayed here.
<servlet>
<servlet-name>dispatcher</servlet-name>
<servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
<load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
<servlet-name>dispatcher</servlet-name>
<url-pattern>*.html</url-pattern>
</servlet-mapping>
Next is the Spring MVC configuration file. The name of the file is dispatcher-servlet.xml.
<beans>
<bean name="/loggedInUsers.html" class="org.me.LoggedInUsersController"/>
<bean id="viewResolver"
class="org.springframework.web.servlet.view.InternalResourceViewResolver">
<property name="prefix" value="/jsps/"/>
<property name="suffix" value=".jsp"/>
</bean>
</beans>
Next is the build.properties. Here again, the target app-server is tomcat and OS is Unix/Linux.
appserver.home=/usr/share/tomcat5.5
appserver.lib=${appserver.home}/common/lib
Lastly, build.xml.
<?xml version="1.0" encoding="UTF-8"?>
<project basedir="." default="build">
<property file="build.properties"/>
<property name="src.dir" value="src"/>
<property name="build.dir" value="classes"/>
<path id="build.classpath">
<fileset dir="lib">
<include name="*.jar"/>
</fileset>
<fileset dir="${appserver.lib}"> <!-- servlet API classes: -->
<include name="servlet*.jar"/>
</fileset>
<pathelement path="${build.dir}"/>
</path>
<target name="build">
<mkdir dir="${build.dir}"/>
<javac destdir="${build.dir}" source="1.5" target="1.5"
debug="true" deprecation="false" optimize="false" failonerror="true">
<src path="${src.dir}"/>
<classpath refid="build.classpath"/>
</javac>
</target>
<target name="clean" description="Clean output directories">
<delete>
<fileset dir="${build.dir}">
<include name="**/*.class"/>
</fileset>
</delete>
</target>
</project>
Summary
With that we come to the end of this discussion. What has been discussed here is just a glimpse of what Spring MVC can do. In the future, each component of the framework will be discussed in detail, starting with the Controller component. Till then...
If you have read this article you may be interested to view :
- Getting Started With Spring MVC - Developing the MVC components
- Data Access Using Spring Framework: HibernateTemplate | http://www.packtpub.com/article/spring-mvc-configure-deploy-application | CC-MAIN-2013-48 | refinedweb | 2,004 | 57.47 |
Kotlin is an officially supported language for developing Android apps, along with Java.
What you must know already.
What you'll learn
- How to use Android Studio to build your app.
- How to run your app on a device or in the emulator.
- How to add interactive buttons.
- How to display a second screen when a button is pressed.
For more information, see the Use Android Studio and Kotlin to write Android apps page.
In this step, you will create a new Android project for your first app. This simple app displays the string "Hello World" on the screen of an Android virtual or physical device.
Here's what the finished app will look like:
What you'll learn
- How to create a project in Android Studio.
- How to create an emulated Android device.
- How to run your app on the emulator.
- How to run your app on your own physical device, if you have one.
Step 1: Create a new project
- Open Android Studio.
- In the Welcome to Android Studio dialog, click Start a new Android Studio project.
- Select Basic Activity (not the default). Click Next.
- Give your application a name, such as My First App.
- Make sure the Language is set to Kotlin.
- Leave the defaults for the other fields.
- Click Finish.
After these steps, Android Studio:
- Creates a folder for your Android Studio project. This is usually in a folder called AndroidStudioProjects below your home directory.
- Builds your project (this may take a few moments). Android Studio uses Gradle as its build system. You can follow the build progress at the bottom of the Android Studio window.
- Opens the code editor showing your project.
Step 2: Get your screen set up
When your project first opens in Android Studio, there may be a lot of windows and panes open. To make it easier to get to know Android Studio, here are some suggestions on how to customize the layout.
- If there's a Gradle window open on the right side, click on the minimize button (—) in the upper right corner to hide it.
- Depending on the size of your screen, consider resizing the pane on the left showing the project folders to take up less space.
At this point, your screen should look a bit less cluttered, similar to the screenshot shown below.
Step 3: Explore the project structure and layout of which is in Project view (2). Project view shows your files and folders structured in a way that is convenient for working with an Android project. (This does not always match the file hierarchy! To see the file hierarchy, choose the Project files view by clicking (3).)
- Double-click the app (1) folder to expand the hierarchy of app files. (See (1) in the screenshot.)
- If you click Project (2), you can hide or show the Project view.
- The current Project view selection is Project > Android.
In the Project > Android view you see three or four top-level folders below your app folder: manifests, java, java (generated) and res. You may not see java (generated) right away.
- Expand the manifests folder.
This folder contains
AndroidManifest.xml. This file describes all the components of your Android app and is read by the Android runtime system when your app is executed.

- Expand the java folder. All your Kotlin language files are organized here; Android projects keep all Kotlin language files in this folder, together with any Java sources. The java folder contains three subfolders:

com.example.myfirstapp: This folder contains the Kotlin source code for your app.

com.example.myfirstapp (androidTest): This folder is where you would put your instrumented tests, which run on an Android device. It starts out with a skeleton test file.
com.example.myfirstapp (test): This folder is where you would put your unit tests. Unit tests don't need an Android device to run. It starts out with a skeleton unit test file.

- Expand the res folder. This folder contains all the resources for your app, including images, layout files, strings, icons, and styling. It includes these subfolders:
drawable: All your app's images will be stored in this folder.
layout: This folder contains the UI layout files for your activities. Currently, your app has one activity that has a layout file called activity_main.xml. It also contains content_main.xml, fragment_first.xml, and fragment_second.xml.
menu: This folder contains XML files describing any menus in your app.
mipmap: This folder contains the launcher icons for your app.
navigation: This folder contains the navigation graph, which tells Android Studio how to navigate between different parts of your application.
values: Contains resources, such as strings and colors, used in your app.
Step 4: Create a virtual device (emulator)
In this task, you will use the Android Virtual Device (AVD) manager to create a virtual device (or emulator) that simulates the configuration for a particular type of Android device.
The first step is to create a configuration that describes the virtual device.
- In Android Studio, select Tools > AVD Manager, or click the AVD Manager icon in the toolbar.
- Click +Create Virtual Device. (If you have created a virtual device before, the window shows all of your existing devices and the +Create Virtual Device button is at the bottom.) The Select Hardware window shows a list of pre-configured hardware device definitions.
- Choose a device definition, such as Pixel 2, and click Next. (For this codelab, it really doesn't matter which device definition you pick).
- In the System Image dialog, from the Recommended tab, choose the latest release. (This does matter.)
- If a Download link is visible next to a latest release, it is not installed yet, and you need to download it first. If necessary, click the link to start the download, and click Next when it's done. This may take a while depending on your connection speed.
- In the next dialog box, accept the defaults, and click Finish.
The AVD Manager now shows the virtual device you added.
- If the Your Virtual Devices AVD Manager window is still open, go ahead and close it.
Step 5: Run your app on your new emulator
- In Android Studio, select Run > Run ‘app', or click the Run icon in the toolbar.
The icon changes once your app is running.
- In Run > Select Device, under Available devices, select the virtual device that you just configured. A dropdown menu also appears in the toolbar.
Step 6: Run your app on a device (if you have one)
What you need:
- An Android device such as a phone or tablet.
- A data cable to connect your Android device to your computer via the USB port.
- If you are using a Linux or Windows OS, you may need to perform additional steps to run your app on a hardware device. Check the Run Apps on a Hardware Device documentation. On Windows, you may need to install the appropriate USB driver for your device. See OEM USB Drivers.
Run your app on a device
To let Android Studio communicate with your device, you must turn on USB Debugging on your Android device. On the device, open Settings, then tap Developer options.
- Enable USB Debugging.
Now you can connect your device and run the app from Android Studio.
- Connect your device to your development machine with a USB cable. On the device, you might need to agree to allow USB debugging from your development device.
- In Android Studio, click Run in the toolbar at the top of the window. (You might need to select View > Toolbar to see this option.) The Select Deployment Target dialog opens with the list of available emulators and connected devices.
- Select your device, and click OK. Android Studio installs the app on your device and runs it.
Troubleshooting
If you're stuck, quit Android Studio and restart it.
If Android Studio does not recognize your device, try the following:
- Disconnect your device from your development machine and reconnect it.
- Restart Android Studio.
If your computer still does not find the device or declares it "unauthorized":
- Disconnect the device.
- On the device, open Settings->Developer Options.
- Tap Revoke USB Debugging authorizations.
- Reconnect the device to your computer.
- When prompted, grant authorizations.
If you are still having trouble, check that you installed the appropriate USB driver for your device. See the Using Hardware Devices documentation.
Check the troubleshooting section in the Android Studio documentation.
Step 7: Explore the app template

In this task, you'll explore some of the panels in the layout editor, and you'll learn how to change properties of views.
What you'll learn
- How to use the layout editor.
- How to set property values.
- How to add string resources.
- How to add color resources.
Step 1: Open the layout editor
- Find and open the layout folder (app > res > layout) on the left side in the Project panel.
- Double-click fragment_first.xml.
- In the upper right corner of the Design editor, above Attributes, find the three icons that look like this:
These represent Code (code only), Split (code + design), and Design (design only) views.
- Try selecting the different modes. Depending on your screen size and work style, you may prefer switching between Code and Design, or staying in Split view. If your Component Tree disappears, hide and show the Palette.
Split view:
- At the lower right of the Design editor you see + and - buttons for zooming in and out. Use these buttons to adjust the size of what you see, or click the zoom-to-fit button so that both panels fit on your screen.
The Design layout on the left shows how your app appears on the device. The Blueprint layout, shown on the right, is a schematic view of the layout.
- Practice using the layout menu in the top left of the design toolbar to display the design view, the blueprint view, and both views side by side.
Depending on the size of your screen and your preference, you may wish to only show the Design view or the Blueprint view, instead of both.
- Use the orientation icon to change the orientation of the layout. This allows you to test how your layout will fit portrait and landscape modes.
- Use the device menu to view the layout on different devices. (This is extremely useful for testing!)
On the right is the Attributes panel. You'll learn about that later.
Step 2: Explore and resize the Component Tree
- Click the Hide icon at the top right of the Component Tree. The Component Tree closes.
- Bring back the Component Tree by clicking the vertical label Component Tree on the left.
Step 3: Explore view hierarchies
- In the Component Tree, notice that the root of the view hierarchy is a ConstraintLayout view. Every layout must have a root view that contains all the other views. The root view is always a view group, which is a view that contains other views. A ConstraintLayout is one example of a view group.
- Notice that the ConstraintLayout contains a TextView, called textview_first, and a Button, called button_first.
- If the code isn't showing, switch to Code or Split view using the icons in the upper right corner.
- In the XML code, notice that the root element is
<androidx.constraintlayout.widget.ConstraintLayout>. The root element contains a
<TextView>element and a
<Button>element.
<androidx.constraintlayout.widget.ConstraintLayout ... > <TextView ... /> <Button ... /> </androidx.constraintlayout.widget.ConstraintLayout>
Step 4: Change property values
- In the code editor, examine the properties in the TextView element.
<TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Hello first fragment" ... />
- Click on the string in the text property, and you'll notice it refers to a string resource, hello_first_fragment.
android:text="@string/hello_first_fragment"
- Right-click on the property and click Go To > Declaration or Usages
values/strings.xml opens with the string highlighted.
<string name="hello_first_fragment">Hello first fragment</string>
- Change the value of the string property to Hello World!.
- Switch back to fragment_first.xml.
- Select textview_first in the Component Tree.
- Look at the Attributes panel on the right, and open the Declared Attributes section if needed.
- In the text field of the Attributes panel, notice that the value still refers to the string resource.
- Run the app to see the change you made in strings.xml. Your app now shows "Hello World!".
Step 5: Change text display properties
- With textview_first still selected in the Component Tree, in the layout editor, in the list of attributes, under Common Attributes, expand the textAppearance field. (You may need to scroll down to find it.)
- Change some of the text appearance properties. For example, change the font family, increase the text size, and select bold style. (You might need to scroll the panel to see all the fields.)
- Change the text color. Click in the textColor field, and enter g.
A menu pops up with possible completion values containing the letter g. This list includes predefined colors.
- Select @android:color/darker_gray and press Enter.
Below is an example of the textAppearance attributes after making some changes.
- Look at the XML code for the TextView and notice the new attributes that were added.
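After these changes, the TextView element in fragment_first.xml carries the new attributes inline. The exact values depend on what you picked in the previous steps; the ones below are only an illustrative example:

```xml
<TextView
    android:id="@+id/textview_first"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:fontFamily="sans-serif-medium"
    android:text="@string/hello_first_fragment"
    android:textColor="@android:color/darker_gray"
    android:textSize="30sp"
    android:textStyle="bold" ... />
```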
- Run your app again and see the changes applied to your Hello World! string
Step 6: Display all attributes
- In the Attributes panel, scroll down until you find All Attributes.
- Scroll through the list to get an idea of the attributes you could set for a TextView.
What you'll learn
- How resources are defined.
- Adding and using color resources.
- The results of changing layout height and width.
Step 1: Add color resources
First, you'll learn how to add new color resources.
Change the text color and background of the TextView
- In the Project panel on the left, double-click on res > values > colors.xml to open the color resource file.
- Notice the colors that are already defined here; they are used in your app layout (for example, purple for the app bar).
- Go back to fragment_first.xml so you can see the XML code for the layout.
- Add a new property to the TextView called android:background, and start typing to set its value to @color. You can add this property anywhere inside the TextView code.
A menu pops up offering the predefined color resources:
- Choose @color/colorPrimaryDark.
- Change the property android:textColor and give it a value of @android:color/white. The Android framework defines a range of colors, including white, so you don't have to define white yourself.
- In the layout editor, you can see that the TextView now has a dark blue or purple background, and the text is displayed in white.
Step 2: Add a new color to use as the screen background color
- Back in colors.xml, add a new color resource named screenBackground, and give it any color value you like.
- Go back to fragment_first.xml.
- In the Component Tree, select the ConstraintLayout.
- In the Attributes panel, select the background property and press Enter. Type "c" in the field that appears.
- In the menu of colors that appears, select @color/screenBackground. Press Enter to complete the selection.
- Click on the yellow patch to the left of the color value in the background field.
It shows a list of colors defined in
colors.xml. Click the Custom tab to choose a custom color with an interactive color chooser.
- Feel free to change the value of the screenBackground color, but make sure that the final color is noticeably different from the colorPrimary and colorPrimaryDark colors.
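After this step, colors.xml contains the new entry alongside the existing ones. The hex values below are only examples; use whatever colors you defined:

```xml
<resources>
    <color name="colorPrimary">#6200EE</color>
    <color name="colorPrimaryDark">#3700B3</color>
    <color name="colorAccent">#03DAC5</color>
    <color name="screenBackground">#FFF8E1</color>
</resources>
```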
Step 3: Explore width and height properties
Now that you have a new screen background color, you will use it to explore the effects of changing the width and height properties of views.
- In fragment_first.xml, in the Component Tree, select the ConstraintLayout.
- In the Attributes panel, find and expand the Layout section.
The layout_width and layout_height properties are both set to match_parent. The ConstraintLayout is the root view of this Fragment, so the "parent" layout size is effectively the size of your screen.
- Notice that the entire background of the screen uses the screenBackground color.
- Select textview_first. Currently, the layout width and height are wrap_content, which tells the view to be just big enough to enclose its content (plus padding).
- Change both the layout width and layout height to match_constraint, which tells the view to be as big as whatever it's constrained to.
The width and height show 0dp, and the text moves to the upper left, while the TextView expands to match the ConstraintLayout except for the button. The button and the text view are at the same level in the view hierarchy inside the constraint layout, so they share space.
- Explore what happens if the width is match_constraint and the height is wrap_content and vice versa. You can also change the width and height of the button_first.
- Set both the width and height of the TextView and the Button back to wrap_content.
In this task, you will add two more buttons to your user interface, and update the existing button, as shown below.
What you'll learn
- How to add new views to your layout.
- How to constrain the position of a view to another view.
Step 1: View constraint properties
- In fragment_first.xml, select the TextView and look at the Constraint Widget in the Layout section of the Attributes panel.
The square represents the selected view. Each of the grey dots represents a constraint: to the top, bottom, left, and right; in this example, from the TextView to its parent, the ConstraintLayout, or to the Next button for the bottom constraint.
- Notice that the blueprint and design views also show the constraints when a particular view is selected. Some of the constraints are jagged lines, but the one to the Next button is a squiggle, because it's a little different. You'll learn more about that in a bit.
Step 2: Add buttons and constrain their positions
To learn how to use constraints to connect the positions of views to each other, you will add buttons to the layout. Your first goal is to add a button and some constraints, and change the constraints on the Next button.
- Notice the Palette at the top left of the layout editor. Move the sides if you need to, so that you can see many of the items in the palette.
- Click on some of the categories, and scroll the listed items if needed to get an idea of what's available.
- Select Button, which is near the top, and drag and drop it onto the design view, placing it underneath the TextView near the other button.
Step 3: Add a constraint to the new button
You will now constrain the top of the button to the bottom of the
TextView.
- Move the cursor over the circle at the top of the Button.
- Click and drag the circle at the top of the Button onto the circle at the bottom of the TextView.
The Button moves up to sit just below the TextView because the top of the button is now constrained to the bottom of the TextView.
- Take a look at the Constraint Widget in the Layout pane of the Attributes panel. It shows some constraints for the Button, including Top -> BottomOf textView.
- Take a look at the XML code for the button. It now includes the attribute that constrains the top of the button to the bottom of the TextView.
app:layout_constraintTop_toBottomOf="@+id/textview_first"
- You may see a warning, "Not Horizontally Constrained". To fix this, add a constraint from the left side of the button to the left side of the screen.
- Also add a constraint to constrain the bottom of the button to the bottom of the screen.
Before adding another button, relabel this button so things are a little clearer about which button is which.
- Click on the button you just added in the design layout.
- Look at the Attributes panel on the right, and notice the id field.
- Change the id from button to toast_button.
Step 4: Adjust the Next button
To delete a constraint:
- In the design view or blueprint view, hold the Ctrl key (Command on a Mac) and move the cursor over the circle for the constraint until the circle highlights, then click the circle.
- Or click on one of the constrained views, then right-click on the constraint and select Delete from the menu.
- Or in the Attributes panel, move the cursor over the circle for the constraint until it shows an x, then click it.
If you delete a constraint and want it back, either undo the action, or create a new constraint.
Step 5: Delete the chain constraints
- Click on the Next button, and then delete the constraint from the top of the button to the TextView.
- Click on the TextView, and then delete the constraint from the bottom of the text to the Next button.
Step 6: Add new constraints
- Constrain the right side of the Next button to the right of the screen if it isn't already.
- Delete the constraint on the left side of the Next button.
- Now constrain the top and bottom of the Next button so that the top of the button is constrained to the bottom of the TextView and the bottom is constrained to the bottom of the screen. The right side of the button is constrained to the right side of the screen.
- Also constrain the TextView to the bottom of the screen.
It may seem like the views are jumping around a lot, but that's normal as you add and remove constraints.
Your layout should now look something like this.
Step 7: Extract string resources
- In the fragment_first.xml layout file, find the text property for the toast_button button.
<Button android:id="@+id/toast_button" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Button"
- Notice that the text "Button" is directly in the layout field, instead of referencing a string resource as the TextView does. This will make it harder to translate your app to other languages.
- To fix this, click the highlighted code. A light bulb appears on the left.
- Click the lightbulb. In the menu that pops up, select Extract string resource.
- In the dialog box that appears, change the resource name to toast_button_text and the resource value to Toast, and click OK.
- Notice that the value of the android:text property has changed to @string/toast_button_text.
<Button android:id="@+id/button" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="@string/toast_button_text"
- Go to the res > values > strings.xml file. Notice that a new string resource has been added, named toast_button_text.
<resources> ... <string name="toast_button_text">Toast</string> </resources>
- Run the app to make sure it displays as you expect it to.
You now know how to create new string resources by extracting them from existing field values. (You can also add new resources to the strings.xml file manually.) And you know how to change the id of a view.
Step 8: Update the Next button
The Next button already has its text in a string resource, but you'll make some changes to the button to match its new role, which will be to generate and display a random number.
- As you did for the Toast button, change the id of the Next button from button_first to random_button in the Attributes panel.
- If you get a dialog box asking to update all usages of the button, click Yes. This will fix any other references to the button in the project code.
- In strings.xml, right-click on the next string resource.
- Select Refactor > Rename... and change the name to random_button_text.
- Click Refactor to rename your string and close the dialog.
- Change the value of the string from Next to Random.
- If you want, move random_button_text to below toast_button_text.
Step 9: Add a third button
Your final layout will have three buttons, vertically constrained the same, and evenly spaced from each other.
- In fragment_first.xml, add another button to the layout, and drop it somewhere between the Toast button and the Random button, below the TextView.
- Add vertical constraints the same as the other two buttons: constrain the top of the third button to the bottom of the TextView; constrain the bottom of the third button to the bottom of the screen.
- Add horizontal constraints from the third button to the other buttons. Constrain the left side of the third button to the right side of the Toast button; constrain the right side of the third button to the left side of the Random button.
Your layout should look something like this:
- Examine the XML code for fragment_first.xml. Do any of the buttons have the attribute app:layout_constraintVertical_bias? It's OK if you do not see that constraint.
The "bias" constraints allow you to tweak the position of a view to be more on one side than the other when both sides are constrained in opposite directions. For example, if both the top and bottom sides of a view are constrained to the top and bottom of the screen, you can use a vertical bias to place the view more towards the top than the bottom.
Step 10: Get your UI ready for the next task
The next task is to make the buttons do something when they are pressed. First, you need to get the UI ready.
- Change the text of the TextView to show 0 (the number zero).
- Change the text alignment to center.
- Change the id of the last button you added, button2, to count_button in the Attributes panel in the design editor.
- In the XML, extract the string resource to count_button_text and set the value to Count.
The buttons should now have the following text and ids: Toast (toast_button), Count (count_button), and Random (random_button).
- Run the app.
Step 11: Fix errors if necessary
The errors occur because the buttons have changed their ids, so any constraints that still reference the old ids point at non-existent views. Update those constraints to use the new ids.
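For example, a constraint that still referenced the old button id would need to be updated to the new one. (The attribute shown here is illustrative; your layout may use a different constraint attribute.)

```xml
<!-- Before: points at the old id, which no longer exists -->
app:layout_constraintStart_toEndOf="@+id/button"

<!-- After: points at the renamed button -->
app:layout_constraintStart_toEndOf="@+id/toast_button"
```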
Step 1: Add new color resources
- In colors.xml, change the value of screenBackground to #2196F3, which is a blue shade in the Material Design palette.
- Add a new color named buttonBackground. Use the value #BBDEFB, which is a lighter shade in the blue palette.
<color name="buttonBackground">#BBDEFB</color>
Step 2: Add a background color for the buttons
- In the layout, add a background color to each of the buttons. (You can either edit the XML in fragment_first.xml or use the Attributes panel, whichever you prefer.)
android:background="@color/buttonBackground"
Step 3: Change the margins of the left and right buttons
- Give the Toast button a left (start) margin of 24dp and give the Random button a right (end) margin of 24dp. (Using start and end instead of left and right makes these margins work for all language directions.)
- One way to do this is to use the Constraint Widget in the Attributes panel. The number on each side is the margin on that side of the selected view. Type 24 in the field and press Enter.
Step 4: Update the appearance of the TextView
- Remove the background color of the TextView, either by clearing the value in the Attributes panel or by removing the android:background attribute from the XML code.
When you remove the background, the view background becomes transparent.
- Increase the text size of the TextView to 72sp.
android:textSize="72sp"
- Change the font-family of the TextView to sans-serif (if it's not already).
- Add an app:layout_constraintVertical_bias attribute with the value 0.3, so the TextView sits a bit above the vertical center.
- You can also set the vertical bias using the Constraint Widget. Click and drag the number 50 that appears on the left side, and slide it upwards until it says 30.
- Make sure the layout_width is wrap_content, and the horizontal bias is 50 (app:layout_constraintHorizontal_bias="0.5" in XML).
Step 5: Run your app.
What you'll learn
- How to find a view by its ID.
- How to add click listeners for a view.
- How to set and get property values of a view from your code.
Step 1: Enable auto imports
To make your life easier, you can enable auto-imports so that Android Studio automatically imports any classes that are needed by the Kotlin code.
- In Android Studio, open the settings editor by going to File > Other Settings > Preferences for New Projects. (Or Settings for New Projects. The text varies between versions of Android Studio.)
- Select Auto Import. In the Java and Kotlin sections, make sure Add Unambiguous Imports on the fly is checked.
- Close the settings editor.
Step 2: Show a toast
In this step, you will attach a Kotlin method to the Toast button to show a toast when the user presses the button. A toast is a short message that appears briefly at the bottom of the screen.
- Open FirstFragment.kt (app > java > com.example.android.myfirstapp > FirstFragment).
This class has only two methods, onCreateView() and onViewCreated(). These methods execute when the fragment starts.
As mentioned earlier, the id for a view helps you identify that view distinctly from other views. Using the findViewById() method, your code can find the random_button using its id, R.id.random_button.
- Take a look at onViewCreated(). It sets up a click listener for the random_button, which was originally created as the Next button.
view.findViewById<Button>(R.id.random_button).setOnClickListener { findNavController().navigate(R.id.action_FirstFragment_to_SecondFragment) }
Here is what this code does:
- Use the findViewById() method with the id of the desired view as an argument, then set a click listener on that view.
- In the body of the click listener, use an action, which in this case is for navigating to another fragment, and navigate there. (You will learn about that later.)
- Just below that click listener, add code to set up a click listener for the toast_button that shows a toast. For example (the exact message text is up to you):
view.findViewById<Button>(R.id.toast_button).setOnClickListener { val myToast = Toast.makeText(context, "Hello toast!", Toast.LENGTH_SHORT) myToast.show() }
- Run the app and press the Toast button. Do you see the toasty message at the bottom of the screen?
- If you want, extract the message string into a resource as you did for the button labels.
You have learned that to make a view interactive you need to set up a click listener for the view that says what to do when the view (button) is clicked on. The click listener can either:
- Implement a small amount of code directly.
- Call a method that defines the desired click behavior in the activity.
Step 3: Make the Count button update the number on the screen.
- In the fragment_first.xml layout file, notice the id for the TextView:
<TextView android:id="@+id/textview_first"
- In onViewCreated(), set up a click listener for the count_button that calls the countMe() method described below: view.findViewById<Button>(R.id.count_button).setOnClickListener { countMe(view) }
- In the FirstFragment class, add the method countMe(), which takes a single View argument. This method is invoked when the Count button is clicked and the click listener is called.
private fun countMe(view: View) { }
- In countMe(), use findViewById() to get a reference to the TextView: val showCountTextView = view.findViewById<TextView>(R.id.textview_first)
- Get the value of the showCountTextView.
... // Get the value of the text view. val countString = showCountTextView.text.toString()
- Convert the value to a number, and increment it.
... // Convert value to a number and increment it var count = countString.toInt() count++
- Display the new value in the
TextViewby programmatically setting the
textproperty of the
TextView.
... // Display the new value in the text view. showCountTextView.text = count.toString()
Here is the whole method:
private fun countMe(view: View) { val showCountTextView = view.findViewById<TextView>(R.id.textview_first) // Get the value of the text view. val countString = showCountTextView.text.toString() // Convert value to a number and increment it var count = countString.toInt() count++ // Display the new value in the text view. showCountTextView.text = count.toString() }
- Run your app. Press the Count button and watch the count update.
So far, you've focused on the first screen of your app. Next, you will update the Random button to display a random number between 0 and the current count on a second screen.
What you'll learn
- How to pass information to a second fragment.
Update the layout for the second fragment
The screen for the new fragment will display a heading title and the random number. Here is what the screen will look like in the design view:
The %d indicates that part of the string will be replaced with a number. The R is just a placeholder.
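In strings.xml, the heading resource you will create later in this task uses that placeholder:

```xml
<string name="random_heading">Here is a random number between 0 and %d.</string>
```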
Step 1: Add a TextView for the random number
- Open fragment_second.xml (app > res > layout > fragment_second.xml) and switch to Design view if needed. Notice that it has a ConstraintLayout that contains a TextView and a Button.
- Remove the chain constraint between the TextView and the Button.
- Add another TextView from the palette and drop it near the middle of the screen. This TextView will be used to display a random number between 0 and the current count from the first Fragment.
- Set the id to @+id/textview_random (textview_random in the Attributes panel).
- Constrain the top edge of the new TextView to the bottom of the first TextView, the left edge to the left of the screen, the right edge to the right of the screen, and the bottom to the top of the Previous button.
- Set both width and height to wrap_content.
- Set the textColor to @android:color/white, set the textSize to 72sp, and the textStyle to bold.
- Set the text to "R". This text is just a placeholder until the random number is generated.
- Set the layout_constraintVertical_bias to 0.45.
- Set the layout_constraintVertical_bias to 0.45.
Here is how the new TextView should look.
Step 2: Update the TextView to display the header
- In fragment_second.xml, select textview_second, which currently has the text "Hello second fragment. Arg: %1$s" from the hello_second_fragment string resource.
- If android:text isn't set, set it to the hello_second_fragment string resource.
android:text="@string/hello_second_fragment"
- Change the id to textview_header in the Attributes panel.
- Set the width to match_constraint, but set the height to wrap_content, so the height will change as needed to match the height of the content.
- Set top, left and right margins to 24dp. Left and right margins may also be referred to as "start" and "end" to support localization for right-to-left languages.
- Remove any bottom constraint.
- Set the text color to @color/colorPrimaryDark and the text size to 24sp.
- In strings.xml, change hello_second_fragment to "Here is a random number between 0 and %d."
- Use Refactor > Rename... to change the name of hello_second_fragment to random_heading.
Here is the XML code for the TextView that displays the heading:
<TextView android:id="@+id/textview_header" android:layout_width="0dp" android:layout_height="wrap_content" android:layout_marginStart="24dp" android:layout_marginTop="24dp" android:layout_marginEnd="24dp" android:text="@string/random_heading" android:textColor="@color/colorPrimaryDark" android:textSize="24sp" ... />
Step 3: Change the background color of the layout
Give your new fragment a different background color from the first:
- In colors.xml, add a new color resource:
<color name="screenBackground2">#26C6DA</color>
- In the layout for the second fragment, fragment_second.xml, set the background of the ConstraintLayout to the new color.
In the Attributes panel:
Or in XML:
android:background="@color/screenBackground2"
Your app now has a completed layout for the second fragment. But if you run your app and press the Random button, it may crash. The click handler that Android Studio set up for that button needs some changes. In the next task, you will explore and fix this error.
Step 4: Examine the navigation graph
When you created your project, you chose Basic Activity as the template for the new project. When Android Studio uses the Basic Activity template for a new project, it sets up two fragments, and a navigation graph to connect the two. It also sets up a button to go from the first fragment to the second. This is the button you changed into the Random button. And now you want to send a number when the button is pressed.
- Open nav_graph.xml (app > res > navigation > nav_graph.xml).
A screen similar to the Layout Editor in Design view appears. It shows the two fragments with some arrows between them. You can zoom with the + and - buttons in the lower right, as you did with the Layout Editor.
- You can freely move the elements in the navigation graph. For example, if the fragments appear with SecondFragment to the left, drag FirstFragment to the left of SecondFragment so they appear in the order you work with them.
Step 5: Enable SafeArgs
SafeArgs is a Gradle plugin that helps you pass data between destinations in a type-safe way. The next steps enable it in your project.
- Open Gradle Scripts > build.gradle (Project: My First App)
- Find the dependencies section in the buildscript section, and add the following lines after the other classpath entries:
def nav_version = "2.3.0-alpha02" classpath "androidx.navigation:navigation-safe-args-gradle-plugin:$nav_version"
- Open Gradle Scripts > build.gradle (Module: app)
- Just below the other lines that begin with apply plugin add a line to enable SafeArgs:
apply plugin: 'androidx.navigation.safeargs.kotlin'
- Android Studio should display a message about the Gradle files being changed. Click Sync Now on the right hand side.
After a few moments, Android Studio should display a message in the Sync tab that it was successful:
- Choose Build > Make Project. This should rebuild everything so that Android Studio can find
FirstFragmentDirections.
Step 6: Create the argument for the navigation action
- In the navigation graph, click on FirstFragment, and look at the Attributes panel to the right. (If the panel isn't showing, click on the vertical Attributes label to the right.)
- In the Actions section, notice the action for navigation, namely going to SecondFragment.
- Click on SecondFragment, and look at the Attributes panel. The Arguments section shows nothing.
- Click on the + in the Arguments section.
- In the Add Argument dialog, enter myArg for the name and set the type to Integer, then click the Add button.
Step 7: Send the count to the second fragment
The Next/Random button was set up by Android Studio to go from the first fragment to the second, but it doesn't send any information. In this step you'll change it to send a number for the current count. You will get the current count from the text view that displays it, and pass that to the second fragment.
- Open FirstFragment.kt (app > java > com.example.myfirstapp > FirstFragment).
- Find the method onViewCreated() and notice the code that sets up the click listener to go from the first fragment to the second.
- Replace the code in that click listener with a line to find the count text view, textview_first.
val showCountTextView = view.findViewById<TextView>(R.id.textview_first)
- Get the text of the view and convert it to an Int.
val currentCount = showCountTextView.text.toString().toInt()
- Create an action with currentCount as the argument to actionFirstFragmentToSecondFragment().
val action = FirstFragmentDirections.actionFirstFragmentToSecondFragment(currentCount)
- Add a line to find the nav controller and navigate with the action you created: findNavController().navigate(action)
- Run your app. Click the Count button a few times. Now when you press the Random button, the second screen shows the correct string in the header, but still no count or random number, because you need to write some code to do that.
Step 8: Update SecondFragment to compute and display a random number
You have written the code to send the current count to the second fragment. The next step is to add code to SecondFragment.kt to retrieve and use the current count.
- In SecondFragment.kt, add an import for navArgs to the list of imported libraries.
import androidx.navigation.fragment.navArgs
- In SecondFragment.kt, before onViewCreated(), add a line to define where the arguments are.
val args: SecondFragmentArgs by navArgs()
- In SecondFragment.kt, below where the click listener is created, add lines to get the count argument, get the string and format it with the count, and then set it for textview_header.
val count = args.myArg val countText = getString(R.string.random_heading, count) view.findViewById<TextView>(R.id.textview_header).text = countText
- Add code to get a random number between 0 and the count.
val random = java.util.Random() var randomNumber = 0 if (count > 0) { randomNumber = random.nextInt(count + 1) }
- Add code to convert that number into a string and set it as the text for textview_random.
view.findViewById<TextView>(R.id.textview_random).text = randomNumber.toString()
Here is the args property along with the whole onViewCreated() method.
val args: SecondFragmentArgs by navArgs() override fun onViewCreated(view: View, savedInstanceState: Bundle?) { super.onViewCreated(view, savedInstanceState) view.findViewById<Button>(R.id.button_second).setOnClickListener { findNavController().navigate(R.id.action_SecondFragment_to_FirstFragment) } val count = args.myArg val countText = getString(R.string.random_heading, count) view.findViewById<TextView>(R.id.textview_header).text = countText val random = java.util.Random() var randomNumber = 0 if (count > 0) { randomNumber = random.nextInt(count + 1) } view.findViewById<TextView>(R.id.textview_random).text = randomNumber.toString() }
- Run the app. Press the Count button a few times, then press the Random button. Does the app display a random number in the new activity?
Written tutorials
- Android Developer Fundamentals teaches programmers to build Android apps.
- Kotlin Bootcamp codelabs course is an introduction to Kotlin for programmers. You need experience with an object-oriented programming language (Java, C++, Python) to take this course.
- Find more at developer.android.com, the official Android developer documentation from Google.
Online courses
- Kotlin Bootcamp for Programmers: This is an introduction to Kotlin for programmers. You need some experience with an object oriented programming language (Java, C++, Python) to take this course.
These interactive, video-based courses were created by Google experts in collaboration with Udacity. Take these courses at your own pace in your own time.
- Developing Android Apps in Kotlin: If you know how to program, learn how to build Android apps. This course uses Kotlin.
Replace this line with your code.
Want help for 2/11
nickgrimes50 #2
What error are you experiencing...?
and
2/11 is not very descriptive...
please post a link to the exercise you are working on...
I am unable to write down the if statement
nickgrimes50 #4
alright please post a link to the exercise and I will be able to help you.
public class And {
public static void main(String[] args) {
if (1 < 4 && 0 > 5) {
System.out.println(true);
}
}
}
nickgrimes50 #6
you do not need to use an if statement here, just simply:
System.out.println(// conditional that evaluates to True);
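A complete version of that suggestion might look like this (the specific conditional is just an illustration; any expression that evaluates to true works):

```java
public class And {
    public static void main(String[] args) {
        // Both comparisons are true, so && evaluates to true
        System.out.println(1 < 4 && 0 < 5); // prints true
    }
}
```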
thank u so much.i solved the issue
nickgrimes50 #8
You're welcome, I'm glad I could help. | https://discuss.codecademy.com/t/want-help-for-2-11/40562 | CC-MAIN-2018-34 | refinedweb | 122 | 75.91 |
An array of arrays is known as a 2D array. The two-dimensional (2D) array in C programming is also known as a matrix. A matrix can be represented as a table of rows and columns. Before we discuss more about two-dimensional arrays, let's have a look at the following C program.
Simple Two dimensional(2D) Array Example
For now don't worry about how to initialize a two-dimensional array; we will discuss that part later. This program demonstrates how to store the elements entered by the user in a 2D array and how to display the elements of a two-dimensional array.
#include<stdio.h> int main(){ /* 2D array declaration*/ int disp[2][3]; /*Counter variables for the loop*/ int i, j; for(i=0; i<2; i++) { for(j=0;j<3;j++) { printf("Enter value for disp[%d][%d]:", i, j); scanf("%d", &disp[i][j]); } } //Displaying array elements printf("Two Dimensional array elements:\n"); for(i=0; i<2; i++) { for(j=0;j<3;j++) { printf("%d ", disp[i][j]); if(j==2){ printf("\n"); } } } return 0; }
Output:
Enter value for disp[0][0]:1 Enter value for disp[0][1]:2 Enter value for disp[0][2]:3 Enter value for disp[1][0]:4 Enter value for disp[1][1]:5 Enter value for disp[1][2]:6 Two Dimensional array elements: 1 2 3 4 5 6
Initialization of 2D Array
There are two ways to initialize a two Dimensional arrays during declaration.
int disp[2][4] = { {10, 11, 12, 13}, {14, 15, 16, 17} };
OR
int disp[2][4] = { 10, 11, 12, 13, 14, 15, 16, 17};
Although both the above declarations are valid, I recommend you to use the first method as it is more readable, because you can visualize the rows and columns of 2d array in this method.
Things that you must consider while initializing a 2D array
We already know that when we initialize a normal array (or you can say a one-dimensional array) during declaration, we need not specify its size. However, that's not the case with a 2D array; you must always specify the second dimension, even if you are specifying elements during the declaration.
How to store user input data into 2D array
We can calculate how many elements a two dimensional array can have by using this formula:
The array arr[n1][n2] can have n1*n2 elements. The array that we have in the example below is having the dimensions 5 and 4. These dimensions are known as subscripts. So this array has first subscript value as 5 and second subscript value as 4.
So the array abc[5][4] can have 5*4 = 20 elements.
To store the elements entered by the user we are using two for loops, one of which is nested. The outer loop runs from 0 to (first subscript - 1) and the inner for loop runs from 0 to (second subscript - 1). This way the order in which the user enters the elements would be abc[0][0], abc[0][1], abc[0][2]... and so on.
#include<stdio.h> int main(){ /* 2D array declaration*/ int abc[5][4]; /*Counter variables for the loop*/ int i, j; for(i=0; i<5; i++) { for(j=0;j<4;j++) { printf("Enter value for abc[%d][%d]:", i, j); scanf("%d", &abc[i][j]); } } return 0; }
In the above example, I have a 2D array abc of integer type. Conceptually you can visualize the above array like this:
However the actual representation of this array in memory would be something like this:
Pointers & 2D array
As we know, a one-dimensional array name works as a pointer to the base element (first element) of the array. However, in the case of 2D arrays the logic is slightly different. You can consider a 2D array as a collection of several one-dimensional arrays.
So abc[0] would have the address of the first element of the first row (if we consider the above diagram, number 1).
Similarly, abc[1] would have the address of the first element of the second row. To understand it better, let's write a C program –
#include <stdio.h> int main() { int abc[5][4] ={ {0,1,2,3}, {4,5,6,7}, {8,9,10,11}, {12,13,14,15}, {16,17,18,19} }; for (int i=0; i<=4; i++) { /* The correct way of displaying an address would be * printf("%p ",abc[i]); but for the demonstration * purpose I am displaying the address in int so that * you can relate the output with the diagram above that * shows how many bytes an int element uses and how they * are stored in contiguous memory locations. * */ printf("%d ",abc[i]); } return 0; }
Output:
1600101376 1600101392 1600101408 1600101424 1600101440
The actual address representation should be in hex for which we use %p instead of %d, as mentioned in the comments. This is just to show that the elements are stored in contiguous memory locations. You can relate the output with the diagram above to see that the difference between these addresses is actually number of bytes consumed by the elements of that row.
The addresses shown in the output belong to the first element of each row.
Create a Dev-C++ program where the user can insert fruits and their prices and print the list, using a two-dimensional array.
Help me with this please, I really don't know how to do it.
I need a C program to print the address of a particular element in a two-dimensional array.
How do I scan a 2D array in matrix form on the console?
like
enter elements
2 3 6
4 5 6
1 2 3
I need a program which stores a sentence in a 2D array. Can you help me with that?
12 January 2010 14:00 [Source: ICIS news]
By Nel Weddle
LONDON (ICIS news)--European cracker margins are starting the new year under pressure despite supply problems and healthy demand which has helped to strengthen spot prices, market sources said on Tuesday.
The higher ethylene price for January, which was up by €30/tonne ($43/tonne) at €870/tonne, and stronger co-product values were not enough to offset the surge in feedstock naphtha prices.
In the week to 8 January, contract margins were assessed at €146/tonne, down from the average €240/tonne seen in December, according to ICIS pricing margin analysis. Generally, €100/tonne is seen as the minimum level at which fixed costs are covered.
Since the ethylene contract settlement on 21 December, naphtha prices have risen to the mid $700s/tonne CIF (cost, insurance and freight) from the mid $600s/tonne, a gain of 12%, and few sources anticipate a softening of crude or naphtha in the near to medium term.
Several sellers agreed that sales volumes were good, particularly for spot volumes which had led to an improvement in spot margins, but contract margins were poor. Spot margins were pegged at €174/tonne, above contract margin for the first time since August 2008.
Because of weak cracker margins, sellers were already positioning for an increase for February contracts.
“There will be a very strong correction, otherwise what’s the point of producing,” said a major producer.
Others said that the direction “was quite clearly up” not only because of the need to improve margins but also to take into account the tighter supply and demand balance.
However, it was not clear whether most of the demand could be attributed to sellers looking to fulfil their contract obligations by searching for alternative volumes because of recent and ongoing production problems, and re-stocking, rather than it being a real improvement in structural demand.
No specific targets were divulged at this stage since contract negotiations were not expected to get under way for another couple of weeks.
2009 as a whole was disappointing for olefins producers since the average contract margin was the lowest seen throughout the whole decade, having sunk below the previous low in 2002.
That margins were already under pressure in 2010 posed a problem for producers facing a potentially difficult “year of two halves” because of the expected impact of new
($1 = €0.69) | http://www.icis.com/Articles/2010/01/12/9325064/europe-cracker-margins-start-2010-under-pressure.html | CC-MAIN-2014-35 | refinedweb | 403 | 51.92 |
Introduction: Writing the Code
The following information is a single lesson in a larger project.
Lesson Overview:
Now we'll write our crystal ball software!
Step 1: Import the LiquidCrystal Library
The first thing you will do is import the LiquidCrystal library. In the simulator, you can go to the Libraries tab in the Code Editor and press "include." You can also simply copy the code below.
You will also initialize the library, somewhat similar to the way you did with the Servo library, telling the Arduino which pins will communicate with the LCD.
Copy the code below into the Code Editor.
#include <LiquidCrystal.h>
LiquidCrystal lcd(12,11,5,4,3,2); Continue to the next step.
Step 2: Create Constants
Now that you’ve set up the library, it’s time to create some variables and constants.
Create a constant to define the tilt switch pin (switchPin) and a variable for the current state of the switch. You will also create a variable for the previous state of the switch, and one more variable to choose which reply the screen will show.
Copy the code below into the Code Editor. const int switchPin = 6; int switchState = 0; int prevSwitchState = 0; int reply;
Continue to the next step.
Step 3: Initialize the Switch
At the beginning of the setup() function, define the switchPin as an input.
Copy the code into the Code Editor. void setup(){ pinMode(switchPin, INPUT);
Continue to the next step.
Step 4: Print Your First Line
Next you will initialize the LCD by telling the Arduino how large the screen is (16 x 2 cells) and then printing the phrase "Ask the Crystal Ball!"
The lcd.print() function writes to the LCD screen. You’re going to write the words “Ask the” on the top line of the screen. The cursor is automatically at the beginning of the top line. In order to print "Crystal Ball" on the next line, you will need to move the cursor from the top left corner (0,0) to the cell below (0,1). Use the function lcd.setCursor() to move to the right coordinates.
Copy the code into the Code Editor. lcd.begin(16, 2); lcd.print("Ask the "); lcd.setCursor(0, 1); lcd.print("Crystal Ball!"); }
Now when you start the program, the LCD will say "Ask the Crystal Ball!"
This is the end of the setup() function.
Continue to the next step.
Step 5: Start the Loop()
In the loop(), you’re going to check the switch first, and put the value in the switchState variable.
Copy the code below into the Code Editor. void loop(){ switchState = digitalRead(switchPin);
Continue to the next step.
Step 6: Choose a Random Answer
If you haven't guessed already, the digital Crystal Ball doesn't really predict the future. It just chooses a response at random from a list!
Use an if() statement to determine if the switch is in a different position than it was previously - this indicates that you moved the tilt sensor.
If it has changed, and it is currently LOW, then it’s time to choose a random reply. The random() function returns a number based on the argument you provide to it (responses are defined in step 8).
Copy the code below into the Code Editor. if(switchState != prevSwitchState){ if(switchState == LOW){ reply = random(8); //pick a number from 0 to 7
To start, you’ll have a total number of 8 different responses for the ball. Whenever the statement random(8) is called, it will give a number between 0-7. Store that number in your reply variable.
The if() statement is not over! There is a lot more code to go in this section, including defining the possible replies.
Continue to the next step.
Step 7: Clear the Screen
Within the if() statement, clear the screen with the function lcd.clear() to get it ready for displaying a response.
This also moves the cursor back to location 0,0; the first column in the first row of the LCD. Print out the line “ The ball says:” and move the cursor to 0,1 for the output.
Copy the code below into the Code Editor. lcd.clear(); lcd.setCursor(0, 0); lcd.print("The ball says: "); lcd.setCursor(0,1 );
Continue to the next step.
Step 8: Predict the Future!
The switch() statement executes different pieces of code depending on the value you give it. Each of these different pieces of code is called a "case."
switch() takes the variable reply as an argument, which we previously gave a random value between 0 and 7. Whatever value reply holds will determine what response is displayed!
For example, if reply has a value of 2, then case 2 will be executed.
Copy the code below into the Code Editor. switch(reply){ case 0: lcd.print("Yes"); break; case 1: lcd.print("Most likely"); break; case 2: lcd.print("Certainly"); break; case 3: lcd.print("Outlook good"); break; case 4: lcd.print("Unsure"); break; case 5: lcd.print("Ask again"); break; case 6: lcd.print("Doubtful"); break; case 7: lcd.print("No"); break; } //end of switch } //end of nested if() } //end of outer if()
After each lcd.print() function, there’s another command: break. It tells the Arduino where the end of the case is. When the program hits break, it skips to the end of the switch statement, so it doesn't try to print more than one statement!
Continue to the next step.
Step 9: Update the Switch State
The last thing to do in your loop() is to assign switchState’s value to the variable prevSwitchState. This enables you to track changes in the switch the next time the loop() function runs.
Copy the code below into the Code Editor. //update the tilt switch status prevSwitchState = switchState; } //end of loop()
You're done writing the code! Now it is time to ask the Crystal Ball a question.
Continue to the next step.
Step 10: Use It!
Now you can Upload & Run the code in the simulator, or plug in and program your Arduino Uno.
- After uploading and running your code, check the screen to make sure it says Ask the Crystal Ball! If you can't see the characters, try turning the potentiometer. It will adjust the contrast of the screen.
- Ask a question of your crystal ball, and try tilting the switch upside down and back again. You should get an answer to your question on the LCD. If the answer doesn't suit you, ask again!
- Recall that in the circuit simulator, you can move the tilt sensor by highlighting it and dragging the slider back and forth. You will have to slide it in both directions to change the answer on the LCD.
- Continue to the next step.
- Stuck? HINT: Notice that we didn't need to write any code for the potentiometer! It operates directly as a voltage divider at pin V0 on the LCD to control the screen contrast.
Step 11: Customize Your Project.
To offer more responses, increase the argument to random() and add matching cases; for example, random(12) together with cases 8 to 11 gives you 12 responses to choose from at random.
- Continue to the next step.
Step 12: Think About It...
The functions covered here for changing the LCD screen’s text are fairly simple. Once you have a handle on how it works, look at some of the other functions the LiquidCrystal library has. Try getting text to scroll, or continually update. To find out more about how the LiquidCrystal library works, visit: arduino.cc/lcd
- Continue to the next step.
Step 13: Review
Congratulations on completing the Crystal Ball project! We hope that it always gives you positive responses.
In this project, you used a liquid crystal display (LCD) to show text on a screen. The LiquidCrystal library made it easy to control, although the LCD itself required a lot of wire connections with the Arduino.
You also used switch case statements to generate a list of possible outputs for the LCD. This can be used to define outputs for any kind of variable, not just one generated at random! For more information, visit arduino.cc/SwitchCase
Congratulations, you have completed this project!
Subject: Re: [boost] [outcome] On the design and documentation
From: Vicente J. Botet Escriba (vicente.botet_at_[hidden])
Date: 2017-05-27 06:38:13
Le 26/05/2017 à 23:23, Andrzej Krzemienski via Boost a écrit :
> 2017-05-26 9:03 GMT+02:00 Vicente J. Botet Escriba via Boost <
> boost_at_[hidden]>:
>
>> Le 26/05/2017 à 08:22, Thomas Heller a écrit :
>>
>>> On 05/25/2017 07:28 AM, Vicente J. Botet Escriba wrote:
>>>
>>>> Le 24/05/2017 à 21:44, Thomas Heller via Boost a écrit :
>>>>
>>>
>>>>>?
>>>>>
>>>> expected is a generalization of optional as it is synchronous and could
>>>> return more information about why the the value is not there.
>>>>
>>> Right, I can see this argument now, let me try to rephrase it a little
>>> bit (correct me if I got something wrong please):
>>> We want to be able to have a mechanism to return (or store) a value which
>>> might not be there, and we need to know why it is not there. The class
>>> that's currently available which almost does what we want is optional, in
>>> fact, it is already used in such situations, so what we miss is the
>>> possible error. So here we are, and then we naturally end up with something
>>> like variant<T, E>. Makes perfect sense.
>>>
>>> My line of thought was mostly influenced by the property of being solely
>>> used as a return object. And well, we already have the asynchronous return
>>> objects, so why not go with something a synchronous return object which
>>> represents a similar interface/semantics.
>>>
>>> With that being said, I am still not sure if the result of both ways to
>>> look at it should converged into the same result.
>>>
>>> expected could be seen as the ready storage of a future.
>>>> future::get block until the future is ready and then returns by
>>>> reference :)
>>>>
>>> Except not quite ;)
>>> expected indeed sounds like the perfect fit for the value stored in the
>>> shared state. The only problem here is that it really requires some kind of
>>> empty (or uninitialized) state, since the value is only computed some time in the
>>> future (this is motivation #1 for the proposed default constructed
>>> semantics); having either a default constructed T or E doesn't make sense
>>> in that scenario.
>>> So it is more like a variant<monostate, T, E>.
>>>
>> From a high level I would see it as optional<expected<T,exception_ptr>>. We
>> use optional because we have in addition the not-yet-ready state. Once the
>> future becomes ready we have expected<T,exception_ptr>.
>>
> `optional<expected<T,exception_ptr>>` is still not enough for the
> `future<T>`, because in `future` `T` is allowed to be an lvalue reference.
> :)
>
Right :) It cannot be neither std::variant<monostate, T, E>.
You are right: what expected<T&> could be needs to be defined. Not for
this precise future::then case, but for the use cases expected is
intended for.
Best,
Vicente
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2017/05/235324.php | CC-MAIN-2019-43 | refinedweb | 510 | 63.39 |
Did you notice how last time my length comparison on strings was unnecessarily verbose? I could have written it like this:
static int ByLength(string x, string y)
{
  if (x == null && y == null) return 0;
  if (x == null) return -1;
  if (y == null) return 1;
  return CompareInts(x.Length, y.Length);
}

static int CompareInts(int x, int y)
{
  // Positive if x is larger, negative if y is larger, zero if equal
  return x - y;
}

static Comparison<T> ThenBy<T>(this Comparison<T> firstBy, Comparison<T> thenBy)
{
  return (x, y) =>
  {
    int result = firstBy(x, y);
    return result != 0 ? result : thenBy(x, y);
  };
}
The moral of the story here is that a comparison function that doesn’t compare something is probably wrong. Subtraction is not comparison.
Sometimes I think that instead of returning an int, Compare functions should return an enumeration:
enum CompareType //Built in. With less sucky naming.
{
LeftLarger = 1,
Equal = 0,
LeftSmaller = -1
}
static CompareType CompareInts(int x, int y)
{
if (x > y) return CompareType.LeftLarger;
if (x < y) return CompareType.LeftSmaller;
return CompareType.Equal;
}
The meaning of returning an int to a comparison function strikes me as non-obvious, despite its popularity.
I would have used x.CompareTo(y)… It also shows the intent much better than x – y
But subtraction _is_ comparison, if you take the overflow and carry flags into account!
Brian, the value returned by the function could be used by the sorting function as an optimization hint.
(From memory I read some documentation about this, I think it was for Delphi, too long ago)
It goes something like this:
The further away from zero the returned value is, the more 'unequal' both items are.
An optimized sorting function could use this knowledge in an attempt to skip a few items.
Diff: A - B = 1
Diff: E - F = 1
Diff: A - E = 5
Therefore comparing A to F could be skipped since:
A and B are real close,
E and F are real close,
A and E are really far apart,
therefore A and F could never be close.
I had the same thought as Thomas. Why would you not use x.CompareTo(y)?
@Bas:
But then your spec for a comparison function would have to say that the value is not only of a certain sign, but also proportional to the difference.
The comparison functions most of us use are only spec'ed as returning positive, negative, or zero, with no importance given to the magnitude of the positive or negative return values. If your sorting function assumes something about the magnitude of those values without the spec requiring it, you are wandering into very dangerous territory.
@Austin, to steal something Eric has said before, if CompareTo did not exist, how would you write it? Because all of the implementations of CompareTo, somebody had to write those. And people are still writing comparison functions, and they need to write good ones. No doubt many if not all of the examples presented in this blog series are from bug reports, either internally at Microsoft or from customers.
Subtraction *is* a comparison for unsigned integers. For example, this is a widespread technique in embedded development.
Hmm, I'm not sure how meaningful all this is anymore – if you're defining custom comparers, that's generally because you've a specific usage in mind: one in which you generally will not deal with all possible values the type system permits. In fact, it's generally unavoidable to have errors for values that are legal by the type system but semantically illegal (though indeed a wrong value is generally a worse type of error than some form of exceptional termination).
So, as I see it, these comparison functions aren't so much invalid, they simply have a smaller domain than the type system indicates. The alternative – verifying that inputs *are* in range or using comparisons – is significantly slower; so it's not easy to dismiss out of hand.
And in general, it's not feasible for functions to always verify their input anyhow – so the focus on *these* border cases is nice, but hardly critical. Having undefined behavior when preconditions are violated is common in all kinds of software – probably almost all software.
I'd call these spec-bugs: the functions are in practice perhaps even superior to the "correct" versions – but their preconditions should be documented.
@Bas @Mwf: The optimization hint scenario is feasible, though compare does not promise any meaning on the int, so it is only practical under two situations:
1. The algorithm (e.g., sort) will still run successfully (and within any guarantees of speed it makes) even if the distance from 0 is arbitrary.
2. Only specialized libraries should use this type of optimization, since people will probably not be expecting this kind of optimization, and it may slow things down.
Anyhow, this kind of optimization strikes me as the kind of thing that should *NOT* be built into the framework. The people who need specialized comparisons can do some combination of using their own comparison libraries and just casting the enumeration to an int.
@Sergey: No, it isn't. The range of the result of a subtraction of values constrained only by type is always twice the range of the type, even if the type is unsigned, so wraparound is still problematic. Of course, as Eric alluded to with the size of strings argument, if the values are constrained to a subset of the values allowed by the type, it may work.
I've seen the x – y and -cmp idioms so many times — even in prominent examples of "well-written" code — that I've never thought to question them. Needless to say, I've grepped all my source code to check my comparisons now. Found one bug (-cmp to reverse), and one x – y which was okay because of restricted range, but I've added a comment to it now.
So thanks, Erik, for these posts! 🙂
@Bas: That "optimization" sounds pretty ridiculous. 🙂 I don't think you can safely glean information from any general comparison function. (Maybe you can make some assumptions if you're only sorting integers.) From your example, it does seem clear that A need never be compared to F, but for a different reason:
E - F = 1 (implies E > F)
A - E = 1 (implies A > E)
A > E > F… so A > F.
The origin is probably strcmp – strcmp did this in the original implementation in unix, and this is what the "may return any appropriate-sign value" contract was invented for. It was safe then, since what was being subtracted was chars which are smaller than ints, but it got it into people's heads that this is for some unknown reason a terribly clever thing to do (it avoids one or two branches, which was relevant on the PDP-11 but not so much today, and nevermind that strcmp requires three branches _per character_ until the point they are different)
@Bas Mommenhof: What kind of sorting algorithm could make use of this and be faster [including the extra work for the comparison function to decide if the inputs are "close" or not] than a standard sorting algorithm? Also, this is an extra contract on top of it, and doesn't fit with current usage [with traditional strcmp implementations, "aa" "ab" results in 1, "aaaa" "aaaz" results in 25.
I'd chalk this problem up not to (the use of) comparisons but to the implementation of numbers in programming languages like C#. Mathematically speaking it's rather strange to have several different, bounded, types to represent numbers – and these problems highlight the problems that come with them. After all, Int32.MinValue - Int32.MaxValue == 1, WTF??? Of course in the olden days it used to be the case that representation of numbers had to be constrained by a few bytes at most, because of the inherent speed and memory limitations we had back then. But I'd argue that the present use of Int32's (and other Int## types) are dinosaurs from those early days.
Progress has been made to eradicate many of the other early unwieldy/'dangerous' language structures (however useful they were back then – e.g. use of pointers, use of 0/1 for true/false, etc.) and it is high time we took another good look at the representation of numbers. I'm pretty sure there are programming languages out there that only have one notion of number, and do everything with it. (Well, maybe two – one discrete, one continuous.) Int32's and its friends may be retained for backwards compatibility and/or speed for programs that require them, but I say let's have a new all-encompassing Number type.
@Everyone: I am not actually advocating this.
Im only stating what I think I remember i have read a long time ago.
Brain asked where the int was comming from, this was a possibility.
I am not too happy about everyone disagreeing with it, because that could be an indication that my memory is failing.
I must be getting too old. (I still remember floppy disks.)
"I still remember floppy disks."
Oh for crying out loud. I just _used_ a floppy disk a couple of days ago. If simply remembering floppies makes you old, what does that make me? 🙂 (Granted, it was the "modern" 3.5" style…I admit it's been a few years since I've used a "floppy" where the outside was as floppy as the inside 🙂 ).
"…but I say let's have a new all-encompassing Number type"
Good idea! Count me in. 🙂 (Actually, don't some of the other "less-rigorous" languages have this sort of unbounded, arbitrarily precise numerical data types? You're always going to run into problems with precision for non-integral types, but I agree that otherwise we should not have to worry about overflow any more).
To those suggesting the use of arbitrary precision floating point as the default, do let me know what should be supplied when asking for pi.
I can see arguments for using bigint by default in a language, python does this very nicely I think, but the Real vs Rational issue will always be a problem
"To those suggesting the use of arbitrary precision floating point as the default"
Who is doing that? In fact, I specifically called out floating point formats (i.e. "non-integral types") as a case where it's not possible to have the data type arbitrarily unbounded.
Methinks you are failing the "give the benefit of the doubt" test. :p
Believe or not but I made an effort and read your entire blog.
And I want to thank you very much for your precious posts!
Please write a book. You have a gift for showing with crystal clearity
how to go from description to implementation.
By now I am completely bewildered.
Okay, if I need to write airport control software then C# on .NET is perfectly fine.
If I need to write plant control software then I guess it will pass also. Will it?
Maybe for spacecraft it's going to be good too…
So then what are C and C++ left for? Only OS components and real time games?
Was all my investment in learning assembly a mistake?
ohh God!
I think it's definitely a good point to point out the integer overflow potential in comparison functions. However, the mathematically-minded purist in me twitches when you say "Subtraction is not comparison". In arbitrary precision universes it is, especially if you use the System.Numerics.BigInteger types from the .NET framework, then use the .Sign property to translate it back into the 32-bit domain 🙂
In my mind, a good software engineer has a leg in two domains – the theoretical computer science domain, as well as in the pragmatic engineering domain. This is a good reminder article for the engineering aspect 🙂
Moral of the Story: Don't use an int as the return value of a function that is supposed to have 3 distinct different outcomes.
This spec of the compare function may have made sense in the 70's, when the microprocessor was the latest new thing, but one wonders how a thing like that could ever have ended up in .NET.
I agree with Ferdinand.
The return value of the comparison always just smelled wrong to me, as a remnant of the C++ days.
Agreed, it enables you to write "return x-y" code, but again, is a method that does not compare really a comparer?
What baffles me is that C# got all those things right, so why was this left shaky at best?
PufferPages is a lightweight but powerful Rails >= 3.1 CMS
Interface of PufferPages based on puffer
Key features
- Full Rails integration. PufferPages is part of Rails, so you can use page-related features directly in your Rails application
- Flexibility. Puffer is designed to be as flexible as possible, so you can create your own functionality easily.
- Layouts. You can use Rails layouts for pages and you can use pages as action layouts!
Installation
You can install puffer as a gem:
gem install puffer_pages
Or in Gemfile:
gem "puffer_pages"
Next step is:
rake puffer_pages:install:migrations
This will install the PufferPages config file in your initializers, some css/js, controllers and migrations
rake db:migrate
Next step is adding routes:
mount PufferPages::Engine => '/'
To start working with admin interface, you need to add some routes like:
namespace :admin do resources :pages resources :layouts resources :snippets resources :origins end
Introduction
PufferPages is a Radiant-like CMS, so it has layouts, snippets and pages. PufferPages uses Liquid as its template language.
Pages
Pages form the tree-based structure of the site. Every page has one or more page parts.
PageParts
Page_parts are analogous to content_for block content in Rails. You can insert the current page's page_parts into a layout.
Also, page_parts are inheritable. This means that if the root page has a page_part named
sidebar, all its children will have the same page_part until it is redefined.
Every page must have a main page part, named by default
body. You can configure the main page part name in config/initializers/puffer_pages.rb
Layouts
A layout is a page canvas, so you can draw page parts on it. You can use layouts from the database or Rails application layouts for pages.
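For example, a minimal database layout that draws two page parts of the current page might look like this (the part names sidebar and body are illustrative; body is the default main part):

```liquid
<html>
  <body>
    <div class="sidebar">{% include 'sidebar' %}</div>
    <div class="content">{% include 'body' %}</div>
  </body>
</html>
```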
Rails application layouts
For an application layout, the page_part body will be inserted instead of <%= yield %> (SUDDENLY!). For a yield with no params specified, puffer will use the page part with the default page_part name.
So the main page part is the action view and the others are partials. So easy.
Liquid
Variables
This variables accessible from every page:
- self - current page reference.
self is an instance of the page drop. View this to find the list of possible page drop methods
{{ self.name }}
include
include is the standard Liquid tag, using puffer's 'file_system' data model
for page_parts
Use the include tag to include the current page's page_parts:
{% include 'page_part_name' %}
for snippets
To include a snippet, use this path form:
{% include 'snippets/snippet_name' %}
Usage example:
{% include 'sidebar' %} # this will render 'sidebar' page_part {% assign navigation = 'snippets/navigation' %} {% include navigation %} # this will render 'navigation' snippet
stylesheets, javascripts
{% stylesheets path [, path, path ...] %}
Both tags' syntax is the same. The tags render Rails' stylesheet_link_tag or javascript_include_tag.
Usage example:
{% assign ctrl = 'controls' %} {% javascripts 'prototype', ctrl %} | https://www.rubydoc.info/gems/puffer_pages/0.5.1 | CC-MAIN-2022-33 | refinedweb | 426 | 55.44 |
Writing a custom stream is easy! Most people are now entirely comfortable using std::vector and std::list, and know the difference between a std::map and a std::set. However, the use and extension of the C++ standard library's streams is still considered difficult.
In this article I am going to look at writing a logging stream. A logging stream inserts the current date and time at the beginning of a buffer full of characters when it is flushed. The buffer is flushed to another stream which can modify the characters further or write them, for example, to the console (std::cout) or to a file (std::ofstream).
In section 13.13.3 of The C++ Standard Library [Josuttis] Nico Josuttis discusses how to write a custom stream in a fair amount of detail. Even though the book is widespread among developers, the section on streams does not appear to be widely read. Therefore in this article I am going to follow reasonably closely the line that Josuttis takes, but will cut out a lot of the unnecessary background which may scare the people who, wrongly, feel it must be read and understood before embarking on a custom stream. I will also discuss and resolve a potential initialisation problem not explored by Josuttis.
The heart of a stream is its buffer. Buffer is a misnomer as it does not have to buffer at all and can, if it so chooses, process the characters immediately.
Along with buffering, if required, the stream buffer does all the reading and writing of characters for the stream. The standard library provides std::basic_streambuf as a base class for stream buffers. Listing 1 shows a stream buffer that converts all the characters streamed to it to upper case and writes them with putchar:
#include <cctype>
#include <cstdio>
#include <streambuf>

template<class charT, class traits = std::char_traits<charT> >
class outbuf : public std::basic_streambuf<charT, traits>
{
protected:
    virtual typename traits::int_type overflow(typename traits::int_type c)
    {
        if (!traits::eq_int_type(c, traits::eof()))
        {
            c = std::toupper(c);
            if (std::putchar(c) == EOF)
            {
                return traits::eof();
            }
        }
        return traits::not_eof(c);
    }
};
Listing 1: Example stream buffer
The overflow member function of std::basic_streambuf is called for each character that is sent to the stream buffer. Overriding it allows the behaviour to be modified. The example in Listing 1 above performs the following for each character sent to overflow:
The character is tested to make sure it is not an indication of the end of a file or an error.
The character is converted to uppercase.
The character is written to standard out. If an error occurs while writing the character this is indicated by returning traits::eof().
An indication of whether or not the character represents the end of a file or an error is returned.
Traits are used throughout Listing 1 to ensure that EOF is detected and handled correctly. Streams can be used with any character type that has a corresponding set of character traits. A detailed knowledge of character traits is not required when using the built in character types char and wchar_t as their traits are already part of the standard library. Character traits are discussed in 14.1.2 of Josuttis.
The easiest way to use a stream buffer is to pass it to an output stream as shown in Listing 2 below:
#include <cctype>
#include <cstdio>
#include <ostream>
#include <streambuf>

template<class charT, class traits = std::char_traits<charT> >
class outbuf : public std::basic_streambuf<charT, traits>
{
protected:
    virtual typename traits::int_type overflow(typename traits::int_type c)
    {
        if (!traits::eq_int_type(c, traits::eof()))
        {
            c = std::toupper(c);
            if (std::putchar(c) == EOF)
            {
                return traits::eof();
            }
        }
        return traits::not_eof(c);
    }
};

int main()
{
    outbuf<char> ob;
    std::basic_ostream<char> out(&ob);
    out << "31 hexadecimal: " << std::hex << 31 << std::endl;
    return 0;
}
Listing 2: Passing a stream buffer to an output stream
The output from the example in Listing 2 is:
31 HEXADECIMAL: 1F
The example in Listing 2 demonstrates a working stream, but is not an ideal solution as the stream buffer must be declared separately from the stream itself. A common solution is to create a subclass of std::basic_ostream with the stream buffer as a member which can be passed to the std::basic_ostream constructor as shown in Listing 3:
template<class charT, class traits = std::char_traits<charT> >
class ostream : public std::basic_ostream<charT, traits>
{
private:
    outbuf<charT, traits> buf_;
public:
    ostream() : std::basic_ostream<charT, traits>(&buf_), buf_() {}
};
Listing 3: Subclass of std::basic_ostream
Having the stream buffer as a member introduces a potential initialisation problem. The solution to the problem introduces a further problem hidden deep within the C++ standard [Standard]. However, this second problem is also easily fixed.
If the stream buffer is dereferenced in std::basic_ostream's constructor or in its destructor, undefined behaviour can occur as the stream buffer will not have been initialised. At least one well known and widely used standard library implementation does nothing to avoid this and does not need to. Library implementers know their stream implementations and whether or not protection is needed. We, as stream extenders writing for potentially any number of different stream implementations, do not. There is no guarantee in the C++ standard to fall back on either.
Josuttis places the buffer before std::basic_ostream's constructor in the initialisation list, which makes no difference at all as stated in 12.6.2/5 of the C++ standard: member subobjects are destroyed in the reverse order of initialization.
The fact that the stream buffer is not initialised before it is passed to std::basic_ostream's constructor may not cause a problem with your compiler and library, but why risk it when there is a simple and straightforward solution? On the other hand, it may fail in a screaming fit immediately. Moving the stream buffer to a private base class which is initialised before std::basic_ostream solves the problem nicely. The initialisation order of base classes is specified as stated in 12.6.2/5 above. Listing 4 shows the base class which is used to initialise the stream buffer and how to use it with the output stream.
template<class charT, class traits = std::char_traits<charT> > struct outbuf_init { private: outbuf<charT, traits> buf_; public: outbuf<charT, traits>* buf() { return &buf_; } }; template<class charT, class traits = std::char_traits<charT> > class ostream : private outbuf_init<charT, traits>, public std::basic_ostream<charT, traits> { private: typedef outbuf_init<charT, traits> outbuf_init; public: ostream() : outbuf_init(), std::basic_ostream<charT, traits>(outbuf_init::buf()) {} };
Listing 4: Initialising the stream buffer
basic_ios is a virtual base class of basic_ostream. The C++ standard (27.4.4/2) describes its constructor as follows:
Effects: Constructs an object of class basic_ios (27.4.2.7) leaving its member objects uninitialized. The object must be initialized by calling its init member function. If it is destroyed before it has been initialized the behavior is undefined.
basic_ios::init is called from within basic_ostream's constructor. This is where things get complicated. As basic_ios is a virtual base class of basic_ostream, the objects which make up an ostream object are initialised in the following order (see 12.6.2/5):
... basic_ios outbuf outbuf_init basic_ostream ostream
Therefore the constructors of basic_ios and outbuf are both called before the constructor of basic_ostream and therefore before basic_ios::init is called. This means that if the outbuf constructor throws an exception, basic_ios's destructor will be called before basic_ios::init; resulting in the undefined behaviour described in 27.4.4/2.
The answer to this problem is contained within 12.6.2/5 and is very simple. Making ostream inherit virtually, as well as privately, from outbuf_init causes it to be constructed before anything else:
template<class charT, class traits = std::char_traits<charT> > class ostream : private virtual outbuf_init<charT, traits>, public std::basic_ostream<charT, traits> { private: typedef outbuf_init<charT, traits> outbuf_init; public: ostream() : outbuf_init(), std::basic_ostream<charT, traits>(outbuf_init::buf()) {} };
The initialisation order then becomes:
outbuf outbuf_init ... basic_ios basic_ostream ostream
Now, if output_buf does throw an exception there is no undefined behaviour as the basic_ios has not yet been created.
ostream can be made easier to use by introducing a couple of simple typedefs for common character types:
typedef ostream<char> costream; typedef ostream<wchar_t> wostream; int main() { costream out; out << "31 HEXADECIMAL: " << std::hex << 31 << std::endl; return 0; }
Listing 5: Typedefs for using ostream
That completes the implementation for the simplest possible custom stream.
The previous example of a stream buffer was very basic, potentially inefficient and didn't actually buffer the characters streamed to it. The logging stream mentioned at the start of this article requires the characters to be buffered. When the buffer is flushed the time and date are prepended before it is passed on to the next stream.
Josuttis also has an example of a buffered stream buffer. However, his example uses a fixed array for a buffer that gets flushed when it is full. The logging stream should only flush the buffer when instructed to do so, with a std::endl or a call to flush. To accomplish this, the fixed array can be replaced with a std::vector.
As already mentioned the logging stream simply buffers the characters streamed to it and passes them on to another stream, preceded by a time and date, when flushed. Therefore the stream buffer must contain some form of reference to the other stream.
Listing 6 shows a basic implementation for the logging stream buffer. A std::vector based buffer has been introduced and overflow modified to check for EOF before inserting its character into the buffer.
#include <streambuf> #include <vector> template<class charT, class traits = std::char_traits<charT> > class logoutbuf : public std::basic_streambuf<charT, traits> { private: typedef typename std::basic_streambuf<charT, traits>::int_type int_type; typedef std::vector<charT> buffer_type; buffer_type buffer_; virtual int_type overflow(int_type c) { if(!traits::eq_int_type(c, traits::eof())) { buffer_.push_back(c); } return traits::not_eof(c); } };
Listing 6: Basic implementation of logging stream buffer
As it stands the stream buffer in Listing 6 only buffers characters. It never flushes them. A pointer to an output stream buffer, that the characters can be flushed to, is required. The initialisation and undefined behaviour fixes described in the previous section have the side effect that logoutbuf will be a member of a virtual base class and therefore should have a default constructor. A virtual base class constructor must be called explicitly or implicitly from the constructor of the most derived class (12.6.2/6). A default constructor eliminates the need for explicit constructor calling. This in turn means that a reference to an output stream cannot be passed in through the constructor and therefore a pointer to the output stream buffer must be stored instead and initialised by way of an initialisation function. This is not ideal, but a trade-off to guarantee safety elsewhere. The initialisation function is also in keeping with the buffer initialisation in basic_ios.
template<class charT, class traits = std::char_traits<charT> > class logoutbuf : public std::basic_streambuf<charT, traits> { private: typedef typename std::basic_streambuf<charT, traits>::int_type int_type; typedef std::vector<charT> buffer_type; std::basic_streambuf<charT, traits>* out_; buffer_type buffer_; public: logoutbuf() : out_(0), buffer_() {} void init(std::basic_ostream<charT, traits>* out) { out_ = out; } ... };
Listing 7: Initialising the output stream buffer
Listing 7 shows the logoutbuf stream buffer with the output stream buffer pointer and initialisation function. A constructor has also been added to make sure that the output stream buffer pointer is initialised to 0, so that it can be reliably checked before characters are sent to it.
When basic_ostream::flush is called, either directly or via std::endl, it starts a chain of function calls that finally results in basic_streambuf::sync being called. This is where the buffer should be flushed. The buffer should also be flushed when a logoutbuf object is destroyed, so sync should also be called from the logoutbuf destructor.
template<class charT, class traits = std::char_traits<charT> > class logoutbuf : public std::basic_streambuf<charT, traits> { ... public: ... ~logoutbuf() { sync(); } ... private: ... virtual int sync() { if(!buffer_.empty() && out_) { out_->sputn(&buffer_[0], static_cast<std::streamsize> (buffer_.size())); buffer_.clear(); } return 0; } };
Listing 8: Synchronising the buffer
Listing 8 shows the implementation of the sync function. It checks the buffer to make sure there is something in it to flush and then checks the output stream buffer pointer to make sure the pointer is valid. The contents of the buffer are then sent to the output stream buffer, via its sputn function, and then cleared.
basic_streambuf's sputn function takes an array of characters as its first parameter and the number of characters in the array as its second parameter. std::vector stores its elements contiguously in memory, like an array, so the address of the first element in the buffer can be passed as sputn's first parameter. std::vector's size function is used to determine the number of elements in the buffer and can therefore be used as sputn's second parameter. The type of sputn's second argument is the implementation defined typedef std::streamsize. As the return type of std::vector::size is also implementation defined (and not necessarily the same type), sputn's second parameter must be cast to avoid warnings from compilers such as Microsoft Visual C++. There is a possibility that the number of characters stored in the buffer will be greater than std::streamsize can hold, but this is highly unlikely.
logoutbuf is now a fully functioning, buffered output stream buffer and can be plugged into a basic_ostream object and tested.
... int main() { logoutbuf<char> ob; ob.init(std::cout.rdbuf()); // Flush to std::cout std::basic_ostream<char> out(&ob); out << "31 hexadecimal: " << std::hex << 31 << std::endl; return 0; }
Listing 9: Using logoutbuf
Listing 9 creates a logoutbuf object, sets std::cout's stream buffer as its output stream buffer and then passes it to a basic_ostream object, which then has character streamed to it. The output from the example in Listing 9 is:
31 hexadecimal: 1f
The next step is to generate the time and date that will be flushed to the output stream buffer prior to the contents of the logoutbuf buffer. The different ways of generating a date and time string are beyond the scope of this article so I am providing the following implementation, which will handle both char and wchar_t character types, without any explanation beyond the comments in the code:
#include <streambuf> #include <vector> #include <ctime> #include <string> #include <sstream> ... template<class charT, class traits = std::char_traits<charT> > class logoutbuf : public std::basic_streambuf<charT, traits> { ... private: std::basic_string<charT, traits> format_time() { // Get current time and date time_t ltime; time(<ime); // Convert time and date to string std::basic_stringstream<charT, traits> time; time << asctime(gmtime(<ime)); // Remove LF from time date string and // add separator std::basic_stringstream<char_type> result; result << time.str().erase( time.str().length() - 1) << " - "; return result.str(); } ... virtual int sync() { if(!buffer_.empty() && out_) { const std::basic_string<charT, traits> time = format_time(); out_->sputn(time.c_str(), static_cast<std::streamsize> (time.length())); out_->sputn(&buffer_[0], static_cast<std::streamsize> (buffer_.size())); buffer_.clear(); } return 0; } ... };
Listing 10: Adding date and time
The sync function in Listing 10 now sends a date and time string (plus the separator) to the output stream buffer before flushing the logoutbuf buffer. The result of running the example from Listing 9 is now:
Fri Apr 20 16:00:00 2005 - 31 hexadecimal: 1f
logoutbuf is now fully functional, but there is a further modification that can be made for the sake of efficiency. Currently overflow is called for every single character streamed to the stream buffer. This means that to stream the 31 hexadecimal: string literal to the stream buffer involves 16 separate function calls. This can be reduced to a single function call by overriding xsputn.
... #include <algorithm> template<class charT, class traits = std::char_traits<charT> > class logoutbuf : public std::basic_streambuf<charT, traits> { ... private: ... virtual std::streamsize xsputn(const char_type* s, std::streamsize num) { std::copy(s, s + num, std::back_inserter<buffer_type>(buffer_)); return num; } ... };
Listing 11: Overriding xsputn
xsputn takes the same parameters as basic_streambuf::sputn and uses the std::copy algorithm together with std::back_inserter to insert the characters from the array into the buffer. logoutbuf is now complete.
logoutbuf does of course require its own logoutbuf_init class and basic_ostream subclass, with a few modifications:
template<class charT, class traits = std::char_traits<charT> > class logoutbuf_init { private: logoutbuf<charT, traits> buf_; public: logoutbuf<charT, traits>* buf() { return &buf_; } }; template<class charT, class traits = std::char_traits<charT> > class logostream : private virtual logoutbuf_init<charT, traits>, public std::basic_ostream<charT, traits> { private: typedef logoutbuf_init<charT, traits> logoutbuf_init; public: logostream(std::basic_ostream<charT, traits>& out) : logoutbuf_init(), std::basic_ostream<charT, traits>(logoutbuf_init::buf()) { logoutbuf_init::buf()->init(out.rdbuf()); } }; typedef logostream<char> clogostream; typedef logostream<wchar_t> wlogostream;
Listing 12: logoutbuf_init class and basic_ostream subclass
The logoutbuf_init class is actually the same as the one form the previous section; it's the logostream that is slightly different. The constructor takes a single parameter which is the output stream and its body passes its stream buffer to logoutbuf via init (suddenly the trade off doesn't seem so bad).
The final test example is shown in Listing 13:
... int main() { costream out(std::cout); out << "31 hexadecimal: " << std::hex << 31 << std::endl; return 0; }
Listing 13: Using the stream
The stream buffer is clearly the heart of an output stream. The potential for a stream buffer being accessed before it is initialised is easily avoided, as is the possibility of undefined behaviour, with the minimal of tradeoffs.
The buffering of characters streamed to a stream buffer is easily handled by a std::vector with no need for extra memory handling. Multiple characters can be added to a std::vector just as easily as single characters and the contiguous memory elements make it easy to flush to an output stream.
Writing a custom stream is easy! I believe this article shows just how easy it is, even with a minimum of background knowledge. | https://accu.org/index.php/journals/264 | CC-MAIN-2017-39 | refinedweb | 2,904 | 51.07 |
Oracle portal html template file size limitationspekerjaan
Want u to do oracle db clustering so servers can be run in active passive mode and cluster id can be generated.
I take xml from 1 website. But that sizes for retail. But i will sell wholesale. So can you change this ? Example: 38 size -2 40 size-3 42 size -5 44 size-0 It has to write 38-40-42 2 serial.
.., need creative
.....
...7 ECA – July Semester 2018 (d) Identify possible limitations of your findings. For example, are they limited to a certain city or country? Are you making assumptions about the data which may, or may not, be valid? Present your work for all the tasks (a)-(d) in your report using the provided template (Appendix 3). Provide screenshots of the relevant
Need someone to do show me how we can migrate oracle to cosmo db .
Translate 50 Ads and 75 keywords in 2 campaigns from english to french. Preferably you work directly on the Google Ads panel for proper translations complying with character limitations.
..] $.)
import search filter large json file . search edit export data in many ways xls pdf etc.. will provide sample of small file and example data inside it once we discuss. winner of this project who place best bid . thanks
we have implementation for oracle procurement and sourcing we need to change it look and feel
Oracle HCM Cloud implementation. Need experts who has functional knowledge in the area and explore new functionalities within the system.]
...research 3-the problem, the solution they are proposing including the proof of concept and what is/are the problem/es in the proposed research. 4. Extract the problems (limitations) out of each paper. (Each paper must be review in one paragraph) 2 The proposed framework must be (novel solution) 3. Validation and implementation for the proposed framework
traning , teaching .should teach oracle 11,12,10 and should help me certified oracle 12c .practicals is important .' functio...
...freelancers para realizar el Electr&oa... | https://www.my.freelancer.com/job-search/oracle-portal-html-template-file-size-limitations/ | CC-MAIN-2018-43 | refinedweb | 335 | 67.25 |
Line Attributes class.
This class is used (in general by secondary inheritance) by many other classes (graphics, histograms). It holds all the line attributes.
Line attributes are:
The line color is a color index (integer) pointing in the ROOT color table. The line color of any class inheriting from
TAttLine can be changed using the method
SetLineColor and retrieved using the method
GetLineColor. The following table shows the first 50 default colors.
SetLineColorAlpha(), allows to set a transparent color. In the following example the line line width is expressed in pixel units. The line width of any class inheriting from
TAttLine can be changed using the method
SetLineWidth and retrieved using the method
GetLineWidth. The following picture shows the line widths from 1 to 10 pixels.
Line styles are identified via integer numbers. The line style of any class inheriting from
TAttLine can be changed using the method
SetLineStyle and retrieved using the method
GetLineStyle.
The first 10 line styles are predefined as shown on the following picture:
Some line styles can be accessed via the following enum:
Additional line styles can be defined using
TStyle::SetLineStyleString. For example the line style number 11 can be defined as follow:
Existing line styles (1 to 10) can be redefined using the same method.
Definition at line 18 of file TAttLine.h.
#include <TAttLine.h>
AttLine default constructor.
Definition at line 138 of file TAttLine.cxx.
AttLine normal constructor.
Line attributes are taking from the argument list
Definition at line 155 of file TAttLine.cxx.
AttLine destructor.
Definition at line 165 of file TAttLine.cxx.
Copy this line attributes to a new TAttLine.
Definition at line 172 of file TAttLine.cxx.
Compute distance from point px,py to a line.
Compute the closest distance of approach from point px,py to this line. The distance is computed in pixels units.
Algorithm:
Definition at line 206 of file TAttLine.cxx.
Change current line attributes if necessary.
Definition at line 242 of file TAttLine.cxx.
Reset this line attributes to default values.
Reimplemented in TGWin32VirtualXProxy.
Definition at line 260 of file TAttLine.cxx.
Save line attributes as C++ statement(s) on output stream out.
Definition at line 270 of file TAttLine.cxx.
Invoke the DialogCanvas Line attributes.
Reimplemented in TGWin32VirtualXProxy.
Definition at line 290 of file TAttLine.cxx.
Set a transparent line color.
lalpha defines the percentage of the color opacity from 0. (fully transparent) to 1. (fully opaque).
Definition at line 299 of file TAttLine.cxx.
Line color.
Definition at line 27 of file TAttLine.h.
Line style.
Definition at line 28 of file TAttLine.h.
Line width.
Definition at line 29 of file TAttLine.h. | https://root.cern/doc/master/classTAttLine.html | CC-MAIN-2020-29 | refinedweb | 443 | 61.43 |
30 August 2013 16:24 [Source: ICIS news]
HOUSTON (ICIS)--The ?xml:namespace>
Among the three producers that account for about 85% of the market, one producer is at 40 cents/lb ($882/tonne, €670/tonne) and two producers are at 44 cents/lb. A fourth producer, which accounts for about 15% of the market, is at 47 cents/lb.
Based on those prices, the monthly contract range among the three biggest producers is 40-44 cents/lb, compared with 40-46 cents/lb in August.
The last time the monthly BD contract was in the 40 cents/lb range was July 2009, when it was 45 cents/lb. The lowest monthly contract price in the past five years was 25 cents/lb in March and April 2009.
The US BD monthly contract price began the year at 76 cents/lb, rose to 84 cents/lb in March and April, and has spiraled downward ever since. At the start of the year, sources had expected US BD to rise through the year and eventually reach more than $1/lb. But demand for replacement tyres, which account for about 80% of global tyre sales, has dwindled, driving down the price of BD and the other key tyre raw material, styrene-butadiene-rubber. In July, Goodyear reported that it sold 1.6m fewer tyres in the first six months of 2013 than the year ago period. Most market participants don't expect the replacement-tyre market to recover before the second quarter of 2014. | http://www.icis.com/Articles/2013/08/30/9702073/us-sept-bd-contract-settles-at-split-again-range-40-44-centslb.html | CC-MAIN-2014-42 | refinedweb | 252 | 77.47 |
Hi,I'm writting some console program using ncurses for interface. In this program I want to implement small subshell using readline for command prompt. I'm switching ncurses to shell mode and call the readline() function. After this I return ncurses to program mode. But after I called the readline() function I can't get correct terminal size through the LINES and COLS variables. The terminal's size before readline() is stored in this variables and this values isn't changes anyway after readline() function called.
Is it mine programming error or it is bug of ncurses or readline? I attached small sample program to this letter. Thanks, -- With best regards, Sergey I. Sharybin
#include <ncursesw/ncurses.h> #include <readline/readline.h> #include <malloc.h> #include <signal.h> static int subshell = 0; static void sig_winch (int sig) { if (subshell) { return; } endwin (); refresh (); printf ("%d %d\n", LINES, COLS); fflush (stdout); } int main (int __argc, char **__argv) { struct timespec timestruc; timestruc.tv_sec = 0; timestruc.tv_nsec = 0.2 * 1000 * 1000; initscr (); signal (SIGWINCH, sig_winch); cbreak (); /* take input chars one at a time, no wait for \n */ noecho (); /* don't echo input */ nodelay (stdscr, TRUE); start_color (); for (;;) { wchar_t ch; wget_wch (stdscr, &ch); if (ch == 'q') { break; } else if (ch == 's') { char *buf; subshell = 1; def_prog_mode (); nodelay (stdscr, FALSE); endwin (); buf = readline (">>> "); if (buf) { free (buf); } refresh (); nodelay (stdscr, TRUE); subshell = 0; } nanosleep (×truc, 0); } endwin (); return 0; } | http://lists.gnu.org/archive/html/bug-ncurses/2009-03/msg00050.html | CC-MAIN-2018-05 | refinedweb | 235 | 74.59 |
I am quite new in python programming. I have written a simple python that I can’t run. For an unknown reason, it throws me back an error “cannot concatenate 'str' and 'int' objects”
Here is my sample snippet:
n = raw_input("Enter n: ")
m = raw_input("Enter m: ")
print "n + m as strings: " + n + m
n = int(n)
m = int(m)
c = m + n
str(c)
print "n + m as integers: " + c
Any solution?
You have two ways to solve it,
Either
c = str(c)
Or
You can put a comma (,) in front of the variable c in your print line
n = raw_input("Enter n: ")
m = raw_input("Enter m: ")
print "n + m as strings: " + n + m
n = int(n)
m = int(m)
c = m + n
str(c)
print "n + m as integers: ", c
This program will produce the output:
Enter n: 3
Enter m: 7
n + m as strings: 37
n + m as integers: 10
Thanks
There are two methods to solve the problem which is caused by the last print statement.
print
You can assign the result of the str(c) call to c as rightly shown by @jamylak and then close all of the strings, or you can replace the last print easily with this:
str(c)
c
print "a + b as integers: ", c # note the comma here
in which instance
isn't required and can be deleted.
Output of sample run:
Enter a: 3
Enter b: 7
a + b as strings: 37
a + b as integers: 10
a = raw_input("Enter a: ")
b = raw_input("Enter b: ")
print "a + b as strings: " + a + b # + everywhere is ok since all are strings
a = int(a)
b = int(b)
c = a + b
print "a + b as integers: ", c
Therefore, str(c) returns a new string representation of c, and does not mutate c itself.
Therefore, str(c)
In case you need to close int or floats to a string you should use this:
i = 123
a = "foobar"
s = a + str(i)
c = a + b
str(c)
Really, in this last line you are not altering the type of the variable c. In case you do
c_str=str(c)
print "a + b as integers: " + c_str
The issue here is that the + operator has (at least) two different signification in Python: for numeric types, it imply "add the numbers together":
+
>>> 1 + 2
3
>>> 3.4 + 5.6
9.0
.. and for sequence types, it imply "concatenate the sequences":
>>> [1, 2, 3] + [4, 5, 6]
[1, 2, 3, 4, 5, 6]
>>> 'abc' + 'def'
'abcdef'
As a rule, Python doesn't implicitly alter objects from one type to another1 in order to create operations "make sense", since that would be confusing: for example, you might conceive that '3' + 5 must mean '35', however someone else might think it must mean 8 or even '8'.
'3' + 5
'35'
8
'8'
Likewise, Python won't let you concatenate two different types of sequence:
>>> [7, 8, 9] + 'ghi'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can only concatenate list (not "str") to list
Since of this, you require to do the conversion apparently, whether what you want is concatenation or addition:
>>> 'Total: ' + str(123)
'Total: 123'
>>> int('456') + 789
1245
But, there is a better method. Relying on which version of Python you exercise, there are three different kinds of string formatting available2, which not only approve you to avoid multiple + operations:
>>> things = 5
>>> 'You have %d things.' % things # % interpolation
'You have 5 things.'
>>> 'You have {} things.'.format(things) # str.format()
'You have 5 things.'
>>> f'You have {things} things.' # f-string (since Python 3.6)
'You have 5 things.'
However also approve you to control how values are displayed:
>>> value = 5
>>> sq_root = value ** 0.5
>>> sq_root
2.23606797749979
>>> 'The square root of %d is %.2f (roughly).' % (value, sq_root)
'The square root of 5 is 2.24 (roughly).'
>>> 'The square root of {v} is {sr:.2f} (roughly).'.format(v=value, sr=sq_root)
'The square root of 5 is 2.24 (roughly).'
>>> f'The square root of {value} is {sq_root:.2f} (roughly).'
'The square root of 5 is 2.24 (roughly).'
If you employ % interpolation, str.format(), or f-strings is up to you: % interpolation has been around the longest , str.format() is mostly more powerful, and f-strings are more powerful still (however available only in Python 3.6 and later).
str.format()
Another alternative is to exercise the fact that in case you give print multiple positional arguments, it will link their string illustrations together employing the sep keyword argument (which defaults to ' '):
sep
' '
>>> things = 5
>>> print('you have', things, 'things.')
you have 5 things.
>>> print('you have', things, 'things.', sep=' ... ')
you have ... 5 ... things.
Employ the str function in order to convert "currency" to a string
def shop():
print "Hello " + name + ", what would you like? You have $" + str(currency)
Your value in row[0] is an integer, however you are attempting to combine (concatenate) it with a string (text). You can solve this by Mortally casting row[0] as a string employing the built in str() method.
Attempt this:
arcpy.MakeFeatureLayer_mangement(Fitting, "fcLyr", "OBJECTID = "+ str(row[0]))
Or you could use the format method to insert the integer directly into the query string:
arcpy.MakeFeatureLayer_management(Fitting, "fcLyr", "OBJECTID = {}".format(row[0])) | https://kodlogs.com/37634/cannot-concatenate-str-and-int-objects | CC-MAIN-2021-21 | refinedweb | 879 | 66.78 |
Rittman Mead is excited to announce the first ever global OBIEE usage survey. Our goal is to provide deep analysis into how organisations use their OBIEE systems and to create a set of industry benchmarks. We will be releasing the results of this research for free to the OBIEE community.
We are looking at a range of different types of usage metric.
Here’s how it works: we need your Usage Tracking data. To make providing this data easier we can send you a script for your system administrator to run to extract this. We even have an option to obfuscate key attributes so we can’t see any usernames or sensitive details.
Once we receive your data, we will analyse it individually and provide you with a free report designed to give you unique insight into your system’s usage, an example of which is available here.
We will also add your obfuscated, depersonalised and aggregated data to our benchmarking database and let you know how your system performs against industry standards.
Please note: you do have to be running the usage tracking feature of OBIEE for this to work. We strongly advise having this running in any system and can help you get it turned on, if required. Also any data you send to Rittman Mead is completely anonymous and holds no personal or sensitive attributes. It will only be used for benchmarking.
At the end of the survey we will perform a detailed analysis of the complete OBIEE usage database and publish the results.
Please email us at ux@rittmanmead.com and we will send over the scripts and full instructions.
We are currently focused on user engagement of BI and analytics systems and have been conducting research over the last few months. We have found very few tangible studies about enterprise BI usage, in particular OBIEE usage.
We are creating this database from OBIEE users around the world and will use this as the academic basis for furthering our research into user engagement and OBIEE.
For the past few months a friend has been driving me crazy with all his praise for Splunk. He was going on about how easy it is to install, integrate different sources and build reports. I eventually started playing around to see if it could be used for a personal project I’m working on. In no time at all I understood what he was on about and I could see the value and ease of use of the product. Unfortunately the price of such a product means it is not a solution for everyone so I started looking around for alternatives and ElasticSearch caught my eye as a good option.
In this post we will focus on how we can stream Twitter data into Elasticsearch and explore the different options for doing so. Storing data in Elasticsearch is just the first step; you only gain real value when you start analysing this data. In the next post we will add sentiment analysis to our Twitter messages and see how we can analyse this data by building Kibana dashboards. For now we will dig a bit deeper into three different configuration options.
We will look at the installation and configuration of each of these options and see how we can subscribe to Twitter using the Twitter API. Data will then get processed, if required, and sent to Elasticsearch.
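The "processed, if required" step usually means flattening the raw tweet JSON down to the handful of fields worth indexing. Here is a minimal sketch; the field names (`created_at`, `text`, `user.screen_name`, `entities.hashtags`) follow the standard Streaming API payload, and exactly which fields you keep is up to you:

```python
import json

def flatten_tweet(raw):
    """Reduce a raw Streaming API message to the fields we want to index.

    Field names follow the standard tweet payload; adjust them to suit
    your own index mapping.
    """
    tweet = json.loads(raw)
    return {
        "created_at": tweet.get("created_at"),
        "text": tweet.get("text"),
        "user": tweet.get("user", {}).get("screen_name"),
        "hashtags": [h["text"] for h in
                     tweet.get("entities", {}).get("hashtags", [])],
    }

# A cut-down example message, not a real capture
sample = json.dumps({
    "created_at": "Fri Apr 24 09:15:00 +0000 2015",
    "text": "Streaming #obiee data into Elasticsearch",
    "user": {"screen_name": "rittmanmead"},
    "entities": {"hashtags": [{"text": "obiee"}]},
})
print(flatten_tweet(sample))
```

If you store the full payload instead (as the `full_tweet => true` setting does in the logstash configuration later in this post), no flattening is needed at all.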
Elasticsearch can store large quantities of semi-structured (JSON) data and makes it quick and easy to query that data. This makes it a good option for storing Twitter data, which is delivered as JSON, and a perfect candidate for the project I’m working on.
You will need a server to host all the required components. I used an AWS free tier (t2.micro) instance running Amazon Linux 64-bit. This post assumes you already have an elasticsearch cluster up and running and that you have a basic understanding of elasticsearch. There are some good blog posts, written by Robin Moffatt, which were very useful during the installation and configuration stages.
In order to access the Twitter Streaming API, you need to register an application with Twitter. Once created, you should be redirected to your app’s page, where you can get the consumer key and consumer secret and create an access token under the “Keys and Access Tokens” tab. These values will be used as part of the configuration for all the sample configurations to follow.
The API allows two types of subscriptions. Either subscribe to specific keywords or to a user timeline (similar to what you see as a twitter user).
We'll start with logstash as this is probably the easiest one to configure and seems to be the recommended approach for integrating sources with elasticsearch in recent versions. At the time of writing this post, logstash only supported streaming based on keywords which meant it was not suitable for my needs but it’s still a very useful option to cover.
To install logstash you need to download the correct archive based on the version of elasticsearch you are running.
curl -O
Extract the archived file and move the extracted folder to a location of your choice
tar zxf logstash-x.x.x.tar.gz
mv logstash-x.x.x /usr/share/logstash/
To configure logstash we need to provide input, output and filter elements. For our example we will only specify input (twitter) and output (elasticsearch) elements as we will be storing the full twitter message.
For a full list of logstash twitter input settings see the official documentation.
Using your favourite text editor, create a file called twitter_logstash.conf and copy the text below. Update the consumer_key, consumer_secret, oauth_token and oauth_token_secret values with the values from your Twitter Stream App created earlier.
input {
  twitter {
    # add your data
    consumer_key => "CONSUMER_KEY_GOES_HERE"
    consumer_secret => "CONSUMER_SECRET_GOES_HERE"
    oauth_token => "ACCESS_TOKEN_GOES_HERE"
    oauth_token_secret => "ACCESS_TOKEN_SECRET_GOES_HERE"
    keywords => ["obiee","oracle"]
    full_tweet => true
  }
}

output {
  elasticsearch_http {
    host => "localhost"
    index => "idx_ls"
    index_type => "tweet_ls"
  }
}
This configuration will receive all tweets tagged with obiee or oracle and store them to an index called idx_ls in elasticsearch.
To run logstash, execute the following command from the installed location
bin/logstash -f twitter_logstash.conf
If you subscribed to active twitter tags you should see data within a few seconds. To confirm that your data is flowing you can query Elasticsearch's index listing, which will show you a list of indices with some relevant information.
With this easy configuration you can get Twitter data flowing in no time at all.
Next we will look at using the River Plugins to stream Twitter data. The only reason to use this approach over logstash is if you want to subscribe to a user timeline. Using this feature will show the same information as the Twitter application or viewing your timeline online.
Note!! Twitter River is not supported from Elasticsearch 2.0 onwards and should be avoided if possible. Thanks to David Pilato for highlighting this point. It is still useful to know of this option in the very rare case where it might be useful.
Before installing the plugin you need to determine which version is compatible with your version of elasticsearch. You can confirm this in the plugin's compatibility list and select the correct one.
To install you need to use the elasticsearch plugin installation script. From the elasticsearch installation directory, execute:
bin/plugin -install elasticsearch/elasticsearch-river-twitter/x.x.x
Then restart your Elasticsearch service.
To configure the twitter subscriber we will again create a .conf file with the necessary configuration elements. Create a new file called twitter_river.conf and copy the following text. As with logstash, update the required fields with the values from the twitter app created earlier.
{
"type": "twitter",
"twitter" : {
"oauth" : {
"consumer_key" : "CONSUMER_KEY_GOES_HERE",
"consumer_secret" : "CONSUMER_SECRET_GOES_HERE",
"access_token" : "ACCESS_TOKEN_GOES_HERE",
"access_token_secret" : "ACCESS_TOKEN_SECRET_GOES_HERE"
},
"filter" : {
"tracks" : ["obiee", "oracle"]
},
"raw" : true,
"geo_as_array" : true
},
"index": {
"index": "idx_rvr",
"type": "tweet_rvr",
"bulk_size": 100,
"flush_interval": "5s"
}
}
This configuration is identical to the logstash configuration and will receive the same tweets from twitter. To subscribe to a user timeline instead of keywords, replace the filter configuration element:
"filter" : {
"tracks" : ["obiee", "oracle"],
},
with a user type element
"type" : "user",
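Since the river definition is plain JSON, that keyword-to-timeline swap can also be scripted rather than edited by hand. A small sketch using only Python's standard library — the config here is an abbreviated, hypothetical fragment of the one shown earlier:

```python
import json

conf = json.loads("""
{
  "type": "twitter",
  "twitter": {
    "filter": { "tracks": ["obiee", "oracle"] },
    "raw": true
  },
  "index": { "index": "idx_rvr", "type": "tweet_rvr" }
}
""")

# Replace the keyword filter with a user-timeline subscription
conf["twitter"].pop("filter", None)
conf["twitter"]["type"] = "user"

print(json.dumps(conf, indent=2))
```

You could then PUT the resulting document to the _river endpoint exactly as described below.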
To start the plugin you need to execute the following from a terminal window.
curl -XPUT localhost:9200/_river/idx_rvr/_meta -d @twitter_river.conf
Depending on how active your subscribed tags are, you should see data in elasticsearch within a few seconds. You can again check the index listing to confirm that your data is flowing. Note this time that you should see two new rows: one index called _river and the other idx_rvr. idx_rvr is where your twitter data will be stored.
To stop the plugin (or change between keywords and user timeline), execute the following from a terminal window:
curl -XDELETE 'localhost:9200/_river/idx_rvr';
Finally we will look at the most flexible solution of them all. It is a bit more complicated to install and configure but, given what you gain, the small amount of extra time spent is well worth the effort. Once you have Tweepy working you will be able to write your own python code to manipulate the data as you see fit.
As Tweepy is a python package we will use pip to install the required packages. If you don't have pip installed, execute one of the following, depending on your Linux distribution.
yum -y install python-pip
or
apt-get install python-pip
Next we will install the Tweepy and elasticsearch packages
pip install tweepy
pip install elasticsearch
Create a new file called twitter_tweepy.py and copy the following text to the file
import tweepy
import sys
import json
from textwrap import TextWrapper
from datetime import datetime
from elasticsearch import Elasticsearch
consumer_key="CONSUMER_KEY_GOES_HERE"
consumer_secret="CONSUMER_SECRET_GOES_HERE"
access_token="ACCESS_TOKEN_GOES_HERE"
access_token_secret="ACCESS_TOKEN_SECRET_GOES_HERE"
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
es = Elasticsearch()
class StreamListener(tweepy.StreamListener):
    status_wrapper = TextWrapper(width=60, initial_indent='    ', subsequent_indent='    ')

    def on_status(self, status):
        try:
            #print '\n%s %s' % (status.author.screen_name, status.created_at)
            json_data = status._json
            #print json_data['text']
            es.create(index="idx_twp",
                      doc_type="twitter_twp",
                      body=json_data
                      )
        except Exception, e:
            print e
            pass
streamer = tweepy.Stream(auth=auth, listener=StreamListener(), timeout=3000000000 )
# Fill in your own keywords below
terms = ['obiee','oracle']
streamer.filter(None,terms)
#streamer.userstream(None)
As with the river plugin you can subscribe to the user timeline by changing the subscription type. To do this replace the last line in the script with
streamer.userstream(None)
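If you don't want to index the full tweet, you can also trim the JSON inside the on_status handler before calling es.create. A hypothetical helper — the field names follow Twitter's classic payload, and the exact subset you keep is up to you:

```python
def slim_tweet(status_json):
    # Keep an illustrative subset of top-level fields instead of the
    # full payload, plus the author's screen name.
    keep = ("id_str", "created_at", "text", "lang")
    doc = {k: status_json.get(k) for k in keep}
    doc["user"] = status_json.get("user", {}).get("screen_name")
    return doc

sample = {"id_str": "1", "text": "hi", "lang": "en",
          "user": {"screen_name": "someone"}, "retweet_count": 0}
print(slim_tweet(sample))
```

Storing a slimmer document reduces index size at the cost of losing fields you might want later.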
To start the listener you need to execute the python file
python twitter_tweepy.py
Navigate to the elasticsearch index list again to ensure you are receiving data.
Getting Twitter data into Elasticsearch is actually pretty simple. Logstash is by far the easiest one to configure and if subscribing to keywords is your only requirement it should be the preferred solution. Now that we have the foundation in place, in the next post we will have a look at how we can enhance this data by adding sentiment analysis and how we can use this data to make decisions.
parent( [dagObject...] [dagObject], [absolute=boolean], [addObject=boolean], [noConnections=boolean], [relative=boolean], [removeObject=boolean], [shape=boolean], [world=boolean] )

Note: Strings representing object names and arguments must be separated by commas. This is not depicted in the synopsis.

import maya.cmds as cmds

# Create some objects
cmds.circle( name='circle1' )
cmds.move( 5, 0, 0 )
cmds.group( n='group1' )
cmds.move( -5, 0, 0 )
cmds.group( em=True, n='group2' )

# Move the circle under group2.
# Note that the circle remains where it is.
cmds.parent( 'circle1', 'group2' )

# Let's try that again with the -relative flag. This time
# the circle will move.
cmds.undo()
cmds.parent( 'circle1', 'group2', relative=True )

# Create an instance of the circle using the parent command.
# This makes circle1 a child of group1 and group2.
cmds.undo()
cmds.parent( 'circle1', 'group2', add=True )

# Remove group1 as a parent of the circle
cmds.parent( 'group1|circle1', removeObject=True )

# Move the circle to the top of the hierarchy
cmds.parent( 'group2|circle1', world=True )

# Remove an instance of a shape from a parent
cmds.parent( 'nurbsSphere3|nurbsSphereShape1', shape=True, rm=True )
Pairwise distances in R
For a recent project I needed to compute the pairwise distances between the rows of two matrices (for example, a set of observations and a set of cluster centers). At the start I wrote a naive (and very slow) implementation that looked like this:
naive_pdist <- function(A, B) {
  # A: matrix with observation vectors
  #    (nrow = number of observations)
  #
  # B: matrix with another set of vectors
  #    (e.g. cluster centers)
  result = matrix(ncol=nrow(B), nrow=nrow(A))
  for (i in 1:nrow(A))
    for (j in 1:nrow(B))
      result[i,j] = sqrt(sum( (A[i,] - B[j,])^2 ))
  result
}
When I realized that this is too slow, I started looking for an implementation and I found the pdist CRAN package, which is way faster:
The speed up made me curious about how pdist was implemented in this package. To my disappointment it is the same naive method only written in C (and using float, not double precision) — no vectorization and no tricks involved. So I was pretty sure there was room for improvement.
In search of tricks for computing the pairwise distance, a blog post from Alex Smola turned up. He suggests to "use the second binomial formula to decompose the distance into norms of vectors in A and B and an inner product between them". Translated into R code this solution looks like this:
vectorized_pdist <- function(A, B) {
  an = apply(A, 1, function(rvec) crossprod(rvec, rvec))
  bn = apply(B, 1, function(rvec) crossprod(rvec, rvec))

  m = nrow(A)
  n = nrow(B)

  tmp = matrix(rep(an, n), nrow=m)
  tmp = tmp + matrix(rep(bn, m), nrow=m, byrow=TRUE)
  sqrt( tmp - 2 * tcrossprod(A, B) )
}
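For readers outside R, the same second-binomial-formula decomposition can be sketched in Python with NumPy. This is an illustrative translation, not part of the original post:

```python
import numpy as np

def vectorized_pdist(A, B):
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 * a.b
    an = (A * A).sum(axis=1)[:, None]    # squared row norms of A, as a column
    bn = (B * B).sum(axis=1)[None, :]    # squared row norms of B, as a row
    sq = an + bn - 2.0 * (A @ B.T)
    return np.sqrt(np.maximum(sq, 0.0))  # clamp tiny negative rounding errors

A = np.random.rand(6, 3)
B = np.random.rand(4, 3)
naive = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1))
print(np.allclose(vectorized_pdist(A, B), naive))  # True
```

The clamp before the square root matters in practice: the decomposition can produce slightly negative values for near-zero distances due to floating-point cancellation.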
Now that I knew how to implement pdist with a couple of simple operations, I wanted to know how much faster a C (or C++) implementation would be. Thanks to the excellent Rcpp and RcppArmadillo package, it is easy to translate the above R code into C++:
#include <RcppArmadillo.h>
// [[Rcpp::depends(RcppArmadillo)]]

using namespace Rcpp;

// [[Rcpp::export]]
NumericMatrix fastPdist2(NumericMatrix Ar, NumericMatrix Br) {
    int m = Ar.nrow(),
        n = Br.nrow(),
        k = Ar.ncol();

    arma::mat A = arma::mat(Ar.begin(), m, k, false);
    arma::mat B = arma::mat(Br.begin(), n, k, false);

    arma::colvec An = sum(square(A), 1);
    arma::colvec Bn = sum(square(B), 1);

    arma::mat C = -2 * (A * B.t());
    C.each_col() += An;
    C.each_row() += Bn.t();

    return wrap(sqrt(C));
}
This C++ implementation turns out to be another 6x faster than the vectorized R implementation:
All implementations compared
The time measurements for all implementations:
Unit: milliseconds
                   expr         min          lq      median          uq       max neval
 vectorized_pdist(A, B)   26.667005   30.299216   32.945532   34.548596  134.8368   100
        fastPdist(A, B)    5.357734    5.581193    5.693534    5.798465  109.9736   100
      naive_pdist(A, B) 1259.290444 1280.897937 1290.150653 1320.467180 1425.3864   100
     pdist::pdist(A, B)   98.825835  101.955146  103.719962  105.843313  205.7123   100
and the speed up among all implementations:
Conclusion
In my example the (naive) C implementation only achieved a 12x speed up, while the improved R implementation was about 40x faster. These findings agree with what is preached in various blog posts and guides about R: first try to vectorize code, then try to find a faster method (algorithm), and only as a last step consider moving to a faster compiled language.
--n3 --n3=<flags>
where flags can be from:

a  Anonymous nodes should be output using the _: convention (p flag or not).
d  Don't use default namespace (empty prefix)
i  Use identifiers from store - don't regen on output
l  List syntax suppression. Don't use (..)
n  No numeric syntax - use strings typed with ^^ syntax
p  Prefix suppression - don't use them, always URIs in <> instead of qnames.
q  Quiet - don't make comments about the environment in which processing was done.
r  Relative URI suppression. Always use absolute URIs.
s  Subject must be explicit for every statement. Don't use ";" shorthand.
t  "this" and "()" special syntax should be suppressed.
09 May 2013 22:27 [Source: ICIS news]
By Mark Yost
HOUSTON (ICIS)--Normally, May is when petrochemical producers start to see orders from the auto sector tail off, mainly because many
Thanks to robust
"This year is different," said one source who sells ABS to auto parts makers.
"Automotive production remains strong with the forecast at 15.9m units for 2013," said one nylon producer. "Weekly production remains strong with indications of increased demand. There has been no indication regarding the normal summer slump this year."
"New projects are launching, and platforms that launched late last year are now ramping up," said another ABS source. "Auto demand is very good for the
Only one source said they weren't seeing better-than-expected orders for the next few months.
"We are still forecasting the typical business cycle of shutdowns in July and August for auto business," said one nylon source. "If this doesn't happen, then it's an upside to our forecasts."
The forecast for
Full-size pickup trucks were a particularly strong segment in April, and one of the reasons petrochemical makers cite for the continuing strong demand from the auto industry. GM reported that April sales of pickups were up by 23% year over year, while sales of Ford's best-selling F-150 pickup jumped by 24% and Chrysler sold 49% more Dodge Ram pickups than it did a year ago.
As a result of strong sales, Ford said it will add about 2,000 jobs at its
New product launches are also boosting demand for petrochemicals. Chrysler is completing the launch of the refreshed 2014 Jeep Grand Cherokee and the 2013 Ram Heavy Duty pickup truck. Chrysler is also ramping up for the launch of the 2014 Dodge
Over at GM, ramp-up for new versions of the Chevrolet Camaro Z28, Cruze Clean Turbo Diesel, Impala and the Silverado pickup could keep demand for plastics at above-normal levels through the summer and into the third quarter. | http://www.icis.com/Articles/2013/05/09/9666938/trucks-new-models-propel-strong-demand-for-automotive.html | CC-MAIN-2014-35 | refinedweb | 337 | 66.67 |
Rails memory issues are frequently more difficult - and more urgent - to resolve than performance problems: a slow Rails app may be painful, but if your app chews through all available memory on a host, the app is down.
This chapter shows how to identify memory-hungry controller-actions and specific memory-hungry requests, provides a visual representation of common pitfalls, and suggestions on fixing memory bloat.
Memory bloat vs. memory leak
Memory bloat is a sharp increase in memory usage due to the allocation of many objects. It’s a more time-sensitive problem than a memory leak, which is a slow, continued increase in memory usage and can be mitigated via scheduled restarts.
Visually, here’s the difference between bloat and a leak:
While memory bloat can quickly cripple a site, it’s actually easier to track down the root cause than a memory leak. If your app is suffering from high memory usage, it’s best to investigate memory bloat first given it’s an easier problem to solve than a leak.
What goes up doesn’t come down
If one of your app’s power users happens to trigger a slow SQL query, the impact is momentary. Performance will likely return to normal: it’s rare for a slow query to trigger long-term poor performance.
If, however, that user happens to perform an action that triggers memory bloat, the increased memory usage will be present for the life of the Ruby process. While Ruby does release memory, it happens very slowly.
It’s best to think of your app’s memory usage as a high-water mark: memory usage has no where to go but up.
This behavior changes how you should debug a memory problem versus a performance issue.
Which endpoint impacts memory usage more?
The chart below shows requests from two endpoints, Endpoint A and Endpoint B. Each circle represents a single request.
Which endpoint has a greater impact on memory usage?
Analysis:
- Endpoint A has greater throughput
- Endpoint A averages more allocations per-request
- Endpoint A allocates far more objects, in total, over the time period.
What you need to know about memory bloat
In order of importance:
- Memory Usage is a high-water mark: Your Rails app will likely recover quickly when it serves a slow request: a single slow request doesn’t have a long-lasting impact. This is not the case for memory-hungry requests: just one allocation-heavy request will have a long-lasting impact on your Rail’s app’s memory usage.
- Memory bloat is frequently caused by power users: controller-actions that work fine for most users will frequently buckle under the weight of power users. A single request that renders the results of 1,000 ActiveRecord objects vs. 10 will trigger many allocations and have a long-term impact on your app’s memory usage.
- Focus on the maximum number of allocations per controller-action: a normally lightweight action that triggers a large number of allocations on a single request will have a significant impact on memory usage. Note how this is very different than optimizing CPU or database resources across an app.
- Allocations and memory increases aren’t correlated on a long-running app. Once your app’s memory heap size has grown to accommodate a significant number of objects, a request that requires a large number of allocations won’t necessarily trigger a memory increase. If the same request happened early in the Rails process’ lifetime, it likely would trigger a memory increase.
- You will see a number of memory increases when a Rails application is started: Ruby loads libraries dynamically, so some libraries won’t be loaded until requests are processed. It’s important to filter out these requests from your analysis.
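The point above about focusing on maximum rather than mean allocations is easy to see with toy numbers (the data here is entirely hypothetical):

```python
# Hypothetical per-request allocation counts for two endpoints
endpoint_a = [1_000, 1_200, 900, 1_100] * 25   # busy but steady
endpoint_b = [800, 750, 900, 2_000_000]        # one power-user request

def mean(xs):
    return sum(xs) / len(xs)

print(max(endpoint_a), mean(endpoint_a))   # 1200 1050.0
print(max(endpoint_b), mean(endpoint_b))   # 2000000 500612.5

# Ranking endpoints by mean understates endpoint_b's problem; ranking by
# the per-request maximum surfaces exactly the request that moved the
# memory high-water mark.
```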
Using Scout to fix memory bloat
Scout can help you identify memory-hungry areas of your Rails app by:
- Isolating the controller-actions generating the greatest percentage of allocations.
- Viewing transactions traces of specific memory-hungry requests to isolate hotspots to specific areas of code.
- Identifying users triggering memory bloat
Isolating allocation-heavy actions
If you’re looking for a general understanding of which controller-actions are responsible for the greatest amount of memory bloat in your Rails app, a good starting point is the “Endpoints” section of Scout:
Sort by the “% Allocations” column. This column represents the maximum number of allocations recorded for any single request for the given controller-action and timeframe. Why max and not mean allocations? See this section above.
Click on an endpoint to dive into the Endpoint Detail view. From here, you can click the “Allocations - Max” chart panel to view allocations over time.
Beneath the overview chart, you’ll see traces Scout captured over the current time period. Click the “Most Allocations” sort field from the pulldown. You’ll see traces ordered from most to least allocations:
Reading a Scout memory trace
The screenshots below are generated from actual Scout transaction traces. A quick primer on the trace organization:
Method calls displayed in the trace details are organized from most to least allocations. The horizontal bar on the right visually represents the number of allocations associated with the method call(s) on the left. Some of the bars may have two shades of green: the lighter green represents the control case (what we view as a normal request) and the darker green represents the memory-hungry case.
Identifying users triggering memory bloat
It’s common for memory bloat to be isolated to a specific set of users. Use Scout’s context api to associate your app’s
current_user with each transaction trace if it’s not easily identify from a trace url.
Common Pitfalls
Memory bloat often reveals itself in specific patterns - these patterns are illustrated via the Scout transaction trace below.
ActiveRecord: rendering a large number of objects
When rendering the results of an ActiveRecord query that returns a large number of objects, the majority of allocations frequently come from the view and not from instantiating the ActiveRecord objects. Many Strings are allocated to represent each object’s attributes as well as any HTML template code around them (table rows and other HTML elements).
The screenshot below illustrates the difference between rendering a view with 1,000 records and one with ten. Two-thirds of allocations reside in the view:
An example where this occurs:
def employees
  @company = Company.find(params[:id])
  @company.employees
end
Fetching and rendering all employees for a company may work fine for the latest small startup, but it will fall over for Apple, Inc.
The fix? Pagination via will_paginate or Kaminari.
ActiveRecord: N+1 Database Queries
You probably already know N+1 queries are low-hanging fruit when it comes to speeding up your controller-actions. However, in addition to frequently being slower than a proper
includes, they result in more allocations.
The example below illustrates an N+1 when rendering out a list of 100 users and their associated company. Roughly 2x more allocations result from the N+1:
The steps to fixing N+1 queries are well-documented: the larger challenge is finding the worst offenders. Scout can be used to identify the worst-offending N+1 queries in your app.
ActiveRecord: selecting unused large columns
The standard ActiveRecord finder selects all columns from the respective table:
User.all
# User Load (756.6ms)  SELECT "users".* FROM "users"
If a table contains a large column (binary or text), there's a cost both in terms of memory usage and time if that column is returned. This is true even if the column is never accessed.
Identifying this scenario is more involved: if a large column isn’t accessed, it will not trigger additional Ruby allocations and will not appear in a memory trace. Instead, look at the change in memory for that request:
It’s also likely the query may run slower as more data is read from the database and sent across the wire to your app host: look for a slow query in the “Time Breakdown” section of the trace.
The fix
A couple possible approaches:
- Only select what you need: User.select(:name).
- Move the large column to a dedicated table so the default finder is fast by default.
Uploading a large file
Your app will incur a significant memory increase to handle large file uploads. For example, the trace below illustrates the increase in memory usage when uploading a 1 GB file vs. a 1K file:
The majority of allocations occur unwrapping the file in the framework middleware. A common scenario where this behavior occurs: an app manipulates uploaded images that are typically 100-500 kB in size, but then a user attempts to upload a 10 MB image.
The workaround: send large files directly to a third party like S3. See Heroku’s docs on Direct to S3 Image Uploads in Rails.
Suggested Reading
- That’s Not a Memory Leak, It’s Bloat - This digs into more specifics on common ActiveRecord patterns that contribute to bloat. While the post is from 2009, and as expected, some of the tools are outdated, the Ruby theories still apply.
- The Complete Guide to Rails Performance - This book (purchase required) has a chapter dedicated to memory bloat and leaks that digs deeper into the internals of Ruby memory usage.
- How Ruby Uses Memory - This article takes a deeper into the internals of Ruby memory usage.
Subscribe for more
We're publishing chapters as we complete them. Enter your email address below to be notified of updates to the Rails Performance Fieldbook. | https://www.tefter.io/bookmarks/43146/readable | CC-MAIN-2020-05 | refinedweb | 1,594 | 51.99 |
:>Would it be a good idea to have a '__stdlib__' module that magically
:>maps its attributes to only Standard Library modules? It would act as
:>a namespace to guarantee access to the standard library, no matter how
:>weird the sys.path might get.

Bengt Richter <bokr at oz.net> wrote:
: I just did an experiment that I don't know all the ramifications of, so don't
: take this is anything but an expriment, but if you put a file called
: __init__.py with nothing in it (mine has two bytes: '\r\n') in the Lib
: directory, so the path to it looks something like
: D:\Python22\Lib\__init__.py
: (your installation location may vary)

Hi Bengt, I didn't think about that one... cool!

You're right though: the one problem I'd see is that this would be platform
specific, as Python's libraries live in "lib/pythonX.X" on some Unix systems.

I still prefer to use a magic-looking name, even if there were no real magic
involved, just so that it's easier to see that something unusual is being
done. It also makes it that much harder for a user to go ahead and mess this
beautiful scheme up by using a package generically named Lib... *grin*.

Andrew Bennetts suggested in email using "std" as an easy name to type. I'd
like to subvert his suggestion and propose "__std__".

A system like this would be useful because the names of modules in the
Standard Library are generic. This is a Good Thing because they're easy to
remember. This is a Bad Thing because programmers themselves may often use
generic names for their own modules.

Furthermore, Section 6.12 of the reference documentation says that the system
for importing provides no guarantees, that the import mechanism is
implementation dependent:

It would be nice to have a well-defined, guaranteed way to get at the
Standard Library.

Thanks for the suggestions!
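The "weird sys.path" hazard the thread is worried about is easy to reproduce. A self-contained demonstration (not from the original thread) that shadows the standard library's json module:

```python
import importlib
import os
import sys
import tempfile

# Create a directory containing an impostor module named "json" and put
# it at the front of sys.path -- it now shadows the stdlib's json.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "json.py"), "w") as f:
    f.write("MARKER = 'not the stdlib'\n")

sys.path.insert(0, tmp)
sys.modules.pop("json", None)   # forget any cached stdlib import
importlib.invalidate_caches()

import json                     # resolves to the impostor
marker = json.MARKER
print(marker)                   # not the stdlib

# Undo the damage so later imports see the real module again
sys.path.remove(tmp)
sys.modules.pop("json", None)
```

This is exactly the failure mode a guaranteed stdlib namespace would protect against.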
It’s surprisingly easy to create infinite loops in React.
If you’ve encountered an infinite loop in React, it’s almost certain that you’ve added code inside an effect hook without specifying a dependency. Unfortunately, React doesn’t tell you that you’re doing anything wrong.
Here’s a simple example in which you update and render a counter.
import { useEffect, useState } from "react";
export const MyInfiniteLoop = () => {
  const [count, setCount] = useState(0);
  useEffect(() => setCount(count + 1));
  return <div>Count: {count}</div>;
};
In this case, here’s what happens:
1. useEffect() runs.
2. count is incremented by 1. This triggers a re-render. (Go to #1.)
The way to get around this is to specify an array of dependencies to the effect hook. By default, there are no dependencies, which means it will run every time the component renders.
You can instead tell the hook that it has no dependencies, and so it will only render a single time.
import { useEffect, useState } from "react";
export const MyInfiniteLoop = () => {
  const [count, setCount] = useState(0);
  // Use an empty array as the second arg to specify no dependencies
  useEffect(() => setCount(count + 1), []);
  return <div>Count: {count}</div>;
};
If you’re using eslint, as is a common practice in modern JavaScript projects, then you can make use of the eslint-plugin-react-hooks plugin, which will throw warnings when there is a risk for more renders than intended.
A solution for locating and reading all .py files is given below:
When I start the program in VS everything works fine, but when I start it with python from the cmd there is no output.
import glob

Liste = ""
progs = glob.glob("*/*.py")
for Prog in progs:
    fh = open(Prog, "r")
    pCode = fh.readlines()
    fh.close()
    for line in pCode:
        Liste += line
So when I print the variable Liste, it contains all the code when I start from VS, but when I start from cmd it's blank. Please help.
I assume you don’t invoke your script from the same directory in Visual Studio than you do in the console…
glob.glob("*/*.py") creates a list of strings containing relative paths to Python files in any sub directory of the current working directory.
So if you invoke the script using different working directories will return different results.
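One way to make the lookup independent of the working directory is to resolve the pattern against an explicit base directory — for example, the directory containing the script itself. A sketch:

```python
import glob
import os

def find_py_files(base_dir):
    # Glob relative to an explicit base instead of the process CWD, so
    # the result is the same no matter where the script is launched from.
    return sorted(glob.glob(os.path.join(base_dir, "*", "*.py")))

# In a script you could pass the script's own directory:
#   here = os.path.dirname(os.path.abspath(__file__))
#   progs = find_py_files(here)
```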
It’s because you aren’t actually doing anything with
Liste afterwards. You will see an output if you just print it at the end.
import glob Liste = "" progs = glob.glob("*/*.py") for Prog in progs: fh = open(Prog, "r") pCode = fh.readlines() fh.close() for line in pCode: Liste +=(line) print(Liste)
Also, this would make it more readable.

import glob

liste = ""
progs = glob.glob("*/*.py")
for prog in progs:
    with open(prog, "r") as fh:
        p_code = fh.readlines()
    for line in p_code:
        liste += line
print(liste)
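On Python 3.4+, a pathlib-based variant of the same idea is arguably tidier still. This is illustrative only, with equivalent behavior:

```python
from pathlib import Path

def read_all_py(base="."):
    # Concatenate the contents of every .py file one level below `base`.
    return "".join(p.read_text() for p in sorted(Path(base).glob("*/*.py")))
```

Passing an explicit base directory also sidesteps the working-directory problem discussed above.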