I’ll try something new today: I pick a piece of code from the web and see what improvements I would make to it, using small refactoring steps.
I came across the code on Twitter: Joshua Ogunyinka asked about the safety of the deletion in the CompoundStatement destructor. He posted the code on ideone, but as far as I can see it is a simplification of part of his “MaryLang” compiler. You can find the project on GitHub.
Please note: this means the code is taken out of context. It may be simplified to an extent that makes some constructs seem unnecessarily complex, so I might oversimplify in my refactorings. In addition, it is a single file; the original would be separated into at least a header with the definitions and a main.cpp.
Follow the steps on GitHub
I put the code on GitHub and committed every single step, as I would have in a real refactoring session. The single commits may feel very small sometimes, but larger commits can mean that you have to repeat a lot of work if you go down a wrong path. With better test coverage I would probably have been bolder, but it is better to be safe than sorry.
The original code
Here is the original code from ideone, except that I changed the indentation to two spaces instead of four and put the opening curly braces of class and function definitions on the same line, as I usually do on this blog.
#include <iostream>
#include <vector>
#include <memory>

template<typename T>
struct List {
  List(): _list() {}
  virtual ~List() {}
  inline void Append( T const * t ) { _list.push_back( t ); }
  typedef typename std::vector<T const *>::const_iterator const_iterator;
  inline const_iterator cbegin() const { return _list.cbegin(); }
  inline const_iterator cend() const { return _list.cend(); }
private:
  std::vector< T const * > _list;
}; // struct List

struct DoubleWord {
  DoubleWord( double c ): c_( c ){}
  double c_;
};

struct Word {
  Word( int i ): i_( i ) {}
  int i_;
};

std::ostream & operator<<( std::ostream &os, Word const & t ) { return os << t.i_ << " "; }
std::ostream & operator<<( std::ostream &os, DoubleWord const & t ) { return os << t.c_ << " "; }

struct Statement {
  virtual void Analyze() const = 0;
  Statement(){}
  virtual ~Statement(){}
};

struct YetAnotherStatement: Statement {
  inline void Analyze() const final { std::cout << t << std::endl; }
  YetAnotherStatement( int i ): t{ ( double ) i * ( 10.6 / 0.7 ) } {}
  DoubleWord t;
};

struct OtherStatement: Statement {
  inline void Analyze() const final { std::cout << t << std::endl; }
  OtherStatement( int i ): t{ i } {}
  Word t;
};

struct CompoundStatement: Statement, List<Statement> {
  CompoundStatement(): Statement(), List(){}
  ~CompoundStatement(){
    for( auto b = cbegin(), d = cend(); b != d; ++b )
      delete const_cast<Statement *>( *b );
  }
  void Analyze() const final {
    for( auto b = this->cbegin(); b != this->cend(); ++b ){
      (*b)->Analyze();
    }
  }
};

struct Declaration {
  Declaration( Statement const * const s ): s_( s ){}
  inline void Analyze(){ s_->Analyze(); }
  // ...

A light start
To start, I like to skim the code to see if there are any obvious trivial things that can be simplified. That is not something I would do to a large code base all at once, because it takes a lot of time and only marginally affects the code, i.e. the big problems, if there are any, remain untouched. However, if I am to work on a specific small subset of the source code, it is a good way to get familiar with the code and make life a little easier later.
Wrappers
At first sight, the two structs Word and DoubleWord do not seem to make much sense. They may be remnants of more complex structures or placeholders for something more complex in the original code. However, they serve no visible purpose here, so I just replace every occurrence with the wrapped types int and double, respectively. The wrapper classes, including the stream operators, can be removed.
Constructors and destructors
Right on the first class template, List, we see a default constructor that is explicitly implemented to do nothing, i.e. we should use the keyword default instead. The same goes for the destructor; since that one is virtual, we cannot simply leave it out. That means we should also have a look at the move and copy operations. List contains only a vector, which is fully copyable and movable, so we can default all special members there.
Statement is empty, so it is obvious what the default does, and it is sensible to loosen the reins of the rule a bit and only default the virtual destructor. For all other classes except CompoundStatement the Rule of Zero applies; they need not be changed.
CompoundStatement itself has a nontrivial destructor, due to the fact that it manages the lifetimes of the List elements. If we look closer, it becomes apparent that if we were to copy a CompoundStatement with a nonempty List, the pointers in that list would get copied as well and eventually be deleted twice.
The move constructor will work, but not the move assignment, since the old contents would not be deleted and would therefore leak. So the default and move constructors can be defaulted; the rest has to be deleted, except of course the nontrivial destructor.
Single line blocks
Blocks that consist of a single line, e.g. function bodies and for loop bodies, should be wrapped in their own curly braces and put on their own line. Putting things on their own line visibly separates the two distinct parts of the loop – the header and the loop body. Adding the braces even on one-liners prevents errors that arise when more lines are later added to the apparent block without adding the braces.
This is somewhat a matter of taste and coding style, but many style guides at least require a separate line for loop bodies. Most people seem to favor the separation over terseness.
inline
In the past, the keyword inline was a hint to the compiler that it might try to inline a function. Modern compilers usually ignore it completely; today it is mainly used to satisfy the One Definition Rule. In other words, use it only if you need to define non-template functions outside a class definition. In this code, all the functions declared inline are defined inside a class definition, which means they are already implicitly inline. Therefore the explicit inline is superfluous, and we should simply remove it.
private vs. public:
The member variables of Declaration and all subclasses of Statement are public. This does not seem to be necessary, and since the classes are more than plain data containers, their members should be private. In fact, I like to distinguish classes from data structures by using the keywords class and struct accordingly, but I will leave those as they are in this case.
Another case is the List base of CompoundStatement, which is in fact more a data member than a base class, so I should make it private, too. However, the main() function calls Append, so it's not that trivial. This misuse of inheritance will be the next thing to go.
Here is the code we have now:
#include <iostream>
#include <vector>
#include <memory>

template<typename T>
struct List {
  List() = default;
  List(List const&) = default;
  List(List&&) = default;
  virtual ~List() = default;
  List& operator=(List const&) = default;
  List& operator=(List&&) = default;
  void Append( T const * t ) { _list.push_back( t ); }
  typedef typename std::vector<T const *>::const_iterator const_iterator;
  const_iterator cbegin() const { return _list.cbegin(); }
  const_iterator cend() const { return _list.cend(); }
private:
  std::vector< T const * > _list;
}; // struct List

// ...

struct CompoundStatement: Statement, List<Statement> {
  CompoundStatement() = default;
  CompoundStatement(CompoundStatement&&) = default;
  CompoundStatement(CompoundStatement const&) = delete;
  CompoundStatement& operator=(CompoundStatement const&) = delete;
  CompoundStatement& operator=(CompoundStatement&&) = delete;
  ~CompoundStatement(){
    for ( auto b = cbegin(), d = cend(); b != d; ++b ) {
      delete const_cast<Statement *>( *b );
    }
  }
  void Analyze() const final {
    for ( auto b = this->cbegin(); b != this->cend(); ++b ) {
      (*b)->Analyze();
    }
  }
};

struct Declaration {
  Declaration( Statement const * const s ): s_( s ){}
  void Analyze() { s_->Analyze(); }
private:
  // ...

The first impression
After we have gone through the code for the first time, what have we learned about it? We have a generic container class called List. It contains a std::vector, which makes its naming rather odd, so we'll have a closer look at it later.
We have a little class hierarchy of Statements, with two trivial concrete classes and a slightly more complex CompoundStatement. The trivial classes seem to be there for testing and example purposes only; at least, that is the impression I get from their identical use of std::cout and their naming.
We have the CompoundStatement on our list for refactoring next, since it seems to have some issues with the ownership management of the container elements. The Declaration, as it is shown here, seems to be only some sort of container or handle for a single Statement. We will touch on it briefly as we go through the code a second time in more detail.
The main() function seems to be just an example of the intended use of the classes, so I won't pick on it too much. In addition, it is the only thing that can be used as a test – I used it to check that the refactored code still compiles and does not change its behavior.
Refactoring CompoundStatement
CompoundStatement looks odd enough to be the next point on our list: multiple inheritance that includes a container is dubious, and the manual memory management in the destructor should be replaced by some RAII class.
Fixing the inheritance
Fixing the inheritance is relatively easy. There is no need for it here; we can just as well use composition, which should generally be preferred over inheritance. Replacing the public inheritance with a private data member breaks the compilation:
- The compiler complains about the calls to `cbegin()` and `cend()` in the destructor and the `Analyze()` method. They are no longer inherited, so we have to call them on the new member.
- The `Append()` method which is called from outside is no longer inherited, so we have to write a method that simply routes the call through to the new member.
struct CompoundStatement: Statement {
  // constructors etc...
  ~CompoundStatement(){
    for ( auto b = _statements.cbegin(), d = _statements.cend(); b != d; ++b ) {
      delete const_cast<Statement *>( *b );
    }
  }
  void Analyze() const final {
    for ( auto b = _statements.cbegin(); b != _statements.cend(); ++b ) {
      (*b)->Analyze();
    }
  }
  void Append(Statement const* statement) {
    _statements.Append(statement);
  }
private:
  List<Statement> _statements;
};
Fix the for loops
The for loops beg to be replaced by range-based for loops. However, the interface of List is rather minimal, so that is not yet possible. Before we jump in and augment it with the needed begin() and end() methods, let's have a closer look at List – we had that one on our list anyway.
As it turns out, List is only a wrapper around std::vector. It is not very intuitive: for one thing, we kind of know what a list is from the standard library – and that is not a vector. In addition, a List<X> is in fact a vector of pointers to X, so that fact is obfuscated by the template parameter, too.
When I first looked at the destructor of CompoundStatement, I thought: "How can this even compile when it calls delete on a Statement? That's not a pointer!" Don't mislead your readers like that.
The only thing about List that made it more than just a vector was the virtual destructor. However, it is not needed any more, since we no longer derive from List. We didn't need it back then either, because we never destroyed a CompoundStatement through a List pointer.
Now we can dismantle List altogether. There is no need for it any more after we have replaced the inheritance with composition. So we can just replace the List member of CompoundStatement with the vector that it really is, and then we are free to use range-based for loops. The List template itself can be removed completely.
struct CompoundStatement: Statement {
  // constructors etc.
  ~CompoundStatement(){
    for ( auto&& b : _statements ) {
      delete const_cast<Statement *>( b );
    }
  }
  void Analyze() const final {
    for ( auto&& b : _statements ) {
      b->Analyze();
    }
  }
  void Append(Statement const* statement) {
    _statements.push_back(statement);
  }
private:
  std::vector<Statement const*> _statements;
};
Use RAII
We said we wanted to get rid of the manual memory management in the destructor of CompoundStatement. We also have the copy constructor and assignment operators deleted, because the compiler-generated versions would have led to leaks and double deletes.
The solution to dilemmas like this is usually an RAII class. For memory management, that means we should use smart pointers. It is clear from the implementation of the destructor that CompoundStatement takes full ownership of the Statements we append, so the right class to use is unique_ptr.
After we replace the vector<Statement const*> with a vector<unique_ptr<Statement const>>, we can obey the Rule of Zero and remove all constructors, the destructor, and the assignment operations from the class:
- The generated destructor will destroy the `vector`, which in turn will destroy every `unique_ptr`, deleting the `Statement`s in the process.
- The generated move assignment will now do the right thing, cleaning up the `Statement`s in the target before the move. No more leaks.
- The copy constructor and copy assignment will still be deleted because the compiler can not generate them due to the deleted `unique_ptr` copy operations.
The only thing left to do for this refactoring is converting the raw pointer we take as a parameter for Append() into a unique_ptr. This has to be done explicitly, and it brings us right to a code smell.
Take ownership explicitly
The parameter of Append() is a raw pointer. That interface does not make it clear that CompoundStatement takes unique ownership. For all we can tell from the interface, we could do something like this:
OtherStatement statement{22};
CompoundStatement compound;
compound.Append(&statement);
compound.Append(&statement);
Have you ever tried to delete a stack based object, twice? Don’t.
To fix this, we change the interface of the Append() method to explicitly demand that clients pass a unique_ptr. This also makes the implementation of the method much more natural. Doing that enables us to use make_unique instead of new in the main() function – so in addition to the clearer interface, we also get some exception safety for free. Great!
struct CompoundStatement: Statement {
  void Analyze() const final {
    for ( auto&& b : _statements ) {
      b->Analyze();
    }
  }
  void Append(std::unique_ptr<Statement const> statement) {
    _statements.push_back(std::move(statement));
  }
private:
  std::vector<std::unique_ptr<Statement const>> _statements;
};
What is left
There are still a few issues left. One of them is naming: b, t and s_ are pretty poor names. The Declaration taking a pointer as a constructor parameter and using it without any check for null is another. The main() function and most of its content look rather unpleasant. However, much of this is owed to the example nature of the code and is not an issue in the original sources.
For this post, I wanted to concentrate on the CompoundStatement and the issues with the List template. Those were the core classes of this code snippet. We simplified one of them and got rid of the other completely, so we can be content for now.
There is one thing I really like about the original code: the use of final. It is something that can give us some more certainty about the correctness of our code, yet I haven't seen it used very often in real code.
I have to leave a word on testing here: the modifications made were fairly simple, and they were done in small steps that we could reason about. For anything more complex, we should have brought the code under test first. That main() function does not count; it was enough to see that the main use case compiled, but not more.
Here is the complete refactored code:
#include <iostream>
#include <vector>
#include <memory>

// ...

struct CompoundStatement: Statement {
  void Analyze() const final {
    for ( auto&& b : _statements ) {
      b->Analyze();
    }
  }
  void Append(std::unique_ptr<Statement const> statement) {
    _statements.push_back(std::move(statement));
  }
private:
  std::vector<std::unique_ptr<Statement const>> _statements;
};

struct Declaration {
  Declaration( Statement const * const s ): s_( s ){}
  void Analyze() { s_->Analyze(); }
private:
  Statement const * const s_;
};
Conclusion
This was a first try at a new kind of post for my blog. After over 70 posts about clean C++ and similar topics with made-up examples, I thought it would be good to show some examples on (more or less) "real world" code.
I'd like to do more of this in the future, but I need some help: please leave a comment about what you think of this format. I would also be grateful if you pointed me to some open source code that you think would be a good candidate for the next refactoring session.
12 Comments
Hi! Great post, as always. I really like what you did to the original code. I would like to suggest one more thing that can be done:
Statement const * const s_;
This variable in Declaration is a little bit confusing for me, but maybe that's just because I'm not used to seeing "const pointers to const". Don't you think it would be better to use "const &" for this purpose? It conveys the "I don't own it and I don't want to change it" intent more clearly, in my opinion. Then the Declaration constructor would be:
Declaration( Statement const & s ): s_( s ){}
and in the main():
Declaration d{ *s };
Hi Krzysztof, thanks for your comment.
As I mentioned in the “What is left” section, the Declaration class and that statement pointer would definitely be on my todo list.
A reference would be my choice in that case, too.
Hi,
This is a very nice post. Something we should be doing more often.
In the process of refactoring, you had a step where List is copyable (and movable) and has a virtual destructor. A virtual destructor is usually a sign of public inheritance, and we know this doesn't mix well with copying (well, assignment actually), as we are asking to get hit by slicing.
In other words, I would have tried to get rid of inheritance sooner and for other reasons.
BTW, I’d also make Statement non copyable.
Something else that was fishy (and that may deserve discussion with the "OP") is that List was just a vector of pointers with no responsibility over the pointers. In a pre-C++11 world, I would have enforced the responsibility and used something like a boost::ptr_vector. But still, I wonder why this wrapper was written in the first place. In case it's used elsewhere, it may be useful to provide something – I know some people can't get used to SL interfaces; the capitalized Append may be a sign of that, or of something else entirely. It's easy to simplify this part in a blog post context. On a concrete project where the wrapper may be used dozens of times, fixing List's design may be less trivial.
Hi Arne!
I think you chose a very good small example.
What is your stance on using more STL algorithms, even if it doesn't reduce the number of characters?
In CompoundStatement::Analyze, I'm tempted to write
std::for_each(_statements.cbegin(), _statements.cend(), std::mem_fn(&Statement::Analyze));
because it gets rid of all the type and reference concerns, like: do I use auto&& or const auto&?
Hi, thanks for your question!
Algorithms can make code clearer, because you give a known name to whatever you are doing to a range of objects. In that sense, for_each is not a real algorithm but a mere helper function.
Since we have range-based for, I would usually not use for_each on complete containers. It remains useful when we have only a partial range. I'd also refrain from using things like std::mem_fn(&Statement::Analyze) – it's just ugly to read. There should be no type and reference concerns – just use auto&& by default. See also my post on range-based for loops.
Hi,
I am just a new reader of your blog and have read about the last 10 articles. So this post does not come as surprising to me as it might feel to you, having written over 70 articles.
Also, I am rather new to C++. I started learning it over 10 years ago but worked on an unrelated but time-consuming job since then. Now I want to “refresh” and extend my memory with modern C++.
Your article is a nice read because it gives some more substance to the theory gained from books. Especially the use of unique_ptr to do RAII gave me a more solid understanding. From the books it all feels rather abstract; now, seeing it in your post, it is starting to make more sense.
I know that “doing” is worth more than “reading” in general but reading something practical is also worth more than reading something theoretical.
Anyway, I would look forward to an article like this once in a while.
Regards
Good post!
Although refactoring examples are probably the hardest to write (because you have to make big problems fit on a page), they are also very important for people learning to code better.
A few notes of taste:
Wrapping basic types can be quite helpful in statically enforcing correctness. At the end of the day nothing is an int; everything is a something 😉 Although this is probably not a very good example, because there are no functions that otherwise take eight ints, and there are no special rules (e.g. pointer – pointer is an offset, offset + offset is an offset, pointer + offset is a pointer, pointer + pointer is an error, etc.)
Rather than making everything derive from Statement, why not make a statement "interface" wrapper à la concept-based polymorphism? This avoids problems like slicing and multiple inheritance, and makes writing a new statement (of which I assume there are many) super easy: basically, anything that is regular and has an analyze() member function is a statement.
Hi Odin, thanks for your thoughts!
I agree that refactoring examples are not too easy to write – I used a rather small piece of code, left a lot of things unresolved and still the post is longer than most of my other posts.
As for the wrappers, I agree that they can help when it comes to correctness. In this example I did not see much of a benefit, because they were used only internally in the statement classes but not in their interfaces. In addition, the statement classes themselves don't do much more than stitching the data to the Analyze() method, and the wrappers' names are pretty generic. Therefore I don't see enough benefit to justify the additional code complexity.
I don't see how the example, especially the CompoundStatement, could be handled with static polymorphism. A container of arbitrary size with polymorphic elements is a standard use case for dynamic polymorphism. Slicing should not be an issue here, since the statements are created on the free store and passed around as pointers or references.
God forbid someone make a local copy of one of those references 😉 In other words, you have to know that it's a reference to a polymorphic base class, otherwise it may bite you (OK, the chance is small; it's always small, but small is not 0%). I hate having to know stuff; it makes me look stupid when I don't.
Great post.
Looking at how code can be refactored is IMHO a great learning tool, so if this kind of post is an experiment, I hope you'll do it more frequently in the future.
Just a minor nitpick: at the end of the “Constructors and destructors” paragraph, you say “So default and move constructor as well as the destructor can be defaulted, the rest has to be deleted.”, but you (correctly) noted a few lines above that the destructor is far from being trivial, so I think that “..as well as the destructor…” should be removed.
Thanks for nitpicking 😉 I fixed that passage.
Also thanks for the encouragement. I will see to doing this more often. However, it is more time consuming than writing a normal blog post, and it is not easy to pick an interesting piece of code to refactor. On the other hand it’s also not always easy to come up with new topics for normal blog posts.
https://arne-mertz.de/2016/03/refactoring-session-01/
mitem_current(3) UNIX Programmer's Manual mitem_current(3)
mitem_current - set and get current_menu_item
#include <menu.h>
int set_current_item(MENU *menu, const ITEM *item);
ITEM *current_item(const MENU *menu);
int set_top_row(MENU *menu, int row);
int top_row(const MENU *menu);
int item_index(const ITEM *item);
The function set_current_item sets the current item (the item on which the menu cursor is positioned). current_item returns a pointer to the current item in the given menu.
current_item returns NULL on error. top_row and item_index return ERR (the general curses error value) on error. set_current_item and set_top_row return one of the following:
E_OK The routine succeeded.
E_SYSTEM_ERROR System error occurred (see errno).
E_BAD_ARGUMENT Routine detected an incorrect or out-of-range argument.
E_BAD_STATE Routine was called from an initialization or termination function.
E_NOT_CONNECTED No items are connected to the menu.
curses(3), menu(3). MirOS BSD #10-current Printed 19.2.2012
The header file <menu.h> automatically includes the header file <curses.h>.
These routines emulate the System V menu library. They were not supported on Version 7 or BSD versions. The SVr4 menu library documentation specifies the top_row and index_item.
http://mirbsd.mirsolutions.de/htman/sparc/man3/top_row.htm
A simple drawing program that lets you use your keyboard to draw figures on screen, using the turtle graphics module built into Python.
Python, 68 lines
You can experiment with both using this recipe to draw shapes, as well as enhance it to support more drawing operations.
More details and sample output here:
Edited code so you can put the pen up with u and down with d
Program to do drawing using Python turtle graphics.
turtle_drawing.py v0.1
Author: Vasudev Ram
import turtle

# Create and set up screen and turtle.
t = turtle
# May need to tweak dimensions below for your screen.
t.setup(600, 600)
t.Screen()
t.title("Turtle Drawing Program - by Vasudev Ram")
t.showturtle()

# Set movement step and turning angle.
step = 10
angle = 45

def forward():
    '''Move forward step positions.'''
    t.forward(step)

def back():
    '''Move back step positions.'''
    t.back(step)

def left():
    '''Turn left by angle degrees.'''
    t.left(angle)

def right():
    '''Turn right by angle degrees.'''
    t.right(angle)

def home():
    '''Go to turtle home.'''
    t.home()

def clear():
    '''Clear drawing.'''
    t.clear()

def quit():
    '''Quit the program.'''
    t.bye()

def penup():
    '''Lift the pen.'''
    t.penup()

def pendown():
    '''Lower the pen.'''
    t.pendown()

# Bind keys to the drawing operations.
t.onkey(forward, "Up")
t.onkey(left, "Left")
t.onkey(right, "Right")
t.onkey(back, "Down")
t.onkey(home, "h")
t.onkey(home, "H")
t.onkey(clear, "c")
t.onkey(clear, "C")
t.onkey(quit, "q")
t.onkey(quit, "Q")
t.onkey(penup, "u")
t.onkey(penup, "U")
t.onkey(pendown, "d")
t.onkey(pendown, "D")
t.listen()
t.mainloop()
https://code.activestate.com/recipes/580544-simple-drawing-tool-with-python-turtle-graphics/
The big impact of this proposal is that __xxx__ variables defined immediately following the docstring (if present) would cease to become local variables. This of course has the possibility of breaking existing code. But I don't believe that developers routinely use such names for local variables, so I don't believe that there will actually be much broken code should this change be implemented. Easier to accept if we use a more likely variable name for the magic variable.

def f():
    __version__ = 3
    return __version__
f.__version__ += 1
print f()

> What about any of the following:
> def g():
>     if True:
>         __var__ = 4
> print g.__var__

Here __var__ = 4 creates a local variable, because the assignment isn't at the top of the function def. But it probably should generate a warning for giving a magic name to a local variable.

> x = 3
> def h(x):
>     __var__ = x*x
>     return x*x
> print h(2), h.__var__

In __var__ = x*x, x is not in the function's namespace (it's a local variable), so x is undefined. Easier to visualize if we make it __version__ = x*x. Same basic idea for your other examples.

> def p():
>     import os as __os__

__os__ would be a local variable (since this is not a simple assignment, i.e. using an equal sign). __os__ is not a good name for a local variable, so the interpreter should probably generate a warning.

> def q(): # BUG x doesn't get the proper metaclass in 2.3!
>     class __metaclass__(type): pass
>     class x: pass
>     # assert x's metaclass is __metaclass__

This would do whatever it does now.

Paul
https://mail.python.org/pipermail/python-list/2004-August/261431.html
Maybe try putting a 'parentID' in your openejb-jar.xml.
I'm doing this successfully in the same manner.
Thanks,
Kevin
On 6/5/06, David Jencks <david_jencks@yahoo.com> wrote:
>
> I'm having difficulty interpreting most of your stack traces: were
> some on the client side and some on the server side?
>
> In any case, the java:comp/env namespace is only usable within a j2ee
> application or j2ee app client, not a standalone client. Your client
> app should definitely be looking up "ConverterEJB" not "java:comp/env/
> ConverterEJB"
>
> hope this helps a bit :-)
>
> thanks
> david jencks
>
> On Jun 5, 2006, at 7:01 AM, Frank Neubert wrote:
>
> > Here
>
>
http://mail-archives.apache.org/mod_mbox/geronimo-user/200606.mbox/%3C8b77629c0606060057v35a324a9u5350c7d95beee971@mail.gmail.com%3E
______________________________________________________________________________________________
Hello:
I'm trying to create some kind of Event class hierarchy.
I have a generic Event class /Generic, and also two subclasses: /Generic/Specific1 and /Generic/Specific2.
Using an event transform, in a mapping which belongs to /Generic, I set the eventClass to /Generic/Specific1 or /Generic/Specific2, depending on some conditions.
As far as I know (and tested it), the event transforms of the specific classes won't be applied.
What I need is to get applied the event transforms for each specific subclass. Today I have a lot of code repeated in the two specific subclasses and I want to put it in a generic (upper-level) event class, and then add more specific code to the event subclasses.
Is there any way to do this ?
Hope you get the idea.
thx,
Mariano.
_______________________________________________
zenoss-users mailing list
zenoss-users@zenoss.org
______________________________________________________________________________________________
It seems so. I will try to figure out if it's possible to implement this change via a zenpack.
thx again.
______________________________________________________________________________________________
There are some changes being made to Event Transforms for 2.4. We're
working on getting some examples added to the documentation and I
don't think they're all in the beta yet. One of the changes will be
cascading event transforms. Event transforms are applied down the
event hierarchy, so that on an incoming /Business/Service/Bus event,
the /Business transform would be applied, then the /Service
transform, then the /Bus transform. This sounds like what you want.
Thanks,
Matt Ray
Zenoss Community Manager
community.zenoss.com
mray@zenoss.com
I'm going to thread necromance this topic to reinforce James's questions, and add another one.
jmp242 wrote (paraphrased):
If, for instance, I change via transform from /foo/bar/baz (Transform here) to /foo/baz/test, what happens?
1. Transforms applied in order: /foo, /foo/bar, /foo/bar/baz, /foo, /foo/baz, /foo/baz/test
2. Transforms applied in order: /foo, /foo/bar, /foo/bar/baz, /foo/baz/test
3. Transforms applied in order: /foo, /foo/bar, /foo/bar/baz
Is it possible at all, assuming the current behavior is (3), to force the destination event class transform to be applied to the event? Is there an unpublished function that can be called from the transform after the class is changed/committed that will re-run the transform using the new mapping?
I kind of answered my own question here. This allows you to fire the transform on the destination class.
evt.eventClass = '/Win/EventLog'
evt.severity = 5
import logging
log = logging.getLogger("zen.Events")
dest = dmd.getObjByPath(''.join(['Events',evt.eventClass]))
if dest.transform:
try:
exec(dest.transform,{'evt':evt})
except Exception, ex:
log.error("Error processing transform on EventClass %s (%s)",dest.getPrimaryId(), ex)
http://community.zenoss.org/message/52193?tstart=0
The idea is straightforward: with the input string s, we generate all possible states by removing one ( or ), and check if they are valid. If we find valid ones on the current level, we put them in the final result list and we are done; otherwise, we add them to a queue and carry on to the next level.
The good thing about using BFS is that we can guarantee that the number of parentheses removed is minimal; also, no recursive calls are needed in BFS.
Thanks to @peisi, we don't need a stack to check for valid parentheses.
Time complexity:
In BFS we handle the states level by level. In the worst case, we need to handle all the levels, so we can analyze the time complexity level by level and add them up to get the final complexity.
On the first level, there's only one string, which is the input string s; say its length is n. To check whether it's valid, we need O(n) time. On the second level, we remove one ( or ) from the first level, so there are C(n, n-1) new strings, each with n-1 characters, and for each string we need to check whether it's valid or not, so the total time complexity on this level is (n-1) x C(n, n-1). On the third level, the total time complexity is (n-2) x C(n, n-2), and so on...
Finally we have this formula: T(n) = n x C(n, n) + (n-1) x C(n, n-1) + ... + 1 x C(n, 1) = n x 2^(n-1).
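The closing step follows from a standard binomial identity; a quick check (my own derivation, not part of the original post):

```latex
T(n) = \sum_{k=1}^{n} k\binom{n}{k}
     = \sum_{k=1}^{n} n\binom{n-1}{k-1}
     = n \sum_{j=0}^{n-1} \binom{n-1}{j}
     = n \cdot 2^{n-1}
```

using the absorption identity k·C(n, k) = n·C(n-1, k-1) and the fact that the binomial coefficients of order n-1 sum to 2^(n-1).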
Following is the Java solution:
import java.util.*;

public class Solution {
    public List<String> removeInvalidParentheses(String s) {
        List<String> res = new ArrayList<>();
        // sanity check
        if (s == null) return res;
        Set<String> visited = new HashSet<>();
        Queue<String> queue = new LinkedList<>();
        // initialize
        queue.add(s);
        visited.add(s);
        boolean found = false;
        while (!queue.isEmpty()) {
            s = queue.poll();
            if (isValid(s)) {
                // found an answer, add to the result
                res.add(s);
                found = true;
            }
            if (found) continue;
            // generate all possible states
            for (int i = 0; i < s.length(); i++) {
                // we only try to remove a left or right paren
                if (s.charAt(i) != '(' && s.charAt(i) != ')') continue;
                String t = s.substring(0, i) + s.substring(i + 1);
                if (!visited.contains(t)) {
                    // for each state, if it's not visited, add it to the queue
                    queue.add(t);
                    visited.add(t);
                }
            }
        }
        return res;
    }

    // helper function checks if string s contains valid parentheses
    boolean isValid(String s) {
        int count = 0;
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c == '(') count++;
            if (c == ')' && count-- == 0) return false;
        }
        return count == 0;
    }
}
Greetings. I think your solution is really awesome. The usage of a "visited" set speeds up the program a lot, especially when dealing with large test cases.
I think you don't need a stack for the isValid function:
boolean isValid(String s) {
    int count = 0;
    for (int i = 0; i < s.length(); i++) {
        char c = s.charAt(i);
        if (c == '(') count++;
        if (c == ')') {
            if (count == 0) return false;
            count--;
        }
    }
    return count == 0;
}
Another improvement is that you can add the index of the removal to the queue as well. That is, instead of adding the new string s, you add a tuple (s, i).
Then when you generate the strings for the next level, you start from the index you polled.
This can save you 50% of the time.
while (!queue.isEmpty()) {
    int size = queue.size();
    for (int i = 0; i < size; i++) {
        String cur = queue.remove();
        // valid
        if (isValid(cur)) {
            reached = true;
            res.add(cur);
        }
        // not valid, then delete
        if (!reached) {
            for (int j = 0; j < cur.length(); j++) {
                if (cur.charAt(j) != '(' && cur.charAt(j) != ')') continue;
                String newStr = cur.substring(0, j) + cur.substring(j + 1);
                if (!visited.contains(newStr)) {
                    queue.add(newStr);
                    visited.add(newStr);
                }
            }
        }
    }
    if (reached) break;
}
Great idea! I think it is clearer if you process all candidate strings of the same length at one time.
Thanks iBella! When I do BFS, I like to use the null technique to help me control the levels; sometimes it's redundant (like in this problem). I have updated my code to make it even cleaner.
Hi, I think there is something wrong with your answer.
Whenever you got a solution,you should store the length of this solution.
And after that, when you get a string from your queue, if the length of it is smaller than the length of the solution, you should immediately break the while loop.
ex. input="()(()"
Your code will get the answer ["()()","()",""] if you don't set the length of solution to be 4 when you get "()()".
By the way, can you analyze the running time of your code? Thanks.
OK, I posted my solution. By using this additional information, we can avoid generating duplicate strings altogether. It runs in 16 ms without the Set.
I didn't find a nice way to use a tuple in Java, so I used an inner class for it.
@SenyangZ Hi, there is no such problem with this code. It actually generates only ["()()"] on the given input "()((". You may find it weird since the code does not explicitly record the maximum length of the valid parentheses. However, it does so implicitly. For a string of parentheses to be valid, its number of parentheses should be even. And at any time, strings in queue will only differ in length by 1 (this is the implicit control). When we find "()()" to be valid, both "()" and "" have not been added to queue yet, and all the shorter strings are of length 3, which must be invalid.
Thank you @jianchao.li.fighter! I love your explanation!
https://discuss.leetcode.com/topic/28827/share-my-java-bfs-solution
Over the past few weeks I've been learning React, and now it's time to show what I've learned. I decided to make a recipe manager similar to the one that I previously built using vanilla JavaScript. While it was relatively easy to make this transition, I definitely encountered some hiccups that required a little more troubleshooting than I anticipated.
After setting up my project and building my components, I made sure they would render by lining them up in my App and checking them in my browser. I then wrote a useEffect to fetch the recipes from my JSON database and stored that information in state so that any recipe can be rendered using a single fetch. Next, I started to distribute props and added Routes to the components. Right away I knew there was something wrong. The issue I encountered stemmed from using incompatible versions of React and React Router. Of course I figured out where the problem was after I wrote all of my Routes! Because I had installed v18 of React, I had to update my React Router from v5 to v6 and update all of the syntax around my Routes. Ultimately, updating the syntax didn't take very long and in the long run the new version looks much cleaner, so I'm actually glad I ran into this issue and learned a new and updated way of Routing.
From there, I was able to build out a home page using Semantic UI Cards. Each card shows a picture, the recipe title and whether or not the recipe is one of my favorites. Clicking on a recipe title will take you to the recipe's details page, where ingredients, instructions and any comments are displayed. Here is where you can add a comment or favorite/unfavorite a recipe.
This is where I ran into a common issue when using state in React. When updating state within a function, I would often try to utilize the updated state before the function finished and the changes were actually applied within the component.
For example, instead of changing whether or not a recipe was a favorite just by setting the "favorite" state:
function handleFavorite() {
  const newFavorite = !favorite;
  setFavorite(newFavorite);
};
I used a callback function within my setState hook:
function handleFavorite() {
  setFavorite(function (favorite) {
    const newFavorite = !favorite;
    return newFavorite;
  })
};
I then paired this function with a useEffect hook that is called whenever the "favorite" state is changed. Within the useEffect, the new "favorite" status gets PATCHed to the recipe database to make sure it is always current. At this point, the "recipes" state that is stored is no longer current, so I have the useEffect also fetch the updated database to store in the "recipes" state.
useEffect(() => {
  fetch(`{recipe.id}`, {  // the full database URL was elided in the original
    method: "PATCH",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ "favorite": favorite })
  })
    // wrap the follow-up fetch in a callback so it runs after the PATCH
    // resolves; passing fetch(...) directly would fire it immediately
    .then(() => fetch(``)  // URL elided in the original
      .then(r => r.json())
      .then(recipes => {
        setRecipes(recipes);
      }))
}, [favorite])
I used a similar process for the comments section, so that when a comment is submitted to the recipe, it updates the state of the "comments" array, which triggers a fetch within a useEffect that patches the new array to the database and then fetches the recipes to save into the "recipes" state to keep current with the database.
To set up all of these inputs as controlled inputs, I looked at my database and created a newRecipe state that had all of the keys that I wanted to include in the form. This includes things like the name of the recipe, the author, website, a photo URL, etc... When I got to the keys whose values were arrays, I simply included an empty array or, in the case of the comments, the value was assigned as another state. Take a look:
const [newRecipe, setNewRecipe] = useState({
  img: "",
  video: "",
  name: "",
  source: "",
  author: "",
  mealtype: "",
  preptime: "",
  cooktime: "",
  servings: "",
  ingredients: [],
  instructions: [],
  comments: commArr
});
From here, I made all of the single string inputs controlled by one function to update the values for those items in the newRecipe state. I had to be a little creative with the ingredients and instructions, because recipes don't have a set number of ingredients or instructions to include in a form like this. I couldn't just throw in 5 inputs for ingredients and 5 inputs for instructions. I wanted to be able to click a button and add a new input that would then be included in the new recipe's state. To do this, I wrote a function that would update a state array that simply had numbers in it that would act as my keys later on.
const [numIng, setNumIng] = useState([0, 1, 2, 3, 4]);

function handleAddIng() {
  const newNum = numIng.length;
  // note: unlike this.setState, the useState setter takes no callback argument
  setNumIng([...numIng, newNum]);
};
Once I had that functioning properly I took that state array and mapped it to render one input for each value in the array, using the value as a key. Once the state array updates with a new number, a new input is added to the page with a proper key, className and onChange function for the input to be controlled.
{numIng.map((num) => {
  return (
    // the key belongs on the outermost element returned from map
    <div key={num}>
      <input
        type="text"
        className="add-ingredient"
        onChange={handleIngredients}
      />
    </div>
  )
})}
Then, to make sure these inputs are also controlled and are being stored in the new recipe state object, I wrote a function to keep the array updated. I had to keep in mind that retrieving elements this way gives an HTML collection, and not an array that I can iterate through in the way I wanted, so I used a spread operator to convert the data from a collection to an array that I could use. I then filter out any of the inputs that don't have any text in them and store the resulting array in the new recipe state object.
function handleIngredients() {
  const ingElements = document.getElementsByClassName("add-ingredient");
  const convIng = [...ingElements];
  const newIngArr = convIng
    .filter((ing) => ing.value.length > 0)
    .map((ing) => ing.value);
  console.log(newIngArr);
  setNewRecipe({ ...newRecipe, ingredients: newIngArr });
}
Recipe Manager 2.0 is now functioning the way that I want it to - at least for now. In the future I plan on adding functionality that will display recipes based on an ingredient search, rather than only searching by recipe name. I would also like to filter by tags and include embedded videos from the recipe's author if one is available.
https://dev.to/lorenmichael/recipe-manager-20-react-52c4
Extending Microsoft Dynamics AX 2012 Cookbook — Save 50%
A practical guide to extending and maximizing the potential of Dynamics AX using common Microsoft technologies with this book and ebook.
Creating a Dynamics AX web service
There are a number of web services that have already been created and deployed with the standard Dynamics AX install. There are a lot more services that you can publish as web services through the AOT in just a matter of minutes, allowing you to access and update almost any area of Dynamics AX from other applications.
In this recipe, we will show you how you can create new web services from within the Dynamics AX development environment.
How to do it...
To create a new web service within Dynamics AX, follow these steps:
- From within the AOT explorer, create a new project for the web service.
- From inside the project, right-click on the project name and from the New submenu, select Service Group to create a new web service group.
- Rename your service group to be something a little more appropriate. In this case, we are creating a sales order web service; so we will rename it as SalesOrderService.
- From the AOT browser, open up the Services group, find the service that you want to publish as a web service, and then drag it over onto your new project service group. In this recipe, we selected the SalesSalesOrderService , which has all of the logic to create sales orders.
You can continue adding as many services into your service group as you like.
- When you have finished adding services, right-click on the service group that you created and select the Deploy Service Group menu item. This will process the service group and create a web service for you.
How it works...
To see the web service that was created, open the Inbound ports option from the Services and Application Integration Framework folder of the Setup group in the System administration area page.
Your new service should show up there. If you look at the WSDL URI: field for the inbound port, you will find the URL for the web service itself.
If you browse to that location, you will see the schema for the web service that you will use for other applications to call, in order to update Dynamics AX. For us it's not that user-friendly, but for applications, this is all they need to know.
Creating a web service wrapper
The web services that Dynamics AX creates seem to work best for programming interfaces, and sometimes programs have problems with the format of the web service call. InfoPath is one of these programs. So, we need to wrap the Dynamics AX service within a web service wrapper that InfoPath is able to use. This is not as complicated as it sounds though, and you can quickly do this with Visual Studio.
In this recipe, we will show how you can create a web service wrapper through Microsoft Visual Studio that we can use from within InfoPath.
Getting ready
In order to do this you need to have a copy of Visual Studio. We will be using Visual Studio 2010 in our example, but you should be able to create similar web service wrappers using earlier versions as well.
How to do it...
To create a web service wrapper, follow these steps:
- From within Visual Studio, create a new web project and from the template library, select the ASP.NET Web Service Application template.
- This will create your web service shell that will be modified to call the Dynamics AX web service. To link the Dynamics AX web service to our project so that we are able to call it, right-click on the References folder in Solution Explorer and select the Add Service Reference... menu item.
- From within the Add Service Reference dialog box, paste the URL for your Dynamics AX web service and click on the Go button. This will allow Visual Studio to discover the web service, and you will be able to see all of the operations that are exposed.
- Change the name in the Namespace: field to match the web service name so that it will be easier to remember in the later steps, and then click on the OK button.
When you return to your web service project, you will be able to see the web service reference in the Service References group within Solution Explorer .
- Within the header of the web service code, add an entry for your service reference as follows:
using AXSalesOrderService.SalesOrderServiceReference;
- Now, replace the HelloWorld web method code that is added to the web service by default with the following code that will use the web service to create a new sales order:
[WebMethod]
public string NewSalesOrder(
    string company,
    string language,
    string custAccount,
    string PONumber,
    string itemID,
    decimal salesQty,
    string salesUnit )
{
    SalesOrderServiceClient client = new SalesOrderServiceClient();
    AxdSalesOrder salesOrder = new AxdSalesOrder();
    AxdEntity_SalesTable salesTable = new AxdEntity_SalesTable();
    AxdEntity_SalesLine salesLine = new AxdEntity_SalesLine();
    CallContext callContext = new CallContext();
    EntityKey[] keys;
    EntityKey key;
    KeyField fld;

    salesTable.CustAccount = custAccount;
    salesTable.ReceiptDateRequested = new DateTime(2013, 03, 20);

    salesLine.ItemId = itemID;
    salesLine.SalesQty = salesQty;
    salesLine.SalesUnit = salesUnit;

    salesTable.SalesLine = new AxdEntity_SalesLine[] { salesLine };
    salesTable.PurchOrderFormNum = PONumber;
    salesTable.SalesType = AxdEnum_SalesType.Sales;

    salesOrder.SalesTable = new AxdEntity_SalesTable[] { salesTable };

    callContext.Company = company;
    callContext.Language = language;

    keys = client.create(callContext, salesOrder);
    key = keys[0];
    fld = key.KeyData[0];

    return fld.ToString();
}
You can see this in the following screenshot:
- Then, compile your web service.
How it works...
When you compile your program and run it, you will be taken to the web interface for the new web service showing all of the methods that you've exposed.
If you click on the NewSalesOrder web service call, you will be able to see all the parameters that are required to perform the web service.
You can test your web service by filling in the parameters and then clicking on Invoke . This will perform the web service and return with the results of the call.
With a little bit of extra code, you can have the web service return back the order number as well.
To double-check if that everything worked, you can open up Dynamics AX and you should be able to see the new sales order.
Using a Dynamics AX web service in an InfoPath form
InfoPath allows you to quickly create data entry forms that can be saved locally, to SharePoint, and also update data in databases. Additionally, it is able to connect to web services and send and receive data through that channel as well. So once we have a web service wrapper built that links to Dynamics AX, we can create a form that will send information to it in order to add and update data.
In this recipe, we will show how you can create an InfoPath form that uses a Dynamics AX web service wrapper to publish information.
Getting ready
For this recipe, you need to make sure that you have InfoPath installed, since it is usually part of Office Professional Plus or Office 365. Just check that it shows up within the Microsoft Office program group.
How to do it...
To use a web service within an InfoPath form, follow these steps:
- Within the InfoPath designer, create a new form and select the Web Service template.
- This will automatically open up the Data Connection Wizard . Select the Submit data option, since our example will be sending information to the web service to update Dynamics AX, and then click on Next .
- When asked for the web service, type in the URL for the WSDL (Web Services Description Language ) of your web service wrapper and click on Next .
If you don't know how to find the WSDL, just open up the web service that you are calling from InfoPath, and at the top of the page will be a link for Service Description . If you click on that, it will take you to the WSDL page.
The URL for this page is the one that you will want to paste into the Web Service: field on the Data Connection Wizard .
- If your web service has multiple operations published against it, you will see all of them listed in the next step in the wizard. Select the web service operation that you want your InfoPath form to use when submitting data and click on Next .
- Finally, give your data connection a name and click on the Finish button.
- Once the data connection is created, the web service parameters will show up as fields within the Fields browser. You can add them to the form individually by dragging and dropping them over, or you can just grab the whole group of fields and drag them onto the form.
- To default values in particular fields so that the user doesn't have to type in the values every time, select the Properties menu item after right-clicking on the field in the Fields browser. This will open up the Field or Group Properties window and you can specify the default value in the Value: field.
In our example, we will default the Company: and Language: fields.
How it works...
To see the form in action, click on the Preview button on the Home ribbon bar.
This will open up the form in edit mode and you can fill in the remaining fields. To send the data to Dynamics AX, click on the Submit button in the Home ribbon bar.
Now, you should be able to see a new order within Dynamics AX that was created by your new InfoPath form.
Creating custom OData queries to retrieve Dynamics AX data
Dynamics AX has a more generic web service call feature called OData Query that allows you to query tables and return them through a URL. This is useful because they can be used as read-only data sources for other programs such as InfoPath.
In this recipe, we will show how you can register your own custom query within Dynamics AX, and then access it through the OData Query web service.
How to do it...
To create an OData query, follow these steps:
- To access a query through the OData Query feature, we open the Document data sources form from the Organization Administration area page within the Document Management folder of the Setup group.
- To create a new query, click on the New button in the menu bar.
- The Document data sources reference the queries that are built within AOT. Usually, you don't have to build a whole new query because you can use one of the existing ones as a basis. Select a module that you would like the data source to be associated with, and then select Custom Query for the data source type. If you just want to query the table with no filter, you could select the Query Reference option, but we want to filter the data before it's sent to us.
- In the Data source name field, select the query that you want to publish as an OData Query. In this recipe we want a list of customers; so the CustTableListPage works for us.
- On selecting the data source name, AX will open up a query panel, where we can add whatever filters we want, and then we can click on OK to save.
- You may want to change the data source name to help you recognize what it is associated with, and then maybe add a description.
- Finally, to enable the document data source to be used in the queries, select the Activated checkbox.
For the following example, we also need to create a second document data source that queries the EcoResProductListPage , to return back all of the products in the database.
How it works...

Using OData queries as data sources in InfoPath forms

InfoPath allows fields to be populated with dynamic data coming from static lists, databases, and also web data sources, so that users do not have to remember field values such as part codes and customer numbers. Since you are able to query Dynamics AX data through web queries, we can use these queries to create dynamic lookups in our forms.
In this recipe, we will show how you can turn text fields into drop-down lists that use OData queries as a data source.
How to do it...
To use an OData query as a data source for a field, follow these steps:
- We need to first define the data source. To do this, select the From XML File option in the From Other Sources menu in the Get External Data group of the Data ribbon bar in the form designer.
- When the Data Connection Wizard pops up, paste the URL for the OData query that you want to use as a data source and click on Next .
- To store the data source with the form, select the Include… option from the data source location section and click on Next .
- Finally, give your data source a name and click on Finish .
- To use the data source within a field, first you need to change the field's control type to one that will show the data. To do this, right-click on the field and select the Drop-Down List Box option from the Change Control submenu.
- Once the control has been changed, right-click on the field again and select the Drop-Down List Properties option.
- Change the List box choice from Enter choice manually to Get choice from an external data source and from the Data source dropdown, you will be able to find the XML data source that you just created.
- To specify what data is shown in the drop-down box, click on the tree navigation icon to the right of the Entries field. When the XML tree navigator is displayed, find the content node and select it. Selecting the content node will make our drop-down box filter out the metadata information in the XML file that is returned from the OData query, so that we can see all the real records.
- Next, click on the tree navigation icon to the right of the Value: and Display name: fields and select the fields that you want to store in the form, and also to be displayed in the dropdown. This will open up the XML tree navigator again and you can select any of the fields from the query.
- Finally, you may want to select the Show only entries with unique display names checkbox to filter out any duplicates, and then click on OK .
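Outside InfoPath, the same kind of feed can be consumed from code. Here is a minimal sketch of pulling records out of an OData-style Atom feed; the XML below is a fabricated sample for illustration, not actual Dynamics AX output, and in practice the text would come from the document data source's URL:

```python
# Sketch: reading records out of an OData/Atom-style feed like the one
# the Document data sources publish. The namespaces are the usual
# ADO.NET Data Services ones; the feed itself is made up.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
D = "{http://schemas.microsoft.com/ado/2007/08/dataservices}"
M = "{http://schemas.microsoft.com/ado/2007/08/dataservices/metadata}"

SAMPLE_FEED = """<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
      xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
  <entry>
    <content type="application/xml">
      <m:properties>
        <d:AccountNum>C0001</d:AccountNum>
        <d:Name>Contoso Retail</d:Name>
      </m:properties>
    </content>
  </entry>
</feed>"""

def parse_feed(xml_text):
    """Return one dict of field name -> value per <entry> in the feed."""
    root = ET.fromstring(xml_text)
    records = []
    for entry in root.iter(ATOM + "entry"):
        # the real data sits under content/m:properties, which is why the
        # InfoPath steps above navigate to the "content" node
        props = entry.find(ATOM + "content/" + M + "properties")
        records.append({el.tag[len(D):]: el.text for el in props})
    return records

records = parse_feed(SAMPLE_FEED)
print(records)  # → [{'AccountNum': 'C0001', 'Name': 'Contoso Retail'}]
```

This mirrors what the InfoPath dropdown does when you point it at the content node and pick Value and Display name fields.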
How it works...

By changing fields to list selections and adding a submit image, we can create a kiosk form that is populated from Dynamics AX, and that also creates sales orders based on the selections.
Summary
InfoPath is an incredibly useful tool for creating forms and gathering information. When you use it in conjunction with Dynamics AX and just a little bit of coding, it becomes even more useful because you are able to create forms that feed back into the database.
In addition to what we showed you in this Article, you can also:
- Publish your forms to a SharePoint Forms repository allowing users to access the latest form templates from a centralized location. If you change the template on SharePoint, the users' local copies will also be upgraded ensuring that they always have the latest version.
- Host the InfoPath forms on a SharePoint site, allowing users to fill in the forms without even having InfoPath installed. This allows you to create forms that customers, vendors, or employees could fill out that could update Dynamics AX.
- Capture signatures through pen-based input devices such as tablets and Surface devices. The signatures can be stored as JPEG files and even posted to the Dynamics AX attachments if you are clever enough.
- Publish the InfoPath forms to SharePoint rather than Dynamics AX, while still indexing the document against the key Dynamics AX fields. These document libraries could be linked to records just like the traditional file document libraries that were shown in the earlier Articles.
Other ideas on where you may want to capture information through InfoPath forms could include:
- Sales people using them to capture store survey information. Pictures from Surface devices could be added to the InfoPath form data as image attachments.
- Logging of quality issues as Cases within Dynamics AX by mobile users without having to log in to be tethered to a normal PC.
- Capturing lead and prospect information through a simple table-based form.
- Simple inquires such as customer details through web-based forms.
Who would have guessed that such an overlooked product could be so useful!
About the Author:
Murray Fife.
http://www.packtpub.com/article/web-services-and-forms
On Tuesday 07 February 2006 22:17, Jan-Henrik Haukeland wrote:
> On 7. feb. 2006, at 19.43, Philipp Berndt wrote:
> > Note that checking for the process id is *NOT* enough, because the
> > service may have to initialize (spawn other processes, claim
> > resources etc.).
>
> This is exactly the problem: waiting for a process or pid file to be
> written is not enough, since this is typically done at the start and
> before other initializing tasks. So what should we wait for? If you
> have a good _general_ suggestion I'm all ears.
>
> If it's a socket based server, such as apache or mysql, we could wait
> for it to pass a connection w/protocol test. That should work since we
> then know it is initialized and the processing machinery works.
> However, this is not the case for many servers started by monit and
> hence not general.

You are perfectly right: there is NO GENERAL WAY to determine whether the service is ready or not. That's why it has to be done in the specific start script for that service. (I tried to find an explicit statement to this effect in the Linux Standard Base Specification and only found this, which remains a bit fuzzy: "When an init script is run with a start argument, the boot facility or facilities specified by the Provides keyword shall be deemed present and hence init scripts which require those boot facilities should be started later.")

I still believe it means this:

> > Only after the start script has terminated you can be sure the
> > service is ready and may be used by dependent services.
>
> Nope, if the start script calls 'exec' it will never return

(I meant init.d scripts, which always return.)

> or if it just does a fork ala 'program&' we are still back to the
> problem situation described above.

Using "&" in init scripts is indeed problematic. Some daemons can be started with some "-d" switch to fork themselves (so you don't need &) and may be well-behaved enough to return only when the service is initialized and ready.
Other init scripts may have to be adapted. Lots of people do things like

foo &
sleep 1

which
- works 99 times out of 100, but
- is not reliable (especially if the system is under heavy load), and
- is slower than necessary (if foo usually only takes 0.1s to start up).

To solve the problem I have written (for my own needs) some semi-generic tools to check for a service, which I now use in my init scripts.
- One is a program that tries to connect to some specified TCP port (quite generic).
- Another one tries to contact a CORBA servant registered in a CORBA name service (more specific).
- A third one waits until a line appears in a log file (currently very specific to tomcat).
- ...

All of them only return when they succeed (or with an error code in case of a timeout). Another tool is a killproc which takes a pid from a pidfile, kills the process, waits for its termination, optionally does a kill -9 on timeout, and removes the pidfile. They are meant as a toolkit for writing well-behaved init scripts. I will probably release them on sourceforge once I have done some cleaning up and removed some build-dependencies.

BTW, a little feature request :-) it would be nice if monit could optionally use the init scripts' status actions to monitor a service. Rationale: some services may again require very specific checks that should be located in the service's specific init script.

Best regards,
Philipp
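As a concrete example of the first of those tools, here is a minimal sketch of a wait-for-a-TCP-port helper (my own Python illustration, not the actual tool described in the post):

```python
# Minimal "wait until the service really accepts connections" helper,
# along the lines of the TCP-port checker described above. An init
# script can call this instead of the unreliable 'foo & sleep 1' pattern.
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=0.2):
    """Poll until (host, port) accepts a TCP connection; False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True   # handshake completed: the service is listening
        except OSError:
            time.sleep(interval)  # not ready yet, retry
    return False
```

An init script would start the daemon, then call this helper and report success only once it returns, so dependent services are not started too early.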
|
http://lists.gnu.org/archive/html/monit-general/2006-02/msg00006.html
|
CC-MAIN-2015-22
|
refinedweb
| 601
| 68.6
|
04 April 2012 12:42 [Source: ICIS news]
SINGAPORE (ICIS)--Saudi Polyolefins Co (SPC) is in the process of ramping up its polypropylene (PP) line in
One of the two reactors on the line was shut because of technical issues, with production of about 2,000 tonnes of PP material lost, the source said at the sidelines of the third Gulf Petrochemicals and Chemicals Association (GPCA) Plastics Summit, which runs to Thursday.
The affected PP line and a second PP line at the same site have a combined nameplate capacity of 720,000 tonnes/year, he said.
SPC is a joint venture between the Saudi National Industrialisation Co (TASNEE), which holds a 75% stake, and Netherlands-based LyondellBasell, which holds the remaining 25%.
|
http://www.icis.com/Articles/2012/04/04/9547841/saudis-spc-ramps-up-pp-facility-after-brief-outage.html
|
CC-MAIN-2014-49
|
refinedweb
| 124
| 50.2
|
An easy, consistent way to override CSS modules in React components.
CSS Modules
CSS modules are fantastic because they scope your CSS. No longer does everything have to live in the global namespace.
A natural boundary to scope CSS to is within a web component. A React component can have a corresponding
*.css file to accompany it with all the related styles, and it feels very natural.
Overriding Styles
Of course, as soon as you have boxed-up some functionality – in this case some markup and styles inside a component with accompanying css – you’re going to have occasion to open the box and customize it.
Because the scoping of CSS module selectors happens by hashing them at build time, this can be challenging to predictably override.
If you use your CSS module like this in this theoretical
List component:
import css from './list.css'

export default props =>
  <ul className={css.list}>
    <li className={css.listItem}>item!</li>
  </ul>
You’re going to be hard-pressed to override this from the outside, or client side, of this module. This is because you’ve coupled your component directly to this
css module-scoped variable without any way to override it from the outside.
React-Styleable
react-styleable aims to improve this by providing an easy, consistent way to override CSS modules.
Instead of using the
css variable directly, you would pass it to the higher-order component provided by
react-styleable and then access the css selectors via
props.css:
import styleable from 'react-styleable'
import css from './list.css'

export default styleable(css)(props =>
  <ul className={props.css.list}>
    <li className={props.css.listItem}>item!</li>
  </ul>
)
Overriding Styles
Because you are now accessing the CSS via the props (
props.css), you have an opportunity to change the CSS via props. Props, after all, are React’s way of passing parameters into your component.
The
styleable higher-order component will merge the
props.css object with overriding styles as needed.
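Conceptually, that merge can be as simple as a shallow object merge in which override selectors win. The snippet below is only an illustrative sketch, not react-styleable's actual implementation:

```javascript
// Illustrative sketch: merge a component's default CSS-module object
// with overrides passed via props. Selector names map to hashed class
// names produced at build time.
function mergeCss(defaultCss, overrideCss) {
  // Shallow merge: any selector present in the override replaces the
  // default; untouched selectors fall through unchanged.
  return Object.assign({}, defaultCss, overrideCss || {})
}

const defaultCss = { list: 'list__1a2b', listItem: 'listItem__3c4d' }
const darkCss = { list: 'list__dark9z' }

const merged = mergeCss(defaultCss, darkCss)
console.log(merged.list)     // the dark override wins
console.log(merged.listItem) // defaults survive for other selectors
```

This is why both a full override (pass a whole stylesheet) and a partial override (pass only the selectors you care about) fall out of the same mechanism.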
Let’s say that you created a custom stylesheet, where the list was originally white now should be a darker color.
The original stylesheet snippet (
list.css) looks like:
.list { background: #fff; }
And the overriding stylesheet snippet (
list-dark.css) looks like:
.list { background: #555555; }
When you want to use the
List component but override the styles, you could pass your new stylesheet:
import css from './list-dark.css'
import List from './list'

<List css={css} />
In this usage of
List, this
.list selector would get the dark background.
Note that if there were other selectors in the overriding css file that they too would be overridden since this is a full override of the stylesheet.
For a partial override, we could be more precise, only overriding the
.list selector specifically:
import css from './list-dark.css'
import List from './list'

<List css={{ list: css.list }} />
Imagine if we had an app or a component ecosystem where all components were consistent in their usage of such a mechanism. It’d be easier to override styles – we’d have a set of more flexible components that could handle the custom styling scenarios that we always seem to run into.
There are also other methods for overriding CSS modules or component styles. What are some of the ways that have worked best for you?
|
https://jaketrent.com/post/override-css-modules-react-styleable
|
CC-MAIN-2021-21
|
refinedweb
| 552
| 67.45
|
In larger software applications, developers and reviewers must focus on certain points to make code easy to read, easy to understand, and easy to maintain, as well as reliable, secure and scalable. Developers should also follow software quality assurance practices by using standard application design architectures, design principles and design patterns. That will help other developers understand the code easily for future development and enhancements.
I have divided the checklist below into two levels; both should be followed by Java developers and code reviewers to ensure code quality.
- Basic Code Review Check List
- Detailed Code Review Check List
Basic Code Review Check List
During an initial code review, the first impression comes from the points below; they help the reviewer understand the code quickly and prepare for a detailed review.
- Is the code clean and easy to understand?
- Is the code following coding guidelines and standards?
- Are functions or classes too big? If so, do they also have too many responsibilities?
- Is the same code duplicated?
- Can I debug/unit test it easily to find the root cause of an issue/defect?
Detailed Code Review Check List
The following aspects give a detailed idea of what to consider while reviewing code. Some of the points are not easy for a beginner reviewer to identify in code, because spotting them comes from experience and from working on multiple applications.
1. Clean Code
Code should be easy to read and understand. Some organizations follow standards like the one below to keep code maintainable.
10-50-500 Rule
This very simple rule helps avoid monolithic and spaghetti code and keeps the codebase maintainable:
- 10: No package can have more than 10 classes.
- 50: No method can have more than 50 lines of code.
- 500: No class can have more than 500 lines of code.
Code Formatting Style
Use a standard code template/code style and share it with all developers on the team, so that every developer's code is aligned consistently and merging or checking out code from the repository does not create conflicts. Keep the points below in mind while writing code:
- Use alignment (left margins) and white space so that the starting and ending points of blocks are easily understandable.
- Code should fit on a standard 14-inch laptop screen, so there is no need to scroll horizontally to view it.
- During code review, remove commented-out code; it can be recovered from Git/SVN repositories if required.
We can also use the CheckStyle tool with Maven to reduce this manual effort, and share the configuration with all developers on the team so everyone follows the same standard.
Naming convention
Naming conventions for packages, classes, methods and variables make code easily understandable:
- Use a proper naming convention (PascalCase, camelCase, etc.) when choosing names for variables, classes and methods.
- Use descriptive and meaningful variable, method and class names so that you don't rely too much on comments.
For Ex: Method: calculateTax(BigDecimal amount); Variable: totalAmount; Class: CustomerAccount.java
Apart from above points there are some more which will make our code more clean and easily readable.
- Classes and functions should be small and focused on doing one thing. If there is duplicate code in multiple functions, extract it into a new method and reuse it.
- Functions should not take too many parameters. If you need to pass several values from the same object, pass a reference to the object instead.
- Declare variables with the smallest possible scope.
- Don't create or keep variables that are never used again.
2. Architecture/Design
While implementing functionality, keep in mind the OOPS concepts (A PIE), the SOLID design principles, Don't Repeat Yourself (DRY) and Keep It Simple (KISS). These concepts and principles accomplish "low coupling" and "high cohesion".
OOPS Concept (A PIE)
- Abstraction
- Polymorphism
- Inheritance
- Encapsulation
SOLID Class Design Principles
- Single Responsibility Principle: A class should have one and only one responsibility. If a class performs more than one task, it leads to confusion.
- Open & Closed Principle: Developers should focus on extending software entities rather than modifying them.
- Liskov Substitution Principle: It should be possible to substitute a derived class wherever its base class is expected.
- Interface Segregation Principle: The same idea as the Single Responsibility Principle, but applied to interfaces. Each interface should be responsible for a specific task, and clients should not be forced to depend on methods they don't need.
- Dependency Inversion Principle: Depend upon abstractions, not on concretions. Each module should be separated from the others by an abstract layer which binds them together.
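The first of these principles can be sketched in a few lines. The class and method names below are illustrative only, not taken from any real codebase: tax calculation and report formatting live in separate classes, so each class has exactly one reason to change:

```java
import java.math.BigDecimal;

// Single Responsibility sketch: one class computes, another presents.
class TaxCalculator {
    // Only responsibility: compute tax for an amount (flat 10% here).
    static BigDecimal calculateTax(BigDecimal amount) {
        return amount.multiply(new BigDecimal("0.10"));
    }
}

class TaxReportFormatter {
    // Only responsibility: format the result for display.
    static String format(BigDecimal amount, BigDecimal tax) {
        return "amount=" + amount + " tax=" + tax;
    }
}

public class SrpDemo {
    public static void main(String[] args) {
        BigDecimal amount = new BigDecimal("100");
        BigDecimal tax = TaxCalculator.calculateTax(amount);
        System.out.println(TaxReportFormatter.format(amount, tax));
    }
}
```

Changing the tax rule never touches the formatting code, and vice versa.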
Design Patterns
Design patterns provide standard, tried-and-tested approaches to handling certain cases. They provide standard terminology which makes it easier for developers around the globe to collaborate and communicate with each other.
3. Documentation/Comments
- There should be an explanation for any code that is commented out.
- All classes and methods should contain a descriptive JavaDoc comment.
- All methods should contain brief comments describing unobvious code fragments.
- All class files should contain a copyright header.
- All class files should contain class comments, including author name.
- All methods should contain comments that specify input parameters.
- All methods should contain a comment that specifies possible return values.
- Complex algorithms should be thoroughly commented.
- Comment all variables that are not self-describing.
- Static variables should describe why they are declared static.
- Code that has been optimized or modified to “work around” an issue should be thoroughly commented, so as to avoid confusion and re-introduction of bugs.
- Code that has been “commented out” should be explained or removed.
- Code that needs to be reworked should have a TODO comment and a clear explanation of what needs to be done.
- When in doubt, comment.
4. Logging/Debugging
- Logging levels should be configurable.
- Log every transaction, or at least the ones that require logging.
- Use the appropriate log level for each message. For Ex: ERROR for exceptions.
- Log the execution time of methods to check performance.
Check the link below for more info on logging.
Log4j2 Java Logging Example Tutorial – XML Configuration, Severity Levels, Formatting and Appenders
5. Exception Handling
- Use exceptions as opposed to return codes.
- Code should handle exceptions, not just log them.
- Catching general exceptions is commonly regarded as “bad practice”.
- Some method in the call stack needs to handle the exception, so that we don’t display that exception stacktrace to the end user.
- Exception handling should be consistent throughout the system.
- Don’t ignore or suppress exceptions. Standardize the use of checked and unchecked exceptions. Throw exceptions early and catch them late.
- We need to expand our notion of Exception Handling Conventions.
- When a method returns a reference object, always check for null before using it. For Ex:
Employee employee = Context.getEmployeeService().getEmployee(employeeId);
employee.getAddress().getStreet(); // unsafe if employee or its address is null
- There should be no catch blocks that simply catch an exception and rethrow it, because this exposes the internal behaviour of the system.
6. Security
- Don’t log sensitive data. For Ex: Password, credit card number, CVV etc.
- Don’t throw exception with sensitive information like file paths, server names, host names etc.
- Connect to other systems securely, i.e., use HTTPS instead of HTTP where possible.
- Service methods should have an @Authorize annotation on them.
- Use prepared statements instead of statement to prevent SQL injection attack.
- Release resources (Streams, Connections etc.) to prevent denial of service attack (DoS) and resource leak issues.
- Follow Security best practices for SSL, encryption of session, sensitive data, authentication and authorization etc.
- Passwords should not be stored in the code. In fact, we have adopted a policy in which we store passwords in runtime properties files.
- Clearly document security related information.
7. Performance
- Reuse objects by using caching or the Flyweight design pattern.
- Improper use of SQL queries and joins can impact performance. For more check JDBC Coding Best Practices
- Regular-expression backtracking can impact performance.
- Use appropriate Collections classes as per requirement.
- Inefficient Java code, algorithms and data structures in frequently executed methods can lead to death by a thousand cuts.
- Beware of long-lived objects like ThreadLocal and static variables holding references to lots of short-lived objects.
- Avoid creating unnecessary objects.
- Beware the performance of string concatenation. Use StringBuffer or StringBuilder when repeatedly modifying a String.
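The last point can be shown in a short sketch: concatenating inside a loop allocates a new String on every pass, while StringBuilder mutates a single internal buffer. Both produce the same result, but the allocation behavior differs drastically for large inputs:

```java
// Sketch contrasting String concatenation in a loop with StringBuilder.
public class ConcatDemo {
    static String withConcat(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s += i; // allocates a brand-new String every iteration
        }
        return s;
    }

    static String withBuilder(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append(i); // amortized O(1) append into one buffer
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Same output either way; only the cost differs.
        System.out.println(withBuilder(5)); // 01234
    }
}
```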
8. Concurrency
- Avoid excessive synchronization.
- Write thread-safe code with proper synchronization and use of immutable objects.
- Keep synchronization section small and favor the use of the new concurrency libraries to prevent excessive synchronization.
- Avoid calling synchronized methods within synchronized methods.
- If objects can be accessed by multiple threads at one time, code altering global variables (static variables) should be enclosed using a synchronization mechanism (synchronized).
- In general, controllers / servlets should not use static variables.
- Write access to static variable should be synchronized, but not read access.
- Even if servlets/controllers are thread-safe, multiple threads can access HttpSession attributes at the same time, so be careful when writing to the session.
- Use the volatile keyword to tell the compiler that threads may change an instance or class variable; it instructs the compiler not to cache the value in a register.
- Release locks in the order they were obtained to avoid deadlock scenarios.
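"Keep the synchronized section small" from the list above can be sketched as follows. The class below is illustrative: only the shared counter update is guarded, while thread-local work stays outside the lock:

```java
// Sketch: guard only the shared mutation, not the surrounding work.
public class CounterDemo {
    private int count = 0;
    private final Object lock = new Object();

    void increment() {
        // ... thread-local work can happen outside the lock ...
        synchronized (lock) {   // critical section kept minimal
            count++;
        }
    }

    int get() {
        synchronized (lock) {
            return count;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CounterDemo c = new CounterDemo();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) c.increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) c.increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // 2000, no lost updates
    }
}
```

Without the synchronized block, the two threads could interleave the read-modify-write of count++ and lose updates.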
9. Detach Resource After Usage
- Resources that are not automatically released after usage must be freed explicitly. Connections, files and ports should be closed properly.
10. On Demand Resource Delivery
- Resources are fetched and delivered only on demand. Necessary options are available for dealing with huge data sets, such as pagination.
11. No Warning/ Console Logs
- No compiler warnings should arise while running the application.
- Logs that are used while developing are cleared and none of the application information (especially sensitive ones) are written in the browser console.
12. Unit Testing/ JUnit
- Never allow a unit test that is written only to show 100% coverage but doesn't actually verify what the unit test is supposed to verify.
- Ensure unit/mock tests are written properly and can run independently of each other.
- Set up should not be too complicated.
- Mock out external states and services that you are not asserting. For example, retrieving data from a database.
- Avoid unnecessary assertions.
- Start with functions that have the fewest dependencies, and work your way up.
- Write unit tests for negative scenarios like throwing exceptions, negative values, null values, etc.
- Don’t have try/catch inside unit tests. Use throws Exception statement in test case declaration itself.
- Don’t have ant System.out.println(…..)
- Always try to have a unit test for the new piece of code. In an ideal condition, we should have 100% unit test coverage.
- Make sure the JUnit test covers all possible values.
To Learn more on Junit and Mockito follow Mockito + Junit Tutorial
13. Framework & Libraries
- Favor using well-proven frameworks and libraries. For Ex: Spring libraries, Hibernate libraries, Google libraries etc.
- Make legal use of third-party libraries: any third-party library should be approved by an admin and properly licensed. For Ex: Oracle.
14. Configurable Items
- Keep environment specific properties like password, directory location in database configuration table.
- Password should store in encrypted form.
- Keep hardcoded values like url, service endpoints in properties file.
15. General Programming Practices
- No syntax/runtime errors and warnings in the code.
- No deprecated functions should be used in the code.
- No public class attributes.
- Always initialize a variable before using it in a function.
- Make classes final and objects immutable where possible, because immutable objects are thread-safe and secure. For Ex: the String class.
- Always try to use constants on the left-hand side of a comparison. That is, instead of if ($variable == "Saurabh") use if ("Saurabh" == $variable), because this helps catch errors earlier in development even if we accidentally write "=" instead of "==" in that statement.
- Check that each function is doing only a single thing. That is a function named createEmployee should never delete the existing employee and create it again.
- Always try to separate out the code with view. Ideally the view/template should be logic free.
- Optimizations may often make code harder to read and more likely to contain bugs. Such optimizations should be avoided unless a need has been identified.
- Always have an eye on the recursive functions and make sure it will have closing condition.
- “Dead Code” should be removed. If it is a temporary hack, it should be identified as such. Check if code has file/class/function level comments. And each of this comments should explain what the file/class/function is doing inside it.
- No magic numbers and hard coded value. This should be defined as a constant well commented about the purpose.
- Never allow bad code with some good comments.
- Always commit /Rollback database transaction at the earliest. Keep the database transaction short as possible.
- When serializing objects, declare a serialVersionUID.
- Use appropriate collections as per requirement.
- Prefer returning an empty collection instead of null.
- Use equals over ==.
- Use an array when the number of elements is fixed and an ArrayList when it varies.
- Always check a reference object for null before using it.
- When searching for a substring in some text, a match returns its index; otherwise -1 is returned.
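Two of the items above (equals over ==, and returning an empty collection instead of null) can be sketched together. The class and method names are illustrative only:

```java
import java.util.Collections;
import java.util.List;

// Sketch: compare object content with equals(), not ==, and return
// empty collections so callers can iterate without null checks.
public class PracticesDemo {
    static List<String> findRoles(String user) {
        // No roles found: an empty list is safer than null.
        return Collections.emptyList();
    }

    public static void main(String[] args) {
        String a = new String("Saurabh");
        String b = new String("Saurabh");
        System.out.println(a == b);        // false: different objects
        System.out.println(a.equals(b));   // true: same content
        for (String role : findRoles("x")) {
            System.out.println(role);      // safe even with no results
        }
    }
}
```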
Tools for Code Reviews
Below are some tools that perform static code analysis to improve code quality. These tools run over the entire project and generate reports. Some tools also track review comments.
- Use the tools (based on technology) such as SonarQube, NDepend, FxCop, TFS code analysis rules.
- Use plug-ins such as Resharper, which suggests the best practices in Visual studio.
- To track code review comments, use tools like Crucible, Bitbucket and the TFS code review process.
- Use the tool PMD to identify possible bug, dead code, overcomplicated expression, suboptimal code, duplicate code, etc.
Conclusion
The objective of the above code review checklist is to introduce these concepts and give the code reviewer a direction for conducting effective code reviews and delivering good quality code. Initially it takes time, and a bit of practice is required to check code from different aspects and become an expert code reviewer.
|
https://facingissuesonit.com/java-coding-review-best-practices/
|
CC-MAIN-2019-04
|
refinedweb
| 2,281
| 58.99
|
Odoo Help
This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.
Import csv file does not work, buttons stay grey
When I try to import a csv file in Sales - Sales/Customers (list view), the buttons "Validate" and "Import" stay grey instead of red after I have selected my csv file.
I have installed trunk (v8) on ubuntu 13.10 in a vmware 64 bits
This issue can be resolved by making changes in the addons/base_import/controller.py file and upgrading the base_import module.
___________________________________
import simplejson

from openerp.http import Controller, route, request


class ImportController(Controller):

    @route('/base_import/set_file')
    def set_file(self, file, import_id, jsonp='callback'):
        import_id = int(import_id)
        written = request.session.model('base_import.import').write(import_id, {
            'file': file.read(),
            'file_name': file.filename,
            'file_type': file.content_type,
        }, request.context)
        return 'window.top.%s(%s)' % (
            jsonp, simplejson.dumps({'result': written}))
----------------------------------------
Finally, how to solve it?
If you know how to edit a python script ( .py ) and how to update modules in openerp, you can use the link I have posted above. Else, you will have to wait for them to fix it and put it in a nightly.

If you make a fake record in OpenERP, export it, delete it in OpenERP and then reimport the exported file, do the buttons still stay greyed out?
Yes, I have create a customer, exported that customer, deleted him and reimported te export file. Buttons still stay grey.
I do have the same issue. Tried with products, contacts and no luck.
It is a bug in the base.import module, when you select a file to import, it will throw an error. Would post it on the launchpad but did not found how. Here is the text of the error thrown:
2014-04-06 01:11:44,818 1573 ERROR EKYRAIL werkzeug: Error on request:
Traceback (most recent call last):
  .......
  File "/usr/share/pyshared/openerp/http.py", line 292, in checked_call
    return self.endpoint(*a, **kw)
  File "/usr/share/pyshared/openerp/http.py", line 635, in __call__
    return self.method(*args, **kw)
  File "/usr/share/pyshared/openerp/http.py", line 338, in response_wrap
    response = f(*args, **kw)
TypeError: set_file() takes at least 4 arguments (4 given)
|
https://www.odoo.com/forum/help-1/question/import-cvs-file-does-not-work-buttons-stay-grey-47060
|
CC-MAIN-2016-50
|
refinedweb
| 379
| 68.87
|
One of the most common ways to interact with databases on JVM is the JDBC API.
JDBC stands for Java Database Connectivity, which is a standard Java API for database-independent connectivity between the Java programming language and a wide range of databases.
Aerospike is a fast and durable No-SQL database. It has its own Java client, but this article will introduce you to a supplementary way of interacting with it using SQL.
Get yourself a hot cup of tea or coffee (for the true Java look and feel) and we will dive into the details of the Aerospike journey into the JDBC world.
Introduction
The Aerospike JDBC driver requires Java 8 and is compliant with JDBC 4.2. Also, Aerospike server 5.2+ is required, because the driver uses the new Filter expressions.
The first release of the JDBC driver supports the following SQL statements:
- SELECT
- INSERT
- UPDATE
- DELETE
You can also add WHERE conditions and LIMIT your queries. JOINS, ORDER BY and GROUP BY are not supported.
From the very beginning, the driver was designed to support operations that could be done using the regular Java client, without UDFs and other compute and memory hungry components. So, the original intention is to keep it small and easy to start, without workarounds to support features that aren’t native to the Aerospike Database.
The driver doesn’t support SQL functions as well as the Aerospike collection data types (CDTs).
Getting Started
Install the Aerospike JDBC driver and add the location of it to your classpath.
You can take the JAR file from the releases, add a Maven dependency, or build it from the sources.
The Aerospike JDBC driver is statically registered in the AerospikeDriver class. So the only thing required is to load this class.
Class.forName("com.aerospike.jdbc.AerospikeDriver").newInstance();
The next thing you’ll need to do is to specify the JDBC URL. The URL template is:
jdbc:aerospike:HOST[:PORT][/NAMESPACE][?PARAM1=VALUE1[&PARAM2=VALUE2]]
For example the
jdbc:aerospike:localhost URL will connect to the Aerospike database running on a local machine and listening on the default port (3000). The
jdbc:aerospike:172.17.0.5:3300/test URL connects to the test namespace on the Aerospike database running on 172.17.0.5:3300.
After the initial setup let’s see a simple usage example of it:
try {
    String url = "jdbc:aerospike:localhost:3000/test";
    Connection connection = DriverManager.getConnection(url);

    String query = "select * from ns1 limit 10";
    ResultSet resultSet = connection.createStatement().executeQuery(query);
    while (resultSet.next()) {
        String bin1 = resultSet.getString("bin1");
        System.out.println(bin1);
    }
} catch (Exception e) {
    System.err.println(e.getMessage());
}
JDBC Client tools
You can browse and manipulate data in Aerospike with any of the available SQL client tools using the JDBC driver.
There are a number of multiplatform and free database tools available like DBeaver, SQuirreL, and others.
Here are the steps to configure the DBeaver SQL Browser with the Aerospike JDBC driver:
- Database -> Driver Manager -> New
- Fill in settings:
  - Driver Name: Aerospike
  - Driver Type: Generic
  - Class Name: com.aerospike.jdbc.AerospikeDriver
  - URL Template: jdbc:aerospike:{host}[:{port}]/[{database}]
  - Default Port: 3000
- Click the Add File button and add the JDBC jar file.
- Click the Find Class button.
- Click OK.
Create a connection:
- Database -> New Database Connection
- Select Aerospike and click Next.
- Fill in the connection settings:
  - Host and Port
  - Database/Schema: the namespace you are connecting to
  - Username and Password if you have security turned on in Aerospike Database Enterprise Edition
- Click Finish.
Now you can open an SQL editor and query your Aerospike cluster:
Summary
The Aerospike JDBC driver is in its very early stages. It would be great if you could try it and give us some feedback. Any contributions to the project are very welcome.
Check out my previous Aerospike SQL series if you haven’t done this yet.
And don’t forget to subscribe to the Aerospike developer blog to get updated with our latest news.
Discussion (2)
How do you get a specific row by 'user key' or digest?
There is a primary key column "__key", which represents the Aerospike record's user defined key.
|
https://practicaldev-herokuapp-com.global.ssl.fastly.net/aerospike/introducing-aerospike-jdbc-driver-4l14
|
CC-MAIN-2021-21
|
refinedweb
| 687
| 55.34
|
In its latest issue, the prestigious Harvard Business Review has named its best-performing CEOs in the world, and fresh in at number one is none other than Apple chief Steve Jobs, with Microsoft counterpart Steve Ballmer not even listed.
The magazine collected data on 2,000 CEOs, from all fields and around the globe, covering their entire time in office, up to September 30 this year, then came up with its top 200.
Jobs wasn't the only tech CEO to rank high on the list – Yun Jong-Yong, former Samsung Electronics Co. CEO was #2, whilst Cisco Systems Inc. head John Chambers hit #4. Notably, Microsoft's CEO, Steve Ballmer, didn't make the cut.
Not just arbitrary
It wasn't about the name recognition of the CEO. The magazine used a complex methodology based on total shareholder return to determine which CEOs qualified as the very best. Total shareholder return for each company was calculated on a daily basis for the tenure of each candidate.
These figures were then weighed against returns for both industry and country. As if that wasn't enough, the researchers also looked at the changes in a company's market capitalisation over the period. According to the magazine:
"The #1 CEO on the list, Steve Jobs, delivered a whopping 3,188 per cent industry-adjusted return (34 per cent compounded annually) after he rejoined Apple as CEO in 1997, when the company was in dire shape."
Jobs was a co-founder of Apple in 1976, but found himself ousted in 1985. He came back in 1997 and has guided the company through a huge renaissance that's seen its focus move away from computers to the iPod and the iPhone, and making it into a major global brand, and himself into a true celebrity CEO.
Apple's shares have more than doubled in the last year.
|
http://www.techradar.com/news/computing/apple/apples-jobs-gets-top-ceo-gong-659634
|
CC-MAIN-2016-07
|
refinedweb
| 317
| 59.43
|
23 February 2010 21:26 [Source: ICIS news]
HOUSTON (ICIS news)--A US propylene producer nominated an increase of 5 cents/lb ($110/tonne, €80/tonne) for March, market sources said on Tuesday, pointing to yet another likely jump for the monthly contract.
US polymer-grade propylene (PGP) contracts for February settled at 63.50 cents/lb, up by 6.50 cents from January, while chemical-grade propylene (CGP) settled up 6.50 cents at 62.00 cents/lb, according to global chemical market intelligence service ICIS pricing.
The proposed increase for March would extend an uptrend that began in November, when propylene contracts started to climb on the back of firm demand and tight supply.
Propylene spot prices have climbed steadily in recent weeks and will likely support higher contract prices next month.
Refinery-grade propylene (RGP), the main feedstock for higher-purity
Spot PGP for April traded on Monday at 67.00 cents/lb, up from March deals done at 63.50 and 66.00 cents/lb last week, sources said.
Buyers wanted propylene to go down but fundamentals are still net short, a plastics trader noted.
According to the source, monomer production from steam crackers remained limited while derivative export demand continued to be strong.
The acrylonitrile (ACN), alcohol derivatives and propylene oxide (
US prices of ACN and alcohol derivatives have increased due to firm demand from the export market, while curtailed
US propylene contracts will likely settle in the first week of March. At least two more
Chevron Phillips Chemical, Enterprise Products, ExxonMobil, LyondellBasell and Shell Chemical are among the major
Dow Chemical, INEOS, Ascend Performance Materials and Total are among the main buyers.
|
http://www.icis.com/Articles/2010/02/23/9337275/US-C3-poised-for-new-jump-5-ctlb-hike-sought-for-March.html
|
CC-MAIN-2015-06
|
refinedweb
| 280
| 56.05
|
Image rotation causes distortion
Hi
I'm making an image rotate 10 degrees at a time. The rotation itself works fine. My problem is with the image: after some iterations the image gets distorted. This occurs with any degree value.
My guess is that it's because of the image bounding box. If so, is there a way to maintain proportions?
Original Image:
!(Original image)!
Distorted Image:
!(Distorted Image)!
This is my code:
@
void Ship::Move(int x, int y)
{
    QPixmap rotatePixmap(shipPixels.size());
    rotatePixmap.fill(Qt::transparent);

    QPainter p(&rotatePixmap);
    p.translate(rotatePixmap.size().width() / 2, rotatePixmap.size().height() / 2);
    p.rotate(degree);
    p.translate(-rotatePixmap.size().width() / 2, -rotatePixmap.size().height() / 2);
    p.drawPixmap(0, 0, shipPixels);
    p.end();

    shipPixels = rotatePixmap;
    this->setPixmap(shipPixels);
    this->move(QPoint(x, y));
    degree = 0;
}
@
Thanks for replies.
I would assume it is because you are accumulating errors with every rotation. You rotate the image and replace the original (line 14), then rotate that rotated image, then rotate that image etc. Rotation maths is precise but pixels are in discrete locations so each rotation introduces more error.
You should start from the same, unrotated image and rotate it once by the cumulative amount of rotation desired. If the increments are fixed then you should probably also do this once only and cache the 36 (or whatever) images.
Did you play around with the flags that can be set via QPainter::setRenderHint()?
Those look related:
- QPainter::Antialiasing
- QPainter::SmoothPixmapTransform
- QPainter::HighQualityAntialiasing
Mulder I tried these hints before. They just smooth the error.
ChrisW67, based on your comment, if I do this I'll continue to accumulate the error even if I unrotate the image. I thought Qt did the transformation and preserved the image quality.
Well, bitmap (pixel-based) images have a limited resolution by nature. You can't rotate such an image in an "exact" way, except for rotating by multiples of 90°. For other rotations degrees the output has to be interpolated. What you call "distorted" looks like a "nearest neighbor" sampling. Using a "bilinear" or "bicubic" transform will create a more "smooth" result. This usually is much preferred for "natural" images. It may not look that great for a "synthetic" image like yours. Keep in mind that your original image already looks rather "pixelated"...
(I think for what you are doing it might be the better solution to work with vector-based graphics, not bitmaps)
[quote author="qtBeginner" date="1349895757"]
ChrisW67, based on your comment, if I do this I'll continue to accumulate the error even if I unrotate the image.[/quote]
I didn't say unrotate then re-rotate. I said start with the same, original, unrotated image every time.
See the difference between the two in this example:
@
#include <QtGui>
#include <QDebug>
int main(int argc, char *argv[])
{
QApplication app(argc, argv);
QPainter p;
    // Original image
    QPixmap original("ship1p.png");

    // Original rotated once to 30 degrees
    QPixmap test1(original.size());
    test1.fill(Qt::transparent);
    p.begin(&test1);
    p.translate(original.width() / 2, original.height() / 2);
    p.rotate(30);
    p.translate(-original.width() / 2, -original.height() / 2);
    p.drawPixmap(0, 0, original);
    p.end();
    test1.save("test1.png");

    // Original rotated to 30 degrees in 5 degree steps
    QPixmap test2(original);
    for (int i = 0; i < 6; ++i) {
        // each loop accumulates errors
        QPixmap temp(test2.size());
        temp.fill(Qt::transparent);
        p.begin(&temp);
        p.translate(test2.width() / 2, test2.height() / 2);
        p.rotate(5);
        p.translate(-test2.width() / 2, -test2.height() / 2);
        p.drawPixmap(0, 0, test2);
        p.end();
        test2 = temp;
    }
    test2.save("test2.png");

    return 0;
}
@
Test1: One rotation
Test 2: Accumulated rotations
So, track the total rotation the image requires and do a single rotation from a known good image, rather than accumulating errors by continually rotating an image.
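The snap-to-grid effect behind this can be reproduced numerically. Below is a small Python sketch (an illustration only, not Qt code): rotating a point in six 5-degree steps, rounding to integer pixel coordinates after each step, lands on a different pixel than a single 30-degree rotation from the original point.

```python
import math

def rotate(point, degrees, snap=False):
    """Rotate a 2-D point about the origin; optionally snap to the
    integer pixel grid, as a bitmap rotation effectively does."""
    r = math.radians(degrees)
    x, y = point
    x2 = x * math.cos(r) - y * math.sin(r)
    y2 = x * math.sin(r) + y * math.cos(r)
    return (round(x2), round(y2)) if snap else (x2, y2)

p = (10, 0)

# Six cumulative 5-degree rotations, snapping to the grid each time.
stepwise = p
for _ in range(6):
    stepwise = rotate(stepwise, 5, snap=True)

# One 30-degree rotation from the original point.
direct = rotate(p, 30, snap=True)

print(stepwise, direct)  # the two results differ
```

Real bitmap rotation resamples every pixel rather than one point, but the mechanism is the same: each intermediate snap throws away sub-pixel position, and the errors compound.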
MulderR's point about vector graphics is worth considering if it fits your requirements.
That's correct. My bad. I'll try your code right now.
Thanks for both of you.
https://forum.qt.io/topic/20484/image-rotation-causes-distortion/1
Daniel Phillips <phillips@phunq.net> wrote:

> On Monday 25 February 2008 15:19, David Howells wrote:
> > So I guess there's a problem in cachefiles's efficiency - possibly due
> > to the fact that it tries to be fully asynchronous.
>
> OK, not just my imagination, and it makes me feel better about the patch
> set because efficiency bugs are fixable while fundamental limitations
> are not.

One can hope :-)

> How much of a hurry are you in to merge this feature? You have bits
> like this:

I'd like to get it upstream sooner rather than later. As it's not upstream, but its prerequisite patches touch a lot of code, I have to spend time regularly making my patches work again. Merge windows are completely not fun.

> ."
>
> We already have that hook, it is called bio_endio.

Except that isn't accessible. CacheFiles currently has no access to the notification from the blockdev to the backing fs, if indeed there is one. All we can do is trap the backing fs page becoming available.

> My strong intuition is that your whole mechanism should sit directly on the
> block device, no matter how attractive it seems to be able to piggyback on
> the namespace and layout management code of existing filesystems.

There's a place for both.

Consider a laptop with a small disk, possibly subdivided between Linux and Windows. Linux then subdivides its bit further to get a swap space. What you then propose is to break off yet another chunk to provide the cache. You can't then use this other chunk for anything else, even if it's, say, 1% used by the cache.

The way CacheFiles works is that you tell it that it can use up to a certain percentage of the otherwise free disk space on an otherwise existing filesystem. In the laptop case, you may just have a single big partition.
The cache will fill up as much of it as it can, and as the other contents of the partition consume space, the cache will be culled to make room.

On the other hand, on a system like my desktop, where I can slap in extra disks with mounds of extra disk space, it might very well make sense to commit block devices to caching, as this can be used to gain performance.

I have another cache backend (CacheFS) which takes the form of a filesystem, thus allowing you to mount a blockdev as a cache. It's much faster than Ext3 at storing and retrieving files... at first. The problem is that I've mucked up the free space retrieval such that performance degrades by 20x over time for files of any size.

Basically any cache on a raw blockdev _is_ a filesystem, just one in which you're randomly allowed to discard data to make life easier.

> I see your current effort as the moral equivalent of FUSE: you are able to
> demonstrate certain desirable behavioral properties, but you are unable to
> reach full theoretical efficiency because there are layers and layers of
> interface gunk interposed between the netfs user and the cache device.

The interface gunk is meant to be as thin as possible, but there are constraints (see the documentation in the main FS-Cache patch for more details):

 (1) It's a requirement that it not be tied to, say, AFS. We might have several netfs's that want caching: AFS, CIFS, ISOFS (okay, that last isn't really a netfs, but it might still want caching).

 (2) I want to be able to change the backing cache. Under some circumstances I might want to use an existing filesystem, under others I might want to commit a blockdev. I've even been asked about using battery-backed RAM - which has different design constraints.

 (3) The constraint has been imposed by the NFS team that the cache be completely asynchronous.
I haven't quite met this: readpages() will wait until the cache knows whether or not the pages are available, on the principle that read operations done through the cache can be considered synchronous. This is an attempt to reduce the context switchage involved.

Unfortunately, the asynchronicity requirement has caused the middle layer to bloat. Fortunately, the backing cache needn't bloat as it can use the middle layer's bloat.

> That said, I also see you have put a huge amount of work into this over
> the years, it is nicely broken out, you are responsive and easy to work
> with, all arguments for an early merge. Against that, you invade core
> kernel for reasons that are not necessarily justified:
>
> * two new page flags

I need to keep track of two bits of per-cached-page information:

 (1) This page is known by the cache, and that the cache must be informed if the page is going to go away.

 (2) This page is being written to disk by the cache, and that it cannot be released until completion. Ideally it shouldn't be changed until completion either, so as to maintain the known state of the cache.

I could set up a radix tree per data storage object to keep track of both these points, but may have to pin resources for backing them.

Further note that PG_private may not be used as I want to be able to use caching with ISOFS eventually.

> * a new fileops method

Do you mean a new address space ops method? Yes. I have to be able to write from a kernel page without the use of a struct file. The struct file isn't actually necessary to do the write, and so is a waste of space.
What's worse is that the struct file plays havoc with resource limits and ENFILE production.

Ideally I want a couple of hooks: one to do O_DIRECT writing to a file from kernel pages, one to do O_DIRECT|O_NOHOLE reading from a file to kernel pages (holes in cache files represent blocks not retrieved from the server, so I want to see ENODATA, not a block of zeros).

> * many changes to LSM including new object class and new hooks
> * separate fs*id from task struct

It has been required that I call vfs_mkdir() and suchlike rather than bypassing security and calling inode ops directly. Therefore the VFS and LSM get to deny the caching kernel modules access to the cache data, because under some circumstances the caching code is running in the security context of whatever process issued the original syscall on the netfs.

Furthermore, the security parameters with which a file is created (UID, GID, security label) would be derived from the process that issued the system call, thus potentially preventing other processes from accessing the cache, in particular cachefilesd.

So, what is required is to temporarily override the security of the process that issued the system call. We can't, however, just do an in-place change of the security data as that affects the process as an object, not just as a subject. This means it may lose signals or ptrace events for example, and affect what the task looks like in /proc.

So what I've done is to make a logical split in the security between the objective security (task->sec) and the subjective security (task->act_as). The objective security holds the intrinsic security properties of a process and is never overridden. This is what appears in /proc, and is used when a process is the target of an operation by some other process (SIGKILL for example).

The subjective security holds the active security properties of a process, and may be overridden.
This is not seen externally, and is used when a process acts upon another object, for example SIGKILLing another process or opening a file.

The new hooks allow SELinux (or Smack or whatever) to reject a request for a kernel service (such as cachefiles) to run in a context of a specific security label, or to create files and directories with another security label. These hooks may also be useful for NFSd.

> * new page-private destructor hook

The cache may attach state to pages before read_cache_pages() is called. Therefore read_cache_pages() may need to arrange for it to be cleaned up. The only way it can know to do this is by examining the page flags. PG_private may not be overloaded because it is owned by fs/buffer.c and friends on things like ISOFS.

> * probably other bits I missed

Note that most of these things have been muchly argued over already.

> Would it be correct to say that some of these changes are to support
> disconnected operation?

No.

> You have some short snappers that look generally useful:
>
> * add_wait_queue_tail (cool)

Which you complained about above.

> * write to a file without a struct file (includes ->mapping cleanup,
> probably good)

Ditto.

> * export fsync_super
>
> Why not hunt around for existing in-kernel users that would benefit so
> these can be submitted as standalone patches, shortening the remaining
> patch set and partially overcoming objections due to core kernel
> changes?

The only ones that really fall into that category are the security patches, which admittedly affect a lot of places. That might be acceptable, and the thought has occurred to me, because of NFSd.

> One thing I don't see is users coming on to lkml and saying "please
> merge this, it works great for me". Since you probably have such
> users, why not give them a poke?

The problem is that I'm stuck on waiting for the NFS guys to okay the NFS patches.

> Your cachefilesd is going to need anti-deadlock medicine like ddsnap
> has.

You mean the userspace daemon?
Why should it deadlock?

> Since you don't seem at all worried about that right now, I suspect you have
> not hammered this code really heavily, correct?

I had run iozone on cached NFS prior to asynchronising it. However, I've found a bug in my thread pool code that I'm currently chasing, so I need to do more parallelisation testing.

> Without preventative measures, any memory-using daemon sitting in the block
> IO path will deadlock if you hit it hard enough.

cachefilesd doesn't actually seem to consume that much memory, and it's unlikely to deadlock as it only does one thing at once and has no locking.

There is a potential race, though, between cachefilesd's cull scanner and someone scanning through all the files that are cached in the same order over and over again. The problem is that we cannot keep the view of old stuff in the cache up to date, no matter how hard we try. I haven't thought of a good way around that.

> A couple of years ago you explained the purpose of the new page flags to
> me and there is no way I can find that email again. Could you explain
> it again please? Meanwhile I am doing my duty and reading your OLS
> slides etc.

See above.

David
http://lkml.org/lkml/2008/2/25/500
Guido van Rossum wrote:
> Nick, and everybody else trying to find a "solution" for this
> "problem", please don't.

Denying that there's a problem isn't going to make it go away. Many people, including me, have the feeling that the standard way of defining properties at the moment leaves something to be desired, for all the same reasons that have led to @-decorators.

However, I agree that trying to keep the accessor method names out of the class namespace isn't necessary, and may not even be desirable.

The way I'm defining properties in PyGUI at the moment looks like this:

    class C:
        foo = overridable_property('foo', "The foo property")

        def get_foo(self):
            ...

        def set_foo(self, x):
            ...

This has the advantage that the accessor methods can be overridden in subclasses with the expected effect. This is particularly important in PyGUI, where I have a generic class definition which establishes the valid properties and their docstrings, and implementation subclasses for different platforms which supply the accessor methods.

The only wart is the necessity of mentioning the property name twice, once on the lhs and once as an argument. I haven't thought of a good solution to that, yet.

--
Greg Ewing, Computer Science Dept,
University of Canterbury,
Christchurch, New Zealand
| A citizen of NewZealandCorp, a wholly-owned subsidiary of USA Inc. |
greg.ewing at canterbury.ac.nz
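For readers unfamiliar with the pattern, overridable_property can be implemented as a plain descriptor that looks the accessor up by name on each access, which is what makes subclass overrides take effect. The following is only a sketch under that assumption, not PyGUI's actual code:

```python
class overridable_property:
    """Descriptor that dispatches to get_<name>/set_<name> methods,
    resolved on the instance at each access so subclass overrides win."""

    def __init__(self, name, doc=None):
        self.name = name
        self.__doc__ = doc

    def __get__(self, obj, owner=None):
        if obj is None:
            return self
        return getattr(obj, 'get_' + self.name)()

    def __set__(self, obj, value):
        getattr(obj, 'set_' + self.name)(value)


class C:
    foo = overridable_property('foo', "The foo property")

    def get_foo(self):
        return self._foo

    def set_foo(self, x):
        self._foo = x


class D(C):
    def set_foo(self, x):  # overriding the accessor has the expected effect
        self._foo = 2 * x


c = C(); c.foo = 3
d = D(); d.foo = 3
print(c.foo, d.foo)  # 3 6
```

The name still has to be passed in twice, which is exactly the wart the post mentions: the descriptor cannot otherwise know which attribute it was assigned to (Python gained `__set_name__` for this much later).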
https://mail.python.org/pipermail/python-dev/2005-October/057363.html
In plain and simple language, Helm is the package manager for Kubernetes. If we think of Kubernetes as an OS, then Helm is its yum or apt. In Kubernetes, install, uninstall, upgrade and update tasks are all performed using YAML manifests, and managing these YAMLs is a bit of a pain. To manage these constantly repeated deployments, Helm creates a single package out of them. You can share this package, version it, and manage it more easily than raw YAML. Helm has successfully simplified the process.
Introduction to Helm
Helm was created to emphasize configuration re-usability. It can help to maintain a life-cycle for Kubernetes-based deployment. Helm is a CNCF project and developed in Go language.
In Kubernetes-based deployment, YAML manifests are everything. Kubernetes supports running many instances of multiple applications. Applications deployment needs complete lifecycle steps like deployment, upgrade, uninstall, rollback, etc. Maintaining these steps via YAML files is a tedious task. Helm sorted this via templates. You can create a template out of YAML and reuse it as per the requirement. The important feature is that you can pass values in this YAML file as variables. Let’s discuss Helm Components in detail
Helm client
A command-line tool, which provides the user interface to work with Helm.
Chart
In Helm’s vocabulary, a package is called a chart. Every Chart consists few YAML configuration files and templates that are rendered into Kubernetes manifest files. Structure of Chart -
$ tree .
3 directories, 10 files
Every directory and file has a specific function:-
- charts: For chart dependencies
- values.yaml: YAML file of default configuration values for the chart.
- templates: Contains template files with variables and variable default value comes from values.yaml and the command line. The templates use the Go programming language’s template format.
- Chart.yaml: File with metadata about the chart.
- LICENSE: plaintext license file
- README.md: Readme file
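To make the templates/values relationship concrete, here is a hypothetical fragment (not taken from any real chart): a Deployment template that pulls its replica count and image from values.yaml using Go template syntax.

```yaml
# templates/deployment.yaml (hypothetical fragment)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
spec:
  # default comes from values.yaml, e.g. "replicaCount: 2";
  # can be overridden at install time with --set replicaCount=3
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

At render time Helm substitutes the {{ ... }} expressions with values from values.yaml, merged with anything supplied via --set or -f on the command line.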
Chart Repository
Chart repositories are web locations where packaged charts reside. You can use public repositories for Chart deployment or you can build your own private repository.
Version
Each Helm chart comes with 2 separate versions:
- Chart version itself [version in Chart.yaml]
- Application version in chart [appVersion in Chart.yaml]
How to Install Helm?
Download binary release of the Helm on the client machine. Unpack the compressed file and move it to the desired destination (path variables)
$ curl -fsSL -o helm-v3.5.0-rc.2-linux-amd64.tar.gz
$ tar -xvzf helm-v3.5.0-rc.2-linux-amd64.tar.gz
$ cd linux-amd64
$ sudo cp helm /usr/local/bin/
You can install Helm with a package manager as well as using Homebrew for macOS, Chocolatey for Windows, and Snap for Linux.
How to Use Helm?
Helm Chart can be deployed using URL, directory, and file(compressed). I will install one chart using a public repository for demo purposes. This chart is Elasticsearch operator chart, available on Elasticsearch public repository.
To deploy the chart, you need to add a repository. Helm provides commands to add repositories. You can add multiple repositories as well.
$ helm repo add stable
$ helm repo add elastic
Once the repository is added, try to list it
$ helm repo list
We can list all charts available in the repository.
$ helm search repo elastic
If you want to search specific chart,
$ helm search repo eck
Search does not just match the package name; it also searches other fields, such as the description.
Before using a repository for chart deployment, make sure you update the chart data:
$ helm repo update
If you want to see the values.yaml file, use helm show values. This command inspects a chart (directory, file, or URL) and displays the contents of the values.yaml file:
$ helm show values elastic/eck-operator
# You can list more details about the chart
$ helm show chart elastic/eck-operator
$ helm show all elastic/eck-operator
Installing a Chart
$ helm install elastic-operator elastic/eck-operator -n elastic-system --create-namespace
# --create-namespace creates the namespace if it does not exist
# helm install <chart-install-name> <chart-name>
In the above command, the installation will happen with default values available. Helm provides you the option of passing variable values at runtime as well.
Status of installation:
$ helm ls
$ helm status <chart-install-name>
# if the chart is deployed in another namespace
$ helm ls --all-namespaces
$ helm status elastic-operator -n elastic-system
History of Chart release
$ helm history elastic-operator -n elastic-system
Uninstalling a Chart
$ helm uninstall <chart-install-name>
Rollback! Helm keeps track of all deployments and lets you roll back if required.
$ helm rollback elastic-operator 2 $ helm rollback <chart-release-name> <version>
How Helm is connecting to Kubernetes Cluster?
Helm uses the default kubeconfig location to connect with the Kubernetes cluster. If you want to connect any other cluster for which kubeconfig is placed on another location then you need to update the following env variable
$KUBECONFIG (default "~/.kube/config")
apart from this variable, there are other variables as well. Please visit the documentation.
How Helm is keeping track of releases?
When you run commands like helm install or helm upgrade, the Helm client connects to the Kubernetes cluster and stores the release record as Kubernetes secrets. You will see multiple secrets, each belonging to a specific revision of a chart release, e.g.
sh.helm.release.v1.elastic-operator.v1
use get secret to list all k8s secret
$ kubectl get secret
helm uninstall deletes the deployed chart and the history attached to it. If you want to keep the release information:
helm uninstall --keep-history
Helm Configuration
If you want to view helm client environment information.
$ helm env
Debug Helm Chart and Deployment
Use Debug flag
It enables the verbose output when the helm command runs.
$ helm install elastic-operator elastic/eck-operator -n elastic-system --create-namespace --debug
Use Dry run
Helm comes up with the dry run option for debugging. Use this option when you don’t want to install the chart but want to validate and debug only against the Kubernetes cluster. It is similar to kubectl dry-run option. It will validate the Chart deployment against the K8s cluster by loading the data into it.
$ helm install elastic-operator elastic/eck-operator -n elastic-system --create-namespace --values custom.yaml --dry-run
Helm templates get compiled against version of Kubernetes Cluster.
Use Template
helm template helps with template rendering. It lets you validate that the templates you created generate the right YAML to be deployed.
helm template does not fully validate the output, so use template and dry-run together to get proper results.
Helm Get
You can use this to see which values were supplied during the release.
$ helm get values elastic-operator --revision 2
$ helm get values elastic-operator
$ helm get manifest elastic-operator
Helm Install or Helm Upgrade?
helm install creates a special type of Kubernetes secret that holds release information and deploys the chart if it is not present; if the chart is already present on the cluster, it fails. To upgrade a deployed chart release, you need to use the upgrade command. Helm also provides a combined option that checks whether the chart is present: if it is, the chart is upgraded; otherwise it is installed.
$ helm upgrade --install elastic-operator elastic/eck-operator
By default, Helm tracks up to ten revisions of each installation.
How to create Helm Chart
We have seen how to deploy or manage charts in the above examples. Now time is to understand how to create one. Helm comes up with the create command to create a sample chart.
$ helm create my-dummy-chart
It will generate the required files to start chart development. You can modify these files to build a chart for your application.
Helm Official Chart Repository
The Helm stable and incubator chart repositories have been moved to GitHub and archived. The rest of the charts have been moved to community-managed repositories such as Artifact Hub and Bitnami.
Helm Self-host Chart Repository
We can host our own repository using ChartMuseum, Harbor, a static web server (Apache), or another system.
That’s it for the post. I will cover more details on Helm in upcoming posts.
Thank you, Stay Safe and Keep learning.
Ref to Medium Post with images !
https://practicaldev-herokuapp-com.global.ssl.fastly.net/arunksingh16/getting-started-with-helm-v3-2k71
A statement can be executed by different threads at different times.
The Thread class static method currentThread() returns the reference of the Thread object that calls this method.
Consider the following statement:
Thread t = Thread.currentThread();
The statement will assign the reference of the thread object that executes the above statement to the variable t.
The following code demonstrates the use of the currentThread() method.
Two different threads call the Thread.currentThread() method inside the run() method of the CurrentThread class.
The program simply prints the name of the thread that is executing.
public class Main extends Thread {
  public Main(String name) {
    super(name);
  }

  @Override
  public void run() {
    Thread t = Thread.currentThread();
    String threadName = t.getName();
    System.out.println("Inside run() method: " + threadName);
  }

  public static void main(String[] args) {
    Main ct1 = new Main("First Thread");
    Main ct2 = new Main("Second Thread");
    ct1.start();
    ct2.start();
    Thread t = Thread.currentThread();
    String threadName = t.getName();
    System.out.println("Inside main() method: " + threadName);
  }
}
The code above generates the following result.
When you run a class, the JVM starts a thread named main, which is responsible for executing the main() method.
We can handle an uncaught exception thrown in a thread.
It is handled using an object of a class that implements the java.lang.Thread.UncaughtExceptionHandler interface.
The interface is defined as a nested static interface in the Thread class.
The following code shows a class which can be used as an uncaught exception handler for a thread.
class CatchAllThreadExceptionHandler implements Thread.UncaughtExceptionHandler {
  public void uncaughtException(Thread t, Throwable e) {
    System.out.println("Caught Exception from Thread:" + t.getName());
  }
}

public class Main {
  public static void main(String[] args) {
    CatchAllThreadExceptionHandler handler = new CatchAllThreadExceptionHandler();
    // Set an uncaught exception handler for the main thread
    Thread.currentThread().setUncaughtExceptionHandler(handler);
    // Throw an exception
    throw new RuntimeException();
  }
}
The code above generates the following result.
http://www.java2s.com/Tutorials/Java/Java_Thread/0040__Java_Thread_Current.htm
Patches item #1516912, was opened at 2006-07-04 13:15
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: Core (C code)
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Submitted By: Piéronne Jean-François (pieronne)
Assigned to: Nobody/Anonymous (nobody)
Summary: OpenVMS patches Modules directory

Initial Comment:

All the patches are delimited by

#if defined(__VMS)
#endif

or some variant of this.

8 files are patched:

bz2module.c: OpenVMS doesn't support univ_newline, so set f_univ_newline to 0.
cryptmodule.c: OpenVMS doesn't have a crypt routine, so use the OpenSSL routine instead.
dlmodule.c: the OpenVMS CRTL dlopen routine doesn't do any checks, so add a test for file accessibility in routine dl_open.
fpectlmodule.c: add the correct setting for Alpha and IA64 processors.
getpath.c: remove an old patch and add a default PREFIX.
posixmodule.c: OpenVMS doesn't have any urandom routine, so use the OpenSSL routine instead.
selectmodule.c: clean up an old previous patch; move most of the VMS-specific code to a specific routine (would probably be cleaner).

Regards,
Jean-François

----------------------------------------------------------------------

You can respond by visiting:
https://mail.python.org/pipermail/patches/2006-July/020167.html
> Yes, I already did it. Now I just need to remove all the code dealing
> with authentication from the flowscript. Do you see a problem with that?
> I only see benefits:
>
> - less code to maintain
> - possibility of using different user repositories (file, JDBC, LDAP,
> ...) via standard inderfaces
> - Single Sign-On
In general I agree but it makes the deployment more
complicated because the authentication differs from
container to container.
We would need to maintain examples for the different
containers... The login page might also need to
be different per container.
Hm... :-/ don't know
>>> 2. change the format of entries to something more standard, possibly
>>> Atom;
>>
>>
>>
>> Isn't that a question of the right stylesheet?
>
>
> You have to store items in some form or another, even if you can use
> XSL-T to transform them to something else later. I'm proposing to use
> something standard, again (Atom or RSS) instead of a made-up markup.
But wouldn't this limit us to the particular "features"/structure of that
markup. (Well, except using a different namespace) And what is the
benefit? We could save a transformation.
Hm... :-/ don't know
Sorry for coming across so negative ;)
cheers
--
Torsten
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200403.mbox/%3C4065B39E.4020507@vafer.org%3E
The objective of this post is to explain how to install a module to use the SHA-256 algorithm with MicroPython, running on the ESP32.
Introduction
We are going to use a module from hashlib which implements the SHA-256 hash algorithm. You can read more about SHA here.
Installing the library
At the time of writing, this library was not included by default in MicroPython's binary distribution, so we need to install it manually. Note however that in this case installing means copying the module to our file system and importing it from there to use its functions in the command line environment.
So, first of all, we need to copy the module onto our MicroPython file system. You can get the source code of the SHA-256 library here. The easiest way is to go to the raw view on GitHub, copy the whole code and save it on your computer in a file called sha256.py.
Just to confirm that there are no other module dependencies, you can try to do a control+F on the file to search for the import keyword. None should be found.
Now, we will deal with the upload of the file to the file system. To do so, we will need a Python tool called ampy. You can check this previous tutorial for a detailed explanation on how to install it and use it to upload files to the file system of MicroPython.
So, to proceed with the upload, open a command line and navigate to the directory where you previously saved the sha256.py file, with the ESP32 board connected to your computer. There, just hit the following command, changing COM5 with the serial port where your board is connected:
ampy --port COM5 put sha256.py
The file should now be uploaded to your board’s file system. It may take a while because of the size of the file.
Testing the library
To test the library, just connect to the Python prompt using a program like Putty. Once connected, we will confirm that the file is on our file system by sending the following commands:
import os
os.listdir()
As can be seen in figure 1, the library is now on the file system. In my case, I have other files from other projects.
Figure 1 – Sha256.py module on MicroPython’s file system.
Now, we will test the functionalities of the module. First of all, we will need to import it with the command shown bellow.
import sha256
Then, we will create an object of class sha256, passing as input the string with the content that we want to hash. We will use a simple test string, as shown in the code bellow.
testString = "String to hash with sha256"
hashObject = sha256.sha256(testString)
Finally, we call the hexdigest method to obtain the hash of our string. This method receives no arguments.
hash = hashObject.hexdigest()
print(hash)
You should get an output similar to figure 2.
Figure 2 – Result of the program.
We can use this website to confirm if the result of our hash program matches the expected result of applying the SHA-256 algorithm to the string. Figure 3 illustrates the result for the string we used in the Python code.
Figure 3 – Applying the SHA-256 algorithm to the string defined in the MicroPyton code.
We can confirm this result in MicroPython by copying the content from the validation website and pasting it to a string to compare with our hash string, as shown in figure 4. It should return “True”, indicating that the content of both matches.
Figure 4 – Matching comparison between the hash created on MicroPython and on the validation website.
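As a quick cross-check that doesn't require a browser: on a desktop Python, the standard hashlib module implements the same SHA-256 algorithm, so its digest should match the one printed by the ESP32.

```python
import hashlib

test_string = "String to hash with sha256"
reference = hashlib.sha256(test_string.encode("utf-8")).hexdigest()
print(reference)  # compare with the hexdigest obtained on the board
```

Any mismatch would point to a transcription error in the string rather than a problem with the algorithm itself.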
https://techtutorialsx.com/2017/06/18/esp32-micropython-using-sha-256/
Extras/SteeringCommittee/Meeting-20060810
From FedoraProject
2006 August 10 FESCo
Meeting Summaries are posted on the wiki at:
Attending
- warren
- thl
- scop
- c4chris
- rdieter
- tibbs
- abadger1999
- bpepple
- dgilmore
Summary
Mass Rebuild
- Plan and questions are on the wiki at
- Which packages need to be rebuilt
- sha256sum wasn't implemented in rpm so that isn't a factor
- Minimal buildroots have been implemented and will influence most packages
- Decided to go with the original plan: If a maintainer thinks a rebuild isn't required, they write a commit message that tells why not.
- The criterion maintainers should apply: everything that isn't content should be rebuilt.
comps.xml
- The comps.xml script produced a big list:
- Commandline stuff should be included
- Packages should not be listed twice (confuses end users)
- nagmails will be sent so people know they have packages that need attention
Legacy in buildroots
- Voted to activate legacy in buildroots and discuss maintainers responsibilities later
- tibbs will send out an email regarding maintainer responsibilities
- thl will document that FE branches in maintenance mode use Legacy packages
Ctrl-C Problem
- Async notification seems to be the only method to guarantee commit mail is sent
- Warren to take that up at the infrastructure meeting
Packaging Committee Report
- Python .pyo files are to be included directly in packages instead of being %ghosted
- c4chris is looking for a script to file bugzilla bugs for packages that need fixing
- scop has related changes to the python spec template prepared:
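The scan the committee wants amounts to a one-line grep over a CVS checkout, quoted verbatim in the meeting log below. Here is a hedged demo; the pkg-a/pkg-b directory layout and spec contents are invented to stand in for a real Extras checkout.

```shell
# Demo of the scan discussed in the meeting: list spec files that still
# %ghost compiled Python files. The layout below is invented for
# illustration; run the grep from the top of a real CVS checkout.
mkdir -p /tmp/ghost-demo/pkg-a/devel /tmp/ghost-demo/pkg-b/devel
printf '%%ghost %%{python_sitelib}/*.pyo\n' > /tmp/ghost-demo/pkg-a/devel/pkg-a.spec
printf '%%files\n' > /tmp/ghost-demo/pkg-b/devel/pkg-b.spec
cd /tmp/ghost-demo
# Pattern taken verbatim from the meeting log:
grep -lF '%ghost' */devel/*.spec
```

Only pkg-a's spec contains the literal string %ghost, so only that path is printed; the resulting list is what a follow-up script would feed into Bugzilla.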
Sponsorship Nominations
- rdieter was accepted
Misc
- FESCo members are now on both FAB and -maintainers
Future Elections
- Draft is at:
- No objections to it yet. Wait one more week and then vote on acceptance
Package Database
- c4chris posted some brain dumps about this to extras-list
- c4chris joining infrastructure list so he can help coordinate between the extras community needs and infrastructure people doing the implementation
Free discussion
- zaptel-kmod and kmod policy in general
- Main question: Should kmods which have no intention of ever moving into the mainstream kernel be allowed in?
- If the package is well maintained and end-users accept the risk, why not?
- Fedora kernel developers have stated they will not debug kernel problems where users have kernel mods not from upstream installed
- thl to take the question to FAB
- documents how to remove a package from Extras (in case it is renamed upstream, moved to Core, etc)
After Meeting Discussion
- Too many items on the agenda per meeting? Discussed moving some items to email instead of handling everything in the weekly IRC meetings
- Possibly move packaging committee report to email list
- To allow discussion before the next packaging meeting, the FESCo meeting can't be directly after the packaging meeting, so the meeting times/dates would have to change
- Owners of a task update the status on the wiki before the meeting
- New sponsor discussion could happen entirely on list:
- Use two lists: cvs-sponsors and fesco-list
Log
(09:55:13) ***warren here. (09:59:42) thl has changed the topic to: FESCo meeting in progress (09:59:46) thl: hi everyone (09:59:54) thl: who's around? (09:59:57) Rathann: o/ (10:00:01) Rathann: :> (10:00:07) ***c4chris__ is here (10:00:11) scop [n=scop] entered the room. (10:00:15) c4chris__ is now known as c4chris (10:00:18) drfickle left the room (quit: "Please do not reply to this burrito"). (10:00:24) rdieter: here. (10:00:36) tibbs: president. (10:00:41) warren: president? (10:00:56) tibbs: Used to say that in grade school. (10:00:59) c4chris: s/id// (10:01:07) c4chris: :) (10:01:24) thl: okay, then let's start slowly... (10:01:25) ***abadger1999 here (10:01:33) thl has changed the topic to: FESCo meeting in progress -- M{ae}ss-Rebuild (10:01:54) ***bpepple is here. (10:01:55) thl: scop, I assigned that job to you some days ago (10:02:06) scop: works4me (10:02:23) scop: but I've noticed a bunch of "fc6 respin" builds recently (10:02:29) thl: scop, there are some question still open; see (10:03:02) thl: scop, can you work out the details that are still below "to be discussed" (10:03:17) thl: then we can discuss it in the next meeting (10:03:26) thl: (or now if someone prefers) (10:03:45) abadger1999: rpm signing w/ sha256sum seems to affect all packages (10:03:53) scop: I can do that, but I think those look pretty much like no-brainers (10:03:57) abadger1999: So the answer to which packages need rebuilding would be all. (10:04:04) scop: good point (10:04:13) thl: scop, probably, but someone need to work them out anyway ;-) (10:04:14) f13: abadger1999: that didn't make it in (10:04:22) f13: abadger1999: there is no sha256sum. (10:04:38) ***cweyl is here (rabble) (10:04:44) rdieter: is sha256sum signing in place now/yet (no?)? (10:04:48) f13: abadger1999: so the real answer is anything that is gcc compiled. (10:05:08) f13: rdieter: I don't think the patches even went into our rpm package yet. 
It was nacked to do such signing for FC6/RHEL5 (10:05:27) f13: the option may be patched into rpm, but it won't be enabled by default. (10:05:27) abadger1999: What about rebuilding with minimal buildroots? (10:05:45) ***dgilmore is kinda here (10:05:54) f13: abadger1999: certainly possible criteria (10:06:01) c4chris: And do we start on August 28 ? (10:06:17) f13: now that Extras has the minimal roots, you could add 'anything that hasn't built since <foo>' where foo is the date that the minimal buildroots went into place. (10:06:29) thl: c4chris, that's the plan currently, but I suggest we discuss this next week again (or the week after that) (10:06:37) c4chris: k (10:07:01) dgilmore: abadger1999: minimal buildroots are in place and the buildsys-build packages have been fixed (10:07:20) thl: well, a lot of people don't like a mass-rebuild were everything is rebuild (10:07:28) f13: abadger1999: for Core, the criteria was: Any binary compiled package (for gnu-hash support), any package that hasn't yet been built in brew (our minimal buildroot stuff), and any package that was still being pulled in from FC5. (10:07:49) thl: because it creates lot's of downloads that are unnesessary for people (10:08:07) thl: but if we want to rebuild everything to make sure that it stil builds -> okay for me (10:08:24) c4chris: thl, we are talking devel here (10:08:43) thl: c4chris, yes, but devel will become FC6 (10:08:45) c4chris: I think we need the mass rebuild (10:08:47) rdieter: c4chris++ right, better to be safe than sorry later. (10:08:59) thl: and people updateing from FC5 have the downloads then, too (10:09:11) c4chris: thl, become is the key word... (10:09:15) f13: rebuilding for gnu-hash is a big bonus (10:09:28) f13: I think people would like the performance increase it can give. (10:09:30) warren: FC3 and FC6 will be Extras releases that continue to be used long after FC4 and FC5 are retired for obvious reasons. Rebuilding FC6 Extras now is a good idea. 
(10:09:47) thl: okay, so a real mass rebuild (10:09:56) warren: I guarantee it will also find more bugs. (10:09:57) thl: we should post this to the list for discussion (10:10:00) scop: it's just plain silly to rebuild eg. huge game content packages that aren't anything else but stuff copied from the tarball to the file system (10:10:05) thl: who will manage that? (10:10:25) warren: scop, what if we setup an explicit exclusion list? owners can request packages to exclude. (10:11:17) scop: sure, if there's someone to review/maintain that list (10:11:36) scop: personally, I think the original plan would work well enough (10:11:45) warren: Are we proposing that we automatically rebuild everything, or first ask maintainers to do it themselves to see who is AWOL? (10:11:58) thl: warren, ask maintainers (10:11:59) c4chris: warren, ask maintainers (10:12:04) rdieter: scop, warren: isn't that what "needs.rebuild" on FC6MassRebuild? (10:12:08) warren: If that's the case, then they can deal with their own exclusions. (10:12:21) c4chris: warren, I think so too (10:12:35) warren: OK, this plan is good. (10:13:03) warren: > are there still 3 orphans in devel repo according to the package report: dxpc gtkglarea2 soundtracker. What do do? Remove them? (10:13:09) thl: does anyone want to annouce this plan to the list for discussion? (10:13:14) f13: ask people, give it a week or two, then step in and buildthe ones that haven't piped up? (10:13:18) thl: or do we simply mention it in the summary? (10:13:19) warren: One more warning on the mailing list asking for owners, with Aug 28th deadline to remove if nobody claims it. (10:13:36) rdieter: warren++ (10:13:47) thl: warren, +1 (10:13:47) c4chris: warren, yup (10:13:55) bpepple: warren: +1 (10:14:03) warren: I'll do that warning now... (10:14:23) scop: does anything else depend on any of those three? 
(10:14:30) thl: warren, let me check if those three are still around first (or if there are others still around) (10:14:33) warren: good question, I'll check (10:14:36) tibbs: dxpc was rebuilt fairly recently. (10:15:04) thl: warren, no, seems those three are the only ones according to the PackageStatus page (10:15:18) rdieter: I updated/built dxpc recently... so that it would be in good shape for any potential new maintainer. (10:15:23) ***f13 steps out (10:15:37) thl: okay, so again: does anyone want to annouce this new plan to the list for discussion? or do we simply mention it in the summary? (10:15:48) scop: one more item: what happens if a maintainer does not take care of his rebuilds? (10:15:52) warren: rdieter, without anybody responsible though, do we really want to keep it? (10:16:02) scop: thl, I thought warren said he'd announce it (10:16:06) rdieter: warren: no maintainer -> remove it. (10:16:23) warren: scop, I said I'd announce the orphan removal warning (10:16:26) thl: scop, I though warren want's to warn that those three might get removed? (10:16:52) scop: yep... so what's the new plan thl was talking about? (10:17:00) tibbs: BTW, I count 37 pachages belonging to extras-orphan@fedoraproject.org in the current owners.list. (10:17:02) scop: Extras/Schedule/FC6MassRebuild? (10:17:26) thl: scop, I thought we rebuild everything now (besides big data-only packages)? (10:17:38) thl: that's the impression I got (10:18:15) warren: How about rebuild everything *EXCEPT* maintainers can choose to explicitly skip it if they have a good reason? (10:18:16) c4chris: thl, right, but that's pretty much what FC6MassRebuild says, no? (10:18:31) warren: ooh... (10:18:34) scop: c4chris, exactly (10:18:38) thl: warren, we need to define "good reason" in that case (10:18:45) warren: How about rebuild everything *EXCEPT* maintainers can choose to explicitly skip it if they have a good reason? Except they MUST rebuild if it is demonstrated that a rebuild would fail. 
(10:19:05) warren: Binaries without GNU_HASH always rebuild? (10:19:13) warren: perl modules built against earlier perl versions? (10:19:15) warren: python? (10:19:16) thl: warren, Binaries without GNU_HASH always rebuild +1 (10:19:44) cweyl: warren: so basically everything that isn't content? (10:20:25) c4chris: cweyl, yes that's a good way to put it (10:21:23) abadger1999: cweyl: Sounds good. So everything goes through the minimal buildroot. (10:21:38) thl: "so basically everything that isn't content" -- +1, 0 or -1 please! (10:21:45) scop: as a general rule, works4me (10:21:45) warren: Content must be rebuilt *IF* it would fail to rebuild. (10:21:51) thl: "everything that isn't content" +0.66 (10:22:02) scop: warren, if it fails to rebuild, it can't be rebuilt (10:22:11) warren: Then it must be fixed? (10:22:25) scop: yes (10:22:27) warren: How about a separate exclude.list that contains (10:22:31) warren: packagename reason (10:22:51) warren: uqm-bigasscontent 1GB of game data that doesn't change. (10:23:02) rdieter: warren: do we really care about the reason? (10:23:13) Nodoid [n=paul] entered the room. (10:23:27) scop: I still think that the commit message to needs.rebuild is enough (10:23:38) c4chris: scop, +1 (10:23:42) thl: scop, +1 (10:23:42) abadger1999: scop: +1 (10:24:01) warren: hmm.. I guess (10:24:02) warren: ok (10:24:04) tibbs: !2 (10:24:04) c4chris: "everything that isn't content" +1 (10:24:08) tibbs: crap. (10:24:12) tibbs: +1 (10:24:28) bpepple: +1 (10:24:45) rdieter: I agree with scop, why isn't needs.rebuild sufficient? (or is this orthogonal to that?) 
(10:25:25) thl: guys we run late (10:25:38) warren: Let's move on (10:25:42) rdieter: ok (10:25:44) thl: afaics the current plan looks like this: (10:25:47) scop: we use needs.rebuild, but append something like "as a general rule, everything that is not pure content should be rebuilt" to FC6MassRebuild (10:25:53) thl: "everything that isn't content need a rebuild" (10:26:06) thl: "a needs.rebuild file will be placed into cvs" (10:26:31) thl: and if people don#t rebuild stuff they have to mention the reasons in the cvs delete commits message of needs.rebuild (10:26:37) thl: that okay for everybody? (10:26:40) ***warren looks at the 37 orphans... (10:26:42) c4chris: thl, scop: +1 (10:26:46) rdieter: +1 (10:26:55) scop: +1 (10:27:02) abadger1999: +1 (10:27:04) tibbs: +1 (10:27:05) bpepple: +1 (10:27:11) thl: okay, then let's move on (10:27:24) thl has changed the topic to: FESCO meeting -- Use comps.xml properly (10:27:26) thl: c4chris ? (10:27:31) warren: I suggested the exclude.list with reasons because it is easier to search than commit messages (10:27:35) warren: but that's fine (10:27:38) c4chris: Well I produced a big list... (10:27:54) c4chris: There was another idea to trim down soem more libs (10:28:17) thl: c4chris, do you want to work further on that idea and the stuff in general? (10:28:22) c4chris: So far, only Hans has added stuff to comps... (10:28:49) scop: warren, searching for needs.rebuild in a folder containing commit mails should work (10:28:49) c4chris: thl, a few things are not really clear: (10:28:56) thl: c4chris, we should send out mails directly to the maintainers when we now what needs to be in comps (and what not) (10:29:07) c4chris: do we also want cmdline stuff? (10:29:09) thl: then at least some maintainers will add stuff (10:29:19) thl: c4chris, cmdline stuff -> IMHO yes (10:29:38) c4chris: and do we allow packages listed twice? (10:29:42) thl: c4chris, or how does core handle cmdline stuff? (10:30:00) thl: c4chris, twiece? 
good question. Maybe you should ask jeremy or f13 (10:30:16) c4chris: thl, I think there are cmdline tools in Core too (10:30:17) scop: I'd suggest a SIG or a special group for taking care of comps (10:30:20) rdieter: c4chris: twice, as in more than one section/group? (10:30:25) jeremy: c4chris: packages being listed twice should be avoided (10:30:28) c4chris: rdieter, yes (10:30:34) jeremy: it leads to lots of user confusion (10:30:41) c4chris: jeremy, k, I thought so (10:30:54) rdieter: agreed, pick one(primary) group (and stick with it). (10:30:56) thl: scop, well, do you want to run the SIG? (10:31:10) warren: c4chris, twice is fine (10:31:17) warren: jeremy, eh? (10:31:24) thl: scop, I think we need a QA sig that looks after stuff like this (10:31:45) c4chris: thl, we sorta have a QA SIG... :-) (10:31:49) scop: thl, no, I'm not personally terribly interested in it (10:31:50) warren: Hmm, I might be thinking of the common practice of listing packages multiple times in the hidden language groups. (10:32:25) thl: c4chris, well, then it would be good if that sig could handle that ;-) (10:32:32) scop: which is actually why I'd prefer someone who is interested and can keep things consistent and useful would maintain comps (10:32:48) c4chris: thl, k (10:32:58) thl: c4chris, thx (10:33:08) thl: well, was this all regarding this topic for today? (10:33:29) c4chris: thl, yup. I'll see about sending some nagmails... (10:33:43) thl: c4chris, tia (10:33:45) thl has changed the topic to: FESCO meeting in progress currently -- Activate legacy in buildroots (10:33:54) thl: well, we had the discussion on the list (10:34:21) thl: my vote: activate legacy in buildroots now, discuss the maintainer responsibilities later (10:34:22) mspevack is now known as mspevack_afk (10:34:38) dgilmore: +1 (10:34:43) thl: building FE3 and FE4 without legacy is ridiculous (10:34:51) warren: +1 (10:35:00) tibbs: +1 (10:35:01) c4chris: +1 (10:35:09) rdieter: +1 (10:35:09) thl: abadger1999 ? 
(10:35:18) abadger1999: Yeah, why not? +1 (10:35:35) bpepple: +1 (10:35:38) thl: k, settled (10:35:43) dgilmore: Ill get the mock config changes done, and make sure we sync up the legacy tree (10:35:51) thl: dgilmore, tia (10:36:01) thl has changed the topic to: FESCO meeting in progress currently -- CTRL-C problem (10:36:05) thl: any news? (10:36:08) scop: hold on a bit (10:36:10) abadger1999: tibbs, Are you still going to send out a maintainer resp. email? (10:36:17) thl has changed the topic to: FESCO meeting in progress currently -- Activate legacy in buildroots (10:36:19) thl: scop, ? (10:36:38) scop: it should be also documented somewhere that use of "EOL" FE branches assumess FL is in use too (10:36:40) tibbs: I lost some work in the wiki crash, unfortunately. (10:37:03) thl: scop, agreed (10:37:04) tibbs: I've been trying to feel out where the community is on FE3-4 maintenance. (10:37:13) thl: dgilmore, can you look after that, too? (10:37:42) thl: is the proper place afaics (10:38:05) scop: indeed (10:38:06) warren: What ever happened with the security SIG? The top priority of a security team would be to track issues and file tickets if new issues appear. Has that began? (10:38:18) abadger1999: tibbs: :-( (10:38:18) scop: yes (10:38:20) bpepple: warren: Yeah. (10:38:22) abadger1999: warren: Yes. (10:38:24) thl: warren, that's working afaics (10:38:26) warren: good =) (10:38:28) tibbs: That's been ongoing for some time. (10:38:55) thl: scop, well, I'll put in on if no one else wants (10:39:00) thl: so, let's move on (10:39:04) thl has changed the topic to: FESCO meeting in progress currently -- CTRL-C problem (10:39:19) thl: any news? sopwith still traveling afaik (10:39:22) thl: so probably no (10:39:31) ***thl will skip this one in 20 seconds (10:39:34) warren: Same thought as last week (10:39:46) warren: only way to really fix this is to make CVS commit mail async (10:39:50) warren: do we want to do this? 
(10:40:12) thl: warren, are there ans disadvantages (10:40:30) thl: s/ans/any/ (10:40:50) scop: yes, someone has to do the work :) (10:41:11) warren: I'll bring it up at the infrastructure meeting today (10:41:21) warren: I don't know how easy it would be (10:41:28) thl: scop, hehe, the usual problem ;-) (10:41:35) thl: warren, tia (10:41:40) scop: and actually, it can be somewhat difficult (10:41:40) thl: k, moving on (10:41:41) warren: tia means? (10:41:46) scop: warren, TIA (10:41:50) warren: ? (10:41:55) scop: Thanks In Advance (10:41:58) warren: oh (10:41:59) thl: thx in advance (10:41:59) warren: ok (10:42:12) thl has changed the topic to: FESCO meeting in progress currently -- Packaging Committee Report (10:42:14) thl: ? (10:42:42) abadger1999: I think the only thing that passed today was changing how .pyos are handled. (10:42:56) abadger1999: They are to be included instead of ghosted. (10:43:16) thl: abadger1999, we probably should run a script over a devel checkout of extras and file bugs (10:43:24) thl: otherwise stuff will never get fixed... (10:43:34) thl: maybe another job for the QA sig? (10:43:37) abadger1999: That's a good idea. (10:43:47) bpepple: Yeah, there should be a lot of python packages this affects. (10:43:54) c4chris: thl, gotcha (10:43:56) thl: or any other volunteers (10:43:59) thl: ? (10:44:08) thl: c4chris, sorry ;-) (10:44:16) c4chris: np (10:44:27) scop: related python spec template changes: (10:44:54) thl: abadger1999, c4chris, can you look after such a script please? (10:45:02) rdieter: it was/is-going to be mentioned on fedora-maintainers too... (10:45:12) scop: grep -lF '%ghost' */devel/*.spec (10:45:23) c4chris: thl, yup, we'll cook something up (10:45:32) thl: scop, + "| file _bugs.sh" (10:45:36) thl: c4chris, tia (10:45:49) abadger1999: thl: Will do. (10:45:49) thl: k, moving on (10:45:58) thl has changed the topic to: FESCO meeting in progress currently -- Sponsorship nominations (10:46:04) thl: any new nominations? 
(10:46:29) c4chris: not that I know of (10:46:32) dgilmore: thl: yeah ill look after it also (10:46:33) ***bpepple listens to the crickets. (10:46:35) rdieter: Well, it feels dirty, but I'd like to nominate me. (10:46:56) c4chris: rdieter, self nominations are fine (10:46:57) rdieter: (wanted to sponsor someone the other day, and realized I couldn't... yet) (10:47:21) ***thl wonders why rdieter isn't a sponsor yet (10:47:26) warren: +1 rdieter (10:47:27) thl: well, that's probably easy (10:47:32) bpepple: +1 (10:47:33) abadger1999: +1 (10:47:34) c4chris: +1 (10:47:35) thl: I think we don#t need to discus this (10:47:36) scop: huh? -1 (10:47:39) thl: rdieter sponsor +1 (10:47:39) dgilmore: +1 for rdieter (10:47:41) scop: OOPS +1 (10:47:51) thl: k, rdieter accepted (10:48:03) thl has changed the topic to: FESCO meeting in progress currently -- approve kmod's (10:48:03) rdieter: thanks, no I have no excuse for more work.. (: (10:48:04) tibbs: +1 (10:48:08) tibbs: (slow) (10:48:16) ***warren upgrades rdieter (10:48:21) thl: no new kmods up for discussion, moving on (10:48:35) thl has changed the topic to: FESCO meeting in progress currently -- MISC (10:48:53) thl: dgilmore, FE3 and FE4 builders are working fine now (python and elf-utils?) (10:48:57) thl: ? (10:49:01) dgilmore: thl: yep all donr (10:49:02) c4chris: crap, the package database item has been eaten by the wiki crash... (10:49:04) dgilmore: done (10:49:19) thl: dgilmore, thx (10:49:24) warren: BTW, are all FESCO members on fedora-maintainers and fedora-advisory-board? (10:49:29) thl: c4chris, uhhps, yes, sorry (10:49:55) rdieter: -maintainers, probably, fab maybe not (but probably should) 9: (10:49:57) dgilmore: warren: should be though i know us new FESCO guys were only just added to fab (10:50:04) tibbs: warren: My mailbox is certainly bulging from the latter list, yes. (10:50:10) bpepple: warren: I'm on both. 
(10:50:13) c4chris: warren, I am (10:50:13) thl: all FESCo members should be on FAB now (10:50:16) tibbs: Someone went through and added us. (10:50:22) thl: I checked the subscribers last week (10:50:26) rdieter: good. (10:50:35) dgilmore: fab is high volume :) (10:50:42) warren: As developement leaders in Fedora, your opinions would be valued in many matters of importance discussed on fab. (10:50:55) thl has changed the topic to: FESCO meeting in progress currently -- Future FESCo elections (10:51:29) thl: abadger1999, we wait a bit more for replys to your mail before we proceed? (10:51:29) abadger1999: I posted the draft. Anyone want to propose changes? (10:51:34) dgilmore: warren: i gave a longish reply last night about how i went about doing aurora extras (10:51:52) ***warren reads that mail... (10:51:54) dgilmore: abadger1999: looked pretty sane to me (10:51:58) c4chris: abadger1999, I like the draft (10:52:01) warren: rdieter, upgraded (10:52:21) ***thl votes for "wait another week before we accept the proposal" (10:52:38) c4chris: thl, k (10:52:44) bpepple: thl: +1 (10:52:49) rdieter: thl: +1 (10:52:54) thl: k, so let's move on (10:52:55) abadger1999: thl: +1 (10:53:05) thl has changed the topic to: FESCO meeting in progress currently -- package database (10:53:46) thl: c4chris, warren ,do we want to discuss stuff regarding that topic today? (10:53:46) c4chris: I posted a couple brain-dumps kinda mails (10:54:10) c4chris: any word of advice at this time ? (10:54:23) warren: Keep dumping, next step is to collect and organize everything we want. (10:54:29) thl: c4chris, well, "simply do something until somebody yells" (10:54:44) c4chris: thl, k (10:54:57) thl: c4chris, sorry, but that#s often the only way to really get something done afaics (10:54:57) dgilmore: c4chris: Just that there is alot of things that it needs to support. 
We need to design it in a modular fashion so it can grow as we grow (10:55:17) mspevack_afk is now known as mspevack (10:55:25) c4chris: warren, k. I'll try to start the collect phase soon... (10:55:26) [splinux] left the room (quit: "Ex-Chat"). (10:55:33) warren: due to the large scope of package database, mailing lists and wiki are most appropriate and a best use of time. Only after we have the mess better organized into plans should we discuss it? (10:55:34) abadger1999: c4chris: Are you on infrastructure list? (10:55:43) thl: warren, +1 (10:55:59) dgilmore: warren: +1 (10:56:08) c4chris: abadger1999, not sure (10:56:11) abadger1999: warren: +1 (10:56:18) bpepple: warren: +1 (10:56:24) c4chris: abadger1999, is it open? (10:56:31) warren: c4chris, yes, to anyone (10:56:40) c4chris: I'll check (10:56:52) thl: k, so let's move on (10:56:57) c4chris: I think I'm on the buildsys list (or soemthing) (10:57:00) warren: (10:57:03) thl has changed the topic to: FESCO meeting in progress currently -- free discussion around extras (10:57:10) thl: anything that we need to discuss? (10:57:11) j-rod: dgilmore: how are you setting -j32? (10:57:18) thl: or was that all for today? (10:57:34) dgilmore: j-rod: thats how many cpus are in the box so its being auto done (10:57:45) nirik: thl: any thoughts further on zaptel-kmod? (10:57:59) scop: this info should find a home somewhere: (10:58:10) dgilmore: thl: I think thats all. Ive seen no further feedback on buildysys issues (10:58:43) thl: nirik, good idea (10:58:44) tibbs: scop: Yes, this is in FESCo's jurisdiction, it seems. 
(10:58:52) thl has changed the topic to: FESCO meeting in progress currently -- zaptel-kmod (10:58:58) scop: it's in the packaging namespace but is Extras only (at least for now) so that's not quite the correct place for it (10:59:16) thl: well, nirik started a discussion on fedora-devel (10:59:18) j-rod: dgilmore: ah, gotcha -- I was thinking it would be better to use slightly less, with the intention of filling the cpus with multiple simultaneous builds (10:59:25) cweyl: scop: maybe just move to Extras/PackageEndOfLife? (10:59:28) thl: but there was much that came out of it afaics (10:59:47) scop: cweyl, yeah, maybe (10:59:48) nirik: I guess I would say: should the kmod guidelines say "If the upstream has no plans to merge with the upstream kernel, the module is not acceptable for extras" ? (11:00:08) thl: nirik, well, IMHO yes (11:00:13) dgilmore: zaptel as much as i want it in. We need to do something to make sure that it gets upstream. So we should ask the community of someone is willing to do that (11:00:58) thl: dgilmore, sounds like a good idea, but I doubt we'll find someone (11:01:22) cweyl: wait, I've never really understood this. why should it matter if (for whatever, presumably legitimate reason) an upstream decides to not pursue having it merged into the kernel proper? (11:02:12) thl: c4chris, well, drivers belong in the kernel -- kmods are a solution to bridge the gap until they get merged into the kenrel, but no long term solution (11:02:26) thl: s/c4chris/cweyl/ (11:02:31) thl: cweyl, simply example: (11:02:46) thl: kmod-foo breaks after rebase to 2.6.18 (11:02:56) dgilmore: though dave jones comment that he wont provide support for kernels with any kind of external module means that we could have confused users if they file a bug and get in return WONTFIX becaue of the extras kmods (11:02:59) thl: but a new kmod-bar doesn#t build anymore on 2.6.17 (11:03:06) devrimgunduz left the room (quit: Remote closed the connection). 
(11:03:13) thl: people that need both kmod-foo and kmod-bar will have problems now (11:03:25) tibbs: I personally believe that the length of the solution is up to the maintainer of the Extras package. (11:03:47) cweyl: tibbs: +1 (11:03:49) tibbs: Any external module solution will have problems keeping in step with kernels. At that point it's up to the maintainer. (11:03:56) cweyl: thl: I think we're setting the bar too high (11:04:05) scop: cweyl++ (11:04:17) tibbs: I don't believe that acceptance into extras should be used as any kind of political leverage as I have seen some state before. (11:04:45) tibbs: The issue of bugs and their interaction with the main kernel is quite compelling, though. (11:04:46) thl: tibbs, that was the agreement we settled on before we worked on the kmod stuff at all (11:04:54) cweyl: it sounds like zaptel-kmod is well maintained, isn't going anywhere anytime soon, and isn't going into the mainstream kernel ever due to business reasons... why shouldn't we let a maintainer package it for people who want it? (11:05:04) thl: well (11:05:15) thl: I'll bring it up to fab for discussion (11:05:19) thl: that okay for everybody? (11:05:26) cweyl: wait -- why fap? (11:05:30) thl: FAB (11:05:33) tibbs: thl: I was not a party to that discussion. (11:05:33) abadger1999: To me we have to keep our kernel people happy. (11:05:34) cweyl: err, fab? isn't this just an extras? (11:05:34) thl: sorry, typo (11:05:46) thl: abadger1999, +1 (11:05:47) bpepple: abadger1999: +1 (11:05:47) cweyl: err, a FESCo decision? (11:06:05) tibbs: If the "agreement" is unchangeable then that would be unfortunate. (11:06:08) thl: cweyl, no, this IMHO is something that matter for fedora at a whole project (11:06:13) cweyl: hrm. (11:06:21) thl: tibbs, everything can be changed (11:06:25) ***dgilmore steps back, I want it in but i understand the reasons for not having it in (11:06:35) warren: thl, except Bush's mind. (11:06:45) tibbs: WTF? 
(11:06:45) thl: warren, :) (11:06:57) cweyl: well, think of it this way too: as a user, I choose to buy a nvidia card, knowing that I'll need a kmod for it. (11:07:07) cweyl: I know there are risks that go along with that, and I'm willing to take them. (11:07:21) c4chris: but when your kernel crash, you usually file a kernel bug... (11:07:30) warren: BTW, vaguely on this topic, there was interesting news yesterday. (11:07:34) cweyl: Same thing with people who want to use zaptek-kmod, or the iscsi module that was discussed a while back... (11:07:40) warren: AMD is planning on open sourcing some of the ATI driver stuff. (11:07:49) tibbs: warren: Link? (11:07:54) thl: warren, are they really planing it? (11:07:59) c4chris: cweyl, that's why we need to keep the kernel maintainers happy (11:08:00) ***warren finds URL (11:08:00) bpepple: warren: Yeah, that looks like it could be promising. (11:08:01) dgilmore: cweyl: not always. my company provides me a system (probably laptop) it needs a kmod and i dont support binary drivers. but i get no say in the purchasing decision (11:08:04) thl: I only heard rumors (11:08:30) ***nirik also only heard rumors. (11:08:31) cweyl: c4chris: I'm not saying we shouldn't :) (11:08:41) tibbs: I've only heard wishful thinking. (11:08:47) warren: (11:08:49) nirik: warren: you talking about: ? (11:08:58) nirik: yeah, I read that as a rumor. (11:08:59) thl: I consider that as rumors (11:09:12) c4chris: cweyl, that means we probably need FAB buying the idea too... (11:09:24) thl: I ascutally asked AMD and ATI guys for clarification already earlier today (11:09:27) warren: I think consulting FAB i sa good idea. (11:09:30) thl: no reply until now (11:09:38) warren: thl, not surprised (11:09:40) cweyl: dgilmore: right. but the point is I want to use h/w that requires a kmod, and my decision to do that doesn't impact anyone else (11:09:54) warren: Anyway, if this becomes true, it will put pressure on NVidia. 
(11:10:05) ***bpepple thinks it needs to go to FAB also. (11:10:07) cweyl: c4chris: it's not like kmods change their package, or globally affect all fedora users (11:10:09) thl: so, anything else that needs to be discussed? (11:10:18) ***thl will close the meeting in 60 seconds (11:10:24) dgilmore: cweyl: yes and no. its not always my decsion but yes i want my hareware to work (11:10:45) warren: I think we're actually slowly winning the proprietary kernel module war (11:10:55) warren: Intel is leading the way, and hopefully AMD comes next (11:11:00) ***thl will close the meeting in 30 seconds (11:11:11) abadger1999: Approving this? (11:11:21) warren: Our only way to promote further growth is to maintain our hard line stance. (11:11:46) warren: SuSe and Ubuntu both switched away from proprietary modules to a hard line stance. (11:11:50) warren: we're doing the right thing (11:11:58) thl: abadger1999, well, if that something FESCo should approve (11:11:58) warren: it will be painful meanwhile though... (11:12:08) thl: why is it in the Packaging namespace then? (11:12:28) thl: abadger1999, but well, get's a +1 from me (11:12:32) c4chris: abadger1999, looks fine to me (11:12:52) scop: I suggested to put it in the packaging namespace, but others corrected me (11:13:07) abadger1999: thl: scop proposed it in packaging this morning but it seems much more like a FESCo thing. (11:13:17) thl: abadger1999, well, never mind (11:13:26) thl: abadger1999, it actually describes what we do already (11:13:29) thl: so +1 (11:13:35) c4chris: +1 (11:13:36) abadger1999: +1 (11:13:37) rdieter: +1 (11:13:38) bpepple: +1 (11:13:57) tibbs: +1 (11:14:00) thl: k, settled (11:14:16) thl: abadger1999, can you moe it over to a proper place in the wiki please? (11:14:23) ***thl will close the meeting in 30 (11:14:32) abadger1999: will do. (11:14:33) thl: s/moe/move/ (11:14:40) ***thl will close the meeting in 10 (11:14:50) thl: -- MARK -- Meeting end (11:14:55) thl: thx everybody! 
(11:15:05) tibbs: thl: Thanks. (11:15:11) c4chris: thl, thx (11:15:31) thl has changed the topic to: This is the Fedora Extras channel, home of the FESCo meetings and general Extras discussion. | | Next FESCo Meeting: 2006-08-17 1700 UTC (11:15:43) ***c4chris goes afk: time fer dinner :-) (11:16:10) thl: are you guys still satisfied with the way I run the meetings? (11:16:19) thl: or is there anything we should change? (11:16:33) tibbs: I'm not sure it could be done much better. (11:16:45) scop: thl, absolutely no problem with that (11:16:49) thl: I know I'm a bit hectic now and then (11:16:55) scop: I think the agenda is a bit swollen, though (11:17:01) dgilmore: thl: i think your doinga great job (11:17:21) thl: scop, you mean the wiki (11:17:28) dgilmore: thl: something you may need to step up and say hey were done on this now move on (11:17:40) thl: well, I wanted a better overview for those that missed a meeting or two (11:18:05) scop: thl, no, but I think there are maybe a bit too many things to process per meeting (11:18:39) thl: scop, yes (11:18:52) thl: maybe we should do more via mail (11:19:09) thl: e.g. the reports from the packaging committee maybe (11:19:33) scop: that would work for me (11:19:34) thl: maybe the owners of task should update the wiki pages with a status update *before* the meeting (11:19:51) abadger1999: thl: Both of those would be good ideas. (11:19:52) thl: we could avoid the "status update" questions then (11:20:05) scop: and sponsor stuff could be taken entirely to the list (11:20:09) Rathann: Nodoid: wow, you really did it, #202004 (11:21:08) thl: I'll think about it a bit more (11:21:37) thl: abadger1999, scop, but we need to make sure that we discuss important things from the packaging committee meetings here (11:21:50) scop: good point (11:21:51) thl: the less important things could be done on the list (11:22:11) abadger1999: If it needs to be discussed here then the report needs to be done here. 
(11:22:22) abadger1999: Or the wekly packaging meeting could be changed. (11:22:27) bpepple: scop: +1 (11:22:58) thl: abadger1999, changeing thepackaging meetings could help (11:23:51) thl: btw, regading sponsor discussions (11:23:59) thl: do we want to do that on cvs-sponsors (11:24:02) thl: or fesco only (11:24:11) ***thl votes for cvs-sponsors (11:24:38) ***bpepple votes for fesco. Could give more frank discussions. (11:24:48) abadger1999: I'll add meeting time to the packaging agenda. (11:25:44) thl: bpepple, but we are getting quite big, so it might more and more often happen that FESCo members don#t know those that were nominated for the sponsor job (11:26:23) bpepple: thl: It's pretty easy to query bugzilla for the reviews. (11:26:48) thl: well, let's discuss this on the list or in the next meeting (11:26:56) bpepple: no problem. (11:27:21) tibbs: c4chris did add bugzilla links to the "top reviewers" table in PackageStatus. (11:27:36) thl: bpepple, the past discussions we had on cvs-sponsors were quite frank iirc (11:27:45) tibbs: That list currently covers twelve reviews and up. (11:28:58) bpepple: yeah, but I'm afraid some of the discussions might discourage the participants enthusiasm, if there not comfortable with criticisms. (11:29:37) scop left the room ("Leaving"). (11:31:29) thl: bpepple, maybe we should do it on both lists? (11:32:00) bpepple: That might be a good idea. (11:34:20) cweyl: if there are criticisms, and they're discussed privately, might I suggest that it's a good idea to offer those criticisms, packaged constructively, to the nominee? 
That way they know what prevented them from being approved, and what they need to fix (11:34:20) thl: c4chris, /Extras/Schedule/PackageDatabase in place again (11:34:42) thl: cweyl, yeah, that's what I thought already, too (11:35:12) cweyl: and doing that publicly would give others guidance, establish precedent, etc, etc (11:35:30) cweyl: thl: I suspect I'm just stating the obvious here, but someone had to do it :) (11:35:45) dgilmore: thl: whats cvs-sponsers (11:36:26) bpepple: cweyl: Agreed, that was what I was thinking. (11:37:21) thl: dgilmore, a mailing-list where all sponsors are subscribed (it#s actually an alias or something else and no proper mailinglist) (11:38:00) dgilmore: thl: ok well i think its bad to discusss there because some fesco members are not in on that. (11:38:04) nirik: yeah, it's an alias... which unfortunately causes SPF issues. ;( (11:38:28) dgilmore: thl: namely me and im sure others (11:38:40) cweyl: nirik: cvs-sponsors causes skin cancer? (11:38:41) thl: dgilmore, well, we probably really should discuss on both lists (11:39:02) dgilmore: and I know i really dont have the time to dedicate to being a sponsor (11:39:37) nirik: cweyl: Sender Policy Framework... skin cancer might be easier to treat sometimes. ;( (11:40:00) cweyl: nirik: gotcha :) (11:40:36) dgilmore: nirik: people using SPF should add redhat /fedora mailservers to their dns (11:40:39) thl: dgilmore, np, I also don't find enough time to review and sponsor (11:40:49) thl: dgilmore, that's sad and I don#t like it (11:41:15) thl: but that's how it is ATM... :-/ (11:42:10) dgilmore: thl: yeaqh it is. Between Security and Infrastructure and my sparc port of extras, FESCO and maintaining my own packages I review when i can but I dont want to do a half assed job of something (11:42:50) thl: I actually even tried to get rid of a lot of packages in Extras (11:43:05) thl: to have more time for other stuff (11:43:13) dgilmore: I dont have a huge amount of packages. 
(11:43:36) dgilmore: I try to commit myself to things where i feel ill have a positive effect on something
http://fedoraproject.org/wiki/Extras/SteeringCommittee/Meeting-20060810
How do I bind a key combo to command_mode?
{ "keys": "alt+z"], "command": "setting", "args": {"command_mode": true} }
You use a context:
{ "keys": "alt+z"], "command": "reindent", "context":
{ "key": "setting.command_mode", "operator": "equal", "operand": true }
]
}
Of course, for this to ever work, you need to set command_mode to true first.
That's what I'm trying to work out... how do I turn command_mode on? Without doing it via the console, what's the key combo... and can I bind one?
Oh... There used to be a "set" command in Sublime 1. I'm not sure whether it's in for Sublime 2 yet, but I can't find it. Short of writing a plugin for it, I don't see any other way.
Plugin:
import sublime, sublime_plugin
class ToggleCommandModeCommand(sublime_plugin.TextCommand):
def run(self, edit):
status = bool(self.view.settings().get('command_mode'))
self.view.settings().set('command_mode', not status)
In a .sublime-keymap:
[
    {
        "keys": ["j", "j"],
        "command": "toggle_command_mode"
    }
]
Thank you very much that worked very well!
Now to bind some keys to command mode... has anyone built Vim-like key bindings?
{ "keys": "d"], "command": "left_delete", "context":{ "key": "setting.command_mode", "operator": "equal", "operand": true }] },
{ "keys": "d", "d"], "command": "run_macro_file", "args": {"file": "Packages/Default/Delete Line.sublime-macro"}, "context":{ "key": "setting.command_mode", "operator": "equal", "operand": true }] }
Why does the second "d", "d" wipe out the first or make it behave odd?
I believe Sublime resolves ambiguous key bindings with a timeout, so that d will presumably trigger only after a short while (so Sublime knows you didn't mean d,d).
OK... I'm trying to make it work like Vim's key bindings. Can you set how long the timeout should wait?
Not feasible right now, as far as I know. I hope a more powerful keybindings system will be added in the future, though.
But you could approach this differently if you're willing to put up with some frustration:
Forget about command mode and open an input_panel instead. Type the command there and parse that. It won't work well for motions, and it's far from ideal, but it's something at least.
On the other hand, simple motions should work well with the existing support for command mode. hjkl, HJKL, b, e, a, E, A, o, O, G, gg, etc...
OK, thanks. Yeah, it would be cool if it could copy Vim's edit mode.
But command mode is cool; I might make up some shortcuts to go in there and see how it goes.
https://forum.sublimetext.com/t/command-mode-key-bind/1661
\ A powerful locals implementation

\ Copyright (C) 1995,1996,1997,1998,2000,2003,2004,2005

\ [...]

also locals-types

\ these "locals" are used for comparison in TO

c: some-clocal 2drop
d: some-dlocal 2drop
f: some-flocal 2drop
w: some-wlocal 2drop

\ [...]
    drop nextname
    ['] W: >head-noprim ;

previous

: new-locals-reveal ( -- )
    true abort" this should not happen: new-locals-reveal" ;

create new-locals-map ( -- wordlist-map )
    ' new-locals-find A,
    ' new-locals-reveal A,
    ' drop A, \ rehash method
    ' drop A,

new-locals-map mappedwordlist Constant new-locals-wl

\ slowvoc @
\ slowvoc on
\ vocabulary new-locals
\ slowvoc !
\ new-locals-map ' new-locals >body wordlist-map A! \ !! use special access words

variable old-dpp

\ and now, finally, the user interface words

: { ( -- latestxt wid 0 ) \ gforth open-brace
    dp old-dpp !
    locals-dp dpp !
    latestxt get-current
    get-order new-locals-wl swap 1+ set-order
    also locals definitions locals-types
    0 TO locals-wordlist
    0 postpone [ ; immediate

locals-types definitions

: } ( latestxt wid 0 a-addr1 xt1 ... -- ) \ gforth close-brace
    \ ends locals definitions
    ] old-dpp @ dpp !
    begin
        dup
    while
        execute
    repeat
    drop
    locals-size @ alignlp-f locals-size ! \ the strictest alignment
    previous previous
    set-current lastcfa !
    locals-list 0 wordlist-id - TO locals-wordlist ;

: -- ( addr wid 0 ... -- ) \ gforth dash-dash
    }
    [char] } parse 2drop ;

\ [...]

\ Implementation:

\ explicit scoping

: scope ( compilation -- scope ; run-time -- ) \ gforth
    cs-push-part scopestart ; immediate

: adjust-locals-list ( wid -- )
    locals-list @ common-list
    dup list-size adjust-locals-size
    locals-list ! ;

: endscope ( compilation scope -- ; run-time -- ) \ gforth
    scope?
    drop adjust-locals-list ; immediate

\ adapt the hooks

: locals-:-hook ( sys -- sys addr xt n )
    \ addr is the nfa of the defined word, xt its xt
    DEFERS :-hook
    latest latestxt
    clear-leave-stack
    0 locals-size !
    locals-buffer locals-dp !
    0 locals-list !
    dead-code off
    defstart ;

: locals-;-hook ( sys addr xt sys -- sys )
    def?
    0 TO locals-wordlist
    0 adjust-locals-size ( not every def ends with an exit )
    lastcfa ! last !
    DEFERS ;-hook ;

\ [...]

: (then-like) ( orig -- )
    dead-orig =
    if
        >resolve drop
    else
        dead-code @
        if
            >resolve set-locals-size-list dead-code off
        else \ both live
            over list-size adjust-locals-size
            >resolve
            adjust-locals-list
            \ [...]

' locals-:-hook IS :-hook
' locals-;-hook IS ;-hook

' (then-like) IS then-like
' (begin-like) IS begin-like
' (again-like) IS again-like
' (until-like) IS until-like
' (exit-like) IS exit-like

\ [...]

: (local) ( addr u -- ) \ local paren-local-paren
    \ a little space-inefficient, but well deserved ;-)
    \ In exchange, there are no restrictions whatsoever on using (local)
    \ as long as you use it in a definition
    dup
    if
        nextname POSTPONE { [ also locals-types ] W: } [ previous ]
    else
        2drop
    endif ;

: >definer ( xt -- definer ) \ gforth
    \ [...]
    dup >does-code
    ?dup-if
        nip 1 or
    else
        >code-address
    then ;

: definer! ( definer xt -- ) \ gforth
    \G The word represented by @var{xt} changes its behaviour to the
    \G behaviour associated with @var{definer}.
    over 1 and if
        swap [ 1 invert ] literal and does-code!
    else
        code-address!
    then ;

:noname
    ' dup >definer [ ' locals-wordlist ] literal >definer =
    if
        >body !
    else
        -&32 throw
    endif ;
:noname
    comp' drop dup >definer
    case
        [ ' locals-wordlist ] literal >definer \ value
        OF >body POSTPONE Aliteral POSTPONE ! ENDOF
        \ !! dependent on c: etc. being does>-defining words
        \ this works, because >definer uses >does-code in this case,
        \ which produces a relocatable address
        [ comp' some-clocal drop ] literal >definer
        OF POSTPONE laddr# >body @ lp-offset, POSTPONE c! ENDOF
        [ comp' some-wlocal drop ] literal >definer
        OF POSTPONE laddr# >body @ lp-offset, POSTPONE ! ENDOF
        [ comp' some-dlocal drop ] literal >definer
        OF POSTPONE laddr# >body @ lp-offset, POSTPONE 2! ENDOF
        [ comp' some-flocal drop ] literal >definer
        OF POSTPONE laddr# >body @ lp-offset, POSTPONE f! ENDOF
        -&32 throw
    endcase ;
interpret/compile: TO ( c|w|d|r "name" -- ) \ core-ext,local

: locals| ( ... "name ..." -- ) \ local-ext locals-bar
    \ don't use 'locals|'! use '{'! A portable and free '{'
    \ implementation is compat/anslocals.fs
    BEGIN
        name 2dup s" |" str= 0=
    WHILE
        (local)
    REPEAT
    drop 0 (local) ; immediate restrict
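For context, the `{ ... }` locals syntax defined above is used in colon definitions like the following (a small usage sketch, not part of glocals.fs):

```forth
: clamp { n lo hi -- n' }          \ binds three single-cell locals
    n lo < if lo exit then
    n hi > if hi else n then ;

5 0 10 clamp .                     \ prints 5
15 0 10 clamp .                    \ prints 10
```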
https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/glocals.fs?annotate=1.58;f=h;only_with_tag=MAIN;ln=1
In this Python tutorial, let us discuss sending an email from a Python script, with appropriate examples.
Introduction to Sending Email in Python
Simple Mail Transfer Protocol (SMTP) is a protocol that handles sending and routing of e-mail between mail servers.
Python provides the smtplib module, which defines an SMTP client session object that can be used to send mail to any Internet machine running an SMTP or ESMTP listener daemon.
Syntax
import smtplib

smtpObj = smtplib.SMTP([host[, port[, local_hostname]]])
Parameters
host
This is the host running your SMTP server. You can specify the IP address of the host or a domain name like tutorialspoint.com. This is an optional argument.
Example code to send an e-mail using a Python script
#!"
Output for the code
"Successfully sent email"
https://www.codeatglance.com/python-sending-email/
seesaw-clj: discussion of the Seesaw UI toolkit for Clojure (Google Groups)

- "Displaying a scaled down picture and drawing a rectangle on it" (Cecil Westerhof, 2016-07-19T10:18:57Z): At the moment I am using Image Magick on the command line with trial and error to crop the right part of the photo. For example: But this is a 'bit' of work. So I was thinking about writing a Clojure program to do the cropping for me. That would save ...
- "Confusing doc string" (James Elliott, 2016-07-18T05:07:04Z): I can't tell from reading this if id-of returns a string or a keyword. The first paragraph says string, the second says keyword. I am going to have to try it and see, but it would be nice to be spared that step in the future. :) seesaw.core/id-of [w] Returns the id of the given widget if the ...
- "Need to rebuild documentation pages?" (James Elliott, 2016-07-18T04:59:00Z): I was tripped up when trying to follow the example for a button-group listener, because the documentation at had the following: (listen bg :selection (fn [e] (if-let [s (selection e)] (println ...
- "where to see config! options" (胡傲果, 2016-07-12T03:11:15Z): I know config! colud be used to set :text :background :size, is there a doc for these options? If I want to find a option to be config, where do I go to find it.
- "How can I make a button that shows a popup menu when you click in it?" (James Elliott, 2016-05-24T22:59:24Z): I have contextual-menu style popups working fine, but I would like to give new users a visual indicator that the contextual menu exists by having a button with a gear on it which, when clicked without modifier keys, brings up the same set of choices. But I am having trouble figuring out how to ...
- "Using value with repeating structures" (James Elliott, 2016-05-18T17:20:30Z): Hello, everyone, I just started using seesaw this past week in order to put together a user interface that will be helping coordinate visuals for a DJ at a music festival this weekend, and it has been delightful. The current state of my code can be found at ...
- "Widgets inside a canvas?" (Andrew Dabrowski, 2016-05-15T21:22:44Z): Is it possible to place widgets at arbitrary positions inside a canvas? Apparently that can be done in Java, but I haven't come across any examples in seesaw. I tried a paint function like (fn [c g] (.add c button '(x y)) and although it didn't produce an error, it also didn't display the ...
- "Load image(maby a jpg) to a image-buffer." (Andreas Olsson, 2016-03-21T20:35:08Z): Trying to import an image to a image-buffer but having trouble solving it. heres a try: (def pic (seesaw.graphics/buffered-image 200 200)) (def dopic (.imageio.ImageIO.read pic (str (System/getProperty "user.dir") "\\resources\\grumpy.jpg"))) Am I totaly of??
- "image-buffer problem... .setStroke??" (Andreas Olsson, 2016-03-21T09:58:08Z): Trying out the buffered-image, but I cant get line 17 right. How do i use it?
- "Understanding (show-options) and (show-events)" (Amir Teymuri, 2016-01-23T10:39:12Z): Often when i call the (show-options) of a function it prints mostly the very same options as for other functions. For example calling (show-options (border-panel)) and (show-options (label)) and (show-options (toolbar)) all include a :text options, from which only (label) supports :text, from ...
- "No printing in the REPL" (Amir Teymuri, 2016-01-20T11:30:15Z): In the tutorial there is the chapter on the listbox, i have tried to print out the selections, but it doesn't work for me. Maybe someone could point it out what am i doing wrong and why i don't get anything printed out in the REPL? (def f (frame :title "sandiego")) (def lb (listbox :model [:d ...
- "Button event-handler" (Amir Teymuri, 2016-01-19T23:12:13Z): Adding an event-handler to a button the actions are divided between right and left buttons of the mouse. I was doing something like this: (def f (frame :title "san-diego")) (def btn (button :text "btn" :font "monospaced-bold-40")) (listen btn :mouse-pressed #(config! % :foreground :orange) ...
- "Invoking :font option on (flow-panel)" (Amir Teymuri, 2016-01-19T09:17:06Z): According to (show-options (flow-panel)) (flow-panel) does support the :font option. How is :font to invoke on flow-panel? This does not work: (config! my-frame :content (flow-panel :items ["FILE:" (text "here comes the TEXT")] :font "ARIAL-BOLD-100" :background :red))
- "Running seesaw and overtone libraries togethear" (Amir Teymuri, 2016-01-16T14:27:15Z): Hello, i want to use the overtone <> and seesaw namespaces in one project. However when i load them there seems to be a function named (select) which exists in both seesaw.core and overtone.core ((seesaw.core/select) (overtone.core/select)), why i can't ...
- "Beginning by doing some exercises from Deitel's book" (DB Conrado, 2015-12-13T16:42:17Z): Sup guys! I'm new to the community and also to the Clojure language itself. So, I began my learning by doing some exercises from Deitel's Java How to Program book, 6th ed. You can see what I've done at: Hope it helps other beginners like me ...
- "setting gradient background color on button" (jun...@selma.hfmdk-frankfurt.de, 2015-11-29T18:25:07Z): I'm trying to change the background color for a button. Since the default theme uses gradients I'd like to paint it with gradients, too. Unfortuately the following gives an error: This works: (do (def my-button (button)) (config! my-button :background :red)) This doesn't work: (do (def ...
- "again (please help!): center-align vertical labels" (jun...@selma.hfmdk-frankfurt.de, 2015-11-29T14:25:12Z): Hi, I'm still not getting anywhere trying to center-align labels in a vertical panel. This is what I have tried: (let [f (frame :content (vertical-panel :items [(label :text "One" :halign :left) (label :text "Four" :halign :center) ...
- "center-align widgets in vertical-panel" (jun...@selma.hfmdk-frankfurt.de, 2015-11-23T09:45:55Z): Hi, I try to get a slider and a label above and below to center-align. I tried the following without success: (-> (frame :title "aligntest" :content (vertical-panel :items [(label :id "lbl01" :text "0" :halign :center ...
- "JLayeredPane and JScrollBar?" (jun...@selma.hfmdk-frankfurt.de, 2015-11-22T18:33:52Z): Hi, is there support for JLayeredPane and JScrollBar in seesaw, or, if not, how would I add it?
- "howto run a custom function when frame gets closed" (jun...@selma.hfmdk-frankfurt.de, 2015-11-22T15:16:40Z): Hi, is there a way to run a function when closing a frame in order to do some cleanup?
- "having trouble with seesaw drag and drop (dnd)" (dark-h, 2015-11-05T23:52:11Z): Hi All, I am a clojure newbee so please bare with me: I am trying to develop a simple gui using seesaw whereby I want to drag and drop swing components (JButtons) from one container to another. But for some reason I can't make it work. I am pretty sure there is something fundamentally wrong but ...
- "problem with .contains in Rectangle2D - seesaw.graphics rect" (LAWRENCE, 2015-09-08T23:41:46Z): Hi, I am using a canvas and populating it with 32 sq rectangles. When I mouse click on a rectangle I want to find out if the rectangle I just clicked on is one in a list of rectangles I already have. In other words, is the mouse click location located in the list. The problem is I don't seem to ...
- "Newbee! Grid-panel, placement of widgets." (Andreas Olsson, 2015-08-17T13:02:10Z): This might be a stupid question but I'm coming from pythonWorld. Is it possible to decide where the widgets go like ":collumn 1 :row 1"??
- "Problem with seesaw in the repl" (rNewCd, 2015-07-27T23:45:37Z): I am following this tutorial <> but i cant get the seesaw working in the repl. I am using clojure 1.6, and had also problems running lein repl which you can see HERE < ...
- "How to make widget blink?" (ilukinov, 2015-07-22T13:12:28Z): Hello, Can't figure out hot to make my field to blink. I've tried this (defn blink [] (config! (select root [:#my-input-id]) :background "#ff0000") (Thread/sleep 200) (config! (select root [:#my-input-id]) :background "#00ff00")) And seems like this will sleep then set last color ...
- "Key events not firing" (Austin Pocus, 2015-07-09T00:29:38Z): I'm trying to capture the :key-pressed and :key-released events with a (listen) call on the frame, but the events don't seem to be firing. Here's the code: (let [f (frame :title "Ainur" :on-close :exit :size [1024 :by 768] :content (border-pa ...
- "Does seesaw work with java 6 and java 7?" (Jose Comesaña, 2015-06-26T09:58:46Z): It seems it doesn't. Maybe I am doing something wrong?
- "text input delay" (qsys, 2015-06-11T09:32:44Z): Something that's pretty useful in some situations is an input component which performs an action if the value is changed, but only after some time. For example, text input with hints or suggestions, where the suggestions come only after x milliseconds, so that not on every keypress a service is ...
- "incanter update chart - seesaw bindings" (qsys, 2015-06-10T13:22:04Z): I'm using seesaw and incanter for a standalone application. The aim is to show a candle-stick graph, depending on the user input. Updates of the graph (and some labels and so forth) are done by using seesaw bindings. However, when I try to update the chart, I get an IllegalArgumentException: ...
- "Avoid RejectedExecutionException in lein" (Lawrence Krubner, 2015-06-09T17:26:27Z): About this: (defn -main [& args] (when-not (first args) (println "Usage: gaidica ") (System/exit 1)) (reset! api-key (first args)) (invoke-later (-> (make-frame) add-behaviors show!)) ; Avoid RejectedExecutionException in lein :( @(promise))
- "pack-all-columns in table-x" (Enrique Manjavacas, 2015-06-05T10:20:52Z): Hi, I am using a the swingx table-x function and it displays nicely and all but I was wondering what is the best way to programatically call pack-all-columns when starting the application. I couldn't find any information about this so far. Thanks! Enrique
- "Scrollable canvas?" (Corey Williams, 2015-05-23T19:19:24Z): I'm trying to get an image set up so that you can scroll it, I've tried: (defn main-window [img] (let [scroll (scrollable (canvas :size [(.getWidth img) :by 400] :paint (image-painter img)))] (scroll! scroll :to :bottom) (frame :title "Main ...
- "table.clj [seesaw \"1.4.5\"]" (Peter Marshall, 2015-04-22T13:41:58Z): Hi In table.clj line 237 the table model is updated with a column index of -1. This calls setValueAt on DefaultTableModel which throws an IndexOutOfBoundsException when the column or row is out of bounds due to the underlying vector. I am seeing this from time to time, but TBH Im struggling ...
- "Infinite loop" (Mike Holly, 2015-04-14T18:26:17Z): Hi there, I'm working on a project for fun which generates an image using genetic algorithms. Anyway, I'm wondering how best to structure the code. Basically I need to continuously augment and evaluate the "evolved" image. I know I have to use a separate thread and the invoke-later macro...
- "Should I learn Seesaw or Swing" (Alexandr, 2015-03-11T12:59:02Z): Hello everybody, I am new to Clojure and I have the task to build GUI to perform some experiments. There are should be several text fields to enter parameters and the button to start experiment. The most important part is to plot graphs and plots showing performance while calculations are ...
- "Disabling close on a showConfirmDialog" (Cecil Westerhof, 2015-03-01T07:17:59Z): I have the followng code: (JOptionPane/showConfirmDialog nil "Message" "Title" JOptionPane/YES_NO_CANCEL_OPTION) Works fine, but I would like to disable the close button. Is this ...
- ":mnemonic does not work with button" (Cecil Westerhof, 2015-02-28T21:35:01Z): I have the following code: (def adjust-dlg (dialog :title "Adjust Quotes" :on-close :nothing :resizable? false)) (def adjust-panel (JPanel. (GridBagLayout.))) (def ^JTextField from-str (text :columns 25)) (def ^JTextField to-str (text :columns 25)) (grid-bag-layout ...
- "Invoke-later" (Cecil Westerhof, 2015-02-27T18:44:18Z): As I understood it you need to use invoke-later for long running tasks to keep the GUI responsive. When not using it the button I clicked keep being selected as long as the command is running. (And the GUI does not show other actions.) When I use invoke-later the button returns almost immediatel ...
- "Retrieve text in listen function" (Cecil Westerhof, 2015-02-27T16:09:50Z): I have: (text :columns 40 :listen [ :action (fn [e] (…)) ]) Is it possible to retrieve the text of the JTextField in the listener function? -- Cecil Westerhof
- "What is the difference between action and action-performed" (Cecil Westerhof, 2015-02-27T15:54:07Z): With text you have the events action and action-performed. What is the difference between those two? Because it looks like both are triggered when you give enter. -- Cecil Westerhof
- "Why does show! move a frame after a hide" (Cecil Westerhof, 2015-02-27T12:42:18Z): I have the following code: (def f (-> (frame :title "Hello", :content "Hello, Seesaw", :on-close :hide) pack! show!)) When I hide the frame and do a show!, the frame is placed at the place it was originally ...
- "Listen in a button" (Cecil Westerhof, 2015-02-27T07:42:53Z): Looking at (doc button) I use: (button :text "Random Quotes" :listen [:mouse-clicked #(alert "NEXT!")]) But when I click on the button I get: Exception in thread "AWT-EventQueue-0" clojure.lang.ArityException: Wrong number of args (1) passed to: core/eval7049/fn--7050 ...
- "Why is the frame 4 times as high as necessary" (Cecil Westerhof, 2015-02-26T16:18:04Z): I have the following code: (-> (frame :content (scrollable (table :model (seesaw.table/table-model :columns [{:key :name, :text "Name"} {:key :likes, :text "Likes"}] :rows ...
- "Where to put scrollable" (Cecil Westerhof, 2015-02-26T10:11:34Z): I have the following function: (defn show-random-quotes ( [] (show-random-quotes 10)) ( [nr] (let [f (frame :size [1000 :by 800] :title "Random Quotes") html-start (str "<html><table border='1' style='width:100%'><tr>" ...
- "How to convert this Java code to Clojure/Seesaw" (Cecil Westerhof, 2015-02-25T19:51:21Z): I am relatively new to Clojure and Spring. In some Java code there is: infoTableFrame = new JFrame(title); SwingUtilities.invokeLater(new Runnable() { public void run() { Document doc; HTMLEditorKit ...
- "(listen (select panel [:JRadioButton])) only registers 32 listeners?" (Adam Matic, 2015-01-31T14:30:44Z): Hi, I have a bunch of radio buttons on a panel, all with different ids, the select function returns a sequence of all of them, but apparently the listen function does not register more than 32 listener functions. Selecting any of the buttons changes text in a label. Is this a bug or am I doing ...
- "djnativeswing-clj: DJ Native Swing wrapper now available for Seesaw." (Mihail Ivanchev, 2015-01-16T19:29:46Z): Hello everyone, I wanted to notify you, the Seesaw users, that I just released the first version of a Clojure wrapper of DJ Native Swing -- a great Java library providing Swing widgets for native components. It's fully Seesaw compatible and it's available here: ...
- "seesaw with jgraphx (mxGraph)" (Adam Matic, 2015-01-11T17:47:12Z): Hi, I'm trying to get some basic functionality of jgraphx java library:, using seesaw in clojure, but it doesn't seem to work. I'm new to clojure, have some experience with java, though not with very much with swing. I found a repl session code that does not ...
- "1.4.5 release" (daveray, 2015-01-05T04:55:02Z): Hi, Just a quick note that I've pushed seesaw 1.4.5 to clojars with several bug fixes and small improvements from the last year. I guess I didn't realize how long it had been since the last release. Way it goes :) Cheers, Dave
- "How To Understand The String Representations of Frames?" (Alex Seewald, 2014-11-30T02:55:46Z): I am writing a seesaw application. When I call show! on the frame I declared, it does not show up on the screen. In order to locate the problem, I'm logging the string representation of the frame. Is there documentation somewhere describing the various fields and what values those fields are ...
https://groups.google.com/forum/feed/seesaw-clj/topics/atom_v1_0.xml?num=50
From: Eric Niebler (eric_at_[hidden])
Date: 2007-04-02 18:44:19
Markus Werle wrote:
> [ Sorry, gmane was down on Friday and I could not get access to another
> nntp access point during weekend ]
>
>
> Eric Niebler <eric <at> boost-consulting.com> writes:
>
>> Markus Werle wrote:
>>> So what we really need together with the tutorials and the
>>> documentation is a wiki about the design and evolution of boost.
>> Design rationales are good. A wiki might work, but a better place
>> might be in a "Rationale" section in each Boost library's documentation.
>
> The advantage of a wiki (or any other public read/write access
> documentation) is that not only the designer of some boost part is
> in the role of an author for his design rationales but others can
> easily contribute.
True dat.
> More important than the design rationale for boost::XXX is the
> design rationale for some of the building blocks of it, and since these
> are used across several distinct library parts, we really need
> design rationales for _techniques_ used in boost, a meta-boost design
> rationale.
> (Common tricks like deep_copy in all its flavours etc.)
>
> I sometimes come across a discusion which I feel to be important enough
> to be cut'n'pasted to a CMS, let it be wiki or joomla or whatever you
> like ...
Well, Boost has as wiki, so what's stopping you? :-)
>>> Then Eric Niebler could explain in depth why he needs
>>> "static, aggregate initialization" in proto
>>> (see ...)
>>> and why mpl does not fit for proto.
>> Static initialization in proto is important so that when DSEL authors
>> declare their global primitives like _1, there are no global
>> initialization order problems.
>
> My skills are not such that this piece of information suffices
> as explanation about what was suboptimal and what was fixed.
> Going back to your code again ... maybe I catch the idea sometimes.
> Never saw that trick before.
>
> What is better in
>
> static type call(Expr const &expr)
> {
> type that = {proto::arg(expr)};
> return that;
> }
>
> than
>
> static type call(Expr const &expr)
> {
> type that = proto::arg(expr);
> return that;
> }
>
> ?
Oh, nothing at all. The benefit comes when declaring global primitives.
Consider this at namespace scope:
namespace my {
struct placeholder {};
proto::terminal<placeholder>::type const _ = {{}};
}
If terminal<>::type were a type with a constructor, that constructor
would have to run *sometime* and the placeholder wouldn't be usable
until after that time. Trouble is, C++ makes very few promises about
when that *sometime* is. In contrast, the code above requires no runtime
initialization. The placeholder just *is*, and there is no time during
your program's execution when it is invalid to use it.
>> The only problem I had using MPL with proto was compile-time
> performance
>> problems when using MPL lambdas.
>> In order to keep compile times down,
>> I've replaced as much template meta-programming in proto as possible
>> with preprocessor meta-programming.
>
> Could you publish an article about that?
> Your article "Conditional Love: Foreach
> Redux" ()
> is a good example about cool things that are hidden in some
> innocent looking piece of code. Reading the header file never would
> have exposed the things that matter (at least to me).
I don't think that would make for an interesting article. See below.
>> And proto is a bit of a special case
>> since TMP-heavy libraries are built on top of proto, so the TMP
> overhead
>> of proto itself should be as small as possible. MPL is a very nice
>> abstraction, but it's not free. Ditto for Fusion.
>
> This raises 2 questions for me:
>
> 1. Could these issues be fixed for mpl and fusion or are you really
> forced to create your own enhanced versions?
I'm not creating enhanced versions of mpl or fusion. I'm using the
preprocessor to generate code that would do the same thing as an mpl or
fusion algorithm invocation. More below ...
> 2. If you build your own high performance versions of typelist etc.:
> isn't fusion and/or mpl a good place to add them instead to "hide" them
> in proto - below xpressive?
>
> <cite url="
> gmane.comp.parsers.spirit.devel/2886/focus=2890">
> > [...] proto::basic_expr<> is the central data container of proto.
> > Its functionality very much resembles a fusion sequence
> </cite>
I think I've been unclear. In proto, I try to keep template
instantiations down. One way I do that is by generating specializations
with Boost.PP instead of calling mpl/fusion algorithms. Consider
deep_copy(), which essentially applies a transform to each child of a
node. There are at least two ways to do this:
1) The Easy Way: Use fusion::transform(). Done!
2) The Hard Way: write N specializations of deep_copy<Node> (where N is
the maximum number of children a node can have), and do the
transformation on each child directly.
The only thing (2) has going for it is that it incurs fewer template
instantiations. No fusion iterators, or transform_views, or
miscellaneous traits like begin<>, end<>, equal_to<>, or anything else.
But would I endorse (2)? No. Just like I wouldn't recommend programming
in assembly, unless you really need that extra 5% speed.
>
>> And yes, I measure compile time performance and don't optimize
>> prematurely.
>
> I do not question what you are doing.
> I am simply looking forward to profit from the fact that you've been
> already there in compiler hell and are willing to give us some good
> advice how to survive when we go there, too.
>
> I am probably asking too much here. It's only that I am not
> able to extract this knowledge from source code ...
I'm sorry to disappoint you, but there is probably no way you can profit
from proto's innards.
> P.S.:
> Apropos proto: are you planning to introduce a glommable disambiguation
> mechanism [Geoffrey Furnish]?
I Googled this. IIUC,
this is a way to control which sub-expressions can combine with which
others, and with which operators. Proto can do this, and its mechanism
is much more powerful and elegant, IMNSHO. You define the meta-grammar
of your DSEL, and then you tell Proto that only expression types that
conform to that meta-grammar are allowed. Operators that produce invalid
expressions are not even considered.
First read this ("Patterns and Meta-Grammars"):
Then this ("Extending Proto"):
HTH,
-- Eric Niebler
http://lists.boost.org/boost-users/2007/04/26753.php
What’s this? A post about WebForms?
Is it 2002 all over again? Is GeoCities still going strong? Are we all about to invest endless hours building SOAP services and learning XML?
No, no we’re not.
But, just because it feels like everyone is off building shiny new apps using ASP.NET Core 2.x the truth is there are still plenty of us spending at least some of our days working with “legacy” (and sometimes not so legacy) apps built using WebForms.
And if you’re trying to move to something else at this moment in time you’d be forgiven for feeling overwhelmed.
The web is awash with information covering every topic you could possibly imagine (and then some)…
- SPAs (Angular, React etc)
- MVC 5
- MVC Core
- Razor Pages
- .NET Framework
- .NET Core
Where do you start?
We’ve got to start somewhere so let’s begin with performing a common WebForms task using ASP.NET Core MVC…
If you’re new to MVC you could be forgiven for feeling confused.
There’s a lot of terminology to get to grips with; where WebForms had one or two moving parts, MVC has several.
But when you take a step back and explore what WebForms is doing under the hood it becomes easier to translate that to MVC.
Change label text via a Postback
WebForms did its best to act like a Windows app which happened to run in the browser but in reality it uses a little smoke and mirrors to hide what it’s actually up to.
Here’s an example.
Create a page with your button and label, something like this…
<form id="form1" runat="server"> <div> <asp:Button ID="btnChangeLabel" runat="server" OnClick="btnChangeLabel_Click" Text="Change the label" /> <asp:Label ID="lblGreeting" runat="server"> Welcome one and all </asp:Label> </div> </form>
Create a handler to handle the button click event, set the new property value for the label and voila, the label is updated when you click the button.
protected void btnChangeLabel_Click(object sender, EventArgs e) { this.lblGreeting.Text = "Hello, this is the changed greeting :-)"; }
Unpicking the abstraction
This might look and feel like a windows app running on the desktop but “under the hood” ASP.NET is simply making GET and POST requests from the browser to the server.
Yep, it’s doing exactly what every other web application ever written does, make requests to the server.
Here’s what’s actually happening.
1. You request /SimpleLabel in the browser.
2. Your browser makes an HTTP GET request.
3. ASP.NET handles the GET request and routes to the corresponding WebForm (SimpleLabel.aspx).
4. ASP.NET renders html for the label/button controls and returns it to the browser.
5. The rendered html includes a form which will be submitted when the button is clicked.
6. You click the button.
7. The browser makes an HTTP POST request to the server (submits the form).
8. The "btnChangeLabel_Click" event handler runs, changing the label's text.
9. ASP.NET re-renders the html (with the new value for the label) and returns it to the browser.
10. You see the new value in the browser.
So you see, it’s not actually magic, just an HTTP GET and an HTTP POST.
How does MVC do it?
Now let’s take the exact same requirement and do it with MVC.
The main difference is that you are no longer quite so abstracted away from the HTTP requests and HTML markup that your app is ultimately going to use.
Instead of a WebForm with ASP.NET controls, you need a view with (mostly) standard HTML tags.
Index.cshtml
@model SimpleLabelModel <form method="post"> <input type="submit" value="Boring, give me another greeting" /> <span>@Model.Greeting</span> </form>
The only non-html part of this markup is the first line and the subsequent reference to
@Model.Greeting which is Razor syntax to enable you to take data coming from the server and mix it up with your HTML.
Now we have our button and a greeting, but how do we get to this in a browser?
We can’t, not without a controller to handle requests (from the browser) to our application.
SimpleLabelController.cs
[Route("[controller]")] public class SimpleLabelController : Controller { [HttpGet("")] public IActionResult Get() { var model = new SimpleLabelModel { Greeting = "Welcome one and all" }; return View("Index", model); } }
Now we’re getting somewhere, a request to go to
/simplelabel in the browser will be routed to our
Get action where we return our view (index.cshtml).
We also return an instance of our view’s model (SimpleLabelModel) which (if you’re interested) looks like this.
SimpleLabelModel.cs
public class SimpleLabelModel { public string Greeting { get; set; } }
Remember this line in the view?
<span>@Model.Greeting</span>
That ensures the value we just set in the Model (“Welcome one and all”) is displayed as a span in the rendered HTML.
How does MVC know which requests go to which controller?
Your MVC app can handle any requests you throw at it but you need to send them somewhere right?
In this example we’re using attribute routing.
[Route("[controller]")]
This attribute on the controller means the first part of the URL will be the name of the controller e.g.
https://<your-site-here>/SimpleLabel.
[HttpGet("")]
This attribute on the controller action (the method in that controller) simply ensures that any get requests to
https://<your-site-here>/SimpleLabel will end up here.
If we’d done this instead…
[HttpGet("fancy")]
… the url to get to this method would be
https://<your-site-here>/SimpleLabel/fancy
Now what about handling that button click?
The button will submit the form, which will post back to
/simplelabel. But wait, we don’t have a controller action to handle the POST yet!
SimpleLabelController.cs
[Route("[controller]")] public class SimpleLabelController : Controller { // GET method omitted for brevity [HttpPost("")] public IActionResult WelcomeMe() { var model = new SimpleLabelModel { Greeting = "Hello, this is the changed greeting :-)" }; return View("Index", model); } }
And there you have it! Click the button, the browser submits the form (via a POST) and the corresponding controller action returns a new instance of the model with our new and improved greeting.
Putting it all together
Here’s the MVC flow then…
1. You request /SimpleLabel in the browser.
2. Your browser makes an HTTP GET request.
3. ASP.NET handles the GET request and routes to the corresponding Controller Action.
4. ASP.NET renders your html (mixing in values from the model via Razor) and returns it to the browser.
5. The rendered html is the HTML from the view, with the "dynamic data" from the model included.
6. You click the button.
7. The browser makes an HTTP POST request to the server (submits the form).
8. The relevant controller action (for the HTTP Post) is called.
9. ASP.NET re-renders the html (with a different model containing a different greeting) and returns it to the browser.
10. You see the new value in the browser.
More moving parts
Looking back at this you can see that MVC requires you to handle more moving parts (controllers, views and view models).
But with this increased complexity you also get more freedom to craft your routes, your html, even to start adding javascript to your views where it improves the user experience.
Before you go
Get a handy PDF showing the two flows (WebForms and MVC) next to each other, plus source code for both examples.
https://jonhilton.net/from-webforms-to-mvc/
[This post is part of the series TDD Sample App: The Complete Collection …So Far]
First, get to zero
Before you increase your Xcode warnings, drive any existing warnings down to zero. Fix the problems where you can. Where it’s too much, disable that particular warning for now.
You don’t want noise that hides useful information. If you put up with a list of “the usual warnings,” you won’t notice when a particularly serious one creeps into the list. This also applies if you use #pragma warning, or have a script that flags TODO as a warning — you’re just adding noise. Get to zero.
Turn up Xcode warnings as high as you can stand them
Once you have a clean slate, start enabling more Xcode warnings. Do this at the project level so that it applies across all your targets. Gradually turn it “up to eleven,” rebuilding after each setting change.
Some warnings will be excessive for your codebase. Others simply don’t work well with Apple’s frameworks. Don’t worry about those, just keep going down the list. Check out Peter Hosey’s excellent description of the warnings he uses.
When you reach the bottom of the “Warnings” sections, skip over a few sections and do the “Static Analyzer” sections as well.
XcodeWarnings: An xcconfig for easier set-up
Click-click-clicking through a new project to turn on all these Xcode warnings is a pain. I made an xcconfig to make life easier, called XcodeWarnings.
To add XcodeWarnings.xcconfig to your project, drag it in. Where it prompts “Add to targets,” deselect all targets. (Otherwise, it will be included in the bundle.)
If you don’t have a previous xcconfig, click on your project Xcode’s Navigator pane. In the main editing area, select your project, and select the Info tab. For each of your configurations, select XcodeWarnings at the project level:
(If you do have a previous xcconfig, then edit it to #include "XcodeWarnings.xcconfig".)
Whether you add XcodeWarnings as the root xcconfig or #include it from another one, it’s time to clear out any overrides to let the xcconfig take control. Still at the project level, select the Build Settings tab. Find your way down to the first section of Xcode warnings, labeled Apple LLVM x.0 – Warning Policies. Click the first entry:
Then scroll down through the various Xcode warning sections. When you reach the last one, shift-click the last entry. This should select all the warning settings. Press delete. This will clear the project-level overrides, causing the settings to fall back to what the xcconfig specifies.
Do the same for the static analyzer settings. Keep scrolling down and find the section labeled Static Analyzer – Analysis Policy. Again, click the first setting, then scroll to the last static analyzer section. Shift-click the last entry, and press delete.
…That may sound complicated, but it takes longer to describe than to do!
Once you clear the overrides, XcodeWarnings.xcconfig will be in full effect. If you’re starting a new project, you’re all set. But if you’ve added this to an existing project and rebuilt, you may have a large pile of warnings. Start by commenting out specific lines in XcodeWarnings.xcconfig (use ⌘-/) until you are back to zero.
Then one at a time: Take a line you commented out and hit ⌘-/ to bring that setting back into play. Rebuild, and decide what to do. Either:
- Fix the problems, or
- Comment it back out.
-Weverything is too much for me
There is another approach to turning warning settings “up to eleven”, and that’s to specify -Weverything in “Other Warning Flags”. This turns on all the visible Xcode warnings, along with other Clang compiler warnings that Apple has not made visible in the Xcode project settings.
Because -Weverything is so strong, people usually add other flags to turn off some of those extra Clang warnings.
If that works for you, great. Personally, -Weverything is too much for me. I get frustrated with the invisible nature of those hidden warning settings. So I prefer XcodeWarnings.xcconfig where everything is explicitly listed.
Get that feedback any way you can
Remember, this is all about getting feedback on your code to catch problems early. As long as you turn your settings up to eleven, it probably doesn’t matter how you do it — as long as you do it. Try XcodeWarnings and see if it helps you. Turn up the warnings, drive them to zero, then stay at zero!
Question: Which Xcode warnings do you enable? Which do you disable? Do you use any of those hidden Clang warnings? You can leave a comment by clicking here.
Did you find this useful? Subscribe today to get regular posts on clean iOS code.
[This post is part of the series TDD Sample App: The Complete Collection …So Far]
Thanks for this!
So I used your XcodeWarnings file and I had about 50 warnings popup for “weak property may be unpredictably set to nil”
My question is how are we supposed to handle buttons and other objects in our storyboard that when referenced in code are weak
there are many places in my code where I am hiding or unhiding multiple objects on the screen. Should I be really doing this for every one of them:
> viewcontroller.someButton.hidden = True;
changed to:
>UIButton *strongButton = viewcontroller.someButton;
>if(strongButton){ strongButton.hidden = true;}
or is there a better way to do this?
Prasanth, this warning makes sense for various other weak properties, where you don’t really know when something might disappear out from underneath you. But for view outlets… I just make them strong, since viewDidUnload is no longer relevant.
The other way is to say, “oh well,” and disable that warning.
I think the best approach is to start with -Weverything for warnings, treat warnings as errors is fundamental too, then proceed analyzing the warnings with the rest of the team and agree which ones to silence in the Xcode build settings.
The warnings we ignore in the build settings (having -Weverything turned on) are: -Wno-pedantic -Wno-objc-missing-property-synthesis -Wno-selector -Wno-old-style-cast.
In some cases, properties defined by a protocol, trying to implement custom setters and getters gets you Xcode complaining about accessing ivars directly even though you are completely within your rights to do so. In this case a good
#pragma clang diagnostic push IGNORE_EXTRA_WARNINGS
#pragma clang diagnostic ignored “”
[…]
#pragma clang diagnostic pop IGNORE_EXTRA_WARNINGS
will do the trick.
Any tips for getting the XcodeWarnings.xcconfig file to play nice when using Cocoapods?
The Pods.debug.xcconfig / Pods.release.xcconfig files are generated so there is no easy way to #include XcodeWarnings.xcconfig in them. I can go the other direction and #include “Pods/Target Support Files/Pods/Pods.debug.xcconfig” in the XcodeWarnings.xconfig, but I can’t find a way to conditionally include the release flavor for the release configuration. I tried this, but it’s invalid:
#include "Pods/Target Support Files/Pods/Pods.debug.xcconfig"
#else
#include "Pods/Target Support Files/Pods/Pods.release.xcconfig"
#endif
The other problem with this is that specifying a different base configuration generates a warnings on every pod update command (see).
Anyway, as much as I like the idea of using a xcconfig file to control warnings, this seems a bit awkward if using CocoaPods. Please let me know if I have missed something.
–Mark
…Anybody? Looking for a CocoaPods-friendly way to use xcconfig files.
http://qualitycoding.org/xcode-warnings/
Add #!/usr/bin/env sbcl --script to the top.
;; to change package, goto SLIME, type `,` for `, set package`. (declaim (optimize (speed 0) (safety 3) (debug 3))) (defpackage :rasterizer1 (:use :common-lisp))
- slime-sync-package-and-default-directory (C-c ~) to set up package and cwd.
- (e), jump to definition (F), go back (D)
- (a), continue (c), quit (q), goto frame source (v), toggle frame details (t), navigate next/prev frame (n, p), begin/end (<, >), inspect condition (c), interactive evaluate at frame (:)
- (C-c C-y), switch to repl (C-c C-z), eval last expr (C-x C-e), trace (C-c C-t)
- (C-c C-t). Write (break) in code and then run the expression to enter the debugger; step into (s), step over (x), step till return (o), restart frame (r), return from frame (R), eval in frame (e)
- (C-c C-k), eval defun (C-c C-c), trace (C-c C-t).
- (C-c C-d h)
- (M-.), come back (M-,). Find all references (M-?)
- (,!d), set package (,push-package). Alternatively, execute (swank:set-default-directory "/path/to/desired/cwd/")
- M-x slime-restart-inferior-lisp, or call (progn (unload-feature 'your-lib) (load-library "your-lib"))
- defvar, use M-x slime-eval-defun (C-M-x).
- destructuring-case for pattern matching
- let bindings
The following silently compiles, with an SBCL style warning:

(defgeneric errormsg (x))
(let* (x (errormsg 12))
  x)

;; caught STYLE-WARNING:
;;   The variable ERRORMSG is defined but never used.

This should clue you in that something terrible has happened. The correct form of let* requires ONE outer paren group to denote the bindings, and ANOTHER paren for each key-value pair. So this should have been written:

(let* ( ;; <- OPEN pairs of bindings
       (x (errormsg 12))
      ) ;; <- CLOSE bindings
  ...)

But has been written as:

(let* (
       x
       (errormsg 12)
      )
  ...)

This gets interpreted as:

(let* (
       (x nil) ;; notice the `nil` introduction
       (errormsg 12)
      )
  ...)

The takeaway appears to be that SBCL warnings ought to be treated as errors.
;; 23:49 <@jackdaniel> bollu: I don't know whether this is documented (setf asdf:*compile-file-warnings-behaviour* :error)
Special variables (mutable globals) should be surrounded by asterisks. These are called earmuffs. (defparameter *positions* (make-array ...))
(assert (condition) (vars-that-can-be-edited)) ;; (defun divide (x y) (assert (not (zerop y)) (y) ;; list of values that we can change. "Y can not be zero. Please change it") ;; custom error message. (/ x y))
parinfer.
cl-repl for quick command line hackery.
(ql:quickload "str")
https://pixel-druid.com/common-lisp-cheat-sheet.html
Basics, Thinking in Tables
Review
If your Stats 425 is still kicking around in your head, you might remember Bayes’ Theorem. Which generalizes a nice, symmetric property of Conditional Probability
$P(A|B)P(B) = P(B|A) P(A)$
Into the following
$p(H|D) = \frac{p(H)~p(D|H)}{p(D)}$
Where H is your Hypothesis and D is your Data.
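The step between those two formulas deserves one line of justification: write the symmetric property with A = H and B = D, then divide both sides by p(D):

$p(D|H)\,p(H) = p(H|D)\,p(D)$

$p(H|D) = \frac{p(H)\,p(D|H)}{p(D)}$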
Bowls
In a trivial example, we’ve got two identical bags with colored stones inside.
Bag 1 has 10 white stones and 20 black stones. Bag 2 has 15 white stones and 15 black stones. If we picked a bag at random and pulled out a white stone, what's the probability that we selected from Bag 1?
The trick to these problems is to swap the values from the story problem into the Bayes’ equation:
- Our Hypothesis is “Selected from Bag 1”
- Our Data is “Drew a white stone”
$p(B1|W) = \frac{p(B1)~p(W|B1)}{p(W)}$
We’ll assume either bag is as likely and use the proportion of Bag 1’s contents
$p(B1|W) = \frac{(1/2)~(1/3)}{p(W)}$
The only trick is figuring out the value of P(W), which can be expressed by blowing out the exhaustive conditional probabilities in the denominator
$p(B1|W) = \frac{(1/2)~(1/3)}{P(W|B1)P(B1) + P(W|B2)P(B2)}$
$p(B1|W) = \frac{(1/2)~(1/3)}{(1/3)(1/2) + (1/2)(1/2)}$
Solving, we get
$p(B1|W) = 0.4$
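The arithmetic above is easy to check mechanically. Here is a small Python sketch (an addition, not from the original notes) that runs the same computation with exact fractions:

```python
from fractions import Fraction

# Priors: either bag is equally likely to be picked.
p_b1 = Fraction(1, 2)
p_b2 = Fraction(1, 2)

# Likelihoods: proportion of white stones in each bag.
p_w_given_b1 = Fraction(10, 30)  # Bag 1: 10 white of 30 stones
p_w_given_b2 = Fraction(15, 30)  # Bag 2: 15 white of 30 stones

# Law of total probability for the denominator p(W).
p_w = p_w_given_b1 * p_b1 + p_w_given_b2 * p_b2

# Bayes' theorem: p(B1 | W) = p(B1) * p(W | B1) / p(W)
p_b1_given_w = p_b1 * p_w_given_b1 / p_w

print(p_b1_given_w)         # 2/5
print(float(p_b1_given_w))  # 0.4
```

Using Fraction instead of floats keeps every intermediate value exact, so the result comes out as the clean 2/5 the hand derivation gives.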
Tables
As your hypothesis gets more complicated, however, Allen Downey’s book Think Bayes introduces structuring the problem via a helpful table to assist with the bookkeeping.
M&M
The problem that introduces these tables looks like the following:
We’ve got two bags of M&Ms from two different years, where the color distribution found in each bag was
Then, you take an M&M from each bag, and first one is yellow, the second is green.
What is the probability that the yellow M&M came from the 1994 bag?
Equivalently, this also means that the green M&M also came from the 1996 bag. In lieu of writing out the fraction as above, then deconstructing the denominator into all exhaustive probabilities for selecting yellow from either bag and green from either bag, we’ll use the “Table Method.”
But first, we’ll restate the problem in terms of hypotheses.
- A: Bag 1 is from 1994, Bag 2 is from 1996
- B: Bag 1 is from 1996, Bag 2 is from 1994
We take the yellow, green pick as given and construct our table.
[Table image lost in extraction. From the discussion below: column B holds the priors, column C the likelihood of the yellow/green draws under each hypothesis, column D the product B x C (with D3 the column sum), and column E the normalized posteriors.]
- Each bag is equally likely, which explains column B.
- The values in each cell for column C are the probability of each draw, given that hypothesis. e.g. Assuming Bag 1 is 1994, we had a .2 chance to draw yellow and a .2 chance to draw green from Bag 2, assumed to be from 1996.
- Column D is just plug and chug.
- The value of D3 is the column sum, or that exhaustive probability we didn’t want to do by hand above.
- Finally, column E is just dividing D through by the column total, which completes the form of the Bayesian Equation and normalizes the data.
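The whole table can be reproduced in a few lines of Python. Note that the M&M color-distribution table did not survive on this page, so the 1996-yellow (0.14) and 1994-green (0.10) fractions below are assumptions recovered from the standard version of this problem in Downey's Think Bayes, not values stated above:

```python
# Color fractions per bag year. 0.2/0.2 match the text above; 0.14 and 0.10
# are assumed from Downey's Think Bayes (the distribution table was lost).
mix94 = {"yellow": 0.20, "green": 0.10}
mix96 = {"yellow": 0.14, "green": 0.20}

# Column B: priors -- each hypothesis about which bag is which is equally likely.
priors = {"A": 0.5, "B": 0.5}

# Column C: likelihood of drawing (yellow from Bag 1, green from Bag 2).
likelihoods = {
    "A": mix94["yellow"] * mix96["green"],  # Bag 1 is 1994: 0.2 * 0.2
    "B": mix96["yellow"] * mix94["green"],  # Bag 1 is 1996: 0.14 * 0.1
}

# Column D: prior * likelihood; D3 is the column sum.
unnorm = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnorm.values())

# Column E: divide through by the column total to normalize.
posteriors = {h: unnorm[h] / total for h in unnorm}
print(round(posteriors["A"], 3))  # 0.741
```

Under these assumed fractions, the probability that the yellow M&M came from the 1994 bag works out to 20/27, roughly 0.741.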
https://napsterinblue.github.io/notes/stats/bayes/basics_tables/
This tutorial will cover the Java substring method. We will take a look at the syntax, a brief introduction, and Java substring examples:
We will also be covering important scenario-based examples as well as frequently asked questions that will help you understand this method even better.
After going through this Java tutorial, you will be able to create your own programs for extracting any substring from the main String and performing further operations on it.
=> Take A Look At The Java Beginners Guide Here.
What You Will Learn:
Java substring()
As we all know, Java substring is nothing but a part of the main String.
For example, In a String “Software Testing”, the “Software” and “Testing” are the substrings.
This method is used to return or extract the substring from the main String. Now, for the extraction from the main String, we need to specify the starting index and the ending index in the substring() method.
This method has two different forms. The syntax of each of these forms is given below.
Syntax:
String substring(int startingIndex); String substring(int startingIndex, int endingIndex);
In the next section, we will look closely into each of these forms.
Starting Index
In this section, we will discuss the first form of the Java substring() method. The first form returns the substring that starts at the given index and runs through the end of the String. So, whatever starting index you specify, it will return the rest of the String from that particular index onward.
Given below is the program in which we have demonstrated the extraction by using the first form of substring() method. We have taken an input String “Software Testing Help” and then extracted the substring from index 9.
Thus, the output will be “Testing Help”.
Note: Java String index always starts with zero.
public class substring { public static void main(String[] args) { String str = "Software testing help"; /* * It will start from 9th index and extract * the substring till the last index */ System.out.println("The original String is: " +str); System.out.println("The substring is: " +str.substring(9)); } }
Output:
Starting And Ending Index
In this section, we will talk about the second form of the method. Here, we are going to take an input String “Java String substring method” and we will try to extract the substring by using the second form which is by specifying both the starting as well as ending indices.
public class substring { public static void main(String[] args) { String str = "Java String substring method"; /* * It will start from 12th index and extract * the substring till the 21st index */ System.out.println("The original String is: " +str); System.out.println("The substring is: " +str.substring(12,21)); } }
Output:
Java substring Examples
Scenario 1: What will be the output of the substring method when the specified index is not present in the main String?
Explanation: In this scenario, we are going to take an input String “Java Programming” and we will try to specify the index as 255 and 350 for the starting and ending indexes respectively.
As we know, if the String does not have an index 255, then the call must fail. By Java's predefined exception rules, it should throw a "String index out of range" exception (StringIndexOutOfBoundsException), because the index we have specified in the method is out of range for the given String.
public class substring { public static void main(String[] args) { String str = "Java Programming"; /* * It will throw an error after printing the original String. * The index we have specified is out of range for the * main String. Hence, it will throw "String index of range" * exception */ System.out.println("The original String is: " +str); System.out.println("The substring is: " +str.substring(255,350)); } }
Output:
Scenario 2: What will be the output of this method when we provide a negative index value?
Explanation: Here, we are going to take an input String “Java substring Tutorials” and we will try to provide negative starting and ending indexes and check how the program responds.
As the Java String index starts from zero, it should not accept negative integers in the index. So the program must throw an exception.
The type of error should again be the “String index out of range” exception because the specified index is not present in the main String.
public class substring { public static void main(String[] args) { String str = "Java substring Tutorials"; /* * It will throw an error after printing the original String. * The index we have specified is out of range for the * main String because the String index starts from zero. * It does not accept any negative index value. * Hence, it will throw "String index of range" exception */ System.out.println("The original String is: " +str); System.out.println("The substring is: " +str.substring(-5,-10)); } }
Output:
Scenario 3: What will be the output of the substring when we provide (0,0) in the starting and ending indexes?
Explanation: This is yet another very good scenario to understand the String substring() Java method. Here, we will take an input String “Saket Saurav” and try to fetch the substring starting from the zeroth index and ending on the zeroth index.
It will be interesting to see how the program responds.
Since the starting and ending indexes are the same, it should return a blank (empty) String. The program compiles and runs successfully in this scenario.
It will return blank for all such values where the starting and ending indexes are the same. Be it (0,0) or (1,1) or (2,2) and so on.
public class substring { public static void main(String[] args) { String str = "Saket Saurav"; /* * The output will be blank because of the starting and ending * indexes can not be the same. In such scenarios, the * program will return a blank value. The same is applicable * when you are giving the input index as (0,0) or (1,1) or (2,2). * and so on. */ System.out.println("The original String is: " +str); System.out.println("The substring is: " +str.substring(0,0)); } }
Output:
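For comparison (our illustration, not part of the original Java tutorial), Python's slicing shows the same behavior: when the start and end indexes are equal, an empty string comes back rather than an error.

```python
s = "Saket Saurav"

# Equal start and end indexes yield an empty string, just like
# Java's substring(0,0) or substring(2,2).
print(repr(s[0:0]))   # ''
print(repr(s[2:2]))   # ''

# A normal slice for contrast.
print(s[0:5])         # Saket
```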
Frequently Asked Questions
Q #1) How to divide a String into substrings in Java? How to create the same String again from the substrings?
Answer: Below is the program where we have taken an input String and divided the String into substrings by specifying the starting and ending indexes.
Again we have created the same String by using the substrings with the help of String concat operator.
public class substring {
    public static void main(String[] args) {
        String str = "Saket Saurav";
        // created two substrings substr1 and substr2
        String substr1 = str.substring(0,6);
        String substr2 = str.substring(6,12);
        // Printed main String as initialized
        System.out.println(str);
        // Printed substr1
        System.out.println(substr1);
        // Printed substr2
        System.out.println(substr2);
        // Printed main String from two substrings
        System.out.println(substr1 + substr2);
    }
}
Output:
Q #2) How to find if a String is a substring of another in Java?
Answer: Below is the program where we have taken an input String “Example of the substring”. Then, we have fetched a substring and stored in a String variable “substr”. Thereafter, we have used the Java contains() method to check whether the String is a part of the main String or not.
public class substring {
    public static void main(String[] args) {
        String str = "Example of the substring";
        // created a substring substr
        String substr = str.substring(8,10);
        // Printed substring
        System.out.println(substr);
        /*
         * used .contains() method to check whether the substring (substr)
         * is a part of the main String (str) or not
         */
        if(str.contains(substr)) {
            System.out.println("String is a part of the main String");
        } else {
            System.out.println("String is not a part of the main String");
        }
    }
}
Output:
Q #3) What is the return type of substring() method in Java?
Answer: As we know, the String class is Immutable and substring() method is an inbuilt method of the String class. Every time you perform an operation on the String, the subsequent String is a new String that is returned.
The same thing happens with this method as well. Every time we call the substring() method, the resultant String is a new String. Hence, the return type of this method in Java is a String.
Q #4) Is String thread-safe in Java?
Answer: Yes. Like StringBuffer, the String is thread-safe in Java, but for a different reason: String objects are immutable, so once created they cannot be changed. Multiple threads can therefore read and share the same String at the same time without any synchronization.
Q #5) What is the difference between two different approaches for initializing a String?
String str1 = “ABC”;
String str2 = new String(“ABC”);
Answer: Both the lines of codes will give you the String object. Now we can list out the differences.
The first line of code will return an existing object from the String pool whereas the second line of code where the String is created with the help of a “new” operator, will always return a new object that is created in the heap memory.
Although the value "ABC" is the same in both lines, so the two objects are equal according to equals(), they are not the same object, so comparing them with "==" returns false.
Now let’s take the following program.
Here we have initialized three String variables. The first comparison is done on the basis of “==” reference comparison for str1 and str2 that returns true. This is because they have used the same existing object from the String pool.
The second comparison was done on str1 and str3 using "==", where the reference comparison differs because the String object referenced by str3 was newly created with the help of the "new" operator. Hence, it returned false.
The third comparison was done with the help of the “.equals()” method that compared the values contained by str1 and str3. The value of both the String variables is the same i.e. they are equals. Hence, it returned true.
public class substring {
    public static void main(String[] args) {
        String str1 = "ABC";
        String str2 = "ABC";
        /*
         * True because "==" works on the reference comparison and
         * str1 and str2 have used the same existing object from
         * the String pool
         */
        System.out.println(str1 == str2);
        String str3 = new String("ABC");
        /*
         * False because str1 and str3 do not refer to the same
         * object
         */
        System.out.println(str1 == str3);
        /*
         * True because ".equals" compares the values contained
         * by str1 and str3.
         */
        System.out.println(str1.equals(str3));
    }
}
Output:
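A rough Python analogue of this reference-versus-value distinction (our illustration, not from the tutorial) uses `is` for identity and `==` for equality; `sys.intern` makes the shared-object case explicit, much like Java's String pool:

```python
import sys

# Both names refer to the same interned object, like two literals
# drawn from Java's String pool.
a = sys.intern("ABC")
b = sys.intern("ABC")
print(a is b)    # True: same object

# Build an equal string at runtime; in CPython it is typically a
# distinct object, like Java's new String("ABC").
c = "".join(["AB", "C"])
print(a == c)    # True: equal values
print(a is c)    # usually False: different objects (implementation detail)
```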
Conclusion
In this tutorial, we have discussed the different forms of substring() method. Also, we have included multiple scenario-based questions along with the frequently asked questions that helped you in understanding the method in detail.
Further reading =>> MySQL Substring Function
Syntax, programming examples, and detailed analysis for every scenario and concept were included here. This will surely help you in developing your own programs of substring() method and carrying out different String manipulation operations on each subsequent String.
https://www.softwaretestinghelp.com/java-substring-method/
Base class for texture implementations. More...
#include <StelTextureBackend.hpp>
Base class for texture implementations.
Definition at line 94 of file StelTextureBackend.hpp.
Destroy the texture.
Definition at line 100 of file StelTextureBackend.hpp.
Construct a StelTextureBackend with specified texture path/url.
Note that it makes no sense to instantiate StelTextureBackend itself - it needs to be derived by a backend.
Definition at line 156 199 of file StelTextureBackend.hpp.
Must be called after successfully loading an image (whether normally or asynchronously).
At this point, texture size is initialized and becomes valid. Asserts that status is loading and changes it to loaded.
Definition at line 184 of file StelTextureBackend.hpp.
Get texture dimensions in pixels.
Can only be called when the texture has been successfully loaded (this is asserted). Use getStatus() to determine whether or not this is the case.
Definition at line 129 of file StelTextureBackend.hpp.
Get a human-readable message describing the error that happened during loading (if any).
Definition at line 142 of file StelTextureBackend.hpp.
Get the "name" of this texture.
The name might be the full path if loaded from file, URL if loaded from network, or nothing at all when generated.
Definition at line 117 of file StelTextureBackend.hpp.
Get the current texture status.
Used e.g. to determine if the texture has been loaded or if an error has occurred.
Definition at line 108 of file StelTextureBackend.hpp.
Must be called before loading an image (whether normally or asynchronously).
Asserts that status is uninitialized and changes it to loading.
Definition at line 169 of file StelTextureBackend.hpp.
Full file system path or URL of the texture file.
Definition at line 150 of file StelTextureBackend.hpp.
http://www.stellarium.org/doc/0.12.0/classStelTextureBackend.html
PD controller is developed to control a quadrotor in 1-dimensional space (height direction only).
1 unknown so 1 equation is needed.
Newton's 2nd law (the thrust f(t) acts upward against gravity):

m z̈(t) = f(t) − m g   (eq. 1)
States:

Position and velocity are chosen as state variables because (eq. 1) is a second-order differential equation, which can be rewritten as two coupled first-order equations.

State vector: x(t) = [z(t), ż(t)]ᵀ = [x₁(t), x₂(t)]ᵀ

Input vector: u(t) = f(t)

Output vector: y(t) = z(t)
Rewrite (eq. 1) in these new notations:

m ẋ₂(t) = u(t) − m g

Rearrange equations to express ẋ(t) and y(t) in terms of x(t) and u(t):

ẋ₁(t) = x₂(t)
ẋ₂(t) = u(t)/m − g
y(t) = x₁(t)

Rewrite as matrix:

ẋ(t) = [[0, 1], [0, 0]] x(t) + [[0], [1/m]] u(t) + [[0], [−g]]
y(t) = [1, 0] x(t)
The objective is to design a controller (find the input thrust function) which makes the quadrotor track a trajectory (position, velocity, and acceleration as a function of time).
PD controller can be written as,

z̈_c(t) = z̈_des(t) + K_v (ż_des(t) − ż(t)) + K_p (z_des(t) − z(t))

(eq. 1) can be solved for f(t):

f(t) = m (z̈(t) + g)

Substitute the PD controller:

f(t) = m (z̈_des(t) + K_p (z_des(t) − z(t)) + K_v (ż_des(t) − ż(t)) + g)
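The closed-loop behavior implied by this control law can be checked with a few lines of plain Euler integration (a sketch; the gains, mass, and actuator limits mirror the simulation code, while the step size dt is our own choice):

```python
# Euler-integration sketch of the PD height controller.
g, m = 9.81, 0.18          # gravity (m/s^2), mass (kg)
kp, kv = 30.0, 3.0         # PD gains
z_des, vz_des, az_des = 1.0, 0.0, 0.0

z, vz = 0.0, 0.0           # initial height and vertical speed
dt = 0.001
for _ in range(5000):      # 5 seconds of simulated time
    # PD control law: f = m*(az_des + kp*e + kv*edot + g)
    u = m * (az_des + kp * (z_des - z) + kv * (vz_des - vz) + g)
    u = min(max(0.0, u), 2.12)   # clamp to actuator limits (0 to 2.12 N)
    az = u / m - g               # eq. 1 solved for acceleration
    vz += az * dt
    z += vz * dt

print(f"z after 5 s = {z:.3f} m")   # settles near the 1 m setpoint
```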
Python code simulation. Quadrotor climbs to 1m height.
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# Constants
g = 9.81   # Gravitational acceleration (m/s^2)
m = 0.18   # Mass (kg)

# dx/dt = f(t, x)
#
# t     : Current time (seconds), scalar
# x     : Current state, [z, vz]
# return: First derivative of state, [vz, az]
def xdot(t, x):
    # Desired z, vz, az
    z_des = 1
    vz_des = 0
    az_des = 0

    # PD Controller (input, u)
    kp = 30
    kv = 3
    u = m * (az_des + kp * (z_des - x[0]) + kv * (vz_des - x[1]) + g)

    # Clamp to actuator limits (0 to 2.12N)
    u = min(max(0, u), 2.12)

    # Quadrotor dynamics (dx/dt = xdot = [vz, az])
    return [x[1], u/m - g]

x0 = [0, 0]      # Initial state, [z0, vz0]
t_span = [0, 5]  # Simulation time (seconds), [from, to]

# Solve for the states, x(t) = [z(t), vz(t)]
sol = solve_ivp(xdot, t_span, x0)

# Plot z vs t
plt.plot(sol.t, sol.y[0], 'k')
plt.show()
https://cookierobotics.com/051/
While programming, you must have encountered situations where we need to know the position of an element in a list. We can use the linear search algorithm for this purpose. In this article, we will implement a linear search algorithm to find the index of an element in a list in python.
What is a linear search algorithm?
In the linear search algorithm, we start from the index 0 of a list and check if the element is present at the index or not. If the element is present at the index, we return the index as output. Otherwise, we move to the next index until we find the element that is being searched or we reach the end of the list.
For example, Suppose that we are given a list
myList=[1,23,45,23,34,56,12,45,67,24] .
Now, we want to search the index of the element 12. For this, we will start from index 0 and check if 12 is present there or not. If yes, we will return 0 as the result; otherwise, we will move to index 1. We will keep moving in this way until we reach index 6, where 12 is present. After checking that 12 is present at index 6, we will return 6 as the output. If the element is not present at all, we will return -1, which will specify that the element is not present in the list.
Having an overview of the linear search operation, let us define an algorithm for the linear search operation.
Algorithm for linear search
The algorithm for linear search can be specified as follows.
Input to algorithm: A list and an element to be searched.
Output: Index of the element if the element is present. Otherwise,-1.
- Start from index 0 of the list.
- Check if the element is present at the current position.
- If yes, return the current index. Goto 8.
- Check if the current element is the last element of the list.
- If yes, return -1. Goto 8. Otherwise, goto 6.
- Move to the next index of the list.
- Goto 2.
- Stop.
As we have defined the algorithm for linear search, let us implement it in python.
def LinearSearch(input_list: list, element: int):
    list_len = len(input_list)
    for i in range(list_len):
        if input_list[i] == element:
            return i
    return -1

myList = [1, 23, 45, 23, 34, 56, 12, 45, 67, 24]
print("Given list is:", myList)
position = LinearSearch(myList, 12)
print("Element 12 is at position:", position)
Output:
Given list is: [1, 23, 45, 23, 34, 56, 12, 45, 67, 24] Element 12 is at position: 6
Drawbacks of linear search algorithm
A linear search algorithm is very costly in terms of time complexity. It has O(n) complexity in the worst case where n is the number of elements in the list.
Another drawback is that it doesn’t consider the arrangement of elements in the list. If the elements are arranged in ascending order and we have to search for the largest element, it will always take a maximum number of steps to produce a result.
Similarly, if an element is not present in the list, it will again take the maximum number of steps to produce the result as it will traverse each element of the list.
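To see this worst case concretely, the sketch below counts comparisons during a search (the function name and counter are our illustration, not from the article). When the element is absent, every element of the list is inspected:

```python
def linear_search_counting(input_list, element):
    """Linear search that also reports how many comparisons it made."""
    comparisons = 0
    for i, value in enumerate(input_list):
        comparisons += 1
        if value == element:
            return i, comparisons
    return -1, comparisons

data = [1, 23, 45, 23, 34, 56, 12, 45, 67, 24]

# Element absent: all 10 elements are inspected before returning -1.
print(linear_search_counting(data, 99))   # (-1, 10)

# Element present at index 6: 7 comparisons are enough.
print(linear_search_counting(data, 12))   # (6, 7)
```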
Conclusion
In this article, we have discussed the linear search algorithm. We have also implemented it in Python.
https://www.pythonforbeginners.com/basics/linear-search-in-python
Pseudocode for Functions (1:52) with Reggie Williams
Learn basic conventions for describing functions including function signature, passing arguments and returning values.
Basic function format
function function_name
endfunction
Accept arguments
function function_name
    pass in num1, num2, num3
endfunction
Return a value
function sum_numbers
    pass in num1, num2, num3
    set result to num1 + num2 + num3
    return result
endfunction
Call a function
set sum to call sum_numbers with 5, 6, 7
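As a quick sanity check, the sum_numbers pseudocode above maps almost line for line onto Python (this translation is ours, not part of the course):

```python
def sum_numbers(num1, num2, num3):
    # set result to num1 + num2 + num3
    result = num1 + num2 + num3
    # return result
    return result

# set sum to call sum_numbers with 5, 6, 7
total = sum_numbers(5, 6, 7)
print(total)   # 18
```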
Example pseudocode from video
function calculate_gpa
    pass in student_grades
    set grade_total to 0
    for each grade in student_grades
        if grade is not a 1, 2, 3, or 4
            print "invalid grade"
            print grade
            print "can't complete calculation"
            exit function
        else
            add grade to grade_total
        endif
    endfor
    set gpa to grade_total / number of grades
    return gpa
endfunction

set reggie_grades to 4, 4, 3, 4
set reggie_gpa to call calculate_gpa with reggie_grades
print reggie_gpa
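The same translation exercise works for the GPA example. One Python rendering (ours, with the invalid-grade case returning None to stand in for "exit function"):

```python
def calculate_gpa(student_grades):
    grade_total = 0
    for grade in student_grades:
        if grade not in (1, 2, 3, 4):
            # "print invalid grade / can't complete calculation"
            print("invalid grade:", grade, "- can't complete calculation")
            return None  # stands in for "exit function"
        grade_total += grade
    # set gpa to grade_total / number of grades
    return grade_total / len(student_grades)

reggie_grades = [4, 4, 3, 4]
reggie_gpa = calculate_gpa(reggie_grades)
print(reggie_gpa)   # 3.75
```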
Let's change the pseudo code from the last video to describe a reusable function 0:01
named student_grades. 0:05
In doing this, we're actually improving our code by thinking through different 0:07
versions of our program, all without actually having to write any code. 0:11
In other words, using pseudo code first, as you think through and 0:14
improve the organization of the code you're going to write. 0:18
If you'd like to follow along, 0:22
open the workspace associated with this video in the gpa.text file. 0:24
With the function, you want to identify the beginning and 0:28
the end of the function clearly like this. 0:32
In addition, you often pass one or more values to a function. 0:34
Information the function can use as part of its programming. 0:38
Here's a simple way to indicate that. 0:41
Most functions return to value. 0:43
So you'll add that usually at the end like this. 0:44
Let's take the code from the last video and change it. 0:47
I'll add a line to mark the beginning of the function and the end of the function. 0:50
And then add a line indicate that you pass in the grades for the student. 0:57
Finally, instead of printing the GPA, we just returned it. 1:03
All of the rest of the code is the same, it's just now wrapped into a function. 1:07
To use that function or call it, you would do something like this. 1:12
First, create a list of grades. 1:16
Then create a new variable named reggie_gpa that you store 1:19
the results of the running function. 1:23
You can use the word call to indicate that you're calling the function and 1:26
use with to indicate what information you're passing to the function. 1:29
Finally, you can do something with that new variable like print it out. 1:33
That's most of what you need to know to write a natural language description of 1:36
a program. 1:40
To help you start using pseudocode, I've provided a reference sheet covering 1:41
the different pseudocode conventions I've shown you in this workshop.
1:44 You'll see that after this video. 1:47
Have fun writing pseudocode. 1:50
https://teamtreehouse.com/library/pseudocode-for-functions
in working on this pgm, i have made notes to myself ... questions that keep popping up. if anyone wants to clarify some of these concepts for me ....
1. are TRUE and FALSE keywords? if a method says
return (TRUE);
will it work?
2. const on END of method declaration and const IN the parameter of the method declaration ...
sometype somemethod (const sometype& x) const
what do both of these do? are they basically INSURANCE ... just be sure the method won't change the parameter passed in and ... ?
3. does the deconstructor happen at the end of the pgm automatically. i know that you need to manually delete things sometimes ... like in an assignment ... but does the
virtual ~ClassName(); deconstructor somehow automatically called on all objects that have been created at the end of method main?
4. scenario 1
int* d_array;
d_array = NULL;
this is read as d_array points to an int and the object that d_array points to is NULL (which would be 0 in the case of an int) ... is that right?
scenario 2
int* d_array;
*d_array = NULL;
what does this say? would it cause an error? is the '*' only used when declaring a pointer?
5. what is the proper way to do the & and * ...
int *d_array; or
int* d_array;
6. i am confused with the usage of "this" and "self" in C++.
could someone "read" both of these to me (i still have to make a sentence out of statements and methods to understand them and draw a mental picture).
C& C::operator = (const C& rhs) {
    if (*this == rhs)
        return *this;
    ...........
}

and then there is this one .....

C& C::operator = (const C& rhs) {
    if (this == &rhs)
        return *this;
    ..............
}

and then there is this one .....

C& C::operator = (const C& rhs) {
    if (this == &rhs)
        return (self);
}
i think they all do the same thing (check for assignment to itself) ... but i am unsure about the *this vs self and &rhs stuff ... so basicaly i am confused by the whole thing.
7. the use of "new". is that basically only done in a constructor?
THANKS SO MUCH ... you guys have already been so much help to me. i am on a crash course!
crq
https://www.daniweb.com/programming/software-development/threads/17401/c-quiz-test-your-knowlegde-and-help-me-in-the-process
Synopsis
The CrateDataset object is used to read and write files with multiple blocks.
Syntax
CrateDataset(input=None, mode='rw') If input is not None then it gives the name of the file to read in. The mode argument accepts 'r' (read-only) and 'rw' (read-write).
Description
The CRATES Library uses CrateDataset objects to store a file (dataset) which contains multiple blocks (crates). This is needed if you need to create or use data from multiple blocks in a file or want to ensure that all these blocks are retained if modifying a block. If you only need to read in data from a single block then you are unlikely to need to use a CrateDataset and read_file() should be sufficient.
The read_pha, read_rmf, write_pha and write_rmf routines use sub-classes of the CrateDataset object - namely PHACrateDataset and RMFCrateDataset - to store all the data they need.
Creating a dataset with multiple blocks
In the following example we create a CrateDataset object and then add two blocks two it: the first an image and the second a table with one column.
import sys
import time

import numpy as np
from pycrates import IMAGECrate, TABLECrate, CrateData, CrateDataset, \
    set_key

toolname = sys.argv[0]
tooltime = time.asctime()

# First block
cr1 = IMAGECrate()
cr1.name = "IMG"
cd1 = CrateData()
cd1.values = np.arange(12).reshape(4,3)
cr1.add_image(cd1)
set_key(cr1, 'CREATOR', toolname, desc='tool that created this output')
set_key(cr1, 'DATE', tooltime, desc='Date and time of file creation')

# Second block
cr2 = TABLECrate()
cr2.name = "TBL"
cd2 = CrateData()
cd2.name = "x"
cd2.values = np.arange(20,31)
cd2.unit = 'cm'
cd2.desc = 'Average beard length'
cr2.add_column(cd2)

# The dataset containing both blocks
ds = CrateDataset()
ds.add_crate(cr1)
ds.add_crate(cr2)
ds.write("out.fits", clobber=True)
which creates a file which looks like the following (assuming it's stored in a file called mds.py):
unix% python mds.py
unix% dmlist out.fits blocks
--------------------------------------------------------------------------------
Dataset: out.fits
--------------------------------------------------------------------------------
Block Name                          Type         Dimensions
--------------------------------------------------------------------------------
Block    1: IMG                     Image        Int4(3x4)
Block    2: TBL                     Table        1 cols x 11 rows
and the table block contains:
unix% dmlist "out.fits[TBL]" cols
--------------------------------------------------------------------------------
Columns for Table Block TBL
--------------------------------------------------------------------------------
ColNo  Name   Unit   Type   Range
    1  x      cm     Int4   -      Average beard length
and the header of the image block is
unix% dmlist out.fits header,clean,raw
SIMPLE  = T / file does conform to FITS standard
BITPIX  = 32 / number of bits per data pixel
NAXIS   = 2 / number of data axes
NAXIS1  = 3 / length of data axis
NAXIS2  = 4 / length of data axis
EXTEND  = T / FITS dataset may contain extensions
COMMENT = FITS (Flexible Image Transport System) format is defined in 'Astronomy /
COMMENT = and Astrophysics', volume 376, page 359; bibcode: 2001A&A...376..359H /
HDUNAME = IMG / ASCDM block name
CREATOR = mds.py / tool that created this output
DATE    = Thu Nov 29 11:33:41 2018 / Date and time of file creation
Reading in dataset with multiple blocks
Using the file created above, we can read it in by saying
chips> cds = CrateDataset('out.fits')
chips> print(cds)
Crate Dataset:
File Name:        out.fits
Read-Write Mode:  rw
Number of Crates: 2
  1) Crate Type: <IMAGECrate>
     Crate Name: IMG
  2) Crate Type: <TABLECrate>
     Crate Name: TBL
     Ncols: 1
     Nrows: 11
and access its methods such as:
chips> msg = "{} has {} crates".format(cds.get_filename(), cds.get_ncrates())
chips> print(msg)
out.fits has 2 crates
chips> tcr = cds.get_crate('TBL')
chips> print(tcr)
Crate Type: <TABLECrate>
Crate Name: TBL
Ncols: 1
Nrows: 11
chips> print(tcr.get_column("x").desc)
Average beard length
The Crates module is automatically imported into ChIPS and Sherpa sessions, otherwise use one of the following:
from pycrates import *
or
import pycrates
http://cxc.cfa.harvard.edu/ciao/ahelp/cratedataset.html
This column looks at problems in the Linux kernel, MySQL, CVS, Cadaver, subversion, sitecopy, tla, iproute, Zope, logcheck, kdeprint, emil, and GNU Sharutils.
Problems have been found in the Linux kernel code that handles R128 drives, ISO9660 filesystems, and ncp_lookups that can lead to an attacker gaining root permissions. In addition, there is a problem in the code in the ext3 filesystem that can lead to unauthorized access to information, and a problem in the Sound Blaster driver's code that can be used as part of a denial-of-service attack.
These problem are reported to be fixed in the 2.4.26 Linux kernel. Users should upgrade to a repaired kernel as soon as possible. Packages have been released for Trustix Secure Linux 2.0, 2.1, and Secure Enterprise Linux 2; Debian GNU/Linux; Conectiva Linux; and Mandrake Linux. Affected users should contact their vendors for detailed information.
The scripts mysqlbug and mysqld_multi that are supplied with the MySQL database are reported to be vulnerable to a temporary-file, symbolic-link race condition that can be used by a local attacker to overwrite files on the system with the permissions of the user executing the script.
The versions of the scripts that are in the MySQL source repository are reported to be fixed. Affected users should consider upgrading their mysqlbug and mysqld_multi scripts. Updated packages have been released for Debian GNU/Linux 3.0 alias woody; Red Hat Linux 9; and OpenPKG CURRENT, 2.0, and 1.3.
CVS (the Concurrent Versions System) is an open source networked version control system. Two pathname-related vulnerabilities have been discovered in CVS. The first can be used by a malicious server to create arbitrary files on a client machine when the client checks out or updates code from the server by supplying absolute pathnames in its RCS diff files. The second vulnerability can be used by a remote attacker to view files outside the CVS root directory by using "../" in the path name.
Users of CVS should upgrade to a version Stable CVS 1.11.15 or newer as soon as possible and should consider disabling remote CVS operations until CVS has been updated.
Cadaver is a WebDAV client written for the Unix command line that supports collection creation, uploads, downloads, namespace functions, deletion, and locking operations. sitecopy is a utility used in maintaining remote web sites. subversion is a version control system that aims to replace CVS. Multiple format-string vulnerabilities have been reported in the Neon library that is linked to by Cadaver, subversion, sitecopy, and tla. These vulnerabilities can be exploited if the client connects to a server under control of an attacker, and can result in code being executed on the client with the permissions of the user running the application. It is possible that this attack could also be carried using a man-in-the-middle-style attack. Neon is a C language library that provides HTTP and WebDAV client functions.
It is highly recommended that users upgrade to version 0.22.1 of Cadaver or version 1.0.2 of subversion as soon as possible. Users of sitecopy and tla should watch for updated versions. Users may also upgrade neon to version 0.24.5 or newer and recomplile or relink any affected applications.
iproute is a set of tools used in controlling Linux networking. It has been reported that the iproute tools are vulnerable to a locally exploitable denial-of-service attack. The vulnerability is related to iproute using the netlink interface but not checking to insure that a netlink message comes from the kernel and not from a user process.
Users should watch their vendors for an repaired version of iproute.
Zope, an open source web application server, is vulnerable to a bug that can be exploited by unauthorized users and anonymous users to call arbitrary methods (object-oriented function calls) of catalog indexes.
All users of Zope should upgrade to a repaired version as soon as possible.
Under some conditions, the logcheck utility is vulnerable to an attack, based on a temporary-file, symbolic-link race condition, that can be used by a local attacker to overwrite arbitrary files on the system with the permission of the user running logcheck (which is most cases will be root). When logcheck is installed, it creates a directory under /var/tmp for its security files. If this directory is removed, the utility becomes vulnerable to attack.
Affected users should watch their vendors for a repaired version of logcheck and should consider not running it until it has been updated.
The kdeprint supplied with SuSE Linux did not use the -dSAFER option when executing Ghostscript.
Users should upgrade their kdelibs3 packages to fix this problem.
The emil mail filter program is vulnerable to buffer overflows and format-string vulnerabilities that may, under some conditions, be used by a remote attacker to execute arbitrary code with the permissions of the user running emil.
Affected users should watch their vendors for a repaired version of emil.
The GNU Sharutils package allows the creation and unpacking of SHell ARchives, often used to send large binaries files using email. A buffer overflow has been reported in the shar utility in the code that handles the -o command-line option. In most installations, exploiting the buffer overflow will not gain the attacker any additional permissions, as it is not normally installed with a set user or group id bit. But in some cases, it could be exploited for gain; for example, if shar was being executed by a CGI script and the attacker could control the input it receives.
Affected users should watch their vendors for a repaired version of GNU Sharutils.
http://archive.oreilly.com/pub/a/linux/2004/04/21/insecurities.html
I. Basic Concepts
   A. Exception:
      1. An unusual event, either erroneous or not
      2. Detectable by either hardware or software
      3. May require special processing
   B. Exception handler: A code fragment that processes an exception
   C. Examples of exceptions
      1. End of file
      2. Division by 0
      3. Bad input
      4. Subscript out of range
      5. Pointer out of range (seg fault in C)
   D. Exception handling in languages that do not provide exception-handling
      capabilities (e.g., C)
      1. Function returns a status flag
         a. program unit that calls the function must check the status flag
            and take appropriate action
      2. Caller passes the name of a function that handles the exception to
         the callee
         a. The callee calls the function with appropriate parameters if an
            exception occurs.
         b. disadvantage--if several exceptions must be handled, several
            exception handlers must be provided with every call
      3. Programmer uses setjmp/longjmp

         e.g., result = setjmp(buffer);
               if (!result) {
                  protected code
                  longjmp(buffer, error_code) // longjmp could appear in a subroutine
               } else
                  error handling based on error_code

         a. setjmp stores registers, including pc, sp and fp, into a buffer
            i. first call to setjmp returns 0
            ii. call by longjmp restores the registers in the buffer and
                returns control to the setjmp call. This time setjmp returns
                error_code, causing control to pass to the error handling
                routine
         b. problem: setjmp tosses all nested stack frames by restoring the
            old values of sp and fp
            i. any values that were held in registers by the protected code
               and not written to memory are lost. Hence some changes made by
               the successfully executed portion of the protected code may be
               lost
            ii.
                C's solution: programmer may use the volatile keyword to
                indicate that a variable's value may change "spontaneously",
                e.g., due to an I/O operation or a concurrently executing
                thread
                1) all C implementations must honor the volatile keyword by
                   immediately writing to memory any change to a variable and
                   always loading from memory the value of a variable before
                   using it. Note that frequent loads or writes could
                   significantly slow down the execution of the program
                   because loads from memory require many more instruction
                   cycles than register reads and hence may stall the
                   processor pipeline.
      4. Signal handler: Certain system-generated exceptions, such as seg
         faults and bus errors, are expressed via signals.
         a. It is possible to provide signal handlers to catch signals.
         b. A signal handler is a function that takes an int parameter
            denoting the number of the signal.
         c. When the signal handler returns, the behavior of the process is
            often undefined. Often the control picks up where the signal
            occurred and the offending instruction is re-executed. If the
            problem causing the signal has not been fixed, the signal will be
            re-generated, thus causing an infinite series of handler calls.
         d. You register a signal handler with the kernel by calling the
            signal() function with the number of the signal and a pointer to
            the signal handler

            void sig_handler(int signal_num) {
               if (signal_num == SIGSEGV)
                  printf("received SIGSEGV\n");
            }

            int main() {
               signal(SIGSEGV, sig_handler);
            }

         e. It is inadvisable to write a signal handler because it is very
            difficult to fix the problem, and hence your program will seem to
            go into an infinite recursion unless the signal handler
            terminates the program.
   E. Advantages to having built-in exception handling capabilities
      1. Code can be considerably less cluttered
         a. exception handling can be put in a separate section rather than
            interspersed with the code that's trying to accomplish some task
         b.
            separating the exception code from the normal processing code
            allows the flow of the normal processing code to be
            uninterrupted, which makes it easier to read and understand by a
            programmer
      2. Encourages a user to consider all of the events that could occur
         during program execution and consider how they should be handled.
         The compiler can enforce this consideration by flagging as an error
         any unhandled exception. In contrast, a C programmer can blissfully
         ignore an error (maybe the programmer does not even know the
         function called can raise an error) and then have the program
         unceremoniously dump core some undetermined amount of time after
         the software has been released.

II. Exception Handling in C++
   A. Exception Handlers
      1. Specified using a try-catch clause

         try {
            // Code that is expected to raise the exception
         }
         catch(formal parameter) {
            // A handler body
         }
         ...
         catch(formal parameter) {
            // A handler body
         }
         catch(...) {
            // An optional catch-all handler (the ... is actually C++
            // syntax)
         }

      2. catch functions can have only a single formal parameter
         a. parameter can be either a basic type or a user-defined class
         b. user-defined classes provide a way of passing multiple
            parameters.
         c. the parameter can be an ellipsis (...) in which case the handler
            is a catch-all handler
         d. the formal parameter can be ignored by not providing a variable
            name
      3. any variable declared within the try block is de-allocated before
         the catch statement is executed. Hence if you want access to a
         variable used or assigned a value within the try block, make sure
         you declare that variable outside the try block.
   B. Binding Exceptions to Handlers
      1. exceptions are raised using the throw command:

         throw [expression];

         Example:
         throw new BadOperatorException(operatorToken, "bad operator token");

      2. A throw without an operand can only appear in a handler
         a. Such a throw reraises the exception and propagates it to an
            outside program unit.
         If there is another, more general exception handler that could handle the exception in the same catch list as the handler that reraises this throw, that handler will not be called.
      b. Throws are matched to exception handlers whose formal parameter matches the type of the thrown expression
         i. A handler with a formal parameter of type T matches an expression of type T, const T, T&, or const T&
         ii. A handler with a formal parameter that is a class type T matches any expression with class type T or whose type is a derived class of T.

            Example:

              class Exception {...}
              class StackException {...}

              try {
                throw StackException();
              }

            All of the following catch statements will catch the above exception:
              a. catch (StackException e) {...}
              b. catch (StackException &e) {...}
              c. catch (Exception e) {...}
              d. catch (Exception &e) {...}

            You could put a const in front of any of the formal parameter declarations (e.g., catch (const StackException e)) and each catch would still catch the exception

            The following catch statement will not catch the above exception because Exception * expects a heap allocated object, not a stack allocated object:

              catch (Exception *e) {...}

         iii. A handler with a formal parameter of type T * matches only exception objects that are allocated off the heap

            Example:

              try {
                throw new StackException();
              }
              catch (StackException &e) {  // does not catch the exception
                ...
              }
              catch (StackException e) {   // does not catch the exception
              }
              catch (StackException *e) {  // catches the exception because it matches
                                           // the type of the exception object,
                                           // which is (StackException *).
              }

            1) It is bad form to throw an exception object that is allocated from the heap because the catcher has to remember to de-allocate the object.
            2) It is better to throw an exception object that is allocated from the stack because C++ will automatically de-allocate the object.
            3) If you throw an object, modify the object, and then rethrow it, use a reference parameter to catch the object. If you use a value parameter, C++ will copy the original thrown object, not the modified object

               Example: throw StackException();

               Right:
                 catch (StackException &e) {
                   e.setValue(10);
                   throw;
                 }

               Wrong:
                 catch (StackException e) {
                   e.setValue(10);
                   throw;
                 }

      c. The throw is matched by examining the catch statements that follow the try statement in sequential order. That means that exception handlers with more restrictive types should be placed before exception handlers with more general types.
         i. If the throw cannot be matched locally then it is dynamically propagated up the call chain.

C. Continuation
   1. After a handler completes execution, control flows to the first statement following the try construct (the statement immediately after the last catch statement).
   2. If the exception was thrown out of the function and up the call stack, then all stack frames up to the catching function are popped off the stack and any stack-allocated local variables/parameters are de-allocated.
      a. need to be careful that heap-allocated objects get de-allocated. This may require catching an exception you might otherwise not catch. For example:

           void g() {
             throw std::exception();
           }

           void f() {
             int *x = new int;
             *x = 3;
             g();
             delete x;
           }

           int main() {
             try {
               f();
             }
             catch (std::exception e) {
               ...
             }
           }

         Because f() did not catch the exception thrown by g(), the memory pointed to by x never gets deleted. To fix this problem, you would need to put a try/catch clause into f() and have the catch clause delete x.

D. Other Design Choices
   1. The standard template library provides a std::exception class
      a. Included using <exception>
      b. Contains a virtual what method that returns a char string denoting the type of exception:

           virtual const char* what();

         For the exception class it returns the string "std::exception".
      c.
         There are several subclasses of interest, all of which can be caught by your program:

           exception
             bad_alloc: thrown by new on allocation failure (this is why you don't check to see if new returns NULL--it does not--it throws a bad_alloc error)
             bad_cast: thrown by dynamic_cast on failure
             runtime_error: thrown when certain runtime conditions occur
               underflow_error: thrown when an arithmetic underflow occurs
               overflow_error: thrown when an arithmetic overflow occurs
               system_error: thrown by the OS for certain errors and includes an error code that is typically specific to that platform and is non-portable
             logic_error
               out_of_range: thrown by vector, queue, dequeue, and some other std classes on out of range errors
               length_error: thrown by vector or string on resizing error

   2. Exceptions cannot be disabled
   3. System-detected exceptions cannot be handled except those thrown as system_error (e.g., seg fault and bus error cannot be handled via this exception handling approach because these exceptions do not generate C++ exceptions but instead generate signals)
   4. You can specify which exceptions get thrown out of a function without being handled using the throw keyword. For example:

        void f() throw (DivisionByZeroException) {...}

      a. This throw specification was deprecated in the C++11 standard and should not be used any more
      b. The C++ compiler does not verify whether or not you throw other exceptions out of the function, so this option is primarily for documentation purposes.

E. Example:

     // superclass for all array exceptions. It can catch any array
     // exception that is thrown
     class ArrayException {
     public:
       virtual void response() = 0;
     };

     // thrown if the program tries to create an array whose size is
     // negative
     class ArraySizeException : public ArrayException {
     public:
       int size;
       ArraySizeException(int s) : size(s) {}
       void response() {
         printf("Error: The array size of %d is negative\n", size);
       }
     };

     class ArrayOutOfBoundsException : public ArrayException {
     public:
       int index;  // the index that was tried
       int size;   // the size of the array
     public:
       ArrayOutOfBoundsException(int i, int s): index(i), size(s) {}
       void response() {}
     };

     // note that all throws do not involve the allocation of memory
     // off the heap. Instead they throw a stack-allocated object
     class SafeArray {
     protected:
       int *data;
       int size;
     public:
       SafeArray(int s = 10): size(s) {
         if (size <= 0)
           throw ArraySizeException(size);
         data = new int[s];
         for (int i = 0; i < size; i++)
           data[i] = 0;
       }

       void set(int index, int value) {
         rangeCheck(index);
         data[index] = value;
       }

       int get(int index) {
         rangeCheck(index);
         return data[index];
       }

     protected:
       void rangeCheck(int index) {
         try {
           if ((index < 0) || (index >= size))
             throw ArrayOutOfBoundsException(index, size);
         }
         // note the rethrow of the error condition
         catch (ArrayOutOfBoundsException &e) {
           printf("caught it\n");
           throw;
         }
       }
     };

     int main() {
       SafeArray *myArray;
       try {
         myArray = new SafeArray(100);
         myArray->set(3, 10);
         myArray->get(-1);
       }
       catch (const std::bad_alloc &e) {
         fprintf(stderr, "Out of memory error occurred while trying to allocate memory for myArray\n");
       }
       catch (const ArrayOutOfBoundsException &e) {
         fprintf(stderr, "index %d is out of range. Must be between 0 and %d\n", e.index, e.size);
       }

       for (int i = -2; i <= 100; i++) {
         try {
           myArray->set(i, i);
           if (i == 50)
             new SafeArray(-10);
         }
         catch (const ArrayOutOfBoundsException &e) {
           fprintf(stderr, "index %d is out of range. Must be between 0 and %d\n", e.index, e.size);
         }
         catch (ArrayException &e) {
           e.response();
         }
       }
     }

   Executing this code produces the following output:

     caught it
     index -1 is out of range. Must be between 0 and 100
     caught it
     index -2 is out of range. Must be between 0 and 100
     caught it
     index -1 is out of range. Must be between 0 and 100
     Error: The array size of -10 is negative
     caught it
     index 100 is out of range. Must be between 0 and 100

III. Exception Handling in Java

A. Types of Exceptions
   1. All Java exceptions are descendants of the Throwable class. Throwable provides three very helpful methods:
      a. printStackTrace(): prints the standard error message for this class plus a record of the method calls leading up to this exception. This method should not be redefined by subclasses.
      b. getMessage(), toString(): both methods return a string object that contains the standard error message for this class. Typically each subclass redefines this method.
   2. Java pre-defines two exception classes that are subclasses of Throwable
      a. Error: This class and its descendants are related to errors that are thrown by the Java virtual machine, such as running out of heap memory.
         i. These exceptions are never thrown by user programs and should never be handled by user programs
         ii. Your program will be terminated when one of these errors occurs
      b. Exception: Has two descendants
         i. IOException: Thrown when an input or output operation has an error
         ii. RuntimeException: All other run-time errors
            1. Java provides a number of pre-defined exception classes such as ArrayIndexOutOfBoundsException and NullPointerException
            -- User-defined exceptions should subclass Exception

B. Exception Handlers
   1. Same form as C++ except that all formal parameters must be present (can't use ellipsis).
   2. Ellipsis can be simulated by catching an Exception object.
      a.
         To get the specific name of the class that was thrown, use Java's getClass() method to get the class object and then the getName() method to get the class's name:

           catch (Exception genericObject) {
             String name = genericObject.getClass().getName();
             ...
           }

C. Exception Binding: Same as C++

D. Continuation: Same as C++

E. Differences from C++
   1. All thrown objects are allocated off the heap, so all throw statements have the form:

        throw new ExceptionObject;

   2. Rethrowing an object: In Java, when you rethrow an object you must specify the object that you are rethrowing. In C++, you simply call throw.
      a. In Java, since all exception objects are allocated off the heap, you do not have to worry about what happens when you modify an exception object. The modified object is automatically passed by the rethrow statement

         Example:

           Java                          C++
           catch (StackException e) {    catch (StackException &e) {
             e.setValue(10);               e.setValue(10);
             throw e;                      throw;
           }                             }

   3. Declaring unhandled exceptions: In Java, if a method throws an exception that it does not handle, then it should declare that it does not do so by using the throws statement.
      a) Syntax:

           ret-type methName(param-list) throws exception-list { ... }

         where exception-list is a comma separated list of exception names

         Example:

           char readToken() throws IOException {
             return (char) System.in.read();
           }

         Notice that you must worry about exceptions thrown by methods that you call. Hence the throws statement is required both for exceptions that a method explicitly throws and for exceptions that are generated but not handled by methods it calls
      b) Exception to the rule (pardon the pun :)): If the exception is a subclass of Error or RuntimeException, then the method does not need to specify the exception in the throws list
      c) If you fail to specify an unhandled exception in the throws list, and it is not subject to the above exception, the Java compiler will generate an error message
         i) In C++ it is not required for a method to declare that it does not handle an exception (although it is optional and can be done using the throw statement).
   4) The finally clause: In Java, the finally clause provides a way to execute a block of code regardless of whether the try exits normally or abnormally. You might need to do this to clean up system state, such as closing a file.
      a) The finally statement is placed after the last catch statement
      b) The code in a finally statement is executed even if a return statement is executed within the try or one of the catch clauses
      c) The code in a finally statement is executed even if no catch clause catches the exception

      Example:

        public class temp {
          public temp() {}

          public void execute() throws MyException {
            try {
              throw new MyException();
            }
            finally {
              System.out.println("Leaving execute via finally");
            }
          }

          public static void main(String [] args) {
            temp foo = new temp();
            try {
              foo.execute();
            }
            catch (MyException e) {
              System.out.println("caught MyException in main");
            }
            System.out.println("Leaving main");
          }
        }

        class MyException extends Exception {}

      Executing this code generates the following output:

        Leaving execute via finally
        caught MyException in main
        Leaving main

F. Java Example: Here's the Java code that implements the same C++ program shown earlier.
   Notice that I had to add an additional catch clause to the first try statement in main since Java requires me to handle all exceptions that are declared to be thrown by a method. In the C++ example I did not handle the ArraySizeError exception in the first try statement but in Java I must.

     class SafeArray {
       int data [];
       int size = 10;

       // Note that we assume the new operator succeeds. If it does
       // not, Java throws an OutOfMemoryError. Note that this is
       // a subtype of Error, which means your program should not
       // attempt to handle it--your program will be terminated
       public SafeArray(int s) throws ArraySizeError {
         size = s;
         if (size <= 0)
           throw new ArraySizeError(size);
         data = new int[size];
         for (int i = 0; i < size; i++)
           data[i] = 0;
       }

       public void set(int index, int value) throws ArrayOutOfBoundsError {
         rangeCheck(index);
         data[index] = value;
       }

       public int get(int index) throws ArrayOutOfBoundsError {
         rangeCheck(index);
         return data[index];
       }

       void rangeCheck(int index) throws ArrayOutOfBoundsError {
         try {
           if ((index < 0) || (index >= size))
             throw new ArrayOutOfBoundsError(index, size);
         }
         catch (ArrayOutOfBoundsError e) {
           System.err.println("caught it");
           throw e;
         }
       }

       public static void main(String args[]) {
         SafeArray myArray = null;
         try {
           myArray = new SafeArray(100);
           myArray.set(3, 10);
           myArray.get(-1);
         }
         catch (ArrayOutOfBoundsError e) {
           System.err.println("index " + e.index + " is out of range. Must be between 0 and " + e.size);
         }
         catch (ArrayError e) {
           // catches the ArraySizeError exception thrown by SafeArray
           e.response();
           System.exit(1);
         }

         for (int i = -2; i <= 100; i++) {
           try {
             myArray.set(i, i);
             if (i == 50)
               new SafeArray(-10);
           }
           catch (ArrayOutOfBoundsError e) {
             System.err.println("index " + e.index + " is out of range. Must be between 0 and " + e.size);
           }
           catch (ArrayError e) {
             e.response();
           }
         }
       }
     }

     abstract class ArrayError extends Exception {
       abstract public void response();
     }

     class ArraySizeError extends ArrayError {
       public int size;
       public ArraySizeError(int s) { size = s; }
       public void response() {
         System.err.println("Error: The array size of " + size + " is negative");
       }
     }

     class ArrayOutOfBoundsError extends ArrayError {
       public int index;
       public int size;
       public ArrayOutOfBoundsError(int i, int s) { index = i; size = s; }
       public void response() {}
     }

G. Another Java Example: Stack
http://web.eecs.utk.edu/~bvz/teaching/cs365Sp17/notes/ExceptionHandling/
|
Today Extension Tutorial: Getting Started
Update Note: This tutorial has been updated to iOS 10 and Swift 3 by Michael Katz. The original tutorial was written by Chris Wagner.
iOS 8 introduced App Extensions: a way for you to share your app’s functionality with other apps or the OS itself.
One of these types of extensions is a Today Extension, also known as a Widget. These allow you to present information in the Notification Center and Lock Screen, and are a great way to provide immediate and up-to-date information that the user is interested in. Today Extensions can also appear on the Search screen, and on the quick action menu on the Home screen when using 3D Touch.
In this tutorial, you’ll write an interactive Today Extension that renders the current and recent market prices a today; tapping or swiping your finger on the chart reveals the exact price for a specific day in the past.
The extension will contain all of these features. Note that the swipe gesture often triggers sliding between the Today and Notifications sections within Notification Center, so it doesn’t really provide the best or most reliable user experience, but a single tap works quite well.
Getting Started
Download the Crypticker starter project to get started. The project contains the entire Crypticker app as described above, but please note that this tutorial will not focus on the development of the container app.
Build and run the project to see what you're starting with. The today extension you'll build is specific to BTC; therefore, its name shall be BTC Widget.
Note: The starter project includes the CryptoCurrencyKit framework, which the app uses to fetch Bitcoin prices and display them in a beautiful chart.
This tutorial won’t go into much detail on frameworks themselves, as there’s enough information on them to fill their own tutorial. And wouldn’t you know we’ve done exactly that? ;] If you’d like to know more about creating and managing your own custom frameworks, check out this tutorial. today extensions run inside another host app, so they don’t go through the traditional app lifecycle.
In essence, the lifecycle of the today extension is mapped to the lifecycle of the TodayViewController. For example,
TodayViewController‘s
viewDidLoad method is called when the widget is launched, just like
application(_:didFinishLaunchingWithOptions:) is called when the main app launches.
Open MainInterface.storyboard. You’ll see a clear view with a light Hello World label.
Make sure the BTC Widget scheme is selected in Xcode's toolbar and build and run. This will launch the iOS Simulator and open the Notification Center.
Note: The name of the widget may be 'CRYPTICKER' – once you've run the host app, the widget uses that name instead.
Build the Interface
Open MainInterface.storyboard and delete the label. Set the view to 110pts tall and 320pts wide in the Size Inspector. This is the default iPhone widget size.
Drag two Labels and a View from the Object Library onto the view controller's view.
- Position one of the labels in the top left corner, and in the Attributes Inspector set its Text to $592.12 and its Color to Red: 33, Green: 73 and Blue: 108. Set the Font to System 36.0. This label will display the current market price. You want to make it nice and big so it’s easily legible in a quick glance.
- Position the other label at the same height right of the one you've just set up, but against the right margin. In the Attributes Inspector set its Text to +1.23 and its Font to System 36.0. This displays the difference between yesterday's price and the current price.
- Finally, position an empty view below the two labels, and stretch it so its bottom and side edges touch the containing view.
Don’t worry about laying things out exactly as shown, as you’ll soon be adding Auto Layout constraints to properly define the layout.
Now open TodayViewController.swift in the editor. Add this at the top of the file:
import CryptoCurrencyKit
This imports the
CryptoCurrencyKit framework.
Next, update the class declaration, like this:
class TodayViewController: CurrencyDataViewController, NCWidgetProviding {
Making the
TodayViewController a subclass of
CurrencyDataViewController gives you access to shared functionality you'll rely on later on.
Since
TodayViewController subclasses
CurrencyDataViewController, it inherits outlets for the price label, price change label and line chart view. You now need to wire these up.
Open MainInterface.storyboard again. Next you'll add Auto Layout constraints. The general idea is that views are designed with a single layout that can work on a variety of screen sizes. The view is considered adaptive when it can adapt to unknown future device metrics. This will be useful later when adding size expansion to the widget.
Select the Price Label label and then select Editor\Size to Fit Content. If the Size to Fit Content option is disabled in the menu, deselect the label, and then reselect it and try again; sometimes Xcode can be a little temperamental.
Next, using the Add New Constraints button at the bottom of the storyboard canvas, pin the Top and Leading space to 0 and 0 respectively. Make sure that Constrain to margins is turned on. Then click Add 2 Constraints.
Select the Price Change Label label and again select Editor\Size to Fit Content. Then, using the Add New Constraints button, pin the Top and Trailing space both to 0.
Finally, select the Line Chart View. Using the Add New Constraints button, pin its Leading and Trailing space to 0 and its Top and Bottom space to 8. Make sure that Constrain to margins is still turned on. Click Add 4 Constraints.
From the Document Outline select the view containing the labels and Line Chart View, then choose Editor\Resolve Auto Layout Issues\All Views in Today View Controller\Update Frames. This will fix any Auto Layout warnings in the canvas by updating the frames of the views to match their constraints. If Update Frames is disabled, you laid everything out perfectly and there is nothing to update.
Implementing TodayViewController.swift
Now the interface is in place and everything is wired up, open up TodayViewController.swift again.
You’ll notice you’re working with a bog-standard
UIViewController subclass. Comforting, right? Soon you'll encounter a new method called
widgetPerformUpdate from the
NCWidgetProviding protocol. You’ll learn more about that later.
This view controller is responsible for displaying the current price, price difference, and showing the price history in a line chart.
Now replace the boilerplate
viewDidLoad method with the following implementation:
  override func viewDidLoad() {
    super.viewDidLoad()
    lineChartView.delegate = self
    lineChartView.dataSource = self
    priceLabel.text = "--"
    priceChangeLabel.text = "--"
  }
This method simply sets
self as the data source and delegate for the line chart view, and sets some placeholder text on the two labels.
Now add the following method:
  override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    fetchPrices { error in
      if error == nil {
        self.updatePriceLabel()
        self.updatePriceChangeLabel()
        self.updatePriceHistoryLineChart()
      }
    }
  }
fetchPrices is defined in
CurrencyDataViewController, and is an asynchronous call that takes a completion block. The method makes a request to the web-service mentioned at the beginning of the tutorial to obtain Bitcoin price information.
The method’s completion block updates both labels and the line chart. The update methods are defined for you in the super-class. They simply take the values retrieved by
fetchPrices and format them appropriately for display.
Now it’s time to see what you have so far. Select the BTC Widget scheme. Build and run.
- it isn’t already.
Cool! Your widget now displays real-time Bitcoin pricing right in Notification Center. But you may have noticed a problem: the line chart looks pretty squished.
Fortunately, Notification Center supports expandable widgets that can show more information.
At the bottom of
viewDidLoad add the following:
extensionContext?.widgetLargestAvailableDisplayMode = .expanded
This tells the extension context that this widget supports an extended display. This will cause a “Show More” or “Show Less” button to automatically appear on the widget’s title bar.
Note: The main
UIViewController of a today extension will have access to its
extensionContext, which acts like
UIApplication.shared, but for extensions. This provides functions for opening external URLs, and keys to listen for lifetime event notifications.
Next, add the following method:
  func widgetActiveDisplayModeDidChange(_ activeDisplayMode: NCWidgetDisplayMode, withMaximumSize maxSize: CGSize) {
    let expanded = activeDisplayMode == .expanded
    preferredContentSize = expanded ? CGSize(width: maxSize.width, height: 200) : maxSize
  }
widgetActiveDisplayModeDidChange is an optional
NCWidgetProviding method. It is called in response to the user tapping the “Show More” or “Show Less” buttons. Setting the
preferredContentSize will change the widget’s height, which in turn updates the chart’s height, giving it more room to breathe.
maxSize is the maximum size allowed for the widget, given its display mode. For the
.compact mode, the maximum size is also the minimum size, but for
.expanded it could be much larger.
After updating the preferred size, build and run again.
On the left, you’ll see how the widget appears when the widget is collapsed. On the right, you’ll see how it appears when expanded. Not too shabby!
Spruce up the UI
This looks OK, but it can still benefit from some visual tweaking. Since iOS places the widget on a blurred background, they are practically (and literally) begging for the ‘vibrancy’ effect.
Adding Vibrancy
Open MainInterface.storyboard again.
Drag a Visual Effect View with Blur from the object browser into the main view.
Drag the Line Chart View from the main view into the effect view’s subview. Click the Add New Constraints button and pin all four edges to the parent view with 0 padding. Make sure “Constrain to margins” is not selected. Then click Add 4 Constraints.
Next, select the Visual Effect View and recreate the line chart’s previous constraints:
- Ctrl+drag from the effect view to the main view to bring up the constraint popup. Hold shift and select Leading Space to Container Margin, Trailing Space to Container Margin, and Vertical Spacing to Bottom Layout Guide. Then click Add Constraints.
- Ctrl+drag from the effect view to the Price Label and select Vertical Spacing.
- In the Size Inspector, change the Trailing and Leading Space constants to
0, and the Top Space and Bottom Space constants to
8.
- From the menu bar, choose Editor\Resolve Auto Layout Issues\All Views in Today View Controller\Update Frames.
Finally, in the Attributes Inspector check the Vibrancy box in the Visual Effect View section. This will cause the view to change from a dark color to a clear color.
Wire up the new view
Now open the Assistant Editor. Make sure TodayViewController.swift is the active file on the right.
Ctrl+drag from Visual Effects View in the storyboard editor to the top of the TodayViewController class. In the popup dialog make sure Connection is set to Outlet, Type is set to
UIVisualEffectView, and enter
vibrancyView for the Name. Click Connect.
Then add the following line to the bottom of
viewDidLoad:
vibrancyView.effect = UIVibrancyEffect.widgetPrimary()
This sets the vibrancy effect to the system-defined one for today extensions, ensuring that the coloring will be appropriate on screen.
Add the following to
TodayViewController:
override func lineChartView(_ lineChartView: JBLineChartView!, colorForLineAtLineIndex lineIndex: UInt) -> UIColor! { return lineChartView.tintColor }
The vibrancy effect sets the
tintColor of anything in the
contentView on a visual effect view. This is how labels and template images are automatically drawn with a vibrancy effect. For a custom view like
JBLineChartView, the effect has to be applied manually. The
lineChartView(_:colorForLineAtLineIndex:) delegate method is the place to do that here.
Build and run again.
Very nice! Just a tweak to the line width and this could be downright beautiful.
At the top of
TodayViewController add the following:
var lineWidth: CGFloat = 2.0
This variable will be used to control the line width.
Add this method:
  private func toggleLineChart() {
    let expanded = extensionContext!.widgetActiveDisplayMode == .expanded
    if expanded {
      lineWidth = 4.0
    } else {
      lineWidth = 2.0
    }
  }
This uses
widgetActiveDisplayMode to determine if the widget is expanded or collapsed and sets the line width for the chart accordingly.
Now add this delegate method:
override func lineChartView(_ lineChartView: JBLineChartView!, widthForLineAtLineIndex lineIndex: UInt) -> CGFloat { return lineWidth }
This delegate method returns
lineWidth for the chart drawing routine’s use.
Finally, add the following to the bottom of
widgetActiveDisplayModeDidChange:
toggleLineChart()
This calls your new method to propagate the line width.
Build and run again. This time, the line width will change along with the size change. How snazzy!
To really see the vibrancy effect pop, set a colorful background. This can be done on the simulator by:
- Open the Photos app.
- Select an image.
- Tap the share icon.
- Select Use as Wallpaper from the bottom row.
- Tap Set and then Set Both.
- Build and run again.
Make it Interactive
Widgets can be more than simple data displays: they can also support user interaction. The Crypticker app already supports tapping a position on the chart to display the price at that location. You can add that functionality to the widget when it's expanded.
Go back to MainInterface.storyboard once again.
Drag another Visual Effect View with Blur from the object browser into the main view.
In the Attributes Inspector check Vibrancy. This will cause the view to change from a dark color to a clear color.
In the Document Outline ctrl+drag from the new Visual Effect View to the previous Vibrancy View. Hold down Shift and select Top, Bottom, Leading, and Trailing. Click Add Constraints. This will place this new view in the same spot and size as the chart view.
Next, drag a Label into the subview of the Visual Effect View. Pin this label to the top and center of its parent view by ctrl+dragging from the label into the parent view and selecting Top Space to Visual Effect View and Center Horizontally in Visual Effect View.
Change the label’s text to be empty.
Select Editor\Resolve Auto Layout Issues\All Views in Today View Controller\Update Frames to rearrange the views. The label should now be invisible on the storyboard, but don’t worry… it’s still there :]
In the Document Outline, ctrl+drag from the Today View Controller to the new label, and set its outlet to priceOnDayLabel.
Now the new label is almost wired up.
Open the Assistant Editor once again, and create an outlet for the new visual effects view in
TodayViewController. Call it
priceSelectionVibrancyView.
In
viewDidLoad add this line to set the vibrancy effect:
priceSelectionVibrancyView.effect = UIVibrancyEffect.widgetSecondary()
The
widgetSecondary vibrancy is a slightly different effect to be used for data that is ancillary to the main data. For this widget, the price at an earlier date on the graph certainly meets that criterion.
Note: Each
UIVisualEffectView view can only have one type of vibrancy effect. Two different effects views are needed here to support both types of vibrancy.
Next, update
toggleLineChart as follows:
  private func toggleLineChart() {
    let expanded = extensionContext!.widgetActiveDisplayMode == .expanded
    if expanded {
      lineWidth = 4.0
      priceOnDayLabel.isHidden = false
    } else {
      lineWidth = 2.0
      priceOnDayLabel.isHidden = true
    }
    priceOnDayLabel.text = ""
  }
In addition to changing the chart line width, this now hides or shows the label.
Now add these delegate methods:
  func lineChartView(_ lineChartView: JBLineChartView!, didSelectLineAtIndex lineIndex: UInt, horizontalIndex: UInt) {
    if let prices = prices {
      let price = prices[Int(horizontalIndex)]
      updatePriceOnDayLabel(price)
    }
  }

  func didUnselectLineInLineChartView(_ lineChartView: JBLineChartView!) {
    priceOnDayLabel.text = ""
  }
These simply update the label’s text when the user taps on the line chart.
Build and run. Expand the widget and tap on a point in the graph. You will see the price displayed, and at a slightly lighter color than the graph line.
Note: If you’re testing on the Simulator a quick ‘tap’ may not be enough to trigger displaying the label – so try a holding the mouse button down a little longer to make it appear.
Show Up On The Home Screen
By default, if there is only one widget in an application, it will show up automatically in the shortcut menu when using 3D Touch on the app’s icon on the home screen. The widget that shows up there can also be explicitly set if you want to choose which one will appear.
Open Info.plist under Supporting Files for Crypticker.
Use Editor\Add Item to add a new row. Choose Home Screen Widget from the drop down (or
UIApplicationShortcutWidget if showing raw keys). In the Value column enter the widget’s Bundle Identifier. The Bundle Identifier for the widget can be found on the General tab of the target info pane.
Build and run the app. Press the Home button (Cmd+Shift+H in the Simulator), and then 3D Touch the app icon. The widget should now appear.
Note: You may not be able to test this on the Simulator unless you have a Mac with a force-touch trackpad.
Wow. You get additional shortcut menu functionality for free! Even though only the collapsed size is available, you can’t beat the price.
Keep the Widget Up To Date

Implement the widgetPerformUpdate(completionHandler:) method with the following code:
func widgetPerformUpdate(completionHandler: @escaping (NCUpdateResult) -> Void) {
  fetchPrices { error in
    if error == nil {
      self.updatePriceLabel()
      self.updatePriceChangeLabel()
      self.updatePriceHistoryLineChart()
      completionHandler(.newData)
    } else {
      completionHandler(.failed)
    }
  }
}
This method does the following:
- Fetch the current price data from the web service by calling fetchPrices.
- If there’s no error, the interface is updated.
- Finally, as required by the NCWidgetProviding protocol, the function calls the system-provided completion block with the .newData enumeration.
- In the event of an error, the completion block is called with the .failed enumeration. This informs the system that no new data is available and the existing snapshot should be used.
And that wraps up your Today Extension! You can download the final project here.
Where To Go From Here?
Download the final project here.
As an enterprising developer, you might want to take another look at your existing apps and think about how you can update them with Today Extensions. Take it a step further and dream up new app ideas that exploit the possibilities of Today Extensions.
If you’d like to learn more about creating other types of extensions, check out our iOS 8 App Extensions Tech Talk Video where you can learn about Photo Editing Extensions, Share Extensions, Action Extensions, and more!
We can’t wait to see what you come up with, and hope to have your Today Extensions at the top of our Notification Centers soon!
Michael Katz - Tech Editor
Adrian Strahan - Editor
Chris Belanger - Final Pass Editor
Mike Oliver - Team Lead
Andy Obusek
https://www.raywenderlich.com/150953/today-extension-tutorial-getting-started
\ Local variables are quite important for writing readable programs, but
\ IMO (anton) they are the worst part of the standard. There they are very
\ restricted and have an ugly interface.

\ So, we implement the locals wordset, but do not recommend using
\ locals-ext (which is a really bad user interface for locals).

\ We also have a nice and powerful user-interface for locals: locals are
\ defined with

\ { local1 local2 ... }
\ or
\ { local1 local2 ... -- ... }
\ (anything after the -- is just a comment)

\ Every local in this list consists of an optional type specification
\ and a name. If there is only the name, it stands for a cell-sized
\ value (i.e., you get the value of the local variable, not it's
\ address). The following type specifiers stand before the name:

\ Specifier   Type     Access
\ W:          Cell     value
\ W^          Cell     address
\ D:          Double   value
\ D^          Double   address
\ F:          Float    value
\ F^          Float    address
\ C:          Char     value
\ C^          Char     address

\ The local variables are initialized with values from the appropriate
\ stack. In contrast to the examples in the standard document our locals
\ take the arguments in the expected way: The last local gets the top of
\ stack, the second last gets the second stack item etc. An example:

\ : CX* { F: Ar F: Ai F: Br F: Bi -- Cr Ci }
\     \ complex multiplication
\     Ar Br f* Ai Bi f* f-
\     Ar Bi f* Ai Br f* f+ ;

\ There will also be a way to add user types, but it is not yet decided,
\ how. Ideas are welcome.

\ Locals defined in this manner live until (!! see below).
\ Their names can be used during this time to get
\ their value or address; The addresses produced in this way become
\ invalid at the end of the lifetime.

\ Values can be changed with TO, but this is not recommended (TO is a
\ kludge and words lose the single-assignment property, which makes them
\ harder to analyse).

\ As for the internals, we use a special locals stack. This eliminates
\ the problems and restrictions of reusing the return stack and allows
\ to store floats as locals: the return stack is not guaranteed to be
\ aligned correctly, but our locals stack must be float-aligned between
\ words.

\ Other things about the internals are pretty unclear now.

\ Currently locals may only be
\ defined at the outer level and TO is not supported.

require search-order.fs
require float.fs

: compile-@local ( n -- ) \ gforth compile-fetch-local
    case
        0       of postpone @local0 endof
        1 cells of postpone @local1 endof
        2 cells of postpone @local2 endof
        3 cells of postpone @local3 endof
        ( otherwise ) dup postpone @local# ,
    endcase ;

: compile-f@local ( n -- ) \ gforth compile-f-fetch-local
    case
        0        of postpone f@local0 endof
        1 floats of postpone f@local1 endof
        ( otherwise ) dup postpone f@local# ,
    endcase ;

\ the locals stack grows downwards (see primitives)
\ of the local variables of a group (in braces) the leftmost is on top,
\ i.e. by going onto the locals stack the order is reversed.
\ there are alignment gaps if necessary.
\ lp must have the strictest alignment (usually float) across calls;
\ for simplicity we align it strictly for every group.

slowvoc @
slowvoc on \ we want a linked list for the vocabulary locals
vocabulary locals \ this contains the local variables
' locals >body ' locals-list >body !
slowvoc !

create locals-buffer 1000 allot \ !! limited and unsafe
\ here the names of the local variables are stored
\ we would have problems storing them at the normal dp

variable locals-dp \ so here's the special dp for locals.

: alignlp-w ( n1 -- n2 )
    \ cell-align size and generate the corresponding code for aligning lp
    aligned dup adjust-locals-size ;

: alignlp-f ( n1 -- n2 )
    faligned dup adjust-locals-size ;

\ a local declaration group (the braces stuff) is compiled by calling
\ the appropriate compile-pushlocal for the locals, starting with the
\ rightmost local; the names are already created earlier, the
\ compile-pushlocal just inserts the offsets from the frame base.

: compile-pushlocal-w ( a-addr -- ) ( run-time: w -- )
    \ compiles a push of a local variable, and adjusts locals-size
    \ stores the offset of the local variable to a-addr
    locals-size @ alignlp-w cell+ dup locals-size !
    swap !
    postpone >l ;

: compile-pushlocal-f ( a-addr -- ) ( run-time: f -- )
    locals-size @ alignlp-f float+ dup locals-size !
    swap !
    postpone f>l ;

: compile-pushlocal-d ( a-addr -- ) ( run-time: w1 w2 -- )
    locals-size @ alignlp-w cell+ cell+ dup locals-size !
    swap !
    postpone swap postpone >l postpone >l ;

: compile-pushlocal-c ( a-addr -- ) ( run-time: w -- )
    -1 chars compile-lp+!
    locals-size @ swap !
    postpone lp@ postpone c! ;

: create-local ( " name" -- a-addr )
    \ defines the local "name"; the offset of the local shall be
    \ stored in a-addr
    create
    immediate restrict
    here 0 , ( place for the offset ) ;

: lp-offset ( n1 -- n2 )
    \ converts the offset from the frame start to an offset from lp
    \ i.e., the address of the local is lp+locals_size-offset
    locals-size @ swap - ;

: lp-offset, ( n -- )
    \ converts the offset from the frame start to an offset from lp and
    \ adds it as inline argument to a preceding locals primitive
    lp-offset , ;

vocabulary locals-types \ this contains all the type specifiers, -- and }
locals-types definitions

: W: ( "name" -- a-addr xt ) \ gforth w-colon
    create-local
    \ xt produces the appropriate locals pushing code when executed
    ['] compile-pushlocal-w
  does> ( Compilation: -- ) ( Run-time: -- w )
    \ compiles a local variable access
    @ lp-offset compile-@local ;

: W^ ( "name" -- a-addr xt ) \ gforth w-caret
    create-local
    ['] compile-pushlocal-w
  does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, ;

: F: ( "name" -- a-addr xt ) \ gforth f-colon
    create-local
    ['] compile-pushlocal-f
  does> ( Compilation: -- ) ( Run-time: -- w )
    @ lp-offset compile-f@local ;

: F^ ( "name" -- a-addr xt ) \ gforth f-caret
    create-local
    ['] compile-pushlocal-f
  does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, ;

: D: ( "name" -- a-addr xt ) \ gforth d-colon
    create-local
    ['] compile-pushlocal-d
  does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, postpone 2@ ;

: D^ ( "name" -- a-addr xt ) \ gforth d-caret
    create-local
    ['] compile-pushlocal-d
  does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, ;

: C: ( "name" -- a-addr xt ) \ gforth c-colon
    create-local
    ['] compile-pushlocal-c
  does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, postpone c@ ;

: C^ ( "name" -- a-addr xt ) \ gforth c-caret
    create-local
    ['] compile-pushlocal-c
  does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, ;

\ you may want to make comments in a locals definitions group:
' \ alias \ immediate
' ( alias ( immediate

forth definitions

\ the following gymnastics are for declaring locals without type specifier.
\ we exploit a feature of our dictionary: every wordlist
\ has it's own methods for finding words etc.
\ So we create a vocabulary new-locals, that creates a 'w:' local named x
\ when it is asked if it contains x.

also locals-types

: new-locals-find ( caddr u w -- nfa )
    \ this is the find method of the new-locals vocabulary
    \ make a new local with name caddr u; w is ignored
    \ the returned nfa denotes a word that produces what W: produces
    \ !! do the whole thing without nextname
    drop nextname
    ['] W: >name ;

previous

: new-locals-reveal ( -- )
    true abort" this should not happen: new-locals-reveal" ;

create new-locals-map ' new-locals-find A, ' new-locals-reveal A,

vocabulary new-locals
new-locals-map ' new-locals >body cell+ A! \ !! use special access words

variable old-dpp

\ and now, finally, the user interface words
: { ( -- addr wid 0 ) \ gforth open-brace
    dp old-dpp !
    locals-dp dpp !
    also new-locals
    also get-current locals definitions locals-types
    0 TO locals-wordlist
    0 postpone [ ; immediate

locals-types definitions

: } ( addr wid 0 a-addr1 xt1 ... -- ) \ gforth close-brace
    \ ends locals definitions
    ] old-dpp @ dpp !
    begin
        dup
    while
        execute
    repeat
    drop
    locals-size @ alignlp-f locals-size ! \ the strictest alignment
    set-current
    previous previous
    locals-list TO locals-wordlist ;

: -- ( addr wid 0 ... -- ) \ gforth dash-dash
    }
    [char] } parse 2drop ;

forth definitions

\ A few thoughts on automatic scopes for locals and how they can be
\ implemented:

\ We have to combine locals with the control structures. My basic idea
\ was to start the life of a local at the declaration point. The life
\ would end at any control flow join (THEN, BEGIN etc.) where the local
\ is not live on both input flows (note that the local can still live in
\ other, later parts of the control flow). This would make a local live
\ as long as you expected and sometimes longer (e.g. a local declared in
\ a BEGIN..UNTIL loop would still live after the UNTIL).

\ The following example illustrates the problems of this approach:

\ { z }
\ if
\   { x }
\ begin
\   { y }
\ [ 1 cs-roll ] then
\   ...
\ until

\ x lives only until the BEGIN, but the compiler does not know this
\ until it compiles the UNTIL (it can deduce it at the THEN, because at
\ that point x lives in no thread, but that does not help much). This is
\ solved by optimistically assuming at the BEGIN that x lives, but
\ warning at the UNTIL that it does not. The user is then responsible
\ for checking that x is only used where it lives.

\ The produced code might look like this (leaving out alignment code):

\ >l ( z )
\ ?branch <then>
\ >l ( x )
\ <begin>:
\ >l ( y )
\ lp+!# 8 ( RIP: x,y )
\ <then>:
\ ...
\ lp+!# -4 ( adjust lp to <begin> state )
\ ?branch <begin>
\ lp+!# 4 ( undo adjust )

\ The BEGIN problem also has another incarnation:

\ AHEAD
\ BEGIN
\   x
\ [ 1 CS-ROLL ] THEN
\   { x }
\   ...
\ UNTIL

\ should be legal: The BEGIN is not a control flow join in this case,
\ since it cannot be entered from the top; therefore the definition of x
\ dominates the use. But the compiler processes the use first, and since
\ it does not look ahead to notice the definition, it will complain
\ about it. Here's another variation of this problem:

\ IF
\   { x }
\ ELSE
\   ...
\ AHEAD
\ BEGIN
\   x
\ [ 2 CS-ROLL ] THEN
\   ...
\ UNTIL

\ In this case x is defined before the use, and the definition dominates
\ the use, but the compiler does not know this until it processes the
\ UNTIL. So what should the compiler assume does live at the BEGIN, if
\ the BEGIN is not a control flow join? The safest assumption would be
\ the intersection of all locals lists on the control flow
\ stack. However, our compiler assumes that the same variables are live
\ as on the top of the control flow stack. This covers the following case:

\ { x }
\ AHEAD
\ BEGIN
\   x
\ [ 1 CS-ROLL ] THEN
\   ...
\ UNTIL

\ If this assumption is too optimistic, the compiler will warn the user.

\ Implementation: migrated to kernal.fs

\ THEN (another control flow from before joins the current one):
\ The new locals-list is the intersection of the current locals-list and
\ the orig-local-list. The new locals-size is the (alignment-adjusted)
\ size of the new locals-list. The following code is generated:
\ lp+!# (current-locals-size - orig-locals-size)
\ <then>:
\ lp+!# (orig-locals-size - new-locals-size)

\ Of course "lp+!# 0" is not generated. Still this is admittedly a bit
\ inefficient, e.g. if there is a locals declaration between IF and
\ ELSE. However, if ELSE generates an appropriate "lp+!#" before the
\ branch, there will be none after the target <then>.

\ explicit scoping

: scope ( compilation -- scope ; run-time -- ) \ gforth
    cs-push-part scopestart ; immediate

: endscope ( compilation scope -- ; run-time -- ) \ gforth
    scope?
    drop
    locals-list @ common-list
    dup list-size adjust-locals-size
    locals-list ! ; immediate

\ adapt the hooks

: locals-:-hook ( sys -- sys addr xt n )
    \ addr is the nfa of the defined word, xt its xt
    DEFERS :-hook
    last @ lastcfa @
    clear-leave-stack
    0 locals-size !
    locals-buffer locals-dp !
    0 locals-list !
    dead-code off
    defstart ;

: locals-;-hook ( sys addr xt sys -- sys )
    def?
    0 TO locals-wordlist
    0 adjust-locals-size ( not every def ends with an exit )
    lastcfa ! last !
    DEFERS ;-hook ;

' locals-:-hook IS :-hook
' locals-;-hook IS ;-hook

\ The words in the locals dictionary space are not deleted until the end
\ of the current word. This is a bit too conservative, but very simple.

\ There are a few cases to consider: (see above)

\ after AGAIN, AHEAD, EXIT (the current control flow is dead):
\ We have to special-case the above cases against that. In this case the
\ things above are not control flow joins. Everything should be taken
\ over from the live flow. No lp+!# is generated.

\ !! The lp gymnastics for UNTIL are also a real problem: locals cannot be
\ used in signal handlers (or anything else that may be called while
\ locals live beyond the lp) without changing the locals stack.

\ About warning against uses of dead locals. There are several options:

\ 1) Do not complain (After all, this is Forth;-)

\ 2) Additional restrictions can be imposed so that the situation cannot
\ arise; the programmer would have to introduce explicit scoping
\ declarations in cases like the above one. I.e., complain if there are
\ locals that are live before the BEGIN but not before the corresponding
\ AGAIN (replace DO etc. for BEGIN and UNTIL etc. for AGAIN).

\ 3) The real thing: i.e. complain, iff a local lives at a BEGIN, is
\ used on a path starting at the BEGIN, and does not live at the
\ corresponding AGAIN. This is somewhat hard to implement. a) How does
\ the compiler know when it is working on a path starting at a BEGIN
\ (consider "{ x } if begin [ 1 cs-roll ] else x endif again")? b) How
\ is the usage info stored?

\ For now I'll resort to alternative 2. When it produces warnings they
\ will often be spurious, but warnings should be rare. And better
\ spurious warnings now and then than days of bug-searching.

\ Explicit scoping of locals is implemented by cs-pushing the current
\ locals-list and -size (and an unused cell, to make the size equal to
\ the other entries) at the start of the scope, and restoring them at
\ the end of the scope to the intersection, like THEN does.


\ And here's finally the ANS standard stuff

: (local) ( addr u -- ) \ local paren-local-paren
    \ a little space-inefficient, but well deserved ;-)
    \ In exchange, there are no restrictions whatsoever on using (local)
    \ as long as you use it in a definition
    dup
    if
        nextname POSTPONE { [ also locals-types ] W: } [ previous ]
    else
        2drop
    endif ;

: >definer ( xt -- definer )
    \ this gives a unique identifier for the way the xt was defined
    \ words defined with different does>-codes have different definers
    \ the definer can be used for comparison and in definer!
    dup >code-address [ ' bits >code-address ] Literal =
    \ !! this definition will not work on some implementations for `bits'
    if \ if >code-address delivers the same value for all does>-def'd words
        >does-code 1 or \ bit 0 marks special treatment for does codes
    else
        >code-address
    then ;

: definer! ( definer xt -- )
    \ gives the word represented by xt the behaviour associated with definer
    over 1 and if
        swap [ 1 invert ] literal and does-code!
    else
        code-address!
    then ;

\ !! untested
: TO ( c|w|d|r "name" -- ) \ core-ext,local
    \ !! state smart
    0 0 0. 0.0e0 { c: clocal w: wlocal d: dlocal f: flocal }
    ' dup >definer
    state @
    if
        case
            [ ' locals-wordlist >definer ] literal \ value
            OF >body POSTPONE Aliteral POSTPONE ! ENDOF
            [ ' clocal >definer ] literal
            OF POSTPONE laddr# >body @ lp-offset, POSTPONE c! ENDOF
            [ ' wlocal >definer ] literal
            OF POSTPONE laddr# >body @ lp-offset, POSTPONE ! ENDOF
            [ ' dlocal >definer ] literal
            OF POSTPONE laddr# >body @ lp-offset, POSTPONE d! ENDOF
            [ ' flocal >definer ] literal
            OF POSTPONE laddr# >body @ lp-offset, POSTPONE f! ENDOF
            -&32 throw
        endcase
    else
        [ ' locals-wordlist >definer ] literal =
        if
            >body !
        else
            -&32 throw
        endif
    endif ; immediate

: locals|
    \ don't use 'locals|'! use '{'! A portable and free '{'
    \ implementation is anslocals.fs
    BEGIN
        name 2dup s" |" compare 0<>
    WHILE
        (local)
    REPEAT
    drop 0 (local) ; immediate restrict
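For illustration only, the frame-offset arithmetic described in the comments above (locals-size grows as locals are pushed, and lp-offset converts an offset from the frame start into an lp-relative one) can be modelled in Python. The cell and float sizes used here are assumptions for the sketch, not Gforth's actual configuration:

```python
def aligned(n, alignment):
    """Round n up to the next multiple of alignment (like 'aligned'/'faligned')."""
    return (n + alignment - 1) // alignment * alignment

def push_local(frame_size, item_size, alignment):
    """Mimic compile-pushlocal-*: align the frame, reserve the item, and
    return (new_frame_size, offset_recorded_for_this_local)."""
    offset = aligned(frame_size, alignment) + item_size
    return offset, offset

def lp_offset(frame_size, offset):
    """Mimic lp-offset: the address of a local is lp + frame_size - offset."""
    return frame_size - offset

# A group { w1 f2 } is pushed right-to-left: the float first, then the cell,
# so the leftmost local ends up on top of the locals stack (offset 0 from lp).
CELL, FLOAT = 8, 8  # assumed sizes for a 64-bit system
size, off_f2 = push_local(0, FLOAT, FLOAT)
size, off_w1 = push_local(size, CELL, CELL)
print(lp_offset(size, off_w1), lp_offset(size, off_f2))  # → 0 8
```

This is only a rough model of the bookkeeping; the real code also emits the `>l`/`f>l` pushes and `lp+!#` adjustments at compile time.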
https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/glocals.fs
This page is intended as a place to suggest features for DreamPie.
DreamPie is a graphical interactive Python shell which is designed to be reliable and fun. Check it out at
Richer function documentation
This is a feature from IPython which is frequently requested. I think that the model should be Eclipse - when you type a paren after a function name it opens a yellow popup window which displays its argument and a bit of documentation, and lets you make it a real window by pressing F2.
Complete module names
This was a bug report by cool-RR:
IPython completes module names. That's a good idea.
bpython does this too.
More by cool-RR: Complete things that aren't defined yet
Imagine I'm writing this function:
def factorial(n):
    import random
    random.whatever()
    return n * factorial(n-1)
I'd want DreamPie to autocomplete both the whatever thing and the use of factorial inside the definition. Probably hard, I know.
As you said, this is really hard and complicated. I don't see this happening. Sorry! Noam
Magic commands
Chris Colbert gave the example of the Ipython's %timeit command.
Shell support
Probably by prefixing with a '!'. Some common commands could work without it.
There shouldn't be a technical problem, as output from processes created via the subprocess module is directed to DreamPie.
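As a rough sketch of how such a '!' prefix could be handled (the function name and behaviour are illustrative assumptions, not DreamPie code):

```python
import subprocess

def maybe_run_shell(line):
    """If the line starts with '!', run the rest in the shell and return its
    output; otherwise return None so normal Python evaluation proceeds."""
    if not line.startswith("!"):
        return None
    result = subprocess.run(line[1:], shell=True,
                            capture_output=True, text=True)
    return result.stdout

print(maybe_run_shell("!echo hello"), end="")  # → hello
```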
Debugging support
IPython provides enhanced tracebacks and pdb support. Should check out what this exactly means and what should be implemented.
Save code history between sessions
An idea by Regev: save the last executed code sections, so that history search will include those too. It's useful if there are lines which are executed many times - for example 'execfile'.
Another suggestion roughly about the same subject, by Per Dalgaard Rasmussen: On exit, ask whether to save the history.
I (Noam) think that we can use the "changed" flag of the text buffer for that - it's pretty standard in applications. We should also add a "recent files" menu. The only difference from standard apps is that we should warn even if the history was saved, and note that although the history was saved all the variables will be lost.
Pasting clipboard data using the middle mouse button
On Windows with the standard interactive Python shell it's very useful to paste clipboard data with a simple mouse right-click.
Since in DreamPie the right mouse button is used by the context menu, it'll be good to have the same feature with the middle mouse button (or the wheel one).
Setting the bottom box area height
The bottom box area has a fixed height at startup. Saving and restoring it would be useful.
Saving window position and size
Setting window position and size every time is a pain in the neck.
Saving and restoring window position and size at startup would be an appreciated feature.
Notify new versions
Let DreamPie check and notify if a new version is available at startup (or programmatically).
Autocomplete keyword arguments
Imagine I'm typing this:
x = sum(my_list, sta
In this case we have the sum function, which takes a keyword argument start. It'd be nice if DreamPie could autocomplete the keyword argument itself. (Ram "cool-RR" Rachum.)
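One way such completion could work, sketched here with a hypothetical scale function (a sketch only; note that C builtins such as sum may not expose an introspectable signature):

```python
import inspect

def keyword_completions(func, prefix):
    """Return the keyword-argument names of func that start with prefix."""
    params = inspect.signature(func).parameters
    return sorted(name for name, p in params.items()
                  if p.kind in (p.POSITIONAL_OR_KEYWORD, p.KEYWORD_ONLY)
                  and name.startswith(prefix))

def scale(values, start=0, step=1):
    return [start + step * v for v in values]

print(keyword_completions(scale, "sta"))  # → ['start']
```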
Behavior of Ctrl+Something not good enough
I often use ctrl+arrow or ctrl+delete to move through or delete a word. In programmer-friendly editors (Like Wing, Eclipse, Aptana, etc.) this works great. In non-programmer friendly editors (like Notepad, IDLE (shamefully), or textareas in browsers), the cursor often jumps too far in one stroke. Unfortunately it happens in DreamPie as well.
Display version in caption bar on Windows
On Windows, DreamPie creates launch shortcuts for each version of Python on the system. This is great, because it makes DreamPie very useful for doing A/B comparisons between versions. It would be even better if the window caption contained the Python version--for example, instead of just "DreamPie", "DreamPie - Python 2.7".
They don't automatically contain the version number? Because on my system, they do, and they should. Can you please post a bug report, with the version of DreamPie you're using? --Noam
Done: -- tlesher
Highlight all occurrences of the word(token) under the caret
Notepad++ has this feature and I miss it a lot. Very helpful for seeing all uses of a variable, catching misspellings, etc. Should be easy to add.
I agree it's a nice feature sometimes, but you know that saying that it's easy to add is something that I'm supposed to say, not you, unless you're going to implement it. --Noam
Allow cycling through history without ctrl modifier
Currently DreamPie requires the 'ctrl' key along with up or down arrows to cycle through history. Perhaps this could be done with arrow keys alone like this:
When up arrow pressed:
- if we're at the first line in the editing window:
- show the previous item in history
- move cursor up one line
and when down arrow pressed:
- if we're at the last line in the editing window:
- show the next item in history
- move cursor down one line
We should retain the current 'ctrl'-based scheme, too. When the editing window contains several lines, the user can use 'ctrl+arrow' to immediately move to next or previous history item without having to first scroll through all those lines.
In my view using arrows alone will make the navigation more fluid. Also most new users will already know it because that's how all popular shells (of any kind) implement this feature. -- Gurry
Thanks for the suggestion! However, I don't think it will work well with multi-line sections. Say you press 'up' and you get a multi-line section. Pressing 'down' will go to the next line, instead of giving you back the last section. I think that up and down should always cancel one another. So I still think that using ctrl is the best solution. -- Noam
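The proposed arrow-key rules could be sketched as a small decision function (names and return values are illustrative, not DreamPie's actual code):

```python
def arrow_key_action(key, cursor_line, total_lines):
    """Decide what an unmodified arrow key should do under the proposal:
    navigate history only when the cursor cannot move any further."""
    if key == "up":
        return "history-previous" if cursor_line == 0 else "cursor-up"
    if key == "down":
        return "history-next" if cursor_line == total_lines - 1 else "cursor-down"
    return "ignore"

# In a three-line edit buffer, 'up' on the first line recalls history,
# but 'up' anywhere else just moves the cursor.
print(arrow_key_action("up", 0, 3))  # → history-previous
print(arrow_key_action("up", 1, 3))  # → cursor-up
```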
Let 'esc' clear the editing window if in the middle of history
Imagine you're going through history items looking for some command and eventually discover it is not there. Now in order to go back to the empty editing window, you have to cycle all the way back. Perhaps it would help in such situations if pressing escape took us back to the end/bottom of the history queue (the empty window that is).
-- Gurry
Again, thanks for your thought and suggestion! You don't need to scroll all the way down - just press ctrl-a (to select all) and the Delete key. I don't think it's worth to add another shortcut just to keep you two keystrokes, especially since most people won't find this feature. -- Noam
Color-coding decorators
It seems that decorators are currently not color-coded. I suggest they should be. (I'd recommend a gold-like color.) -- Ram
Output format handlers
Implement Reinteract-like output handlers to customize the console output. I've seen such a feature in Mono's csharprepl too, where you can hook in a handler. Another quick hack would be to implement a default handler based on a template engine like Jinja, where the output widget is rendered with a template (model-view-controller-like separation). -- Rainer
Snippet templates
Another use for Jinja templates would be a snippet-like shortcut system for accessing a list of templates to format the bound data. -- Rainer
Allow bold fonts
--Rainer
docstring shortcut
A shortcut to show the docstring of an object, like ? in IPython.
Line Numbers!
--Ilia Zaslavsky
Automatic code per python version
In the preferences under the shell tab you are able to set code that automatically executes when an instance of the shell is fired up.
I have different versions of python installed and would like to customize scripts for each version.
A use case is code like "print()" which would fail on Python 2.6 --benjamin
Two behaviours for path auto-completion
I love dreampie's feature to press tab and get a list to auto-complete the path I'm currently typing in. I'd like to see a small tweak, though. Sometimes I have directories with several thousand files which takes extremely long for dreampie to load and put in the auto-completion pop-up. So I suggest the following: When the pop-up list is open, selecting an entry and pressing enter should just fill in that path without following the contents of that directory. Pressing tab on the other hand should fill in that path and scan the contents for further paths that could be followed.
-- Moritz
Clear Output
Very often, I like to clear the interpreter output. Python interpreter provides a Ctrl+L shortcut. In PythonWin, I can select all text and delete. Can we have something like that (Ctrl+L would be better; not making (or having an option for) the output pane read-only is a substitute). Currently, I can to that if I clear all history. But I would like to retain the history and only clear the output. Also, it would be nice to have an option for a standard history file so that users don't need to be prompted to do that for every session.
-- Ravi
You can press ctrl-minus to fold the last output. I think that's good enough. I plan to improve the "recent files" in the history menu, so that you'll be able to press alt-h, 0 and get the last history you saved.
-- Noam
> You can press ctrl-minus to fold the last output. I think that's good enough.
Not really (at least for me). Ctrl-Minus only seems to fold output of one statement. I am referring to starting a fresh slate, visually. In any case, if this is not interesting to you, I can always fix that in my copy. Thanks.
-- Ravi
Portability
It would be awesome if the application could be installed and run from an external drive with either its own portable python install or the native one on the computer. Thank you!
--Jeremiah
Faster history searching using Ctrl-up/Ctrl-down
It would be nice to have a separate buffer keep track of previous commands for the user to call back using Ctrl-up/Ctrl-down. Currently I have to press Ctrl-up multiple times when the same command repeats, in order to search up through history.
For example
>>>show_detail() >>>increase_by_one() >>>increase_by_one() >>>increase_by_one() >>>increase_by_one()
When I type Ctrl-up, it should show increase_by_one(), then the next Ctrl-up should show show_detail() instead of the same increase_by_one() line.
-- Shin Guey
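The requested behaviour amounts to skipping consecutive duplicates when walking history backwards; a minimal sketch:

```python
def history_recall(history):
    """Yield history entries newest-first, skipping consecutive duplicates,
    so a repeated command costs only one Ctrl-up."""
    previous = None
    for entry in reversed(history):
        if entry != previous:
            yield entry
            previous = entry

history = ["show_detail()"] + ["increase_by_one()"] * 4
print(list(history_recall(history)))
# → ['increase_by_one()', 'show_detail()']
```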
Show tab and space characters symbols
Sometimes I mix up leading tabs and spaces. This causes errors when running the python code. It is difficult to determine which code lines have the error.
Showing a transparent tab or space character symbol would be very helpful in this case.
Enable manual removal of elements from history
Sometimes code won't work as intended (typo, indexing issue, logical mistake, etc.) and the attempts to correct it might be many. This clutters the history with all sorts of wrong or incomplete or superfluous code.
It would be nice to have a feature to remove all those code snippets that didn't work from history. It would make the reuse of previous interactive sessions much easier.
I remember that matlab has this feature.
-- Thomas
Collapsed text/code in saved history
When saving history as HTML, I would like to be able to expand/collapse collapsed text/code as in DreamPie itself.
https://wiki.python.org/moin/DreamPieFeatureRequests
Another question that often arises is whether to use raw or cooked files. The answer here is very simple: use cooked files. The only time to use raw files is for OPS (Oracle Parallel Server), where raw devices are a key requirement.
Raw files offer two performance improvements: asynchronous I/O and no double-caching (i.e., caching of data both in the Oracle SGA and in the UNIX file system cache). Quite often, the realized performance gain is relatively small. Some sources say 5-10%, while others wildly claim 50%. There really is no universal consensus on what the real performance gains are for using raw files. However, nearly everyone agrees that raw generally requires a more skilled and well-trained administrative staff, because the standard UNIX file system commands and many backup/recovery suites do not work with raw files. Thus, the administrative headaches alone are reason enough to avoid raw files like the plague.
Again, if you've accepted the prior recommendation for Sun hardware, then there is a clear answer: Use cooked files. Solaris supports asynchronous I/O to both raw and file system data files. The only real penalty is double-caching of data.
If you genuinely believe that you need the performance gain raw files supposedly offer, then I strongly suggest looking at the Veritas file system with its Quick IO feature. Quick IO supports asynchronous I/O and eliminates double-caching. In short, Oracle accesses database files as if they were raw even though the DBA manages them as if they were regular files. Essentially, Quick IO provides a character-mode device driver and a file system namespace mechanism. For more information, I suggest reading the white paper titled "Veritas Quick IO: Equivalent to raw volumes, yet easier." It can be found on Veritas' Web site, under white papers.
The Veritas file system also supports online file system backups, which can be used to perform online incremental database backups. Furthermore, Veritas' online incremental backup is vastly superior to using Oracle's RMAN. The key difference is that Oracle's RMAN must scan all the blocks during an online incremental database backup to see which blocks have changed. RMAN saves magnetic tapes at the expense of time. The Veritas online incremental database backup knows which blocks have changed via its file system redo logs, so it saves both tape space and time. Finally, Veritas offers one of the easiest to manage UNIX file systems and backup/recovery suites available. Unfortunately, Veritas is only available for Solaris and HP-UX.
As another point of reference, I did my last data warehouse using raw files. I also do not own any shares of Veritas stock. And, I honestly do not feel like I am making this recommendation based on any personal prejudices.
http://etutorials.org/SQL/oracle+dba+guide+to+data+warehousing+and+star+schemas/Chapter+3.+Hardware+Architecture/The+Raw+vs.+Cooked+Files+Debate/
AWS Big Data Blog
Querying Amazon Kinesis Streams Directly with SQL and Spark Streaming
Amo Abeyaratne is a Big Data consultant with AWS Professional Services
Introduction
What if you could use your SQL knowledge to discover patterns directly from an incoming stream of data? Streaming analytics is a very popular topic of conversation around big data use cases. These use cases can vary from just accumulating simple web transaction logs to capturing high volume, high velocity and high variety of data emitted from billions of devices such as Internet of things. Most of these introduce a data stream at some point into your data processing pipeline and there is a plethora of tools that can be used for managing such streams. Sometimes, it comes down to choosing a tool that you can adopt faster with your existing skillset.
In this post, we focus on some key tools available within the Apache Spark application ecosystem for streaming analytics. This covers how features like Spark Streaming, Spark SQL, and HiveServer2 can work together on delivering a data stream as a temporary table that understands SQL queries.
This post guides you through how to access data that is ingested to an Amazon Kinesis stream, micro-batch them in specified intervals, and present each micro-batch as a table that you can query using SQL statements. The advantage in this approach is that application developers can modify SQL queries instead of writing Spark code from scratch, when there are requests for logic changes or new questions to be asked from the incoming data stream.
Spark Streaming, Dstreams, SparkSQL, and DataFrames
Before you start building the demo application environment, get to know how the components work within the Spark ecosystem. Spark Streaming receives live input data streams and divides the data into batches, which are then processed by the Spark engine to generate the final stream of batched results. For more information, see the Spark Streaming Programming Guide.
In Spark Streaming, these batches of data are called DStreams (discretized streams), continuous streams of data represented as a sequence of RDDs (Resilient Distributed Datasets), which are Spark’s core abstraction for its data processing model.
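The micro-batching model can be illustrated without Spark at all; the following plain-Python sketch (timestamps and the 1000 ms interval are made up for the example) chops a stream of timestamped records into fixed-interval batches, each of which would then be processed as one unit.

```python
# Group (timestamp_ms, value) records into consecutive fixed-interval
# batches, mimicking how a DStream divides a live stream into RDDs.
def micro_batches(records, batch_interval_ms):
    batches = {}
    for ts, value in records:
        batches.setdefault(ts // batch_interval_ms, []).append(value)
    return [batches[k] for k in sorted(batches)]

records = [(0, "a"), (400, "b"), (1200, "c"), (1900, "d"), (2500, "e")]
print(micro_batches(records, 1000))  # [['a', 'b'], ['c', 'd'], ['e']]
```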
Spark SQL, the SQL-like extension for the Spark API also has a programming abstraction called DataFrames, which are distributed collections of data organized into named columns. In this demo, you are making use of Spark’s ability to convert RDDs into DataFrames in order to present them as temporary tables that can be manipulated using SQL queries.
Sample use case and application design
For the purposes of this post, take a simple web analytics application as an example. Imagine that you are providing a simple service that collects, stores, and processes data about user access details for your client websites. For brevity, let’s say it only collects a user ID, web page URL, and timestamp. Although there are many ways of approaching it, this example uses Spark Streaming and Spark SQL to build a solution because you want your developers to be able to run SQL queries directly on the incoming data stream instead of compiling new Spark applications every time there is a new question to be asked.
Now take a look at the AWS services used in this application. As shown in the diagram below, you write a Python application that generates sample data and feeds it into an Amazon Kinesis stream. Then, an EMR cluster with Spark installed is used for reading data from the Amazon Kinesis stream. The data is read in micro-batches at predefined intervals. Each micro-batch block is accessible as a temporary table where you can run SQL queries on the EMR master node using the HiveServer2 service. HiveServer2 is a server interface that enables remote clients to execute queries against Hive and retrieve the results. At the same time, those micro-batches are stored in an S3 bucket for persistent longer-term storage as Parquet files. These are also mapped as an external table in Hive for later batch processing of workloads.
Create an Amazon Kinesis stream
Use Amazon Kinesis to provide an input stream of data to your application. Amazon Kinesis is a fully managed service for real-time processing of streaming data at massive scale. For your application, the Amazon Kinesis consumer creates an input DStream using the Kinesis Client Library (KCL). For more information, see Developing Amazon Kinesis Streams Consumers Using the Amazon Kinesis Client Library.
If you already have AWS command line tools installed in your desktop, you may run the following command:
aws kinesis create-stream --stream-name mystream --shard-count 1
This creates an Amazon Kinesis stream in your account’s default region. Now, spin up an EMR cluster to run the Spark application.
Create the EMR cluster
You may use the following sample command to create an EMR cluster with AWS CLI tools. Remember to replace myKey with your Amazon EC2 key pair name.
aws emr create-cluster --release-label emr-4.2.0 --applications Name=Spark Name=Hive --ec2-attributes KeyName=myKey --use-default-roles --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge --bootstrap-actions Path=s3://aws-bigdata-blog/artifacts/Querying_Amazon_Kinesis/DownloadKCLtoEMR400.sh,Name=InstallKCLLibs
As the response to this command, you should receive a cluster ID. Note this cluster ID, as you need it when you log in to the master node via SSH. While the cluster starts up, look at the sample code for the Python application.
Download the Python application
To generate sample data, you can use the AWS SDK for Python, also referred to as boto. You can install boto on your client machine.
The following code sample generates some random user IDs and URLs and gathers the current timestamp. Then it feeds each of those as a single record with comma-separated values into your Amazon Kinesis stream. This simulates an Amazon Kinesis producer application. Please make sure the region and kinesisStreamName variables are changed according to your Amazon Kinesis stream created in the earlier step.
import string
import random
import time
from datetime import datetime
from boto import kinesis

def random_generator(size=6, chars=string.ascii_lowercase + string.digits):
    return ''.join(random.choice(chars) for x in range(size))

# connecting to Kinesis stream
region = 'us-east-1'
kinesisStreamName = 'myStream'
kinesis = kinesis.connect_to_region(region)

# generating data and feeding kinesis.
while True:
    y = random_generator(10, "techsummit2015")
    urls = ['foo.com', 'amazon.com', 'testing.com', 'google.com', 'sydney.com']
    x = random.randint(0, 4)
    userid = random.randint(25, 35) + 1200
    now = datetime.now()
    timeformatted = str(now.month) + "/" + str(now.day) + "/" + str(now.year) + " " + str(now.hour) + ":" + str(now.minute) + ":" + str(now.second)
    # building the payload for kinesis puts.
    putString = str(userid) + ',' + 'www.' + urls[x] + '/' + y + ',' + timeformatted
    partitionKey = random.choice('abcdefghij')
    # schema of the input string is now userid,url,timestamp
    print putString
    result = kinesis.put_record(kinesisStreamName, putString, partitionKey)
    print result
Download the Spark Streaming application
Now that you have created an Amazon Kinesis stream and developed a Python application that feeds sample data to the stream, take a look at the main component of this solution, the Spark Streaming application written using Scala code. Download the sample code for this application.
This is an Amazon Kinesis consumer application developed using the Kinesis Client Library for Spark. It captures data ingested through Amazon Kinesis into small batches based on the defined batch interval. Those batches (technically, RDDs) are converted to DataFrames and eventually to temporary tables. The application code presents such a temporary table through the HiveServer2 instance that is started during its runtime, and the temporary table name is passed as an argument to the application.
Additionally, I have included an action such as .saveAsTable("permKinesisTable", Append) to demonstrate how to persist the batches on HDFS by appending every batch into a permanent table. This table is accessible through the Hive metastore under the name permKinesisTable. Instead of HDFS, you may also create an S3 bucket and point to the S3 location.
KinesisWatch.scala needs to be built before it can be deployed on an EMR cluster and run as a Spark application. Use a build tool such as sbt (a simple build tool for Scala) or Maven. Use the build.sbt file, or the .jar file already built and tested on an EMR 4.1.0 cluster, to experiment with this functionality.
Run the Spark Streaming application
Now you have everything you need to test this scenario; take a look at the steps required to get the demo up and running.
Log in to the EMR cluster
Insert the cluster ID noted above into the following AWS CLI command to log in to your cluster. You should also point the command at the location of the EC2 key-pair file allocated to the cluster instances.
aws emr ssh --cluster-id <Your_cluster_ID> --key-pair-file <path/to/myKey.pem>
Run components on the master node
After you are logged in to the master node, download the scripts and jar files from the repository and start running each component in your application.
Python code
Download this sample Python application. You can run this application on the EMR master node to simulate the process of an external application feeding data into Amazon Kinesis. On the master node, you may run the following commands on SSH terminal to start it:
# wget
# python webaccesstraffic_generator.py
Now you may see entries being ingested in your screen, similar to the following:
Sample Spark Streaming application
After the Python simulator has started feeding the Amazon Kinesis stream, the next step is to download the Spark Streaming application that you built and compiled into a .jar file and submit it to the Spark cluster as a job using the commands given below. Open a new SSH terminal to the master node and run this on a separate session:
# wget
# export HIVE_SERVER2_THRIFT_PORT=10002
# spark-submit --verbose --master local[*] --class "org.apache.spark.examples.streaming.KinesisWatch" --jars /usr/lib/spark/extras/lib/amazon-kinesis-client-1.2.1.jar "test-spark-streaming" "myStream" "" "10000" "userid url timestamp" "," "inputkinesis"
Note that inside this Spark application, you are also starting an instance of HiveServer2. This is a server interface that enables remote clients to execute queries against Hive and retrieve the results. The current implementation, based on Thrift RPC, is an improved version of HiveServer and supports multi-client concurrency and authentication. You are allocating a port for this service to start up during runtime by setting the environment variable HIVE_SERVER2_THRIFT_PORT.
This sample application provides you with the ability to pass in as arguments a batch interval (such as 10000ms), a schema string for the incoming data through Amazon Kinesis, a delimiter (such as “,”) and a table name for the temporary table (such as “inputkinesis”). This is something you can change at your discretion in order to make your Spark application more generic and portable between different use cases. See the appendix for descriptions of each argument passed into the application.
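How the schema string and delimiter arguments fit together can be shown with plain Python (this is an illustration, not the Scala application itself; the record below is made up): the schema names the columns, and the delimiter splits each raw Kinesis record into them.

```python
# Map one raw delimited record onto the named columns of a schema string,
# as the application's <schmaString> and <inputDelimiter> arguments describe.
def parse_record(line, schema_string, delimiter=","):
    columns = schema_string.split()
    return dict(zip(columns, line.split(delimiter)))

row = parse_record("1234,www.foo.com/abc,10/18/2015 9:30:05",
                   "userid url timestamp")
print(row["url"])  # www.foo.com/abc
```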
After the streaming application has started, you may see something similar to this on your screen. In the Scala code, the displayed number is the output for “SELECT count(*) FROM tempTableName” command embedded in the code. Because you defined the micro-batch interval as 10000ms, every 10 seconds you should see a count of the records populated into this temporary table.
Testing the Application
The next step is to run a SQL query through a JDBC connection to test the functionality of your demo application. Now the Amazon Kinesis stream is ingested with records by your Python script and the Spark Streaming application is capturing it in 10-second intervals. It also presents those small batches as temporary tables via HiveServer2. How can you connect to HiveServer2 service?
Open a new SSH terminal again to keep things simple and try the following commands. Beeline is a tool that comes bundled with the Hive application stack within Hadoop. It is based on SQLLine, a Java console-based utility for connecting to relational databases and executing SQL commands. The following commands use Beeline to connect to the temporary tables generated as the Amazon Kinesis stream gets populated with data.
# beeline
beeline> !connect jdbc:hive2://localhost:10002
When prompted for a user name, type “spark” and press Enter. Leave the password as blank and press Enter. Now you should be able to run SQL queries in the prompt, as shown below. Note that ‘isTemporary’ is marked true for the “inputkinesis” table.
To access this JDBC connection from outside your cluster, you may use an SSH tunnel to forward the local port to a remote port on your client computer:
ssh -i <keyName> -L 10002:<master-node-dns-or-ip>:10002 hadoop@<master-node-dns-or-ip>
This allows you to access these tables via external tools installed on your laptop. However, please note that these temporary tables are not compatible with Tableau at this stage.
Conclusion
In this post, I discussed how a sample Spark Streaming application can be used to present a micro-batch of data in the form of a temporary table, accessible via JDBC. This enables you to use external SQL-based tools to query streaming data directly on an Amazon Kinesis stream.
This feature comes in handy because it allows you to run SQL queries against recent data (within the batch interval window) and, at the same time, gives you the ability to join that data with persisted previous batches using SQL statements. Because this approach provides a platform to dynamically query your stream, it saves the hassle of having to rewrite and rebuild Spark code every time you need to ask a new question of your data.
By running this on AWS, you have the ability to scale Amazon Kinesis into multiple shards based on your throughput requirements. You can also dynamically scale EMR clusters to improve processing times based on demand.
Once the testing is completed, do not forget to terminate your EMR cluster and delete the Amazon Kinesis stream.
Appendix
Spark Streaming application command line arguments
Usage: KinesisWatch <app-name> <stream-name> <endpoint-url> <batch-length-ms> <schmaString> <inputDelimiter> <tempTableName>

<app-name>         Name of the consumer app, used to track the read data in DynamoDB.
<stream-name>      Name of the Amazon Kinesis stream.
<endpoint-url>     Endpoint of the Amazon Kinesis service (e.g.,).
<batch-length-ms>  Duration of each batch for the temp table view taken directly from the stream, in milliseconds.
<schmaString>      Schema for CSV inputs via Amazon Kinesis. Block within double quotes (e.g., "custid date value1 value2" for a 4-column DataFrame). This has to match the input data.
<inputDelimiter>   Character used to split each row of the stream into fields (e.g., ",").
<tempTableName>    Name of the tempTable accessible via JDBC Hive ThriftServer2 for the duration of the batch.
If you have a question or suggestion, please leave a comment below.
AWS Big Data Support Engineer Troy Rose contributed to this post.
Related:
Visualizing Real-time, Geotagged Data with Amazon Kinesis
https://aws.amazon.com/blogs/big-data/querying-amazon-kinesis-streams-directly-with-sql-and-spark-streaming/
16 January 2013 22:58 [Source: ICIS news]
HOUSTON (ICIS)--
The NAHB/Wells Fargo Housing Market Index (HMI) was unchanged at 47, and components of the index were mixed. The gauge of current sales conditions remained at 51, while the component gauging sales expectations for the next six months fell a point to 49. The component gauging traffic of prospective buyers gained a point to 37.
"Conditions in the housing market look much better now than at the beginning of 2012, and an increasing number of housing markets are showing signs of recovery, which should bode well for future home sales later this year," said Barry Rutenberg, chairman of the NAHB and a home builder from Gainesville, Florida.
Uncertainties in regard to the federal government’s fiscal cliff negotiations dragged on builder confidence, and the future of the mortgage interest deduction could add to the worries in the near future, Rutenberg said.
NAHB Chief Economist David Crowe concurred. “Persistently tight mortgage credit conditions, difficulties in obtaining accurate appraisals and the ongoing stalemate in
The HMI three-month moving average showed gains across all regions, with the west posting a four-point increase to 51 and the South logging a three-point gain to 49. The northeast and m
http://www.icis.com/Articles/2013/01/16/9632501/US-January-builder-confidence-holds-at-six-year-high.html
Here's the scenario
You have an application which is making use of AWS CloudWatch logs. You want a beautifully simple way to parse those logs for errors and send a message to a monitoring Slack channel. We have several log streams (service1, service2, etc.) for two different environments: one for staging and one for production. These environments should send messages to their respective Slack channels.
Using qualifiers
Qualifiers are a way to name the different versions of your code, such as production, staging, etc. AWS Lambda qualifiers are good for dividing environments, as well as for promoting versions of code through staging before promoting them to production. Qualifiers allow for unique triggers, so we attach our staging qualifier to watch a set of streams. For example:
From the image above you can see I've set the qualifier for version 1 of my code to have two CloudWatch log triggers based on different text patterns. Each stream can be turned on and off at the click of a button without redeploying. I find this super cool.
Unfortunately, you can't set environment variables based on a qualifier. Only versioned code will be given static environment variables. However, there is a workaround.
// isNumber is not defined in the original snippet; a simple version:
function isNumber(n) {
  return !isNaN(parseFloat(n)) && isFinite(n);
}

function isNotAnAliasName(context) {
  return isNumber(context.alias) || context.functionName === context.alias;
}

// Add this to your exports.handler
context.alias = context.invokedFunctionArn.split(":").slice(-1)[0];
console.log("Alias: " + context.alias);

// Set a default value
if (isNotAnAliasName(context)) {
  context.alias = "UAT";
}
From there you can namespace your environment variables similar to this:
const slackPostPath = process.env["SLACK_POST_PATH_" + context.alias]; const slackBotUsername = process.env.SLACK_BOT_USERNAME + " " + context.alias;
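Put together, the alias from the ARN selects a namespaced variable. A runnable sketch of that lookup (the env object and hook paths here are made up for the example; in a real Lambda you would read process.env and derive the alias from context.invokedFunctionArn):

```javascript
// Hypothetical environment variables, namespaced per alias.
const env = {
  SLACK_POST_PATH_staging: "/services/T000/B000/staging-hook",
  SLACK_POST_PATH_production: "/services/T000/B000/prod-hook",
  SLACK_BOT_USERNAME: "logbot"
};

// Pick the Slack config that matches the invoked alias.
function slackConfig(alias) {
  return {
    postPath: env["SLACK_POST_PATH_" + alias],
    username: env.SLACK_BOT_USERNAME + " " + alias
  };
}

console.log(slackConfig("staging").postPath); // /services/T000/B000/staging-hook
```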
The code
The code might seem dense, but that is because there are no dependencies. Set up a Lambda with a copy of the linked code and save it. Set up the qualifiers you want along with the appropriate triggers and Slack details, and bam! Log monitoring on the cheap.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/kyleparisi/understanding-aws-lambda-qualifiers-16nc
> On Fri, Mar 2, 2012 at 7:36 AM, Steven D'Aprano <steve at pearwood.info> wrote:
>> Still, I reckon a directive is the right approach.

Why? That's how I do it because I am/was paranoid about compatibility, but surely fixing dicts is important enough that, done right, nobody would object if the semantics of comparison change subtly to allow for unordered container comparison in the "natural" way?

(Is replying to a quoted quote acceptable on mailing lists?)

On Fri, Mar 2, 2012 at 12:56 AM, David Townshend <aquavitae69 at gmail.com> wrote:
> It seems that the problem with any solution based on interpreting repr
> (especially when nothing in know about the object) is that there are just
> too many exceptions. Another approach might be to allow for a custom
> compare function to be defined on doctest. E.g., in the module to be
> tested:

The definition/use of an alternate comparison function needs to be inside the doctests. Two issues:

Suppose we're running the doctests on module A, which defines a different compare function. Module B also defines a different comparison function, and is imported but not run as a doctest. Since both of them did a global set-attribute to set the comparison function, but B did it later, B wins and A's doctests are run under the rules for module B.

Also, don't forget that doctests are quite often run in things that are _not_ python source files. In particular, tutorial-like documentation these days is frequently in the form of reStructuredText.

> import doctest
>
> def _compare(got, expected):
>     return (sorted(eval(got)) == sorted(eval(expected)) or
>             doctest.compare(got, expected))
>
> doctest.usercompare = _compare

This function is wrong in the context of the above discussion. Why sort a dict or set? Worse, why sort a list or tuple?

> The compare function would only need to deal with the idiosyncrasies of
> types actually used in doctests in that module.
Punting it to a user-defined function is nice for _really_ crazy situations, but dicts and sets are not idiosyncratic or in any way exceptional. doctest itself should handle them the way a naive user would expect. -- Devin
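Devin's objection can in fact be met with doctest's real extension point, a custom OutputChecker, rather than the hypothetical doctest.usercompare above. The sketch below (the class name and fallback policy are my own choices) falls back to comparing parsed literals, which makes dicts and sets order-insensitive while still failing for reordered lists and tuples:

```python
import ast
import doctest

class LiteralChecker(doctest.OutputChecker):
    """Accept output that differs textually but parses to an equal literal."""
    def check_output(self, want, got, optionflags):
        # First try doctest's normal textual comparison.
        if doctest.OutputChecker.check_output(self, want, got, optionflags):
            return True
        # Fall back to structural equality of Python literals.
        try:
            return ast.literal_eval(want) == ast.literal_eval(got)
        except (SyntaxError, ValueError):
            return False

checker = LiteralChecker()
print(checker.check_output("{'a': 1, 'b': 2}", "{'b': 2, 'a': 1}", 0))  # True
print(checker.check_output("[1, 2]", "[2, 1]", 0))  # False
```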
https://mail.python.org/pipermail/python-ideas/2012-March/014333.html
As mentioned in , we want to move to a centralizing top-half that unwraps and checks for proper DOM interface chaining of the |this| object before specializing to the bottom half to be stored in a slot on the JSFunction.
At what point do we intend to put the bottom halves into slots on the objects? There doesn't seem to be a convenient time to do so. They are passed in and wrapped by JS_DefineProperties() and JS_DefineFunctions(), and then we do not have clean access to the JSFunctions. Boris, did you have some insight on this that I am missing?
So we have two options here, I think. At least if I understand the JS engine setup for this stuff....
Option 1 is to stop using JS_DefineProperties and JS_DefineFunctions and instead basically reimplement that stuff ourselves, but using function objects with reserved slots that we then stash things in.
Option 2 is to rely on the vp[0] in the top half being the callee function, which had better be our DOM function, right? So it seems like we could just poke at its native to get our bottom half... That assumes there's no weirdness with function proxies and so forth, though, which is something I don't know much about.
Oh, and I guess option 1 _still_ relies on getting the function from vp[0], doesn't it?
Yes, but won't the native in the JSFunction callee always be the centralized top-half native? The whole point was to have the specialized function in a reserved slot, right? Or am I misunderstanding?
Either way, by the time we get to the centralized native, it's way too late to be thinking about how to get the specialized native into a slot. Indeed I expect the top-half to rely upon unpacking a slot from the callee it finds in vp[0], but that doesn't save us from our definition problems.
If we are going to not use JS_DefineProperties(), I should probably also undo the changes I made to it, or better yet not land those changes at all, and just do this first.
So the way I was thinking of this is that we would generate a property spec that has the following information in it:
{ general_getter, specialized_getter_for_this_prop }
The general_getter function pointer would be the same for all the props on an interface. The specialized_getter functions would all be different for the different props.
I'd been thinking about storing the specialized_getter in a slot, but there's no need for that: the JSFunction just has a pointer to it already, right?
Does that make sense?
So in the new scheme JSJitInfo is a misnomer, as it's really more like "super secret dom bottom-half stash that's like a slot, but not really because it has more data in it"? ;)
So we get the callee, ask it for its jitinfo, and then use the bottom half from there. And we don't have addition problems either, since we will have already modified JS_DefineProperties() and JS_DefineFunctions() to accept JitInfo structs.
That's much much simpler :)
That was the thinking, yes. ;)
Created attachment 644081 [details] [diff] [review]
Punch a hole for Binding Codgen to get JSJitInfos back
applies on top of Codegen patch in bug 747287
Created attachment 644083 [details] [diff] [review]
Patch
Applies on top of patch in bug 775289.
Comment on attachment 644081 [details] [diff] [review]
Punch a hole for Binding Codgen to get JSJitInfos back
Review of attachment 644081 [details] [diff] [review]:
-----------------------------------------------------------------
::: js/src/jsfriendapi.h
@@ +313,5 @@
>
> +struct Function {
> + Object base;
> + uint16_t nargs;
> + uint16_t flahs;
flahs?
@@ +1249,5 @@
> bool isConstant; /* Getting a construction-time constant? */
> };
> +
> +static JS_ALWAYS_INLINE const JSJitInfo *
> +JSVAL_TO_JITINFO(jsval v)
const Value&
@@ +1251,5 @@
> +
> +static JS_ALWAYS_INLINE const JSJitInfo *
> +JSVAL_TO_JITINFO(jsval v)
> +{
> + return reinterpret_cast<js::shadow::Function *>(JSVAL_TO_OBJECT(v))->jitinfo;
&v.toObject()
Comment on attachment 644081 [details] [diff] [review]
Punch a hole for Binding Codgen to get JSJitInfos back
Review of attachment 644081 [details] [diff] [review]:
-----------------------------------------------------------------
::: js/src/jsfriendapi.h
@@ +1251,5 @@
> +
> +static JS_ALWAYS_INLINE const JSJitInfo *
> +JSVAL_TO_JITINFO(jsval v)
> +{
> + return reinterpret_cast<js::shadow::Function *>(JSVAL_TO_OBJECT(v))->jitinfo;
How about naming it FUNCTION_VALUE_TO_JITINFO and adding
JS_ASSERT(JS_GetClass(&v.toObject()) == &FunctionClass)
?
Comment on attachment 644083 [details] [diff] [review]
Patch
Review of attachment 644083 [details] [diff] [review]:
-----------------------------------------------------------------
::: dom/bindings/Codegen.py
@@ +960,5 @@
> return "JSPROP_READONLY | " + flags
> return flags
>
> def getter(attr):
> + return ("{(JSPropertyOp)genericGetter, &%(name)s_getterinfo}"
Drop trailing whitespace.
https://bugzilla.mozilla.org/show_bug.cgi?id=773546
Grant Edwards schrieb:
> On 2007-08-29, Thomas Heller <theller@...> wrote:
>> Grant Edwards schrieb:
>>> I'm still battling with numpy/Numeric issues. The basic
>>> problem is that py2exe places all of Numeric's files at the
>>> "root" of the library (directly in the 'dist' directory) rather
>>> than in a Numeric directory (dist/Numeric).
>>
>> You mean .pyd's, do you?
>
> Yes. The problem is that py2exe will try to put multiple
> different files with the same name in the library root
> directory, so the package that owns the "last one written" wins
> and the package the owns the first one written losses.
>
>>> The result is that this changes Numeric's modules into global
>>> modules (what's supposed to be Numeric.foobar is now just
>>> foobar) causing them to override modules with the same name in
>>> other packages.
>>>
>>> The solution I found in the list is to create a directory
>>> named Numeric and move the conflicting Numeric files into it.
>>> That keeps them from overriding other package modules, but it
>>> also hides them from use by the Numeric package.
>>>
>>> In the "normal" library tree, all of Numeric's modules are in
>>> <library>/site-packages/Numeric. Why does py2exe place them in
>>> the library "root" directory?
>>
>> Because people don't like deep directory trees:
>
> Why? They don't really cost anything, and there's information
> in that directory structure that py2exe is throwing away. That
> information is required so that the right files get loaded by
> the right packages. Throwing that information away is breaking
> packages.
>
>> they want a single directory, or, better yet, a single file.
>> Usually.
>
> Wanting a shallow directory tree is fine except that I don't
> think it should take precidence over having a working directory
> tree.
Sure. ;-)
> py2exe is causing breaking in two related ways:
>
> 1) By "deleting" when it overwrites them with a second file of
> the same name.
>
> 2) By placing files at a different place in the search order
> than they need to be.
>
>>> Why not put them in a Numeric directory like they are in the
>>> normal library tree? Wouldn't that avoid all these namespace
>>> collision problems?
>>
>> That may be. A better approach for py2exe might be to rename
>> them into 'Numeric.foobar.pyd'.
> How would the file that's loading the .pyd files know they have
> been renamed by py2exe?
>
Loading .pyds from the dist directory does not use the normal mechanism
anyway - the dist directory is not on sys.path (not by default).
If you open the library.zip file with winzip or a similar utility
you'll see that it has a xxx.pyc or xxx.pyo file in it (in the correct
location relative to the package) for each xxx.pyd file that is in the
dist directory. These xxx.py[co] files contain this code:
<loader code>
# This loader locates extension modules relative to the library.zip
# file when an archive is used (i.e., skip_archive is not used), otherwise
# it locates extension modules relative to sys.prefix.
def __load():
    import imp, os, sys
    try:
        dirname = os.path.dirname(__loader__.archive)
    except NameError:
        dirname = sys.prefix
    path = os.path.join(dirname, '%s')
    #print "py2exe extension module", __name__, "->", path
    mod = imp.load_dynamic(__name__, path)
    ## mod.frozen = 1
__load()
del __load
<loader code/>
In other words, the pyds are loaded with a call to 'imp.load_dynamic',
passing the module name and the .pyd file path.
This code can be found in Lib/site-packages/py2exe/build_exe.py, near line 68.
Just to explain how it works.
Thomas
https://sourceforge.net/p/py2exe/mailman/message/8410703/
On 2016-10-18 14:54, Mark Linimon wrote:
> On Tue, Oct 18, 2016 at 08:49:06AM +0100, Matthew Seaman wrote:
>> Yes, there is a lot of useful stuff in the ports tree to support local
>> ports or even whole local categories of ports. I can't recall now how I
>> learned about all this stuff -- it may well have been just a
>> combination of reading Makefiles and hints dropped on mailing lists. I
>> cannot recall a document describing this stuff anywhere.
>
> I don't believe that there is one.
>
> I'm sure there are N locally-grown solutions out in the wild.
>
> We ought to work together to poll people on what they use.
>
> As for adding the category, I think there's a quick fix, if you don't
> care about building INDEX. Add USE_LOCAL_MK=yes to your Makefile
> invocations, and use the patch below.
>
> Note: I haven't tried this yet, so adding the category to ports/Makefile
> may also be necessary to pacify ports/Mk/bsd.port.subdir.mk (e.g. INDEX.)
>
> Index: ports/Mk/bsd.local.mk
> ===================================================================
> --- ports/Mk/bsd.local.mk (revision 423944)
> +++ ports/Mk/bsd.local.mk (working copy)
> @@ -14,6 +14,8 @@
>  # time should live.
>  #
>
> +VALID_CATEGORIES+= local
> +
>  .endif # !defined(_POSTMKINCLUDED) && !defined(Local_Pre_Include)
>
>  .if defined(_POSTMKINCLUDED) && !defined(Local_Post_Include)

mcl
_______________________________________________
freebsd-ports@freebsd.org mailing list
To unsubscribe, send any mail to "freebsd-ports-unsubscr...@freebsd.org"
https://www.mail-archive.com/freebsd-ports@freebsd.org/msg71009.html
How Has Open Source Helped You Commercially? 96
Slithe asks: "In the past few years, OSS has proven that sharing one's source code can be beneficial to both businesses and their customers. More than a few young programmers are thankful that they were allowed to learn from professional developers by browsing through and hacking on 'enterprise quality' code. My question to developers of commercial OSS is this: Have you, personally, ever benefited from having the source code to your project freely available and downloadable, instead of being kept under lock-and-key? Have you ever fixed a bug in your spare time? Have you ever sought outside help (providing source code snippets) on a particularly nasty problem?"
Hmm (Score:5, Informative)
same argument for presidents (Score:2)
FTFA:
Let's substitute "the presidency" and "the people"
open source got me LAID! (Score:5, Funny)
Re:open source got me LAID! (Score:1, Funny)
I'm a web developer. Mostly PHP, MySQL/PostgreSQL, etc. When I get my nice hefty pay-cheque from whomever I do a project for, I go out and pick up a hooker or two.
;-)
-- if only it was that easy.
:-(
Re:open source got me LAID! (Score:1)
Free Freedom...
Free love!
True story! (Score:1, Funny)
Re:open source got me LAID! (Score:1)
codes snippets != open source (Score:3, Interesting)
Enterprise (Score:5, Funny) [thedailywtf.com]
Excerpt for the lazy:
public class SqlWords
{
public const string SELECT = " SELECT ";
public const string TOP = " TOP ";
public const string DISTINCT = " DISTINCT ";
public const string FROM = " FROM ";
public const string INNER = " INNER ";
public const string JOIN = " JOIN ";
public const string INNER_JOIN = " INNER JOIN ";
public const string LEFT = " LEFT ";
}
Re:Enterprise (Score:3, Informative)
I.e., if you do it right it is an elegant solution to catch spelling errors which otherwise might go unnoticed; if you do it the wrong way you get unreadable code.
Re:Enterprise (Score:3, Funny)
Yeah, I can see why you'd think that approach was much less error prone !
sql = sprintf("%s col1, col2, col3, %s tab1 %s %s tab2 ON tab1.pk=tab2.pk %s col1=%d and col2 in (%s %s fkey %s tab3 %s col4=3);", SELECT, FROM, LEFT, JOIN, WHERE, 1, SELECT, DISTINCT, FROM, WHERE);
res = PQexec(sql);
> if you do it the wrong way you get unreadable code
come on then, show me an improvement
Re:Enterprise (Score:3, Informative)
sql = sprintf(SELECT " col1, col2, col3, " FROM " tab1 " LEFT " " JOIN " tab2 ON tab1.pk=tab2.pk " WHERE " col1=%d and col2 in (" SELECT " " DISTINCT " fkey " FROM " tab3 " WHERE " col4=3);", 1);
Why you can't rely on the SQL logging mechanism I really don't know.
Postgresql will emit all of the sql executed and label is with ERROR if it didn't execute. I doubt another RDBMS can't do the same.
Re:Enterprise (Score:3, Interesting)
Re:Enterprise (Score:2)
Yes, that sure makes a lot of sense! It's so much better doing that than actually checking the error code from the SQL library! And because it's more LOC, you are also more productive!
REPL (Score:2)
Sorry that your language doesn't seem to provide one.
I'm SCO - I benefitted a lot (Score:3, Funny)
Insincerely, Darly McBride
You may be joking, but Microsoft benefitted (Score:4, Insightful)
Re:You may be joking, but Microsoft benefitted (Score:1, Interesting)
Re:You may be joking, but Microsoft benefitted (Score:3, Informative)
So for instance I have profited from the fact that SAP's R/3 software is in a way open source that a registered developer on a SAP R/3 system can not only browse his own code or the code of fellow developers, but also the code SAP provides (very useful for debugging!), and (with a warning that this
Re:You may be joking, but Microsoft benefitted (Score:3, Interesting)
Re:Enterprise (Score:2, Insightful)
1) string a = SELECT + a + FROM + b + WHERE + "param=" + c
2) string a = "SELECT " + a + " FROM " + b + " WHERE param=" + c
But putting it into a class that is completely isolated and doesn't have any methods (and otherwise SELECT will look like SqlWords.SELECT) is indeed insane.
Snippet? (Score:3, Insightful)
Re:Snippet? (Score:2)
Another case where identifying the snippet may not be enough is when the problem is limited to one (or a just a few) platforms, and you don't have access to those platforms. Then you need access to the platform experts.
All of the example above (except APL) are drawn from m
Bullshit! (Score:3, Informative)
Some code (e.g. device driver code) is often extremely difficult to trace and debug, and the cause and effect can often be difficult to tie together. In one case I saw a problem where a device initialisation sequence of less than 20 lines was wrong, but very subtly so. The problem persisted for many weeks. This was cured by a code snippet.
Yup. Timing window and/or reentrancy errors... (Score:2)
I don't have much experience with the former, but I've had to track down a few of the latter, and it *is* satisfying once you find it and fix it.
CPAN modules == $money (Score:5, Interesting)
Not to mention that by releasing it, I get a whole bunch of people to hammer my code and find bugs, so I don't have to. It's a win-win situation!
Of course, since it's all on public display, uploading crappy badly-rating bug-ridden slop would probably have the opposite effect
Re:CPAN modules == $money (Score:5, Interesting)
Unfortunately, much of the closed source stuff I've worked with is crappy poorly-written bug-ridden slop. With the bugs in many closed source apps, I would guess that under the hood there is more of the same. I find that a vast majority of the GNU stuff to be very well written, easy to understand, and relatively bug free. I'm talking "real" GNU stuff, not slop that is GPLed and thrown on sourceforge (I'm not bashing all of sourceforge by any stretch of the imagination, I have stuff there
Personally, I owe my career to open source. I learned the inside out of a kernel, how to program, the whole nine yards. Open source taught me as much or more than college did. College did not get me a career.
Now, I'm going to nitpick the original post, because it seems confusing.
1) In the past few years, OSS has proven that sharing one's source code can be beneficial to both businesses and their customers. OK, pretty much a statement of fact.
2) More than a few young programmers are thankful that they were allowed to learn from professional developers by browsing through and hacking on 'enterprise quality' code. OK, pretty much a statement of fact reinforced by my experience as noted above.
3) My question to developers of commercial OSS is this: Have you, personally, ever benefited from having the source code to your project freely available and dowloadable, instead of being kept under lock-and-key?
Yes. I've gotten job offers from it. Having the source enabled me to fix bugs in things and/or customize them.
4) Have you ever fixed a bug in your spare time? Yup. Even when I was "working".
5) Have you ever sought outside help (providing source code snippets) on a particularly nasty problem?"
I guess this is where I got confused, and by the previous posts, this seems to be the problem.
Open source is _the_ way to go. It actually should be mandatory. Also, I wish it was that way with hardware as well. Even if its a pseudo-schematic, I would like to know how things work. I have some semi-pro audio gear, and they provided pseudo-schematics and I was able to figure out the signal path and what the adjustments did. English text is not anywhere as good as seeing a signal path so I know the chain of events, just like OSS.
answers (Score:3, Informative)
Do you mean me, personally, or do you mean the company I work for? No matter, the answer is yes in either case.
> Have you ever fixed a bug in your spare time?
Yes.
> Have you ever sought outside help (providing source code snippets) on a particularly nasty problem?"
Yes.
Ok, that was easy. Next article.
Re:answers (Score:1)
Open source sure helps (Score:2)
Used gcc and friends as a development environment for turnkey embedded products. Mostly this means just using gcc as a product, but on occasion I've gone into the gcc/binutils code to understand how to get around a compiler limitation/bug.
Used open source as a reference. For example, when I've had problems initialising a d
Yes, no doubt about it (Score:3, Interesting)
Re:Yes, no doubt about it (Score:2)
How has the fact that the code is open and available made it better as opposed to closed sources. Both can be free, but how has the openness of the code benefitted you?
Re:Open source is NOT about profit!!! (Score:5, Informative)
If by "FOSS advocates", you mean "FOSS advocates who still live in Moms basement". The GPL is about freedom, yes, but is not anti money.
Re:Open source is NOT about profit!!! (Score:5, Interesting)
Free software IS about profit (Score:2, Interesting)
It is the belief that knowledge, time and services are greater commodities than just stuff you dig out of the ground and sell because it is shiny.
Otherwise, a lot of corporations are making the wrong bet.
Let's face it: if I do web design for a living, I benefit if more people, worldwide, are making websites and using them because it increases the likelihood that one of those new sales will come my way.
Re:Free software IS about profit (Score:1)
> worldwide, are making websites and using them because it increases the
> likelihood that one of those new sales will come my way.
Only if your assumption/ gamble pays off that you are:
1) Really good
2) Really cheap
3) Really good at SEO or self promotion
Otherwise, your assumption is relying on the kindness of strangers, nicht?
Re:Free software IS about profit (Score:1)
Um... dude? I run a business. In all fairness, if I'm not good at self-promotion, I'm screwed. If I'm not cheap, I'll be beaten. If I'm not good, I'll never be contacted.
So, I will gladly play those assumptions, because that is just the nature of business.
If business were easy, everyone would do it.
Re:Open source is NOT about profit!!! (Score:2)
Re:Open source is NOT about profit!!! (Score:2, Insightful)
It's all about the margins (Score:1)
Open Source has made my life way easier (Score:4, Funny)
* These censored bits brought to you by men in black coats, and my NDA. Enjoy!
Re:Open Source has made my life way easier (Score:3, Funny)
* Sorry, can't tell you why it's censored. Talk to [CENSORED*] if you want more info.
It's a Mad Lib !!!! (Score:3, Insightful)
I work for The President , and we use the open source app SNORT by snagging it's source, modifying it a bit, and th
Yes, but in a different way (Score:5, Informative)
As an example, take a look at the functions in the standard I/O library for C. The various scanf() and prinf() variations use much the same arguments, but each one has them in a different order. There's no rhyme or reason to it, you either have to memorize the order or look it up. Not so with the functions Dan wrote! Part of his planning for a subroutine/function package was deciding what order the arguments would go in, and they were in exactly that order every time. (Many of the routines used either the same set of arguments, or a subset of them.) I was working with him because he'd gone blind from diabetes, and in all the time we worked on that package, he never got the arguments wrong because he'd planned it out ahead of time. In this case, there were only three functions that the average user'd need, and the rest were helpers for them. Still, if anybody needed them, they were there, and easy to use.
Now, imagine if this code were being used in a current OSS project. (Unlikely; not only is it in FORTRAN, the problem it solved had to do with command lines and batch files, mostly on a VAX.) Not only would it be easy to use, it'd be easy for somebody else to check the calls and make sure everything was in the right order. Sanity checks become quicker and there are fewer obscure bugs caused by misordered arguments. He also kept his variable declarations alphabetized, as well as keeping his functions (except main() of course) in alphabetical order. Made it much easier to find the one you wanted, I can assure you.
Re:Yes, but in a different way (Score:2)
IMHO every function call where the function takes more than one parameter should be done by name, not position.
Re:Yes, but in a different way (Score:2)
Oddly enough, that's roughly what that subroutine package did. Instead of having a batch file call a program with a huge list of parameters (most of them set to their default value and having to be in exactly the right order), you'd create a namelist file. In it, you'd list variable names and values, in whatever order you wanted. The namelist reader would set the variables to the right value, not touching any others. There was also a namelist writer that would out
Re:Yes, but in a different way ... architect (Score:2)
Sounds more like a software architect or a software engineer to me?
Re:Yes, but in a different way ... architect (Score:2)
Yes, yes, yes. (Score:5, Interesting)
My career is almost solely attributable to OSS. Of course, I'd like to think I have some talent helping me, too.
:)
I started at Borland, as a Perl jockey, mostly. I got in trouble with customers for not using Delphi to power the Web site. But something about OSS made me feel safe -- I had been very poor before the Borland job, and I didn't like the idea of hanging my career onto products that cost $2000 -- what if I became poor again and couldn't afford the next release? It seemed like a way to lock myself out of my own toolset.
I never became poor again, though. I fell in love with PHP & Linux. I started to specialize in LAMP. For a while I ran some OSS teams at SST, Arzoo, and Actuate. I bought more & more into the idea that there you give away the tools and sell the service. I started doing freelancing. I got a reputation for being the guy who fixes the bugs in apps that have lost their original developers.
I partly got that reputation because I have fixed a lot of other people's products [outshine.com] for free. And when I create a Web site (for myself, for profit), I package up my enhancements and release them [freshmeat.net] to [outshine.com] the [outshine.com] community [outshine.com]. In return, I get calls from recruiters, from people who will pay me $50 for a quick product install, and from people who see my work and want to hire me for big projects. Some of my Web sites have donation buttons, and they actually get used (not as much as I'd like, but still
:)
Anyway, to conclude, by integrating myself into the community, the community has helped me to stay afloat. I can pay my mortgage, and feed my kids. In return, the free products I use to make my living get free patches from me.
My current big freelance project is building the auction for Napa Valley Vintner's charity auction [napavintners.com]. It's a Flash interface, which I didn't make, powered by a PHP backend, which is where I come in. I'm doing something worthwhile, and they're giving me fair pay. I may not have 10,000 customers downloading my product for $29.95, but I do have 10,000 friends who send me big jobs. They know that if I have paying jobs during the week, I'm patching their products during the weekend. It's a good way to make a living.
-Tony
Re:Yes, yes, yes. (Score:2)
It's been great, commercial-ly (Score:5, Funny)
Hoo ah. Tough crowd.
Various ugly HACKS (Score:5, Interesting)
Well, ATI has just as much glue code as Nvidia to tie the binary module to various kernels, and much of the glue code is open. ATI tends to open more of their drivers than Nvidia, including the AGP detection -- maybe the full support, I'm not sure. At any rate, it was broken -- it kept refusing to detect my card as AGP 3.0, and my video card would not work in 2x/4x mode.
So, I found the detection code, commented it out, and hardcoded it as AGP 3.0. I didn't have the knowledge to do it right -- give an option (compile-time, module load time, kernel commandline) to force a particular mode, or figure out why it got the wrong mode in the first place. This hack would obviously break the module on anything but an AGP 3.0 system. But, it worked for me.
I would not have been able to play games on my Linux without this hack. The hack involved would probably never be supplied by a proprietary vendor, and would take a bit more work to make it acceptable for open source -- or for other developers to even notice the problem. But I was able to make it work, for myself, on my own system, and I could not have done that without source code.
And yes, this was a critical bug. I tried other workarounds; they all failed. I'm sure if this bug existed in an entirely closed driver, like the one they distribute for Windows, I would never have been able to see 3D acceleration on my box.
The counter-argument, of course, is that the Windows driver worked fine, because Windows is more popular, and more popular means hardware manufacturers write drivers for Windows, not the other way around. But every now and then, there's some showstopping bug, and I can either dig through the source and hack it (or fix it legitimately), or I can wait for a fix. On closed-source platforms, I just have to wait for the fix.
Of course it has (Score:5, Interesting)
And yes, I've also written patches/worked on OSS projects in my spare time. I'm an OSS developer for several years now and also learned a great deal how to code (and how NOT to code) from several open source projects. On a related side note: if you'd like to see how to manage a project (OSS or not) and how to write high quality software, I really recommend looking at SubVersion.
More interesting question (Score:3, Insightful)
1) I love the stability of RedHat Enterprise Linux and the slower and more careful release schedule, but do not need the tech support- CentOS has been a boon for the organizations I work for.
2) Robust internet services for free running on commodity and inexpensive hardware = less overhead. Who needs a dual xeon 3.0 ghz with registered memory just to run a small DNS or email server? End of lease hardware from tiger direct works great. A 2.4 ghz P4 is still overkill for a lot of things, but for a hundred bucks or so, who can complain.
3) yum in conjunction with RPMs was a godsend for pushing out configurations/software to lab-fulls of identical machines. Simply push out an rpm that requires a package list and voila, yum makes sure that the machines grab those packages and their requirements. This is an oversimplification, but being able to manage several hundred machines with a few keystrokes is a miracle in itself, let alone the fact it's free
and many more
Now the more interesting question, how have businesses you've worked for contributed to open source?
I've often found myself working on a commercial project that depends on some open source code either as a dependency or as the framework for expansion. There are many cases where I've fixed show-stopping bugs or contributed new features that enhanced the OSS project in a non-trivial way.
Every time such a situation crops up, it reminds me that OSS and commercialism are not in as much opposition as some in the industry think.
The free time and hobby interest that many have is a huge part of OSS, definitely, but commercial interest has produced a heaping pile of very real and sometimes previously very expensive code.
OpenSource has and will continue to revolutionize the growth of knowledge and the capabilities of our machines, as well as lower the learning and creation overhead that is required to run a business. Things that used to take gobs of time to setup and maintain and wouldn't even be worth doing can now be done as an afterthought and an extra. Not to say that OSS replaces admins, but over time, as products improve and manage/configure themselves (rpms, etc) admins certainly can focus on other things.
I for one welcome the OSS revolution.
As a bartender? Yes, actually (Score:5, Interesting)
I'm no longer a professional geek. These days I run the night shift at a bar in central Montana.
Amusingly, though, Linux has appeared and helped my bar in the form of a digital jukebox that runs a Linux-based front end.
This thing brings in more cash in a night than our old mechanical CD jukebox did in a week.
The downside is that our net connection seems to die every Monday morning, so I have to show up to deal with that (being "the computer guy").
-l
Re:As a bartender? Yes, actually (Score:1)
1 shutdown the network connection
2 do what ever magic you need to do
3 restart the network connection
Re:As a bartender? Yes, actually (Score:2)
It's "open source" as far as the box being Linux-based. Beyond that? I can't do a damn thing.
-l
OSS has helped me but.. (Score:1)
There is another piece of software that I have used in my CMS project (a client wanted a bespoke
Re:OSS has helped me but.. (Score:2)
That is correct if you choose to redistribute it. If you don't send your code to anyone else, you can just use it, which is what I imagine you're doing anyways with a web app.
The people who replied in the forum you linked to and complained about there not being a license are basically being silly. If he released it as pub
Re:OSS has helped me but.. (Score:1)
Apache's easy licensing is helpful. (Score:2)
I guess we've deployed over 40 Apache servers on everything from zSeries hardware to laptops. Some deployments are extremely ephemeral (usually for training or testing). To have to get a software license for each install would have slowed development, testing and training down to a crawl, and would have added a headc
In Unexpected Ways (Score:3, Interesting)
My company, Sûnnet Beskerming [beskerming.com], has benefited from the OSS model in unexpected ways. In addition to providing a technological base which is infinitely customisable, many products and tools available under OSS-friendly licences allow us to quickly setup sandboxes and other testing environments where we can focus on researching and pursuing high risk (high return) ideas which would be cost prohibitive under commercial licencing.
The OSS approach to openness has also aided us in determining legitimate sources of Information Security threat data that is then distributed via our Free Security Mailing List [skiifwrald.com]. Having the source code at hand allows us to independently verify the reports that we uncover, and from there make an assessment of the relative technical merit of that particular source. This also means that we can more easily identify the gems amongst the sea of reports and risk announcements, allowing us to elevate the weight of what would otherwise be an unknown source.
Free gadgets!! (Score:3, Interesting)
OpenSource has help our Company (Score:1, Interesting)
Open Source Has HURT My Company! (Score:1, Informative)
We weren't OSS zealots that pushed open source as the only way, if that's what you're thinking. We offered Microsoft and Novell produ
Free Software Helpful! (But not everything.) (Score:1)
Some of the software is GPL, namely all the development libraries, and the robot platform usually runs Linux (though you can get Windows on it if you really want). We also use a lot of free software in development, or depend on free libraries. This lets a software development staff of four turn out a lot of useful stuff quickly, and using Linux of course brings down the customer's final cost by a few hundred dollars, as
phpSurveyor in an hour (Score:2)
The next morning, I looked around freshmeat and found phpSurveyor -- grabbing and exploding the tarball to the right directory took about three minutes. Then I spent about 15 minutes setting it up and making changes to the source code to get around quirks of my ISP. I had a survey read
Not 'commercially' (yet) but 'academically' (Score:2)
The open source platform has helped (Score:2)
Worked for me... (Score:2, Interesting)
Fast forward six years. Working on so much open source has gotten me a ton of experience in many different areas of software, and it also landed me a kickass job at a kickass startup who, in turn, uses and contributes to many open so
Allowed me to get ahead of the competition (Score:1)
Sometimes you have to compromise to get things done quickly.
Do you know Google? (Score:1)
https://slashdot.org/story/06/05/04/0133224/how-has-open-source-helped-you-commercially
Easy way to calculate response bias with Python.
In signal detection theory the d-prime index is used as a sensitivity index, but different combinations of hit rates and false alarm rates can lead to the same d-prime value, which means d-prime captures only part of the signal detection space. So an additional index, known as RESPONSE BIAS, is needed, showing the hit/miss tendency or yes/no tendency.
In other words, response bias determines whether someone tends to respond YES or NO more often. Response bias is orthogonal, or unrelated, to d-prime because very different d-primes can be associated with the same bias.
The formula for response bias:
BIAS = -( z(H) + z(FA) ) / 2
where z(H) is z-score for hits and z(FA) is z-score for false alarms.
hit rate H: proportion of YES trials to which subject responded YES = P("yes" | YES)
false alarm rate F: proportion of NO trials to which subject responded YES = P("yes" | NO)
BIAS=0 is considered as NO BIAS.
BIAS>0 is considered as tendency to say NO.
BIAS<0 is considered as tendency to say YES.
Simple example of response bias calculation:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats

# hit rate and false alarm rate
hitP = 22/30
faP = 3/30

# z-scores
hitZ = stats.norm.ppf(hitP)
faZ = stats.norm.ppf(faP)

# RESPONSE BIAS
respBias = -(hitZ + faZ)/2
print(respBias)
OUT: 0.32931292116725636
https://bratus.net/knowledgebase/response-bias-with-python/
Hello,
We are getting the following error message when we try to run PLS.
Error # 6886. Command name: BEGIN PROGRAM
>The External Program could not be loaded.
>This command not executed.
Load library from C:\Program Files\SPSSInc\SPSS16\invokepython.dll failed
I have tried installing the Python version that came on the SPSS 16 cd but that did not work.
Thanks
Warren
Answer by SystemAdmin (532) | Jul 23, 2009 at 03:54 PM
Did you install the PLS module from Developer Central? You need that in order to run the procedure.
If you have already installed it, run the following code to get a more informative error message
begin program.
import spss, PLS
end program.
HTH,
Jon Peck
Answer by SystemAdmin (532) | Jul 23, 2009 at 11:22 PM
Hi Jon,
Thank you for your reply. I ran the code you asked for and here is the output.
GET
FILE='C:\Projects\Jeff 2008\A1860\A1835 Ont Travel Tracking 08\A1860 - USA Middle Markets\usa_mid-stacked.sav'.
DATASET NAME DataSet1 WINDOW=FRONT.
begin program.
import spss, PLS
end program.
>Error # 6886. Command name: begin program
>The External Program could not be loaded.
>This command not executed.
Load library from C:\Program Files\SPSSInc\SPSS16\invokepython.dll failed.
>Error # 6887. Command name: begin program
>External program failed during initialization.
Answer by SystemAdmin (532) | Jul 24, 2009 at 01:49 AM
Did you install the Python plug-in itself? That's a prerequisite for any Python-based modules. If so, you should have an invokepython.dll in your SPSS installation directory.
Regards,
Jon Peck
Answer by SystemAdmin (532) | Jul 28, 2009 at 03:23 PM
Hi,
I installed the Python program that came on the CD with the SPSS program and downloaded a plugin from the SPSS website which had a pry(?) extension, which I placed in the python folder with the others.
Answer by SystemAdmin (532) | Jul 28, 2009 at 08:19 PM
The plugin download is an installer that you have to run. It wouldn't have a pry extension, though.
Answer by CanduraDario (0) | Apr 15, 2014 at 02:38 PM
Hello everyone, I need some help with my Spss "Propensity Score Matching" Analysis.
I managed to download both FUZZY and PSM extensions in my Spss version 20.
Unfortunately, I am still not able to run the analysis. It comes up with the message:
Error # 6886. Command name: begin program
The External Program could not be loaded.
Execution of this command stops.
Load library from InvokePython failed.
Any idea how I can solve the problem? Thanks for your help,
Dario.
https://developer.ibm.com/answers/questions/226814/invokepythondll-failed.html?sort=votes
import "debug/gosym"
Package gosym implements access to the Go symbol and line number tables embedded in Go binaries generated by the gc compilers.
DecodingError represents an error during the decoding of the symbol table.
func (e *DecodingError) Error() string
type Func struct {
	Entry uint64
	*Sym
	End uint64
	Params []*Sym // nil for Go 1.3 and later binaries
	Locals []*Sym // nil for Go 1.3 and later binaries
	FrameSize int
	LineTable *LineTable
	Obj *Obj
}
A Func collects information about a single function.
NewLineTable returns a new PC/line table corresponding to the encoded data. Text must be the start address of the corresponding text segment.
LineToPC returns the program counter for the given line number, considering only program counters before maxpc. Callers should use Table's LineToPC method instead.
type Sym struct {
    Value  uint64
    Type   byte
    Name   string
    GoType uint64
    // If sym is a function symbol, the corresponding Func
    Func *Func
}
A Sym represents a single symbol table entry.
BaseName returns the symbol name without the package or receiver name.
PackageName returns the package part of the symbol name, or the empty string if there is none.
ReceiverName returns the receiver type name of this symbol, or the empty string if there is none.
Static reports whether this symbol is static (not visible outside its file).
type Table struct {
    Syms  []Sym // nil for Go 1.3 and later binaries
    Funcs []Func
    Files map[string]*Obj
    Objs  []Obj
}
NewTable decodes the Go symbol table (the ".gosymtab" section in ELF), returning an in-memory representation. Starting with Go 1.3, the Go symbol table no longer includes symbol data.
UnknownFileError represents a failure to find the specific file in the symbol table.
func (e UnknownFileError) Error() string
UnknownLineError represents a failure to map a line to a program counter, either because the line is beyond the bounds of the file or because there is no code on the given line.
func (e *UnknownLineError) Error() string
Package gosym imports 6 packages (graph) and is imported by 70 packages. Updated 2018-06-08.
https://godoc.org/debug/gosym
a label for your data. Commonly, you might want to predict fraud, customers that will buy a product, or home values in an area.
In unsupervised machine learning, you are interested in clustering data together that isn’t already labeled. This is covered in more detail in the Machine Learning Engineer Nanodegree. However, we will not be going into the details of these algorithms in this course.
In simple linear regression, we compare two quantitative variables to one another.
The response variable is what you want to predict, while the explanatory variable is the variable you use to predict the response. A common way to visualize the relationship between two variables in linear regression is using a scatterplot. You will see more on this in the concepts ahead.
Scatter plots
Scatter plots are a common visual for comparing two quantitative variables. A common summary statistic that relates to a scatter plot is the correlation coefficient commonly denoted by r.
Though there are a few different ways to measure correlation between two variables, the most common way is with Pearson’s correlation coefficient. Pearson’s correlation coefficient provides the:
- Strength
- Direction
of a linear relationship. Spearman’s Correlation Coefficient does not measure linear relationships specifically, and it might be more appropriate for certain cases of associating two variables.
Correlation Coefficients
Correlation coefficients provide a measure of the strength and direction of a linear relationship.
We can tell the direction based on whether the correlation is positive or negative.
A rule of thumb for judging the strength:
Strong: 0.7 \leq |r| \leq 1.0
Moderate: 0.3 \leq |r| < 0.7
Weak: 0.0 \leq |r| < 0.3
Calculation of the Correlation Coefficient
r = \frac{\sum\limits_{i=1}^n(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum(x_i - \bar{x})^2}\sqrt{\sum(y_i - \bar{y})^2}}
It can also be calculated in Excel and other spreadsheet applications using CORREL(col1, col2), where col1 and col2 are the two columns you are looking to compare to one another.
What is the value of the correlation coefficient?
Use the CORREL function.
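Outside a spreadsheet, the same definition can be sketched in Python with NumPy. The data below is toy data for illustration, not the quiz dataset:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson's r from the definition above: sum of products of
    deviations, divided by the product of the deviation norms."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xd, yd = x - x.mean(), y - y.mean()
    return (xd * yd).sum() / (np.sqrt((xd ** 2).sum()) * np.sqrt((yd ** 2).sum()))

# Toy data; equivalent to CORREL(col1, col2) on these two columns.
print(round(pearson_r([1, 2, 3, 4, 5], [2, 4, 5, 4, 5]), 4))
```

In practice `np.corrcoef(x, y)[0, 1]` returns the same value and is the usual shortcut.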
A line is commonly identified by an intercept and a slope.
The intercept is defined as the predicted value of the response when the x-variable is zero.
The slope is defined as the predicted change in the response for every one unit increase in the x-variable.
We notate the line in linear regression in the following way:
\hat{y} = b_0 + b_1x_1
where
\hat{y} is the predicted value of the response from the line.
b_0 is the intercept.
b_1 is the slope.
x_1 is the explanatory variable.
y is an actual response value for a data point in our dataset (not a prediction from our line).
The main algorithm used to find the best fit line is called the least-squares algorithm, which finds the line that minimizes \sum\limits_{i=1}^n(y_i - \hat{y_i})^2.
There are other ways we might choose a “best” line, but this algorithm tends to do a good job in many scenarios.
How Do We Determine The Line of Best Fit?
You saw in the last video, that in regression we are interested in minimizing the following function:
\sum\limits_{i=1}^n(y_i - \hat{y}_i)^2
It turns out that in order to minimize this function, we have set equations that provide the intercept and slope that should be used.
If you have a set of points like the values in the image here:
In order to compute the slope and intercept, we need to compute the following:
\bar{x} = \frac{1}{n}\sum x_i
\bar{y} = \frac{1}{n}\sum y_i
s_y = \sqrt{\frac{1}{n-1}\sum\limits(y_i - \bar{y})^2}
s_x = \sqrt{\frac{1}{n-1}\sum\limits(x_i - \bar{x})^2}
r = \frac{\sum\limits_{i=1}^n(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum(x_i - \bar{x})^2}\sqrt{\sum(y_i - \bar{y})^2}}
b_1 = r\frac{s_y}{s_x}
b_0 = \bar{y} - b_1\bar{x}
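The equations above can be transcribed directly into Python as a sketch (toy data again; in practice a library does this for you):

```python
import numpy as np

def fit_line(x, y):
    """Least-squares slope and intercept via b1 = r * s_y / s_x
    and b0 = ybar - b1 * xbar, as in the formulas above."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    r = np.corrcoef(x, y)[0, 1]              # Pearson correlation
    b1 = r * y.std(ddof=1) / x.std(ddof=1)   # slope: r * s_y / s_x
    b0 = y.mean() - b1 * x.mean()            # intercept
    return b0, b1

b0, b1 = fit_line([1, 2, 3, 4, 5], [2, 4, 5, 4, 5])
print(b0, b1)
```

The result agrees with `np.polyfit(x, y, 1)`, which minimizes the same sum of squared errors.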
But Before You Get Carried Away…
Though you are now totally capable of carrying out these steps….
In the age of computers, it doesn’t really make sense to do this all by hand. Instead, using computers can allow us to focus on interpreting and acting on the output. If you want to see a step by step of this in Excel, you can find that here. With the rest of this lesson, you will get some practice with this in Python.
pip install statsmodels will install the library to your machine.
import pandas as pd
import numpy as np
import statsmodels.api as sm
df['intercept'] = 1
lm = sm.OLS(df['price'], df[['intercept', 'area']])
results = lm.fit()
results.summary()
Here is a post on the need of an intercept in nearly all cases of regression. Again, there are very few cases where you do not need to include an intercept in a linear model.
We can perform hypothesis tests for the coefficients in our linear models using Python (and other software). These tests help us determine if there is a statistically significant linear relationship between a particular variable and the response. The hypothesis test for the intercept isn’t useful in most cases.
However, the hypothesis test for each x-variable is a test of whether that population slope is equal to zero vs. an alternative where the parameter differs from zero. Therefore, if the slope is different than zero (the alternative is true), we have evidence that the x-variable attached to that coefficient has a statistically significant linear relationship with the response. This in turn suggests that the x-variable should help us in predicting the response (or at least be better than not having it in the model).
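To make the slope test concrete, here is a rough sketch of the t-statistic that appears in a regression coefficient table, computed by hand with NumPy on made-up data (the dataset and the two-variable setup are invented for illustration):

```python
import numpy as np

# Made-up data for the sketch.
x = np.array([1., 2., 3., 4., 5.])
y = np.array([2., 4., 5., 4., 5.])

b1, b0 = np.polyfit(x, y, 1)                      # least-squares slope and intercept
resid = y - (b0 + b1 * x)                         # residuals from the fitted line
n = len(x)
s = np.sqrt((resid ** 2).sum() / (n - 2))         # residual standard error
se_b1 = s / np.sqrt(((x - x.mean()) ** 2).sum())  # standard error of the slope
t = b1 / se_b1                                    # test statistic for H0: slope = 0
print(round(t, 4))
```

A large |t| (small p-value) is the evidence against the "slope is zero" null hypothesis described above.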
The Rsquared value is the square of the correlation coefficient.
A common definition for the Rsquared variable is that it is the amount of variability in the response variable that can be explained by the x-variable in our model. In general, the closer this value is to 1, the better our model fits the data.
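A quick numerical check of both claims, on made-up data: R-squared computed from the "variability explained" definition equals the squared correlation coefficient in simple linear regression.

```python
import numpy as np

# Made-up data for the check.
x = np.array([1., 2., 3., 4., 5.])
y = np.array([2., 4., 5., 4., 5.])

b1, b0 = np.polyfit(x, y, 1)
pred = b0 + b1 * x
ss_res = ((y - pred) ** 2).sum()          # unexplained variability
ss_tot = ((y - y.mean()) ** 2).sum()      # total variability
r2 = 1 - ss_res / ss_tot                  # proportion of variability explained
print(np.isclose(r2, np.corrcoef(x, y)[0, 1] ** 2))
```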
Many feel that Rsquared isn't a great measure (which is possibly true), but I would argue that using cross-validation can assist us with validating with any measure that helps us understand the fit of a model to our data. Here, you can find one such result on why an individual doesn't care for Rsquared.
Housing Analysis
import numpy as np
import pandas as pd
import statsmodels.api as sm
df = pd.read_csv('./house_price_area_only.csv')
df.head()
1. Use the documentation here and the statsmodels library to fit a linear model to predict price based on area. Obtain a summary of the results, and use them to answer the following quiz questions. Don't forget to add an intercept.
df['intercept'] = 1
lm = sm.OLS(df['price'], df[['intercept', 'area']])
results = lm.fit()
results.summary()
Regression Carats vs. Price
import numpy as np
import pandas as pd
import statsmodels.api as sms
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('./carats.csv', header=None)
df.columns = ['carats', 'price']
df.head()
1. Similar to the last notebook, fit a simple linear regression model to predict price based on the weight of a diamond. Use your results to answer the first question below. Don't forget to add an intercept.
df['intercept'] = 1
lm = sms.OLS(df['price'], df[['intercept', 'carats']])
results = lm.fit()
results.summary()
2. Use scatter to create a scatterplot of the relationship between price and weight. Then use the scatterplot and the output from your regression model to answer the second quiz question below.
plt.scatter(df['carats'], df['price']);
plt.xlabel('Carats');
plt.ylabel('Price');
plt.title('Price vs. Carats');
HomesVCrime
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.datasets import load_boston
import matplotlib.pyplot as plt
%matplotlib inline
boston_data = load_boston()
df = pd.DataFrame()
df['MedianHomePrice'] = boston_data.target
df2 = pd.DataFrame(boston_data.data)
df['CrimePerCapita'] = df2.iloc[:, 0]
df.head()
The Boston housing data is a built in dataset in the sklearn library of python. You will be using two of the variables from this dataset, which are stored in df. The median home price in thousands of dollars and the crime per capita in the area of the home are shown above.
1. Use this dataframe to fit a linear model to predict the home price based on the crime rate. Use your output to answer the first quiz below. Don't forget an intercept.
df['intercept'] = 1
lm = sm.OLS(df['MedianHomePrice'], df[['intercept', 'CrimePerCapita']])
results = lm.fit()
results.summary()
2. Plot the relationship between the crime rate and median home price below. Use your plot and the results from the first question as necessary to answer the remaining quiz questions below.
plt.scatter(df['CrimePerCapita'], df['MedianHomePrice']);
plt.xlabel('Crime/Capita');
plt.ylabel('Median Home Price');
plt.title('Median Home Price vs. CrimePerCapita');
## To show the line that was fit I used the following code from
##
## It isn’t the greatest fit… but it isn’t awful either
import plotly.plotly as py
import plotly.graph_objs as go
# MatPlotlib
import matplotlib.pyplot as plt
from matplotlib import pylab
# Scientific libraries
from numpy import arange,array,ones
from scipy import stats
xi = arange(0,100)
A = array([ xi, ones(100)])
# (Almost) linear sequence
y = df['MedianHomePrice']
x = df['CrimePerCapita']
# Generated linear fit
slope, intercept, r_value, p_value, std_err = stats.linregress(x,y)
line = slope*xi+intercept
plt.plot(x,y,'o', xi, line);
plt.xlabel('Crime/Capita');
plt.ylabel('Median Home Price');
pylab.title('Median Home Price vs. CrimePerCapita');
Recap
In this lesson, you learned about simple linear regression. The topics in this lesson included:
- Simple linear regression is about building a line that models the relationship between two quantitative variables.
- Learning about correlation coefficients. You learned that this is a measure that can inform you about the strength and direction of a linear relationship.
- The most common way to visualize simple linear regression is using a scatterplot.
- A line is defined by an intercept and slope, which you found using the statsmodels library in Python.
- You learned the interpretations for the slope, intercept, and Rsquared values.
Up Next
In the next lesson, you will extend your knowledge from simple linear regression to multiple linear regression.
http://tomreads.com/2018/03/10/simple-linear-regression/
Add arbitrary paths to the Maya Python environment in the
userSetup.py file.
userSetup.py is a Python file (not a module) which gets automatically executed on Maya startup.
userSetup.py can live in a number of locations, depending on the os and environment variables.
When Maya starts, it will execute the contents of the userSetup file. Adding Python paths here will allow it to find modules:
import sys
sys.path.append("/path/to/my/modules")
This will make Python module files in '/path/to/my/modules' available for import using the standard
import directive.
For more advanced setups, the
site module can do the same using the
addsitedir() function.
site.addsitedir() supports .pth files which configures multiple paths in one go.
For example, three folders of unrelated Python could be arranged like this:
python_files
|
+---- studio
|       + module1.py
|       + module2.py
|
+---- external
        |
        +---- paid
        |       + paidmodule.py
        |
        +---- foss
                + freemodule.py
Using sys.path directly you'd have to add python_files/studio, python_files/external/paid and python_files/external/foss manually. However you could add a .pth file to the root of python_files that looked like this:
studio
external/paid
external/foss
and call this in userSetup:
import site
site.addsitedir("/path/to/python_files")
and you'll get all of the paths in one go.
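Outside Maya, the mechanism can be sketched with a throwaway directory. The subdirectory names mirror the example above; the temp root and the .pth filename are invented for the demo:

```python
import os
import site
import sys
import tempfile

# Build a root with the three module folders from the example.
root = tempfile.mkdtemp()
for sub in ("studio", os.path.join("external", "paid"), os.path.join("external", "foss")):
    os.makedirs(os.path.join(root, sub))

# A .pth file lists one path per line, resolved relative to the
# directory the .pth file lives in.
with open(os.path.join(root, "modules.pth"), "w") as f:
    f.write("studio\nexternal/paid\nexternal/foss\n")

# addsitedir adds root itself plus every existing path listed in its *.pth files.
site.addsitedir(root)
print(os.path.join(root, "studio") in sys.path)
```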
https://riptutorial.com/maya/example/24602/using-usersetup-py
3.4 merge window is closed
Posted Apr 1, 2012 15:15 UTC (Sun) by zdzichu (subscriber, #17118)
[Link]
Posted Apr 1, 2012 15:44 UTC (Sun) by Zenith (subscriber, #24899)
[Link]
Legit?
Posted Apr 1, 2012 15:55 UTC (Sun) by corbet (editor, #1)
[Link]
OTOH, this is not legit. Neither are this or this.
UUIDs for all!
Posted Apr 1, 2012 21:18 UTC (Sun) by jzbiciak (✭ supporter ✭, #5246)
[Link]
the gconf registry
Posted Apr 2, 2012 2:17 UTC (Mon) by abartlet (✭ supporter ✭, #3928)
[Link]
Posted Apr 3, 2012 13:29 UTC (Tue) by jengelh (subscriber, #33263)
[Link]
Why, this has some merit even if not legit. (For certain values of merit.) After all, people are already using e.g. /dev/disk/by-uuid/. Might as well turn the symlinks into real block device nodes :-)
Posted Apr 1, 2012 16:15 UTC (Sun) by gmaxwell (subscriber, #30048)
[Link]
Posted Apr 1, 2012 21:32 UTC (Sun) by jzbiciak (✭ supporter ✭, #5246)
[Link]
I'm a little confused. We already have two sets of libraries on most 64-bit systems. (At least, the ones I use do.) This adds a third set, actually.
I happen to like the idea of something like this, personally. Pointer-heavy programs blow up in size on 64-bit machines with little to show for it if they don't actually need 2+ gigabytes. (I say 2GB, because that's the more usual cutoff for x86 32-bit programs.) I've only superficially reviewed specifics of x32, though, so I can't say for certain whether I like their approach overall. What warts I did see mainly seem to be driven by practical issues, so it's hard for me to argue against them. It looks promising.
I've wanted a "small mode 64-bit" environment for awhile--that is, 32-bit pointers, but otherwise you get all the other goodies such as native 64 bit arithmetic when you need it, more registers to play with, and a modernized calling convention.
Of course, you could always rewrite your pointer-heavy application to manage memory manually and maintain "pointers" as offsets relative to the application heap. But, you lose out on a ton of language help here making it work, and debuggers won't know what you're up to either. Sure, in C++ you can make a type that overrides operator* and friends, but every abstraction is leaky, and that one would be leakier than most.
Posted Apr 1, 2012 21:49 UTC (Sun) by gmaxwell (subscriber, #30048)
[Link]
I don't know what you're running— but Fedora runs fine without having any 32-bit libraries at all. I know that debian was seriously behind on the initial x86_64 transition but I assume that they would have caught up by now.
Of course, there are programs that actually need less than 2 gigabytes— but perhaps fewer than you think because memory usage is a runtime requirement for most non-trivial programs. I like the fact that my browser no longer crashes due to running out of VM when I have many hundreds of tabs open, thank you!
(And I like the fact that when I want to work on larger data sets than the author of the software expected I usually don't have to waste hours porting the software to 64-bit anymore...)
There is savings— but whats the set of pointer-heavy programs which are large enough that the savings of halving pointer sizes matters but which are small enough that the loss of virtual memory space isn't a material restriction? And is that set big enough to offset the constant cost of yet another copy of all the common shared libraries in memory the moment you open something that can make use of more memory (like a browser)?
(and— I fully expect that the cost of yet another copy of all the libraries will be enough that a lot of things which really should be build as 64 bit to be built as x32— but I suppose we'll see).
Of course you could rewrite your uses-lots-of-memory application to manage memory more intelligently or do its own swapping. But just like the pointer compression that often makes a mess— and its development work that isn't likely to happen.
I too see the advantages of having a small memory model, but I think the realities of shared libraries don't make it a realistic tradeoff on a whole system level. I'll be happy to have my negative expectations disproven.
Posted Apr 1, 2012 22:06 UTC (Sun) by jzbiciak (✭ supporter ✭, #5246)
[Link]
Compilers, for one thing, are quite pointer heavy. Granted, recent features like LTO can really blow up the memory usage. (I hear GCC recently trimmed the footprint for LTOing Firefox from monumental 8GB to a merely staggering 3GB.) But for my own work, I don't remember ever having a compile fail due to exhaustion of the virtual address space on a 32-bit machine. I have, however, had compiles fail due to exhausting physical memory, when trying to build a GNU toolchain some years back on a PC 7300. That was a bit different.
Interpreters and simulators are also both pointer heavy, but not necessarily memory heavy. I run a ton of interpreted code (Perl scripts, mostly), and none of that really benefits from the large address space. My perl scripts are either moving files around, or streaming information through. I also run (and occasionally write) instruction set simulators. (We make new processors at work.) Lots of pointers there, especially function pointers.
As for what system I'm running: Ubuntu at home, RHEL and SLES at work. All three have 32-bit libraries installed. At home, it's for convenience--I have some 32-bit binaries kicking around. At work, it's because we have many, many binary-only packages that are 32-bit. Even though we may have upgraded 64-bit versions available also, we often need to carry multiple versions around for long-running projects that fix on a tool version. And some have no 64-bit version at present. So, yeah, x32 would add a third set of libraries on those systems.
Posted Apr 2, 2012 5:05 UTC (Mon) by JoeBuck (subscriber, #2330)
[Link]
If you're an architect for a distro, you might want to spend a few cycles thinking about how you will eventually accommodate it.
Posted Apr 2, 2012 5:51 UTC (Mon) by jzbiciak (✭ supporter ✭, #5246)
[Link]
Posted Apr 2, 2012 5:52 UTC (Mon) by jzbiciak (✭ supporter ✭, #5246)
[Link]
Posted Apr 2, 2012 7:24 UTC (Mon) by HelloWorld (guest, #56129)
[Link]
Posted Apr 3, 2012 13:32 UTC (Tue) by jengelh (subscriber, #33263)
[Link]
Posted Apr 3, 2012 16:51 UTC (Tue) by dlang (✭ supporter ✭, #313)
[Link]
Posted Apr 3, 2012 17:02 UTC (Tue) by jengelh (subscriber, #33263)
[Link]
Posted Apr 3, 2012 17:10 UTC (Tue) by dlang (✭ supporter ✭, #313)
[Link]
Posted Apr 2, 2012 11:21 UTC (Mon) by farnz (guest, #17727)
[Link]
For those of us with MIPS experience, this is a lot like o32/n32/n64 ABIs. The established i386/x86-32 ABI is equivalent to o32. The amd64/x86-64 ABI is equivalent to n64; x32 is an attempt to add an n32 equivalent to x86-64.
For those without MIPS experience; o32 is the legacy ABI for 32-bit only processors, and is only used on systems that can't run n64 binaries. n64 is the full fat 64-bit ABI. n32 is equivalent to n64 (and interworking between the two ABIs is simple, with care over pointers coming from the n64 to n32 world), but with 32-bit pointers instead of 64-bit pointers, and is reasonably common as a result.
Posted Apr 2, 2012 10:21 UTC (Mon) by ballombe (subscriber, #9523)
[Link]
Actually, initial Debian x86_64 releases only included the 64bit libraries and almost no 32bit libraries, so certainly system can run without 32bit libraries at all.
Posted Apr 2, 2012 11:13 UTC (Mon) by jzbiciak (✭ supporter ✭, #5246)
[Link]
Then again, the machines at work are part of a heterogeneous compute farm (mixture of Linux and Solaris, 32-bit and 64-bit), so it's not too surprising. We also have plenty of binary-only 32-bit apps kicking around.
(All that said, our newest SLES systems don't seem to have quite as many 32-bit compat libraries installed. I believe we're all being nudged in the direction of full 64-bit environments. I still have a 32-bit machine under my desk, though.)
Posted Apr 2, 2012 11:44 UTC (Mon) by paulj (subscriber, #341)
[Link]
Posted Apr 2, 2012 13:02 UTC (Mon) by james (subscriber, #1325)
[Link]
Multiprocess web-browsers
Posted Apr 2, 2012 16:12 UTC (Mon) by gmatht (guest, #58961)
[Link]
Personally, I usually prefer an application to crash rather than force my entire system into swap-death. There are a number of processes that have no justification for allocating even 1GB, but feel the need to emulate a "while(1){malloc(1)}". x32 could be quite a nice arch for my 2GB netbook, since it doesn't really have memory or CPU to waste, and perhaps also for some VMs I run (though I am not sure x86 is actually faster than x32/x64 code when running in a VM).
Posted Apr 2, 2012 3:39 UTC (Mon) by kevinm (guest, #69913)
[Link]
Posted Apr 2, 2012 3:50 UTC (Mon) by jzbiciak (✭ supporter ✭, #5246)
[Link]
I did not realize that. And, sure enough, I tried it with a simple program and was able to allocate 4GB. Consider me better informed.
elysium:/tmp$ cat alloc.c
#include <stdio.h>
#include <stdlib.h>
int main()
{
int alloc = 0;
while (malloc(4096))
alloc++;
printf("Allocated %lld bytes\n", (unsigned long long)alloc * 4096ull);
return 0;
}
elysium:/tmp$ gcc -m32 alloc.c
elysium:/tmp$ file a.out
a.out: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15, not stripped
elysium:/tmp$ ./a.out
Allocated 4282732544 bytes
Minimal DoS
Posted Apr 3, 2012 19:41 UTC (Tue) by geuder (subscriber, #62854)
[Link]
#include <stdlib.h>
int main()
{
while (malloc(4096)) {}
return 0;
}
Posted Apr 3, 2012 19:55 UTC (Tue) by jzbiciak (✭ supporter ✭, #5246)
[Link]
Posted Apr 4, 2012 10:23 UTC (Wed) by niner (subscriber, #26151)
[Link]
Posted Apr 4, 2012 10:29 UTC (Wed) by jzbiciak (✭ supporter ✭, #5246)
[Link]
Posted Apr 5, 2012 7:07 UTC (Thu) by geuder (subscriber, #62854)
[Link]
Of course, if the goal were to forbid memory consumption over a certain limit.
But my goal would be to slow down the memory hog just enough such that the overall system remains responsive. That should be possible in a multi-tasking system. (I have 4GB of RAM and 5 GB of swap, more than enough to run the 32 bit binary without fatally impacting the rest of the system. The 64 bit binary needs to be killed at some point, so setting a ulimit of ~ 6 GB virtual memory might be appropriate. But having the system unresponsive right away when it's started is clearly suboptimal.)
Maybe I could do that with cgroups and freezing, I have not looked into it now. My naive expectation would just have been that this is already been taken care of in a major distro.
Posted Apr 5, 2012 7:11 UTC (Thu) by geuder (subscriber, #62854)
[Link]
> $ ulimit -a | grep virt
> virtual memory (kbytes, -v) 4861680
Not sure why it did not work, no time to investigate it now (given that every attempt will turn this (production) machine unusable for a couple of minutes).
Posted Apr 1, 2012 16:50 UTC (Sun) by grobian (guest, #83608)
[Link]
markus@x4 ~ % g++ -mx32 -w -O3 -ffast-math -march=native tramp3d-v4.cpp
markus@x4 ~ % ./a.out --cartvis 1.0 0.0 --rhomin 1e-8 -n 20
...
Time spent in iteration: 5.19397
markus@x4 ~ % ldd ./a.out
linux-vdso.so.1 (0xffbff000)
libstdc++.so.6 => /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.0/x32/libstdc++.so.6 (0xf76a1000)
libm.so.6 => /libx32/libm.so.6 (0xf7312000)
libgcc_s.so.1 => /libx32/libgcc_s.so.1 (0xf768a000)
libc.so.6 => /libx32/libc.so.6 (0xf6fb2000)
/libx32/ld-linux-x32.so.2 (0xf7595000)
markus@x4 ~ % ll ./a.out
1572324 Apr 1 18:31 ./a.out
(64bit default, slightly slower and the binaries are bigger)
markus@x4 ~ % g++ -w -O3 -ffast-math -march=native tramp3d-v4.cpp
markus@x4 ~ % ./a.out --cartvis 1.0 0.0 --rhomin 1e-8 -n 20
...
Time spent in iteration: 5.22383
markus@x4 ~ % ll ./a.out
1634176 Apr 1 18:32 ./a.out
x16 ABI coming up
Posted Apr 2, 2012 2:47 UTC (Mon) by ncm (subscriber, #165)
[Link]
Posted Apr 4, 2012 11:05 UTC (Wed) by mads (subscriber, #55377)
[Link]
Can you have several programs occupying 4GB of virtual memory each if you have enough RAM for it?
Posted Apr 4, 2012 11:51 UTC (Wed) by Jonno (subscriber, #49613)
[Link]
Posted Apr 4, 2012 12:36 UTC (Wed) by khim (subscriber, #9252)
[Link]
Posted Apr 5, 2012 15:11 UTC (Thu) by Jonno (subscriber, #49613)
[Link]
Posted Apr 6, 2012 15:25 UTC (Fri) by slashdot (guest, #22014)
[Link]
Linux is a registered trademark of Linus Torvalds
http://lwn.net/Articles/490033/
Setting Up Ektron
You can set up your Ektron website in the following ways. You decide which configuration is the best fit for your particular situation.
Use Ektron’s approval process and scheduled publishing of content to manage when content goes live.
To set up this configuration:, and the development/staging site is.\Workarea\applicationAPI.aspand
dev.example.com\Workarea\applicationAPI.asp.
uploadedimages/and
uploadedfiles/folders IIS virtual directories that point to the same physical directory.
To test and QA an upgrade, make a temporary copy of the site’s display layer on a separate server connected to the development/staging database.
If the development/staging database is the master, back it up before upgrading.
To set up this configuration:, and the development/staging site is.
uploadedimages/and
uploadedfiles/folders IIS virtual directories point to the same physical directory.
To set up this configuration:
To set up this configuration:\Workarea\applicationAPI.aspand
dev.example.com\Workarea\applicationAPI.asp.
uploadedimages/and
uploadedfiles/folders IIS virtual directories point to the same physical directory.
When your project is ready to be tested in house, move the site to a staging server. When the project is ready to go live, move the site to a production (live) server. You can use the same database for all environments. Back up that database often to keep it safe. Alternatively, create a separate database for each server.
To create new environments:
C:/cmsstageand/or
C:/cmsproduct. to
c:/cmsstageand to
c:/cmsproduct.
c:/assetcatalogand
c:/assetlibraryfolders to the other servers.
web.config. Then, update the database connection information so that it points to the new databases.
See Configuring Internet Server Certificates (IIS 7).
siteroot/web.configfile.
<add key="ek_UseSSL" value="false" /> <add key="ek_SSL_Port" value="443" />
ek_UseSSLto true.
WARNING! If
ek_UseSSL is true, but you did not install the certificate to the Web Server, you cannot log into Ektron.
ek_SSL_Portto
443(unless you specified another SSL port).
<add key="WSPath" value=" name/site name/Workarea/ServerControlWS.asmx" />
Edit the
<wsHttpBinding>/<security> element so it looks like this:
<security mode="Transport"> <transport clientCredentialType="None" proxyCredentialType="None" realm=""> </transport> </security>
Ektron’s
web.config file lets you control many key functions of your content management system. When you install Ektron,
web.config is placed into
webroot/siteroot.
If your server is currently running another .NET application, you must merge that
web.config file with this one. To distinguish Ektron’s tags, they begin with
ek_ and reside within the
<appSettings> tags of the
web.config file.
The following sections show the settings in the
web.config file.
Analytics
For SQL Server—Use this connection string to define an SQL server.
ektron.DbConnection
System.Data.SqlClient
NOTE: This value can be blank if you are using Windows authentication.
NOTE: This value can be blank if you are using Windows authentication.
IMPORTANT: After changing any database settings, you must stop and restart the Ektron Windows Service. See also: Handling Background Processing Functions with the Ektron Windows Service.
ek_appPathvariable. By default,
ek_appPathis set to
webroot/siteroot/workarea/. So, by default, this folder is set to
webroot/siteroot/workarea/images/application/.
ek_appPathvariable. By default,
ek_appPathis set to
webroot/siteroot/workarea/. So, by default, this folder is set to
webroot/siteroot/workarea/Xslt.
false. Setting it to
trueuses the functionality from 9.00 and earlier versions. (Blog subjects will display but will not allow changes to be saved without setting the value to
true.)
ek_RedirectToLoginURLkey sends the user from a forum page to a login page and back to the previous page.
For example, a user tries to reply to a forum post but is not logged in. The user is sent to the login page, then returned to the original page.
For example, you visit a community group’s page and click Private Message Admin. You are directed to the private message screen. When you click post, you return to the community group’s page. For additional information, see ActiveTopics.
See also: Creating User-Friendly URLs with Aliasing.
.aspx,.htm,.html. By default, the list contains
.aspx. See also: Creating User-Friendly URLs with Aliasing.
NOTE: You can enter several extensions. Each extension must begin with a period, and the last extension must be followed by a comma (,).
NOTE: This key has been removed from the
web.config file. However, you can still use this key by adding it between the
<appSettings> tags. For example,
<add key=”ek_TreeModel” value=”0”> changes the Workarea folder tree to legacy.
ek_sitePathpath is prefixed to this location. Only change this value if you want to move the location of the xml files relative to the Web root.
These images appear before a user logs in, so cannot be stored in the database. Update as needed. Their location is set in the
ek_appImagePath variable. See also: ek_appImagePath.
SMTP server configuration
See Enabling Email Notification.
See Updating web.config to Use SSL.
Active Directory Server Configuration
See Using Active Directory with Ektron.
NOTE: This setting only works if
ek_UserMenuType is set to zero (0).
See also: Enabling/Disabling Support for Multiple Language Content.
<img src=””…/>and
<href….references point to server named here instead of the local one.
Machine Translation
Lets you enter the path to the Google Translation Service API key. See also: Enabling Machine Translation.
ek_appPathvalue. By default,
ek_appPathis set to
webroot/CMS400Min. So, by default, this folder is set to
webroot/CMS400Min/assets.
NOTE: Users can upload any amount of files. The system handles them 4 at a time.
[ek_cmsversion]/webhelp
Change this path if you install help files on a local server. See also: Installing Help Files on a Local Server.
false. If set to
true, when a user inserts a quicklink, Ektron inserts a special link instead of a quicklink. A special link determines the correct quicklink to use when a site visitor clicks it. For example, a user adds a content block to folder A. A quicklink to that content is
a.aspx?id=10. Later, if an administrator changes the folder’s template but doesn’t update the quicklink within the content block, the quicklink is broken. To avoid this problem, enable link management.
Page 1 of 2
[First Page] [Previous Page] [Next Page] [Last Page]
NOTE: The above text changes depending on the page you are viewing.
RetError.aspx page.
<table> tags create the border. If the border looks wrong or inappropriate, change the setting to div. If you do, <div> tags are used to draw the border instead of <table> tags. This change typically solves the problem.
1—Error: log errors.
2—Warning: log errors and warnings.
3—Information: log errors, warnings and informationals.
4—Verbose: Everything is logged.
IMPORTANT: Ektron has discontinued new development on its eCommerce module. If you have a license to eCommerce, you will continue to receive support, but if you need to upgrade, contact your account manager for options.
See Conducting eCommerce.
WARNING! Do not change the default currency or measurement system after your eCommerce site is live.
See also: Managing Multimedia Assets.
<add verb="*" path="*.png" type="URLRewrite.StaticFileHandler, Ektron.Cms.URLRewriter" />
This section explains how to migrate your website to Ektron, as follows.
If you can browse the starter site and it works properly, Ektron is properly installed.
Best Practice
Keep a working version of the starter site to help you debug problems. For example, if you encounter errors on your site, try to reproduce on the starter site. If you can, that may indicate a problem with the installation. If you cannot, the installation is probably OK and an external factor is causing the problem.
NOTE: You can use the Site Setup utility to perform these tasks by choosing Start > Programs > Ektron > CMS400 > Utilities > Site Setup. See Installing a Site.
If you're using Windows 8 or 2012, press the Windows key + Q, then enter CMS400 Site Setup. Right click and choose Run as Administrator.
siteroot/workarea folder into your site folder. These files operate the workarea, library, and content functions.
web.config file installed to the Ektron site root directory. In that file, update the <ConnectionString> tags to point to your server, database, user, and pwd.
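Exact tag names vary by release, but the update looks something like the following sketch (the server, database, user, and password values here are placeholders, not real values; verify the tag spelling against your own web.config):

```xml
<!-- Sketch only: substitute your own server, database, user, and pwd -->
<ConnectionString>
  server=MYSQLSERVER;database=CMS400Min;user=sa;pwd=YourPassword;
</ConnectionString>
```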
NOTE: If you are using SSL, web.config settings are explained in Setting Up SSL.
After creating the folders, assign permissions and
Best Practice
Limit permissions for the “Everyone” group, as this gives every user permissions to content. Similarly, limit the “Everyone” group’s inclusion in the workflow (a core element of Ektron, workflow lets you set up a sequence of approvers who control the publication of content to your website) to restrict which users can publish content.
Best Practice
Because dynamic templates include URL parameters, give each main landing page and other important pages a static tag. This makes the link easier to remember if you need to provide it to someone.
For instance, each main landing page from your home page could use a static tag. Then, as you go deeper into that section, subsequent pages use dynamic tags.
NOTE: All images and files must be uploaded and inserted into the content separately.
After installing Ektron, it is easy to create another site. While creating the new site, you can create a sample or minimal site and database. To create a site for your content, you typically install a minimal site and database, then create your Web page templates. Later, add users and content.
If you're using Windows 8 or 2012, press the Windows key + Q, then enter CMS400 Site Setup. Right click and choose Run as Administrator.
CMS400Min.sln.
At this point, you can build the project and log in. If you cannot log in because you have not set up the license key, use the builtin account: by default, the username is builtin and password is builtin.
IMPORTANT: You should only use the builtin account temporarily. As soon as possible, you should insert the license key and log in under a user name assigned in Ektron.
To learn about creating templates and using server controls, see Ektron Server Controls. (A server control uses API language to interact with the CMS and Framework UI to display the output. A server control can be dragged and dropped onto a Web form and then modified.)
Ektron’s multi-site support lets you set up and manage several websites under one CMS. (The multi-site support feature does not support multiple databases.) You manage content in the additional sites the same way you work with content in the root site. You log into a root site then begin editing content in an additional site. Regardless of which site you are using, you can use the library to insert common hyperlinks, images, files, and quicklinks.
IMPORTANT: Place any file (such as an XSLT (Extensible Stylesheet Language Transformations) file) that needs to be shared among sites in a multi-site environment in a virtual folder. Also, you cannot create a quicklink within content, a collection (a list of Ektron content links for display on a Web page), menu, and so on to a form that resides in another site.
- All sites reside on the same server.
- Each site has a multi-site license key. To purchase additional licenses, contact Ektron sales.
Advantages of multi-site configurations:
In the Workarea, sites appear in Ektron’s folder structure with a globe icon.
A folder to which a production domain is assigned is a domain folder. Links to content in a domain folder are activated via
linkit.aspx, which redirects to the appropriate domain name using the appropriate template for the folder or content.
Best Practices
IMPORTANT: Do not remove your root site!
There are 2 ways to install multi-site support. (The automatic setup is easy to use and minimizes issues.)
IMPORTANT: Before creating a multi-site configuration, you must have installed an Ektron website. All installed folders must remain in that site. The original site cannot have virtual folders. Also, you cannot nest a multi-site under another IIS site.
C:\Program Files (x86)\Ektron\CMS400vxx\Utilities\MultiSiteInstall\Multisite.exe.
NOTE: Site folders must reside within the site root folder.
IMPORTANT: Production and staging URLs must be unique across multi-sites. In other words, in a multi-site configuration, one site's production or staging URL cannot be the same as another site's production or staging URL.
NOTE: This text is adapted from Microsoft’s IIS help.
IIS lets you create multiple websites on a single server.
Adding a website to a server requires careful preparation before running the Website Creation Wizard. Consider these recommendations.
If you use a non-standard TCP port number to identify a new website for special situations (such as a private website for development/testing), select a TCP port number above 1023. In this way, the number does not conflict with well-known port numbers assigned by the Internet Assigned Numbers Authority. (For more information about IANA and port assignments, see
List of TCP and UDP port numbers.)
To organize home directories for multiple websites on one server, create a top-level directory for all home directories, then subdirectories for each site.
You can create a home directory
You can also create virtual directories that map to physical directories. For more information, see “Setting Home Directories” and “Using Virtual Directories” in IIS help.
IIS provides 2 methods for adding a new website.
iisweb.vbs command-line script.
world.episerver.com/ektron/.
To add a site, use the following syntax:
appcmd add site /name: string /id: uint /physicalPath: string /bindings: string
The variable name (a string) is the site name, and the variable id (a uint) is the unsigned integer that you want to assign to the site. name and id are the only variables that are required when you add a site in Appcmd.exe.
NOTE: If you add a site without specifying values for the bindings and physicalPath attributes, the site will not be able to start.
The variable physicalPath (a string) is the path of the site content in the file system.
The variable bindings (a string) contains information that is used to access the site, in the form
protocol/IP_address:port:host_header. A website binding is the combination of protocol, IP address, port, and host header. For example, a binding of
http/*:85: enables a website to listen for HTTP requests on port 85 for all IP addresses and domain names (also known as host headers or host names). On the other hand, a binding of
http/*:85:marketing.contoso.com enables a website to listen for HTTP requests on port 85 for all IP addresses and the domain name
marketing.contoso.com.
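Putting the pieces together, a hypothetical invocation that creates a site with the marketing.contoso.com binding described above (the site name, ID, and physical path here are made up for illustration):

```shell
appcmd add site /name:marketing /id:2 /physicalPath:"C:\inetpub\marketing" /bindings:http/*:85:marketing.contoso.com
```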
To add a website
See also: Appcmd.exe (IIS 7).
Load Balancing has 2 purposes:
To enable load balancing, set up several servers that include the same files.
IMPORTANT: The physical path to the Ektron website must be the same on all load balanced servers. Also, sticky sessions must be enabled.
Purchase load balancing equipment to evenly distribute content requests among servers. Then, whenever an image or file is uploaded, regardless of the Web server the user is working on, the asset (an external file, such as a Microsoft Word document or image, stored in one of these Ektron siteroot folders: assets, privateassets, uploadedfiles, and uploadedimages; an asset can be managed like native Ektron content) is replicated on all servers.
The client browser is unaware that more than one server is involved. All URLs point to a single website. The load balance software resolves them.
Ektron provides different strategies for load balancing library images and files and DMS (Document Management System; Ektron's way of managing assets such as Microsoft Office files and other types of files) assets. See also: eSync in a Load Balanced Environment.
Library load balancing is important when your configuration consists of 2 or more websites that share one database. When uploaded, library files are saved to the site root folders
uploadedfiles and
uploadedimages.
To support load balancing, library files on all servers must be identical. To maintain this state, whenever a user uploads a library item, it is copied to the corresponding folder on other servers in the configuration.
C:\Program Files (x86)\Ektron\EktronWindowsService40\Ektron.ASM.EktronServices40.exe.config using a word processor such as Notepad.
LibraryLoadBalanced property to 1.
AssetsLoadBalanced property to 1.
LoadBalServerCount property to the number of servers in your load balance cluster.
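Assuming these properties are ordinary appSettings keys in the Ektron.ASM.EktronServices40.exe.config file (a sketch; the server count of 2 is an example):

```xml
<!-- Sketch: library and asset load balancing across a 2-server cluster -->
<add key="LibraryLoadBalanced" value="1" />
<add key="AssetsLoadBalanced" value="1" />
<add key="LoadBalServerCount" value="2" />
```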
See also: Handling Background Processing Functions with the Ektron Windows Service.
Asset load balancing is important when your configuration consists of 2 or more websites that share one database. To balance requests to work with assets across multiple servers, Ektron ensures that each server in the configuration has a copy of every asset.
So, after setting up asset load balancing, any asset added to one server is copied to the corresponding folder on the other servers in the configuration. There is no limit to the number of servers that can be load balanced.
To set up load balancing for assets, follow these steps on all servers in the load balance configuration.
/siteroot folder, open the
AssetManagement.config file using a word processor such as Notepad.
LoadBalanced element to 1.
C:\Program Files (x86)\Ektron\EktronWindowsservice40\
Ektron.ASM.EktronServices40.exe.config
LibraryLoadBalanced property to 1.
LoadBalServerCount property to the number of servers in your load balance configuration.
Prerequisites
- You must be a member of the Administrators group to access the Load Balancing screen.
- Your Ektron license key contains a load balance component
Asset and Library files may become out-of-date or lost due to equipment failures, power outages, or other events. Your load balance software may be able to sync these files.
Ektron also provides a screen which ensures that all files in the DMS Assets folders and the Library Files and Images folders are identical across the servers in the LB configuration. In addition, the Refresh screen ensures that the contents of the
siteroot\Templates and
siteroot\Workarea folders are identical across servers.
NOTE: For load balancing refresh to work properly, open Port 8732 on load balanced servers.
To refresh load balanced files:
IMPORTANT: If you click Start and the screen quickly returns and files do not refresh, or if a server is missing from the status list, check Load Balancing settings in
Ektron.ASM.EktronServices40.exe.config.
Prerequisites
- You must be a member of the Administrators group to access the Load Balancing screen.
- Your Ektron license key contains a load balance component
To check the load balancing status:
The installation automatically sets up user permissions based on data collected during setup. However, if you have issues with user permissions, this section describes how to install manually.
NOTE: If you are using SQL Authentication, you only need to set up the SQL user. If you are using Windows Authentication, you need to set up IUSR and an IIS_WPG or Network Service user.
NOTE: Before doing this, review your users and their permissions. Adjust as necessary for your configuration. Also, if you use Windows Authentication and all users are domain users (and the database administrator wants it this way), you may not have to perform this step.
C:\Program Files (x86)\Ektron\CMS400vnn\Utilities\SiteSetup\Database\cms400_permissions.sql. (nn represents the release number)
[MACHINENAME or DOMAINNAME\USERNAME] with your domain name, backslash (\), and ASPNET (the ASP.NET machine account). For example,
[ws10080\ASPNET].
NOTE: If you are using Microsoft Windows 2003 Server or Microsoft Windows Vista, the user is
IIS_WPG. For example,
[ws10080\IIS_WPG]. If you are using Microsoft Windows 2008 Server, the user is
Network Service.
[ws10080\IUSR_ws10080]. Click Execute Query.
Microsoft’s SMTP (Simple Mail Transport Protocol) service can be set up to send an email that notifies a user when a task (such as approving a content block) was performed or needs to be performed. This section explains how to enable email notification in Ektron.
To process email, Ektron uses CDOSYS. Using the Simple Mail Transport Protocol (SMTP) and Network News Transfer Protocol (NNTP) standards, CDOSYS enables Windows applications to route email and USENET-style news posts across multiple platforms. CDOSYS lets authors create and view sophisticated emails using HTML and data sources.
NOTE: If CDOSYS is not installed on the SMTP email server, it tries to use the CDONTS mail server protocol.
For CDOSYS to work, set up the SMTP server on your Ektron server or a remote system that sends and receives email. It is good practice to run SMTP on a server separate from your Ektron server. However, your Ektron server must relay email messages to your SMTP server.
NOTE: To access an SMTP server on a local or remote system, consult your organization's email administrator.
"ek_SMTPServer" value="localhost" "ek_SMTPServer" value="127.0.0.1" "ek_SMTPServer" value="myname"
"ek_SMTPServer" value="smtp.example.com" "ek_SMTPServer" value="example.com"
Use this article to configure SMTP in IIS7: Configuring SMTP E-mail in IIS 7.
Next, configure Ektron to use SMTP.
siteroot/web.config file.
<!-- SMTP Server configuration --> <add key="ek_SMTPServer" value="localhost" /> <add key="ek_SMTPPort" value="25" /> <add key="ek_SMTPUser" value="" /> <add key="ek_SMTPPass" value="" /> <add key="ek_SMTP_EnableSsL" value="" />
ek_SMTPServer value.
ek_SMTPPort to the port your system will access to retrieve email. In most cases, the port is 25. If that is not the case, consult your organization's email administrator.
ek_SMTPUser to the username that is set up for the SMTP server to send and receive email. Typically, the username is an email address, such as:
"ek_SMTPUser" value="yourname@example.com"
This retrieval of email is based on how basic authentication is set up for you. You do not need a username when using a local SMTP server. Check with your system administrator for details. If you are using a remote system for accessing email, you must provide an authenticated username before you can send or receive email.
ek_SMTPPass to the password set up for the SMTP server to send and receive email. This password is based on basic authentication. Ektron only accepts encrypted passwords.
C:\Program Files (x86)\Ektron\CMS400vxx\Utilities\EncryptEmailPassword.exe. The Encrypt Utility dialog appears.
web.config file's
"ek_SMTPPass" value.
ek_SMTP_EnableSsL to true.
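Pulling the steps together, the finished block might look like this sketch for a hypothetical remote server (the server name, username, and encrypted password are placeholders):

```xml
<!-- SMTP Server configuration: example values only -->
<add key="ek_SMTPServer" value="smtp.example.com" />
<add key="ek_SMTPPort" value="25" />
<add key="ek_SMTPUser" value="yourname@example.com" />
<add key="ek_SMTPPass" value="[encrypted value produced by EncryptEmailPassword.exe]" />
<add key="ek_SMTP_EnableSsL" value="true" />
```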
NOTE: If you do not see this option, enable it through your server's Roles, or by adding it as a Windows feature.
/workarea/ServerControlWS.asmx file.
NOTE: To access the features for the ServerControlWS.asmx file:
1. Right click the Workarea folder and select Switch to Content View.
2. Locate the ServerControlWS.asmx file.
3. Right click the file and select Switch to Features View.
4. Open the IP Address and Domain Restrictions feature.
If you want to use Gmail's SMTP server (smtp.gmail.com), use TLS encryption and port 587.
You cannot use implicit SSL. This is because .Net Framework does not support implicit SSL which, by default, uses port 465. See also: SmtpClient.EnableSsl Property.
When submitting content to an approval process, if you get an error message listed below, it is generated by the SMTP server on which you set up the mail system, not by Ektron.
Ektron's automated system sends email to users when an action has been, or needs to be, performed. See also: Customizing Ektron email with Tokens.
Email is generated when any of the following content actions takes place.
To be notified of these actions, the following must be set:
See also: General Tab.
See also: Managing Users and User Groups.
The Tasks feature also has automatic email notification. See Setting Up Task Types and Categories.
Ektron can send email notification to users, informing them that actions have taken place or are requested of them. For example, a content contributor receives an email that the contributor's content was published. These emails are stored in resource files, where each email consists of one string for the subject and one for the body. To learn about editing the resource file, see Translating the Workarea.
Each message is called in the presentation layer by its message title. Ektron does not support HTML email, however the message text is fully customizable.
The body of an email can include tokens, located between @ symbols. Ektron replaces them with the information for that instance of the email. For example, @appContentTitle@ in the following sentence is replaced with the email’s title.
You can customize the emails, move the tokens, add text, rewrite and reorganize.
Carriage Return/Line Feeds are represented by @appCRLF@. These cause the email to move down one line. For example:
Thank you!
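The token mechanism amounts to plain string substitution. The following Python sketch (the function name and sample values are illustrative, not Ektron's implementation) shows how @token@ placeholders and @appCRLF@ behave:

```python
def render_email(template, values):
    """Replace @token@ placeholders with their values;
    @appCRLF@ becomes a line break."""
    for token, value in values.items():
        template = template.replace("@" + token + "@", value)
    return template.replace("@appCRLF@", "\n")

body = 'Content "@appContentTitle@" has been approved.@appCRLF@Thank you!'
print(render_email(body, {"appContentTitle": "Home Page"}))
# Content "Home Page" has been approved.
# Thank you!
```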
Ektron email tokens are specialized for the type of email message you need to send.
NOTE: You must be logged in to see the changes.
Click <a href=@appInviteId@>here</a> to accept.
NOTE: You must be logged in to see the changes.
You can insert these membership tokens into the confirmation message.
The list shows tokens you can use to customize email messages. When the email is sent, the corresponding description replaces the token.
NOTE: You must be logged in to see the changes.
The following messages are used if basic workflow is applied to content. To view the messages with advanced workflow, see Notifying Users of Advanced Workflow Activities.
Message Title: content changes approved.
Message Text: Content changes have been approved.
Message Title: content has been changed.
Message Text: Content changes have been made.
Message Title: approval request declined.
Message Text: Content approval request declined.
Message Title: content deletion approved.
Message Text: Deletion of content has been approved.
Message Title: content has been deleted.
Message Text: Content has been deleted.
Message Title: request for approval.
Message Text: Request for content approval.
In addition to automatic email, Ektron lets you email a user or user group from many screens. An email icon next to a user or group name or on the toolbar indicates your ability to do this. Screens in the following features support instant email.
When you click one or more user/group names then the toolbar's email icon, a screen appears.
NOTE: The email software must be configured for your server. See Enabling Email Notification.
When the email screen appears, the following information is copied from Ektron into the email.
This section describes how to log in and out, restrict login attempts, and manage passwords.
Prerequisites
Prerequisite
To log into an Ektron site:
If you are using an Ektron sample site, you can use any of 3 standard users that demonstrate Ektron’s flexible user-permissions model.
- Username: admin; Password: ????; Permissions: All
NOTE: When Ektron is installed, you are prompted to change the admin user's name and password.
- Username: jedit; Password: jedit; Permissions: Basic (for example, add/edit content, manage library files, and so on)
- Username: jmember; Password: jmember; Permissions: Read-only permission to private content
Ektron can lock out a user after 5 unsuccessful attempts to log into one computer. You control login security via the
ek_loginAttempts element in the
siteroot/web.config file.
Possible values for ek_loginAttempts:
If a user unsuccessfully tries to log in more than the specified number of times, an error appears: The account is locked. Please contact your administrator. After that happens, even if the user enters the correct password, the user is locked out.
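Assuming the key takes the standard appSettings form (a sketch; check the exact spelling in your web.config), the five-attempt behavior described above corresponds to:

```xml
<!-- Lock an account after 5 failed login attempts; -1 disables lockout -->
<add key="ek_loginAttempts" value="5" />
```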
NOTE: You can change the error message text in the resource file. See also: Translating the Workarea.
When an account is locked out, the Account Locked field is checked on the Edit User screen.
To unlock the account, an administrator user (or a user assigned to the User Admin role) accesses the Edit User screen and unchecks the box. At this point, the user can login.
NOTE: To unlock all users, open siteroot/web.config and set ek_loginAttempts to -1.
You can use the Account Locked field to manually lock a user out of Ektron.
That user cannot log in until either you uncheck the box or open siteroot/web.config and set ek_loginAttempts to -1.
You can change the images used for the login and logout buttons. To do so:
Workarea\images\application.
web.config file in your website’s root directory.
<add key="ek_Image_1" value="btn_close.png" /> <add key="ek_Image_2" value="btn_login.png" /> <add key="ek_Image_3" value="btn_login_big.png" />
may appear as a new tab. You can change this behavior by turning off tabs within the browser.
This section explains various aspects of managing passwords.
WARNING! Use the builtin user only to correct a bad or expired license key. It is not designed for regular Ektron operations, such as editing content.
The builtin user is an emergency user to use if you cannot log into Ektron. The builtin user can log in to Ektron whether or not Active Directory or LDAP (Lightweight Directory Access Protocol) is enabled.
If you log into the Workarea as the builtin user, you can access only the following screens on the Settings tab.
If the builtin user password was changed and you don’t know it, you cannot log in. In this case, use the BuiltinAccountReset.exe utility, which resets the username/password to builtin/builtin. This utility is located in
C:\Program Files (x86)\Ektron\CMS400versionnumber\Utilities.
The builtin username and password are entered during installation. You can change them on Ektron's setup screen.
Prerequisite
You are a member of the Administrators group.
If you use the Workarea or the API to add a CMS or membership user, or if you change an existing user's password, Ektron enforces a security policy. By default, the policy enforces these criteria:
You can modify the criteria by editing the Regex Expression tab on the Application Setup screen. See also: Password Regex Tab.
IMPORTANT: This policy is new as of Ektron Release 9.10. If you upgrade from an earlier version, this policy does not affect existing users' passwords.
Ektron has a security feature that forces an administrator or user with the Commerce Admin role to change the password at least every 90 days. This feature is only enabled if the
ek_ecom_ComplianceMode key in the site’s
web.config file is set to
true.
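Assuming the key takes the standard appSettings form, a sketch of enabling compliance mode in the site's web.config:

```xml
<!-- Enables the 90-day password expiration and related compliance rules -->
<add key="ek_ecom_ComplianceMode" value="true" />
```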
If such a user goes 85 days without changing the password, a dialog appears upon log-in, asking the user to change the password. Users who do not want to can click Skip, and can repeat this for the next 5 days. Once 90 days have passed since the password was entered, the user must enter a new password before he or she can log in.
Ektron has a password security feature that automatically logs out an administrator or user with the Commerce Admin role after 15 minutes of inactivity. Activity is based on requests made to the server.
This feature is enabled if the site’s
web.config file‘s
ek_ecom_ComplianceMode key is set to
true. In addition, if you are using IIS 7 (Internet Information Services for Windows Server, version 7), the
<add name="EkUrlAliasModule"... line in the following code needs to appear between the
<modules> tags in the
web.config file. This line is a part of the default install—make sure it has not been removed.
<modules> <add name="MyDigestAuthenticationModule" type="Ektron.ASM.EkHttpDavHandler.Security.DigestAuthenticationModule, Ektron.ASM.EkHttpDavHandler" /> <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" preCondition="integratedMode" /> <add name="EkUrlAliasModule" type="UrlAliasingModule" preCondition="integratedMode" /> </modules>
Ektron has a password security feature that forces an administrator or user with the Commerce Admin role to use at least 7 characters in a password. Further, the password must contain at least one alphabetic and one numeric character.
This feature is enabled only when the
ek_ecom_ComplianceMode key in the site’s
web.config file is set to
true.
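The rule itself is easy to express as a check. A Python sketch of the documented policy (illustrative only; not Ektron's code):

```python
import re

def meets_compliance_policy(password):
    """At least 7 characters, with at least one alphabetic
    and one numeric character, as described above."""
    return (len(password) >= 7
            and re.search(r"[A-Za-z]", password) is not None
            and re.search(r"[0-9]", password) is not None)

print(meets_compliance_policy("abc1234"))   # True: 7 characters mixing letters and digits
print(meets_compliance_policy("abcdefgh"))  # False: no numeric character
```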
Ektron has a security feature which ensures that when an administrator or user with the Commerce Admin role enters a new password, it does not match that person's previous 4 passwords. This feature is enabled only if the site’s web.config file's ek_ecom_ComplianceMode key is set to true.
The Ektron password validation provider lets developers create custom password validation strategies. These providers can enforce custom password rules inside the system, beyond the out-of-box capabilities.
This section explains how to create a custom password validation provider for Ektron.
using statements.
Ektron.Cms.Extensibility.Commerce.Samples, rename your class to CustomPasswordProvider, and inherit from the Ektron.Cms.Commerce.PasswordValidation.Provider.PasswordValidationProvider class and the Ektron.Cms.Commerce.IPasswordValidation interface.
namespace Ektron.Cms.Extensibility.Commerce.Samples { public class CustomPasswordProvider : Ektron.Cms.Commerce.PasswordValidation.Provider.PasswordValidationProvider, Ektron.Cms.Commerce.IPasswordValidation
#region constructor, member tokens public CustomPasswordProvider() { } #endregion
GetRegexFor methods required by the
PasswordValidationProvider base class. These methods return the regular expressions used for validation.
public override bool PasswordExpirationEnabled() { return RequestInformation.CommerceSettings.ComplianceMode; } public override bool RequiresPasswordExpiration(long userId) { return (userId == 1); }
siteroot/web.config file lets you manage password providers within Ektron.
<passwordValidationProvider...> tag in the
web.config file.
<providers> key.
defaultProvider attribute, as shown below.
<passwordValidationProvider defaultProvider="CustomPasswordProvider"> <providers> <add name="CustomPasswordProvider" type="Ektron.Cms.Extensibility.Commerce.Samples.CustomPasswordProvider, CustomPasswordProvider" /> </providers> </passwordValidationProvider>
The OnTrek starter site includes a sample of a Facebook Login. After logging in through Facebook, the user is forwarded to an Ektron form that prompts the person to register or log in to Ektron.
You specify which form appears via the Facebook Login server control's form.
Enter the Facebook Login button text. The default is Connect with Facebook.
Enter additional text that appears above the Facebook Login button. The default is Sign in using your Facebook account.
See also: Creating Personalized Web Experiences with the Targeted Content Widget
Prerequisite
You are a member of the Administrators group.
You must complete this before any user can access your Ektron website.
In Workarea > Settings > Configuration > Setup, you can enter or edit information for the Ektron website including:
The Application Setup screen appears. Click Edit to modify the settings.
NOTE: Do not confuse the default application language with the ek_DefaultContentLanguage variable in
web.config. For more information on that, see Setting the Default Language.
NOTE: Checking this box disables the Web Alerts feature on your server.
WARNING! Ektron strongly urges you to change the default password assigned to the builtin user. Opportunities to do this are presented during installation and in the above field.
IMPORTANT: Editor tab settings apply only to the eWebEdit400 editor.
NOTE: Ektron does not recommend enabling this feature.
The following fields change the default Web page after log-in and the default Workarea page. The default values are automatically applied to all new users, and to all existing users when you upgrade. Normally, you can modify these values for any user via the Edit User screen. But, you can force these values on all users, removing the ability to personalize them.
By default, the page from which the user logged in reappears.
IMPORTANT: If you are logging in from the OnTrek sample site, this field is ignored. OnTrek has its own landing page after login, regardless of this setting.
setting must be unchecked. Otherwise, new users will receive an error message when they sign up using this control. See also: Checkout.
Use this button to clear Ektron's cache, which recycles the application pool. For example, you updated the
web.config file but cannot yet see the changes.
Under certain circumstances, Ektron's support group may instruct you to click this button.
Administrators would use this button if they cannot access the hosting servers yet need to reset their website. The button is an alternative to submitting a request to their IT department or hosting company.
After you click Restart, the first request takes longer than usual since the application needs to recompile. Subsequent requests should be processed normally.
To minimize the impact on site visitors, visit your home page immediately after the restart, so that your request is the first "hit."
See also: Managing Application Pools in IIS 7
Use the Application Setup screen's Password Regex tab to customize Ektron's password security policy, and the error text that appears if a user's entry does not conform to the policy. Ektron provides a default policy and error text. The default policy enforces these criteria:
The password policy is enforced if either the Workarea or the API is used to add a CMS or membership user, or an existing user's password is changed.
Additional Password Policy Notes
Prerequisite
- Knowledge of RegEx.
- You are a member of the Administrators group.
To customize the password security policy:
To restore the default password policy and error text:
If this option is enabled, each time you create a new content or library folder in Ektron, a corresponding physical folder is created on the file system to organize library files on your file server. The following image shows a library folder tree and its corresponding system folder structure.
NOTE: If you are upgrading, the installation does not create sample website folders on the file server. You must add these folders manually. However, all folders that you create are also created on the file server when enabled.
Ektron provides a Windows service (EWSEktron Windows Service) to handle the following background processing functions.
Also, the EWS propagates updates that are made to the database connection string or the site path in the web.config file. The service copies the new value to the data.config and sitedb.config files, which are located in C:\Program Files (x86)\Ektron\EktronWindowsservice40. Any Ektron components that reference these values can retrieve the current information from these files.
The data.config and sitedb.config files are updated once each day at a time prescribed in the updateTime value in C:\Program Files (x86)\Ektron\EktronWindowsservice40\Ektron.ASM.EktronServices.exe.config. You can change this time.
WARNING! Do not edit the data.config and sitedb.config files. They are dynamically generated by Ektron. If these files have incorrect values, edit the web.config file, which is used to generate them.
The EWS starts automatically when Ektron is installed, and again whenever the server is restarted.
To see the status of the service, go to Start > Computer, then right click and choose Manage.
If you're using Windows 8 or 2012, press the Windows key + Q, then enter Services.
Look for Ektron Windows Services. You can see its status in the Status column.
On your file system, the EWS is located in C:\Program Files (x86)\Ektron\EktronWindowsservice40. Within that folder, the Ektron.ASM.EktronServices.exe.config file runs the EWS.
Upgrading the Ektron Windows Service
The EWS has an Activity Log that tracks all related events. To view detail for any event, double click it.
A common source of errors is that the service cannot find Ektron sites, because they have not been created yet, as shown in the sample below.
This section explains how to set up a local site to use Amazon BLOB storage. The resulting configuration stores assets in BLOB storage rather than the local file system.
This configuration also allows files to be served from the BLOB store, by redirecting requests to the blob URL rather than the local system.
Ensure the AWSSDK.dll file resides in the Ektron site's bin folder. Then register the Amazon storage providers:
<unity.storage>
  <namespace name="…Amazon.Storage"/>
  <namespace name="Ektron.Storage"/>
  <container name="storageContainer">
    <register type="IFileService" mapTo="S3FileService"/>
    <register type="IDirectoryService" mapTo="S3DirectoryService"/>
  </container>
</unity.storage>
<add key="ek_CloudStorageType" value="AMAZON" />
<add key="BlobStorageName" value="ektronbuket" /> <!--bucket name-->
<add key="AWSAccessKey" value="*******" /> <!--bucket access key-->
<add key="AWSSecretKey" value="*********" /> <!--bucket secret key-->
<add key="AWSRegion" value="ap-southeast-1" /> <!--bucket region-->
<add name="EkBlobModule" type="BlobRedirectModule" preCondition="integratedMode" />
This document explains how to set up a local site to use Azure BLOB. The resulting configuration stores assets in a BLOB store rather than the local file system. This configuration also allows files to be served from the BLOB store, by redirecting requests to the BLOB URL rather than the local file system.
Create a new Storage Account on Azure.
Install the storage package: Install-Package WindowsAzure.Storage -Version 8.7.0. Then register the Azure storage providers:
<unity.storage>
  <namespace name="…Azure.Storage"/>
  <namespace name="Ektron.Storage"/>
  <container name="storageContainer">
    <register type="IFileService" mapTo="CloudFileService"/>
    <register type="IDirectoryService" mapTo="CloudDirectoryService"/>
  </container>
</unity.storage>
<add key="ek_CloudStorageType" value="AZURE"/>
<add key="ek_CloudAccountId" value="ektronblob"/> <!--storage account name-->
<add key="ek_CloudAccountKey" value="xxxxxxxx"/> <!--storage account primary key-->
<add key="ek_CloudContainer" value=""/>
<add key="BlobOrCdnUrl" value=""/> <!-- url for storage container -->
<!-- Update this with accurate values. Note: for CMS 9.20 or older, remove "EndpointSuffix=core.windows.net" from the connection string. -->
<add key="StorageConnectionString" value="DefaultEndpointsProtocol=https;AccountName=ektronblob;AccountKey=xxxxxxxxx" />
<add name="EkBlobModule" type="BlobRedirectModule" preCondition="integratedMode" />
Copy the corresponding values from web.config into the cloud storage configuration:
<SubscriptionID>[xxxxxxxxxx]</SubscriptionID>
<StorageAccountID>[ek_CloudAccountId from web.config]</StorageAccountID>
<StorageAccountKey>[ek_CloudAccountKey from web.config]</StorageAccountKey>
<UseHttps>True</UseHttps>
<CDNEndpoint />
<ContainerConfigName>azure</ContainerConfigName>
The Amazon configuration takes the same form, with amazon as the ContainerConfigName.
s:cache and compile-time view creationDaniel Lichtenberger Jul 28, 2008 6:41 PM
Hi,
I'm using s:cache to cache a fairly complex (read-only, plain HTML) navigation tree that is built recursively using Facelets template components. In my first approach I encapsulated the navigation area directly, i.e.
<s:cache ...>
  <!-- Renders the navigation tree -->
  <my:navigationTree .../>
</s:cache>
The navigationTree component requires (relatively) expensive database calls to build the navigation for the current node. I did not experience a lot of performance gains with this, and using a profiler I noticed that the entire component tree of the navigation area was being built as before - it skipped only the render phase, but did not save me of the expensive view construction phase. It makes perfectly sense since this is essentially a recursive view declaration, and the entire tree is constructed at compile- and not at render-time.
My workaround is to add a compile-time tag around the body content that proceeds with view building only if the page fragment is not cached yet:
<s:cache ...>
  <c:if ...>
    <!-- Renders the navigation tree -->
    <my:navigationTree .../>
  </c:if>
</s:cache>
myPageCache.cached returns true if the given region/key combination is present in PojoCache. This skips the expensive view building entirely if the fragment is already cached, and probably works only as long as my:navigationTree does not use JSF components. Any thoughts on this? Is there an easier way to achieve complete caching?
Daniel
1. Re: s:cache and compile-time view creationSebastian Hennebrueder Jul 28, 2008 10:07 PM (in response to Daniel Lichtenberger)
Well, actually not.
I submitted a patch proposal on the dev list which will reduce EL calls for rendered by 80% in the cache, but this is more relevant in the case of POST requests.
The problem is that we cache rendering which happens at the end of all phases and when the object tree is created, we don't know, if it is cached.
Anyway, I will take your post as encouragement to have another look at the cache.
Regards
Sebastian
2. Re: s:cache and compile-time view creationSebastian Hennebrueder Jul 28, 2008 10:41 PM (in response to Daniel Lichtenberger)
Just to be sure: your DB queries are of course not redone, it is just the restoring of the state?
If not, please add some more informations like:
post or get request and a code snippet showing what you are doing
Regards
Sebastian
3. Re: s:cache and compile-time view creationDaniel Lichtenberger Jul 29, 2008 9:56 AM (in response to Daniel Lichtenberger)
In my case the DB queries are triggered through EL expressions in the tree template, so they were resubmitted (unless using a c:if guard as described above). Maybe using POST requests would have helped, but I have to use RESTful URLs and pure HTTP GET by specification.
I think the documentation of s:cache (especially regarding specific Facelets issues like render-time vs. compile-time components) is a little sparse, some examples of what will (and will not) be cached would be useful.
Daniel
4. Re: s:cache and compile-time view creationPete Muir Jul 29, 2008 12:34 PM (in response to Daniel Lichtenberger)
Sebastian, it would be great if you could write your findings up for this, and your post to the dev list, up in JIRA so that we have the info in one place.
5. Re: s:cache and compile-time view creationSebastian Hennebrueder Jul 30, 2008 5:38 PM (in response to Daniel Lichtenberger)
I would not expect that DB queries are still generated.
Could you provide a code snippet showing the behaviour? If possible, with normal components. If only your component issues this behaviour, it would be great to have the source.
@Pete
The subject was
'Caching issues with JSF rendered attribute'
on the dev list.
Best Regards
Sebastian Hennebrueder
6. Re: s:cache and compile-time view creationPete Muir Jul 30, 2008 6:27 PM (in response to Daniel Lichtenberger)
Yes dev list != JIRA ;-)
Seriously, if you just write stuff to the dev list, it gets lost. The place for this is JIRA.
7. Re: s:cache and compile-time view creationPete Muir Jul 30, 2008 6:35 PM (in response to Daniel Lichtenberger)
Cheers :)
8. Re: s:cache and compile-time view creationDaniel Lichtenberger Jul 31, 2008 3:04 PM (in response to Daniel Lichtenberger)
I created a testcase that illustrates the behaviour. It's a recursive template that gets the displayed nodes from some backing bean, which will cache the values in request/page scope.
Note that if you refresh the page the DB-access message gets printed although the content is rendered from the cache (indicated by a comment in the source code or by debugging CacheRenderer).
/testCache.xhtml, which includes the s:cache tag and inserts a template:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<html xmlns="" xmlns:...>
<body>
  <s:cache ...>
    <ui:include ...>
      <ui:param .../>
    </ui:include>
  </s:cache>
</body>
</html>
/testCacheChild.xhtml, which represents the navigation component in my project:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<html xmlns="" xmlns:...>
<ui:composition>
  <ul>
    <c:forEach ...>
      <li>
        #{i}.
        <ui:include ...>
          <ui:param .../>
        </ui:include>
      </li>
    </c:forEach>
  </ul>
</ui:composition>
</html>
And finally, the backing bean, TestBean.java:
@Name("testBean")
@Scope(ScopeType.PAGE)
public class TestBean {
    private Map<Integer, List<Number>> dataCache = new HashMap<Integer, List<Number>>();

    public List<Number> loadChildren(int id) {
        if (!dataCache.containsKey(id)) {
            System.out.println("Accessing DB for id " + id + "...");
            final List<Number> data = new ArrayList<Number>();
            if (id <= 1) {
                // create test data (only for two nodes)
                for (int i = id + 1; i < 10; i++) {
                    data.add(i);
                }
            }
            dataCache.put(id, data);
        }
        return dataCache.get(id);
    }
}
Actually, there is an even simpler example that exhibits the same behaviour:
<s:cache ...>
  <c:forEach ...>
    #{node},
  </c:forEach>
</s:cache>
The issue is the compile-time c:forEach tag. If replaced with ui:repeat, no backing bean lookups will be made for the cached version (because it won't be rendered). However, since c:forEach is called during view construction, the s:cache tag fails to prevent its execution (unless the s:cache body is wrapped in a c:if construct as described in my original post).
Of course, in the last example it's easy to replace c:forEach with ui:repeat, but it's not that easy (if possible at all) with more complex, recursively nested templates.
Cheers, Daniel
9. Re: s:cache and compile-time view creationSebastian Hennebrueder Aug 1, 2008 5:21 PM (in response to Daniel Lichtenberger)
mmh,
I cannot reproduce it here. If I enable caching the EL is not called.
Can you validate that your cache is running?
Which version of Seam do you use?
Which version of JSTL?
How do you access your page (s:link or ...? )
Best Regards
Sebastian Hennebrueder
10. Re: s:cache and compile-time view creationDaniel Lichtenberger Aug 6, 2008 9:25 AM (in response to Daniel Lichtenberger)
Seam 2.0.0.GA, the cache is running (CacheRendererBase writes the cached content). The page is accessed directly via GET.
I'm running on oc4j 10, the JSTL version of the server seems to be 1.1.2 (is this actually relevant when I'm using Facelets?).
Cheers,
Daniel
Visual C# IntelliSense
Visual C# IntelliSense is available when coding in the editor, and while debugging in the Immediate Mode command window.
Completion lists
The IntelliSense completion lists in Visual C# contain tokens from List Members, Complete Word, and more, providing quick access to:
Members of a type or namespace
Variables, commands, and functions names
Code snippets
Language Keywords
Extension Methods
The Completion List in C# is also smart enough to filter out irrelevant tokens and pre-select a token based on context. For more information, see Filtered Completion Lists.
Code Snippets in Completion Lists
In Visual C#, the completion list includes code snippets to help you easily insert predefined bodies of code into your program. Code snippets appear in the completion list as the snippet's shortcut text...
enum keyword: When you press the SPACEBAR after an equal sign for an enum assignment, a completion list appears. An item is automatically selected in the list, based on the context in your code. For example, items are automatically selected in the completion list after you type the keyword return and when you make a declaration.
Most recently used members.
override.
Automatic Code Generation
Add using
The Add using IntelliSense operation automatically adds the required using directive to your code file. When a type reference cannot be resolved, a red squiggle appears on that line of code. You can then invoke Add using through the Quick Action. The Quick Action is only visible when the cursor is positioned on the unbound type.
Click the light bulb icon, and then choose using System.Xml; to automatically add the using directive.
Remove and sort usings
The Remove and Sort Usings option sorts and removes using and extern declarations without changing the behavior of the source code. Over time, source files may become bloated and difficult to read because of unnecessary and unorganized using directives. The Remove and Sort Usings option compacts source code by removing unused using directives, and improves readability by sorting them. On the Edit menu, choose IntelliSense, and then choose Organize Usings. Alternatively, click the Quick Actions light bulb when it is displayed.
A red wavy underline appears under each undefined identifier. When you rest the mouse pointer on the identifier, an error message appears in a tooltip. To display the appropriate options, you can use one of the following procedures:
Click the undefined identifier. A Quick Actions light bulb appears under the identifier. Click the light bulb.
Click the undefined identifier, and then press Ctrl + . (Ctrl + period).
Right-click the undefined identifier, and then click Quick Actions and Refactorings.
The options that appear can include the following:
Generate property
Generate field
Generate method
Generate class
Generate new type... (for a class, struct, interface, or enum)
Generate event handlers.
Note
If a new delegate that is created by IntelliSense references an existing event handler, IntelliSense communicates this information in the tooltip. You can then modify this reference; the text is already selected in the Code Editor. Otherwise, automatic event hookup is complete at this point.
If you press Tab, IntelliSense stubs out a method with the correct signature and puts the cursor in the body of your event handler.
Note
Use the Navigate Backward command on the View menu (Ctrl + -) to go back to the event hookup statement.
See also
Using IntelliSense
Visual Studio IDE
We've come a long way from the world of 7-bit ASCII. In the beginning, the language of the PC was most definitely English, with a smattering of hard-to-find characters for a few Western European languages. Now the PC works in just about any language. The Unicode Consortium did some stunning work reconciling all of the many disparate standards, and Unicode is now part of the PC's basic plumbing. The .NET framework uses Unicode from the ground up, and its programming languages keep improving the ways they handle it. Representing language will never be completely trouble-free or automatic, but this is as good as it's ever been.
Now that people can spell their names correctly and write in their own languages, the opposite problem comes up – how to normalize all the different (correct) spellings. The first question is: Why would you do such a crazy thing after people have gone through all the trouble to put in the correct accents? There's only one admissible answer, but that one answer is a very important one: "Searching".
The Latin alphabet is used for many languages. In order to adapt it to individual languages, letters had to be altered and diacritics put in. While the basic Latin alphabet is about 30 characters, there are around 900 variations. It's less of a problem when a program works in one language alone, but when a program coordinates multiple languages, something has to be done. For instance, a French keyboard doesn't have an "a" with a tilde, but the French customer service person still needs to find the Portuguese client whose name has that character. It's great to show your customers respect, but you'll still annoy them if you lose their data.
What we need to do is to "normalize" the data, so that we're comparing apples to apples. And to do that, we need to find a basis that everyone has in common. So, as far as we've come with Unicode, it looks like ASCII isn't quite dead yet. ASCII – the new common denominator! What we want to do is to strip out the diacritics, come up with an ASCII string, and use the result for searching. That way, everyone has an equal chance of finding their data.
This technique is purposefully "lossy", so we don't want to replace the correct values with simpler ones. Rather, we want to use these values alongside the originals, internally, out of the end-user's sight. Behind-the-scenes isn't a bad thing, since what we're about to do to the text may cause the end-users to worry. For instance, in real German, the name "Händel" resolves to "Haendel". By stripping diacritics, it resolves to "Handel". No, it's not real German, but that's not our goal.
Our aim is to be language-independent, and to get the same data out as the data that went in, without special keyboards. This is a practical issue, not a scholarly one. The good news is that in true democratic fashion, every language gets fractured equally, and also … we have the original text anyway. Nothing's lost, and data is found.
There are two cardinal rules in searching:
Since you only need to calculate values when they change, the best place to normalize strings is at data entry time. During data entry, most of the clock cycles are spent waiting for the user to type something, so there are very few calculations at this stage that are too expensive. Besides stripping the accents, you can also convert everything to upper or lower case, which solves the more mundane issues of normalization.
To co-opt a stupid joke, "Händel's not composing any more. Now he's decomposing." If Mr. Händel wants the umlaut out of his name, that's exactly what he'll need to do – decompose. Unicode has a concept of composition, which means that we combine several simple characters to get a single complex character. Unicode has the opposite concept as well. You can view a complex character as one character, or, you can view it as the combination of several simple ones. "Decomposition" is what we need to get back the simple ASCII characters we're looking for.
There's a bit of complexity, and there are multiple flavors of decomposition. All of this is covered in a set of scholarly papers by the Unicode Consortium. To cut to the chase, we want the most granular form of decomposition. The other thing we need to know is that the main characters (the ASCII characters, that is) are the most significant, and are guaranteed to come at the beginning of the decomposed sequence. It turns out that our work is easy.
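As a cross-language aside (not part of the article's C# code), the same strongest-decomposition idea is available in Python's standard unicodedata module:

```python
import unicodedata

def latin_to_ascii(text):
    """Decompose with NFKD, then keep only the 7-bit ASCII characters.
    Unlike the C# function below, characters with no ASCII decomposition
    (such as the ae-ligature or the Scandinavian O-with-stroke) are
    dropped rather than kept as-is."""
    decomposed = unicodedata.normalize("NFKD", text)
    return decomposed.encode("ascii", "ignore").decode("ascii")
```

For example, latin_to_ascii("Händel") returns "Handel".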
I'll present two approaches here. The easiest approach is to use the inbuilt string normalization function, which is new to .NET 2.0. This function closely parallels the normalization functions in Java, and is a great addition to the language. The idea is to take each character of a string, put it through the strongest decomposition, and then extract the ASCII characters. There are three things we need to be aware of:
private string LatinToAscii(string InString)
{
string newString = string.Empty, charString;
char ch;
int charsCopied;
for (int i = 0; i < InString.Length; i++)
{
charString = InString.Substring(i, 1);
charString = charString.Normalize(NormalizationForm.FormKD);
// If the character doesn't decompose, leave it as-is
if (charString.Length == 1)
newString += charString;
else
{
charsCopied = 0;
for (int j = 0; j < charString.Length; j++)
{
ch = charString[j];
// If the char is 7-bit ASCII, add
if (ch < 128)
{
newString += ch;
charsCopied++;
}
}
/* If we've decomposed non-ASCII, give it back
* in its entirety, since we only mean to decompose
* Latin chars.
*/
if (charsCopied == 0)
newString += InString.Substring(i, 1);
}
}
return newString;
}
The advantage of this code is that it's short, simple, and largely built-in, so this is the code to use if your needs are simple and the output doesn't cause you any trouble. You should test the output, of course, before putting the code into production!
One thing to note is that the decomposition here is very conservative. The Unicode Consortium had nobler things in mind than the cheap-and-nasty job that we're doing here. For instance, the combined character "æ" stays combined, since that character has its own identity. Similarly, the Scandinavian "Ø" stays as-is, since to Scandinavians, it's not an "O with a stroke" – it's an "Ø". However, basic accents are taken care of, and we have a working function.
If none of the drawbacks bother you (after testing!), consider the job done. If you need to take care of other characters, obviously, we need to do some more work. You could adapt the code to test specific characters which don't decompose, and if you only have a few exceptions, that's easy. If you have strong opinions (and a lot of them), the code will need some organization. You could either use a switch/case statement (which could get bulky), or a collection such as a hash table.
switch/case
Option Two takes the hash table approach, and loads the entire Latin character set with its decomposition values. Rather than figuring out the decomposition at runtime, we decide what it is at compile time, and then simply look up the value. We suffer a bit when we load the table, and we gain a bit when we do the lookups. The table is static, and it's only loaded once.
Option Two is a much more labor-intensive solution, though, of course, the labor has already been done, and it's presented to you here. If you have strong opinions and lots of them, this is the approach to take. And if you haven't upgraded to Visual Studio 2005, this is the only approach to take, since you don't have normalization yet. Option Two is more complete, and can also be tweaked if you don't like the output as it is.
The way this was done was:
The code for this approach is even simpler than Option One. The first time the function is used, a static HashTable is loaded with values. Making the data static means that we only have to load it once. Otherwise our performance would be atrocious.
HashTable
After the initial load, every character gets a lookup. We have to check to see if our item is in the HashTable first, since we'll get an exception if we search for a non-existent item. If the item isn't in our table, we return it as-is, since it's outside our domain anyway.
In theory, this code should be faster, both for the static data and the pre-calculated mappings, though in practice it seems to average about the same as Option One. In any case, performance shouldn't be a huge issue, since we're using the code intelligently.
public static string LatinToAscii(string InString)
{
string returnString = string.Empty, ch;
if (mCharacterTable == null)
InitializeCharacterTable();
for (int i = 0; i < InString.Length; i++)
{
ch = InString.Substring(i, 1);
if (!mCharacterTable.Contains(ch))
returnString += ch;
else
returnString += mCharacterTable[ch];
}
return returnString;
}
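As another aside, Python's str.translate implements the same precomputed-lookup idea natively. The mapping below is a tiny hypothetical subset, not the article's full table of roughly 900 characters:

```python
# Illustrative subset only; a real table would cover the full Latin range.
ACCENT_MAP = str.maketrans({"ä": "a", "é": "e", "ñ": "n", "ü": "u", "ß": "ss"})

def latin_to_ascii_table(text):
    """Replace known accented characters; anything outside the table
    is outside our domain and passes through unchanged."""
    return text.translate(ACCENT_MAP)
```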
I hope all of this is useful. All of our differences are so much easier to celebrate once we find we have something in common - even if it's only ASCII!
public static string LatinToAscii(string InString)
{
    StringBuilder builder = new StringBuilder(InString.Length);
    if (mCharacterTable == null)
        InitializeCharacterTable();
    for (int i = 0; i < InString.Length; i++)
    {
        // Look up by string, matching the keys stored in mCharacterTable
        string ch = InString.Substring(i, 1);
        if (!mCharacterTable.Contains(ch))
            builder.Append(ch);
        else
            builder.Append(mCharacterTable[ch]);
    }
    return builder.ToString();
}
public static char ToUnichar(string HexString)
{
byte[] b = new byte[2];
UnicodeEncoding ue = new UnicodeEncoding();
// Take hexadecimal as text and make a Unicode char number
b[0] = Convert.ToByte(HexString.Substring(2, 2), 16);
b[1] = Convert.ToByte(HexString.Substring(0, 2), 16);
// Get the character the number represents
char[] chars = ue.GetChars(b);
return chars[0];
}
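For comparison (not part of the posted C# snippet), the hex-string-to-character conversion above is a one-liner in Python, since chr accepts a code point directly:

```python
def to_unichar(hex_string):
    """Turn a hex code-point string such as "00E9" into its character."""
    return chr(int(hex_string, 16))
```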
static string StripAccent(string stIn)
{
string normalized = stIn.Normalize(NormalizationForm.FormD);
StringBuilder sb = new StringBuilder();
foreach (char c in normalized) {
UnicodeCategory uc = CharUnicodeInfo.GetUnicodeCategory(c);
if (uc != UnicodeCategory.NonSpacingMark) {
sb.Append(c);
}
}
return (sb.ToString());
}
Revision as of 21:45, 11 April 2006
Category: Application Development

Are you tired of waiting until your project's compilation finishes? Are you using huge libraries for your project (like wxWidgets)?
If you answered yes above, then you'd be happy to know that Code::Blocks, as of version 1.0rc2, supports precompiled headers for the GCC compiler.
Using precompiled headers (PCH from now on) speeds up the compilation of large projects (like Code::Blocks itself) by large amounts. This works by creating a header file which #includes all the rarely-changing header files for your project. Then the compiler is instructed to pre-compile this header file. This creates a new file which is now in a binary format that only the compiler understands. But, since it doesn't have to re-parse your header files for every source file that includes them, the process will now be considerably faster ;)
Enough ranting. Let's see step-by-step what you must do to create a precompiled header for your program.
Creating a precompiled header
To create a precompiled header for your project, just create a new header file. Say you named it "pch.h". Put the following in it:
#ifndef PUT_A_UNIQUE_NAME_HERE
#define PUT_A_UNIQUE_NAME_HERE

// #include your rarely changing headers here

#endif
replacing PUT_A_UNIQUE_NAME_HERE with something unique. Also #include your rarely-changing headers in that file, e.g.
#ifndef PUT_A_UNIQUE_NAME_HERE
#define PUT_A_UNIQUE_NAME_HERE

#if ( !defined(WX_PRECOMP) )
#define WX_PRECOMP
#endif

// basic wxWidgets headers
#include <wx/wxprec.h>

#ifndef WX_PRECOMP
#include <wx/wx.h>
#endif

// #include other rarely changing headers here

#endif
Marking a header for precompilation
Now this file is ready to be marked as PCH. To do this, find the file in the ProjectManager tree, right-click on it and select "Properties".
Click "Compile file" and make sure it's checked. Do not click "Link file". Also, set the priority weight to zero, to force it to be compiled before all other files (default priority is 50 - the lower this number, the higher the priority). Exit this dialog by clicking "OK".
Using a precompiled header
You're almost there. The only thing left is to actually include this file so that it can be used. There are two ways to do this.
- Include it in every source file of your project (_not_ header files, only source files like *.cpp). This *must* be the very first C token in the file. In other words, put it really first. Only comments are harmless before it.
- Go to "Project->Build options" and add the following in "Compiler->Other options":
-Winvalid-pch -include "pch.h"
The first line will emit a warning when building your project if the PCH is _not_ used. It's nice to know this. The second line does all the magic: it's like adding #include "pch.h" at the top of each of your source files, except you don't have to edit them ;) This doesn't work in all situations, but it's the quick way to add PCH to your project. Specifically, it won't work if the PCH is in the same directory as the project file. If that's the case, go to "Project->Properties" and set the PCH mode to "Generate PCH in a directory alongside the original file" (first option).
I hope things are clearer now. Yiannis.
Add Click support to your SmallD bot.
SmallD-Click
SmallD-Click is an extension for SmallD that enables the use of Click CLI applications as discord bots.
Installing
Install using pip:
$ pip install smalld-click
Example
import click from smalld import SmallD from smalld_click import SmallDCliRunner @click.command() @click.option("--count", default=1, help="Number of greetings.") @click.option("--name", prompt="Your name", help="The person to greet.") def hello(count, name): """Simple program that greets NAME for a total of COUNT times.""" for x in range(count): click.echo("Hello %s!" % name) smalld = SmallD() with SmallDCliRunner(smalld, hello, prefix="++"): smalld.run()
For this CLI example, if a user sends the message "++hello --count=2", then the bot will ask the user - by sending a message in the same channel - for their name, "Your name:".
If the user answers with "lymni", for example, the bot will send the message, "Hello lymni", twice.
Notice that the bot responds in a single message, instead of two, even though we call click.echo multiple times. This is because calls to echo are buffered. However, calls to prompt will cause this buffer to be flushed, and its content is sent immediately.
There is also a timeout for how long the bot will wait for the user's message, if the timeout is exceeded the bot will simply drop the execution of the command.
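The timeout semantics — wait for the next user message, give up after a fixed number of seconds — can be sketched as follows. This is an illustration only, with hypothetical names; it is not SmallD-Click's actual implementation:

```python
import queue

def wait_for_reply(pending, timeout=60):
    """Return the next user message from the pending queue, or None when
    the timeout expires (the runner would then drop the command)."""
    try:
        return pending.get(timeout=timeout)
    except queue.Empty:
        return None
```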
For an example with multiple commands that run under different names (i.e, with no common base command name) see the multicommands bot.
Guide
SmallDCliRunner(smalld, cli, prefix="", name=None, timeout=60, create_message=None, executor=None)
The SmallDCliRunner is the core class for running CLI applications.

- smalld: the SmallD instance for your bot.
- cli: a click.Command instance to use for running the commands.
- prefix: each command invocation must start with this string.
- name: the name of the CLI application, defaults to cli.name. Can be used to change the command's name, or to remove it completely by passing the empty string. Used together with the prefix to determine which messages to treat as invocations of the CLI application.
- timeout: how long the bot will wait, in seconds, for the user to respond to a prompt.
- create_message: a callback for creating the message payload for discord's create message route. By default, text is sent as is in the content field of the payload.
- executor: an instance of concurrent.futures.Executor used to execute commands. By default, this is a concurrent.futures.ThreadPoolExecutor.
Instances of this class should be used as a context manager, to patch click functions and to properly close the executor when the bot stops.
Conversation(runner, message)
Represents the state of the command invocation. Holds the runner instance and the message payload. Also manages the interactions between the user and the CLI application.
After each prompt, the message is updated to the latest message sent by the user.
get_conversation()
Returns the current conversation. Must only be invoked inside of a command handler.
Patched functionality
You can use click.echo and click.prompt directly to send messages and wait for replies. Prompts that are hidden, using hide_input=True, are sent to the user's DM, and cause the conversation to continue there.
Note that echo and prompt will otherwise send messages in the same channel as the message that triggered the command invocation.
Calls to echo are buffered. When the buffer is flushed, its content is sent in 2K chunks (a limit set by Discord). The buffer is flushed automatically when there is a prompt, when the command finishes execution, or when the content in the buffer exceeds the 2K limit.
It's also possible to flush the buffer explicitly by passing flush=True to the click.echo call.
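The 2K chunking itself is plain string slicing; a stdlib-only sketch of the idea (the function name is illustrative, not SmallD-Click's internals):

```python
def chunk_message(text, limit=2000):
    """Split text into pieces of at most `limit` characters,
    the way a flushed echo buffer would be sent to Discord."""
    return [text[i:i + limit] for i in range(0, len(text), limit)]

pieces = chunk_message("x" * 4500)
print([len(p) for p in pieces])  # [2000, 2000, 500]
```

Joining the pieces back together reproduces the original text, so no content is lost across chunk boundaries.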
Acknowledgements
Original idea by Princess Lana.
Contributing
- Tox is used for running tests.
- Run tox -e to run tests with your installed python version
- Run tox -e fmt to format the code
- Conventional Commits is used for commit messages and pull requests
Developing
Tox is used to setup and manage virtual environments when working on SmallD-Click
To run tests:
$ tox
To run the examples greet bot:
$ tox -e run -- examples/greet.py
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/smalld-click/
C#, or C Sharp, was built to be used with Microsoft's .NET framework. C# is mainly used for developing applications for Windows, for web development and for networking. C# is a fully object oriented language – it supports polymorphism, data abstraction, data encapsulation and inheritance. The character "#" comes from the musical key "sharp", which corresponds to an increase in pitch in music. Similarly, the language C# was designed to address the shortcomings of C and C++ and provide an improved, updated language to the programmer. C# has an improved garbage collector which manages the available memory automatically, without the developer having to worry about it. C# is also much more stable than C and C++, and you don't have to type as much code to perform programming tasks. C# is also a platform-independent language, in the mold of Java. This means that you can run a C# program on any machine, regardless of the architecture present (as long as the .NET framework has been installed). Learn more about how C# works with the .NET framework with this course.
We’re going to take a look at the C# partial class in this tutorial. You need to be familiar with the basics of the language (the structure and syntax of the program) to understand the tutorial. Microsoft and other developers have invested a lot of effort into it. Chances are, it will become one of the most popular programming languages in the future. It’s definitely worth your while to learn it. You can sign up for our simple, easy-to-understand C# course for complete beginners. We’ll teach you everything you need to know about the language- you’ll be writing your own programs in no time. You’ll find it easier to learn the language if you have some knowledge of C, C++ and Java. Those pressed for time, can instead take this primer to learn C# in just 1 hour.
What is a C# Partial Class?
A partial class is a class that has been separated into parts. If you use the partial keyword when you’re declaring a class, your class may be split into separate files. You can provide separate methods for the different parts of your class.
Why do you need a partial class? In C#, a single class normally cannot be split across two separate files. But sometimes you do need to split it, if the code is bloating the class, for example. In this case, you can use the partial modifier to divide the class. In applications that use partial classes, you will find that one of the partial classes contains code that has to be edited frequently, while the other partial class is rarely edited or contains machine-generated code that isn't meant to be read by the user. Sometimes several developers need to work on a single project. In this case, it's easier to make several partial classes with separate code. This code is then gathered together at compile time and executed as a single unit.
Example of a C# Partial Class
Let’s write a simple program that demonstrates the concept of a C# partial class.
// Employee.cs
class Employee
{
    static void Main()
    {
        E.E1();
        E.E2();
    }
}

// E1.cs - part 1 of the partial class
using System;

partial class E
{
    public static void E1()
    {
        Console.WriteLine("This is Employee Number 1");
    }
}

// E2.cs - part 2 of the partial class
using System;

partial class E
{
    public static void E2()
    {
        Console.WriteLine("This is Employee Number 2");
    }
}

Output:
This is Employee Number 1
This is Employee Number 2
First, we created a class called Employee. In the Main method of the program, we call E.E1() and E.E2(), the two methods defined in the two parts of the partial class E. In the 1st part of the partial class declaration, we printed "This is Employee Number 1" to the screen using the Console.WriteLine method. In the 2nd part of the partial class declaration, we printed "This is Employee Number 2" using the same method. Simply put, all we've done is divide a single class into two separate files. The 1st file will be saved as E1.cs, while the 2nd file will be saved as E2.cs in the system. The "partial" keyword must be included in each declaration, of course. You will encounter an error without it. The name of every section of a class that you make partial has to be the same (during declaration). The name of the source file for every section of the partial class, however, can be different. Also, it's required that you keep all parts of the partial class in the same namespace.
For additional resources on this topic, you can check out the official Microsoft documentation. Alternatively, you can just sign up for our C# course – we cover all the aspects of C# in 10 easy steps.
Keep in mind that partial classes have the same accessibility – if you declare one of them public, all of them must be public. Also, if a partial class inherits an interface, all of the other partial classes inherit it too.
During Compile Time
What happens to the partial classes at compile time? The partial classes will be merged into a single class. The files E1.cs and E2.cs will be merged into a single file for class E. The methods found in class E1 and class E2 will be merged into a single code block, while the two partial classes will be merged into class E. It will look like this:
internal class E
{
    public static void E1()
    {
        Console.WriteLine("This is Employee Number 1");
    }

    public static void E2()
    {
        Console.WriteLine("This is Employee Number 2");
    }
}
As you can see, the two partial declarations have been merged into a single internal class E. The methods from both parts are placed inside the same class and compiled together as a single unit.
Benefits of Using a Partial Class
So what are the benefits of using a partial class? Let’s take a look at some of them:
- Several developers can work on a project simultaneously if they work with partial classes. This is, perhaps, the biggest benefit of using a partial class. Large projects that require many developers can be finished faster and with less effort because of this.
- Large, bloated classes can be made smaller by making them partial. This allows you to make a program that is easier to understand and maintain. It also allows you to divide code into understandable sections, or separate machine level code from normal code, or to separate code that you regularly edit with code you don’t edit at all. For example, when Windows Forms programs are created in Visual Studio, the machine generated code is categorized separately from the normal code.
- It’s easier to categorize code with partial classes. You can separate business logic from design logic, for example- which is often done in Visual Studio. Developers with experience in different aspects and applications of C# sharp can work on separate parts of the same project because of this.
- Another major advantage of partial classes is that it’s easier to add new code without editing the original source file. Just declare a new partial class and you’re done.
Learning to use partial classes will help you develop efficient and easy to maintain applications. Once you’re comfortable with the basics, you can even learn how to build Android applications using C# with this special course!
https://blog.udemy.com/c-sharp-partial-classes/
MatchCollection.Count Property
Gets the number of matches.
Assembly: System (in System.dll)
Property Value
Type: System.Int32
The number of matches.

Implements
ICollection.Count
Accessing individual members of the MatchCollection object by retrieving the value of the collection's Count property causes the regular expression engine to populate the collection using direct evaluation. In contrast, calling the GetEnumerator method (or using the foreach statement in C# and the For Each...Next statement in Visual Basic) causes the regular expression engine to populate the collection on an as-needed basis using lazy evaluation. Direct evaluation can be a much more expensive way of building the collection than lazy evaluation.
Because the MatchCollection object is generally populated by using lazy evaluation, trying to determine the number of elements in the collection before it has been fully populated may throw a RegexMatchTimeoutException exception. This exception can be thrown if a time-out value for matching operations is in effect, and the attempt to find a single match exceeds that time-out interval.
The following example uses the Count property to determine whether the call to the Regex.Matches(String, String) method found any matches. If not, it indicates that no matches were found. Otherwise, it enumerates the matches and displays their value and the position in the input string at which they were found.
using System;
using System.Text.RegularExpressions;

public class Example
{
   public static void Main()
   {
      string pattern = @"\d+";
      string[] inputs = { "This sentence contains no numbers.",
                          "123 What do I see?",
                          "2468 369 48 5" };

      foreach (var input in inputs) {
         MatchCollection matches = Regex.Matches(input, pattern);
         Console.WriteLine("Input: {0}", input);
         if (matches.Count == 0)
            Console.WriteLine("   No matches");
         else
            foreach (Match m in matches)
               Console.WriteLine("   {0} at index {1}", m.Value, m.Index);
         Console.WriteLine();
      }
   }
}
// The example displays the following output:
//    Input: This sentence contains no numbers.
//       No matches
//
//    Input: 123 What do I see?
//       123 at index 0
//
//    Input: 2468 369 48 5
//       2468 at index 0
//       369 at index 5
//       48 at index 9
//       5 at index 12
The regular expression pattern \d+ matches one or more decimal characters in an input string.
Available since 8
.NET Framework
Available since 1.1
Portable Class Library
Supported in: portable .NET platforms
Silverlight
Available since 2.0
Windows Phone Silverlight
Available since 7.0
Windows Phone
Available since 8.1
https://msdn.microsoft.com/en-us/library/system.text.regularexpressions.matchcollection.count(v=vs.110).aspx
Mirror of :pserver:anonymous@cvs.schmorp.de/schmorpforge libev
To include only the libev core (all the ev_* functions):
#define EV_STANDALONE 1
#include "ev.c"
This will automatically include ev.h, too, and should be done in a
single C source file only to provide the function implementations. To
use it, do the same for ev.h in all users:
#define EV_STANDALONE 1
#include "ev.h"
You need the following files in your source tree, or in a directory
in your include path (e.g. in libev/ when using -Ilibev):
ev.h
ev.c
ev_vars.h
ev_wrap.h
ev_win32.c
ev_select.c only when select backend is enabled (which is by default)
ev_poll.c only when poll backend is enabled (disabled by default)
ev_epoll.c only when the epoll backend is enabled (disabled by default)
ev_kqueue.c only when the kqueue backend is enabled (disabled by default)
"ev.c" includes the backend files directly when enabled.
PREPROCESSOR SYMBOLS
Libev can be configured via a variety of preprocessor symbols you have to define
before including any of its files. The default is not to build for multiplicity
and only include the select backend.
EV_STANDALONE
Must always be "1", which keeps libev from including config.h or
other files, and it also defines dummy implementations for some
libevent functions (such as logging, which is not supported). It
will also not define any of the structs usually found in "event.h"
that are not directly supported by libev code alone.
EV_USE_MONOTONIC
If undefined or defined to be "1", libev will try to detect the
availability of the monotonic clock option at both compiletime and
runtime. Otherwise no use of the monotonic clock option will be
attempted.
EV_USE_REALTIME.
EV_USE_SELECT
If undefined or defined to be "1", libev will compile in support
for the select(2) backend. No attempt at autodetection will be
done: if no other method takes over, select will be it. Otherwise
the select backend will not be compiled in.
EV_USE_POLL
If defined to be "1", libev will compile in support for the poll(2)
backend. No attempt at autodetection will be done. poll usually
performs worse than select, so it's not enabled by default (it is
also slightly less portable).
EV_USE_EPOLL
If defined to be "1", libev will compile in support for the Linux
epoll backend. Its availability will be detected at runtime,
otherwise another method will be used as fallback. This is the
preferred backend for GNU/Linux systems.
EV_USE_KQUEUE
If defined to be "1", libev will compile in support for the BSD
style kqueue backend. Its availability will be detected at runtime,
otherwise another method will be used as fallback. This is the
preferred backend for BSD and BSD-like systems. Darwin brokenness
will be detected at runtime and routed around by disabling this
backend.
EV_COMMON
By default, all watchers have a "void *data" member. By redefining
this macro to something else you can include more and other types
of members, for example like this:
#define EV_COMMON \
SV *self; /* contains this struct */ \
SV *cb_sv, *fh;
EV_PROTOTYPES
If defined to be "0", then "ev.h" will not define any function
prototypes, but still define all the structs and other
symbols. This is occasionally useful.
EXAMPLES
For a real-world example of a program that includes libev
verbatim, you can have a look at the EV perl module. It has the
libev files in the libev/ subdirectory and includes them in the
EV/EVAPI.h (public interface) and EV.xs (implementation) files.
Only the EV.xs file will be compiled.
https://git.lighttpd.net/mirrors/libev/src/commit/5dd46d018abe6d5f6ed87dc8eaa4810107c8960d/README.embed
The Data Science Lab.
In order to train a PyTorch neural network you must write code to read training data into memory, convert the data to PyTorch tensors, and serve the data up in batches. This task is not trivial and is often one of the biggest roadblocks for people who are new to PyTorch.
In the early days of PyTorch, you had to write completely custom code for data loading. Now however, the vast majority of PyTorch systems I've seen (and created myself) use the PyTorch Dataset and DataLoader interfaces to serve up training or test data. Briefly, a Dataset object loads training or test data into memory, and a DataLoader object fetches data from a Dataset and serves the data up in batches.
You must write code to create a Dataset that matches your data and problem scenario; no two Dataset implementations are exactly the same. On the other hand, a DataLoader object is used mostly the same no matter which Dataset object it's associated with. For example:
class MyDataset(T.utils.data.Dataset):
  # implement custom code to load data here

my_ds = MyDataset("my_train_data.txt")
my_ldr = T.utils.data.DataLoader(my_ds, 10, True)
for (idx, batch) in enumerate(my_ldr):
. . .
The code fragment shows you must implement a Dataset class yourself. Then you create a Dataset instance and pass it to a DataLoader constructor. The DataLoader object serves up batches of data, in this case with batch size = 10 training items in a random (True) order.
This article explains how to create and use PyTorch Dataset and DataLoader objects. A good way to see where this article is headed is to take a look at the screenshot of a demo program in Figure 1. The source data is a tiny 8-item file. Each line represents a person: sex (male = 1 0, female = 0 1), normalized age, region (east = 1 0 0, west = 0 1 0, central = 0 0 1), normalized income, and political leaning (conservative = 0, moderate = 1, liberal = 2). The goal of the demo is to serve up data in batches where the dependent variable to predict is political leaning, and the other variables are the predictors.
The 8-item source data is stored in a tab-delimited file named people_train.txt and looks like:
1 0 0.171429 1 0 0 0.966805 0
0 1 0.085714 0 1 0 0.188797 1
1 0 0.000000 0 0 1 0.690871 2
1 0 0.057143 0 1 0 1.000000 1
0 1 1.000000 0 0 1 0.016598 2
1 0 0.171429 1 0 0 0.802905 0
0 1 0.171429 1 0 0 0.966805 1
1 0 0.257143 0 1 0 0.329876 0
Behind the scenes, the demo loads data into memory using a custom Dataset object, and then serves the data up in randomly selected batches of size 3 rows/items. Because the source data has 8 lines, the first two batches have 3 data items, but the last batch has 2 items. The demo processes the source data twice, in other words, two epochs.
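The 3-3-2 split described above is just chunking of 8 shuffled indices into groups of 3; a quick stdlib-only sketch, independent of PyTorch (the function name is illustrative):

```python
import random

def batch_indices(n, bat_size, seed=0):
    """Shuffle indices 0..n-1 and split them into batches of bat_size."""
    idxs = list(range(n))
    random.Random(seed).shuffle(idxs)
    return [idxs[i:i + bat_size] for i in range(0, n, bat_size)]

batches = batch_indices(8, 3)
print([len(b) for b in batches])  # [3, 3, 2]
```

Whatever the shuffle order, every index appears exactly once per epoch, which is what DataLoader guarantees with shuffle=True.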
This article assumes you have intermediate or better skill with a C-family programming language. The demo program is coded using Python, which is used by PyTorch and which is essentially the primary language for deep neural networks. The complete source code for the demo program is presented in this article. The source code and source data are also available in the file download that accompanies this article.
The Demo Program
The demo program, with a few minor edits to save space, is presented in Listing 1. I indent my Python programs using two spaces, rather than the more common four spaces or a tab character, as a matter of personal preference.
Listing 1: DataLoader Demo Program
# dataloader_demo.py
# PyTorch 1.5.0-CPU Anaconda3-2020.02
# Python 3.7.6 Windows 10
import numpy as np
import torch as T
device = T.device("cpu") # to Tensor or Module
# ---------------------------------------------------
# predictors and label in same file
# data has been normalized and encoded like:
# sex age region income politic
# [0] [2] [3] [6] [7]
# 1 0 0.057143 0 1 0 0.690871 2
class PeopleDataset(T.utils.data.Dataset):
  def __init__(self, src_file, num_rows=None):
    # load from file, then convert to tensors
    all_xy = np.loadtxt(src_file, max_rows=num_rows,
      usecols=range(0,8), delimiter="\t",
      comments="#", dtype=np.float32)
    self.x_data = T.tensor(all_xy[:, 0:7],
      dtype=T.float32).to(device)
    self.y_data = T.tensor(all_xy[:, 7],
      dtype=T.int64).to(device)
def __len__(self):
return len(self.x_data) # required
def __getitem__(self, idx):
if T.is_tensor(idx):
idx = idx.tolist()
preds = self.x_data[idx, 0:7]
pol = self.y_data[idx]
sample = \
{ 'predictors' : preds, 'political' : pol }
return sample
# ---------------------------------------------------
def main():
print("\nBegin PyTorch DataLoader demo ")
# 0. miscellaneous prep
T.manual_seed(0)
np.random.seed(0)
print("\nSource data looks like: ")
print("1 0 0.171429 1 0 0 0.966805 0")
print("0 1 0.085714 0 1 0 0.188797 1")
print(" . . . ")
# 1. create Dataset and DataLoader object
print("\nCreating Dataset and DataLoader ")
train_file = ".\\people_train.txt"
train_ds = PeopleDataset(train_file, num_rows=8)
bat_size = 3
train_ldr = T.utils.data.DataLoader(train_ds,
batch_size=bat_size, shuffle=True)
# 2. iterate thru training data twice
for epoch in range(2):
print("\n==============================\n")
print("Epoch = " + str(epoch))
for (batch_idx, batch) in enumerate(train_ldr):
print("\nBatch = " + str(batch_idx))
X = batch['predictors'] # [3,7]
# Y = T.flatten(batch['political']) #
Y = batch['political'] # [3]
print(X)
print(Y)
print("\n==============================")
print("\nEnd demo ")
if __name__ == "__main__":
main()
The execution of the demo program begins with:
def main():
print("\nBegin PyTorch DataLoader demo ")
# 0. miscellaneous prep
T.manual_seed(0)
np.random.seed(0)
. . .
In almost all PyTorch programs, it's a good idea to set the system random number generator seed values so that your results will be reproducible. Unfortunately, because of execution across multiple processes, sometimes your results are not reproducible even if you set the random generator seeds. But if you don't set the seeds, your results will almost certainly not be reproducible.
Next, a Dataset and a DataLoader object are created:
train_file = ".\\people_train.txt"
train_ds = PeopleDataset(train_file, num_rows=8)
bat_size = 3
train_ldr = T.utils.data.DataLoader(train_ds,
batch_size=bat_size, shuffle=True)
The custom PeopleDataset object constructor accepts a path to the training data, and a num_rows parameter in case you want to load just part of a very large data file during system development.
The built-in DataLoader class definition is housed in the torch.utils.data module. The class constructor has one required parameter, the Dataset that holds the data. There are 10 optional parameters. The demo specifies values for just the batch_size and shuffle parameters, and therefore uses the default values for the other 8 optional parameters.
The demo concludes by using the DataLoader to iterate through the source data:
for epoch in range(2):
print("\n==============================\n")
print("Epoch = " + str(epoch))
for (batch_idx, batch) in enumerate(train_ldr):
print("\nBatch = " + str(batch_idx))
X = batch['predictors'] # [3,7]
Y = batch['political'] # [3]
print(X)
print(Y)
In neural network terminology, an epoch is one pass through all source data. The DataLoader class is designed so that it can be iterated using the enumerate() function, which returns a tuple with the current batch zero-based index value, and the actual batch of data. There is a tight coupling between a Dataset and its associated DataLoader, meaning you have to know the names of the keys used for the predictor values and the dependent variable values. In this case the two keys are "predictors" and "political."
Implementing a Dataset Class
You have a lot of flexibility when implementing a Dataset class. You are required to implement three methods and you can optionally add other methods depending on your source data. The required methods are __init__(), __len__(), and __getitem__(). The demo PeopleDataset defines its __init__() method as:

def __init__(self, src_file, num_rows=None):
  all_xy = np.loadtxt(src_file, max_rows=num_rows,
    usecols=range(0,8), delimiter="\t",
    comments="#", dtype=np.float32)
  self.x_data = T.tensor(all_xy[:, 0:7],
    dtype=T.float32).to(device)
  self.y_data = T.tensor(all_xy[:, 7],
    dtype=T.int64).to(device)
The __init__() method loads data into memory from file using the NumPy loadtxt() function and then converts the data to PyTorch tensors. Instead of using loadtxt(), two other common approaches are to use a program-defined data loading function, or to use the read_csv() function from the Pandas code library. The max_rows parameter of loadtxt() can be used to limit the amount of data read. If max_rows is set to None, then all data in the source file will be read.
In situations where your source data is too large to fit into memory, you will have to read data into a buffer and refill the buffer when the buffer has been emptied. This is a fairly difficult task.
The demo data stores both the predictor values and the dependent values-to-predict in the same file. In situations where the predictor values and dependent variable values are in separate files, you'd have to pass in two source file names instead of just one. Another common alternative is to pass in just a single source directory and then use hard-coded file names for the training and test data.
The demo __init__() method bulk-converts all NumPy array data to PyTorch tensors. An alternative is to leave the data in memory as NumPy arrays and then convert to batches of data to tensors in the __getitem__() method. Conversion from NumPy array data to PyTorch tensor data is an expensive operation so it's usually better to convert just once rather than repeatedly converting batches of data.
The __len__() method is defined as:
def __len__(self):
return len(self.x_data)
A Dataset object has to know how much data there is so that an associated DataLoader object knows how to iterate through all data in batches.
The __getitem__() method is defined as:
def __getitem__(self, idx):
if T.is_tensor(idx):
idx = idx.tolist()
preds = self.x_data[idx, 0:7]
pol = self.y_data[idx]
sample = \
{ 'predictors' : preds, 'political' : pol }
return sample
It's common practice to name the parameter which specifies which data to fetch as "idx" but this is somewhat misleading because the idx parameter is usually a Python list of several indexes. The __getitem__() method checks to see if the idx parameter is a PyTorch tensor instead of a Python list, and if so, converts the tensor to a list. The method return value, sample, is a Python Dictionary object and so you must specify names for the dictionary keys ("predictors" in the demo) and the dictionary values ("political" in the demo).
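The three-method contract can be mimicked without PyTorch; this toy class is a sketch with the same shape as the demo (ToyDataset is a made-up name, and the dictionary keys match the demo's):

```python
class ToyDataset:
    """Minimal stand-in for a map-style Dataset:
    __init__, __len__, and __getitem__."""
    def __init__(self, x_rows, y_vals):
        self.x_data = x_rows
        self.y_data = y_vals

    def __len__(self):
        return len(self.x_data)

    def __getitem__(self, idx):
        if isinstance(idx, (list, tuple)):   # mimic the tensor-to-list branch
            return [self[i] for i in idx]
        return {"predictors": self.x_data[idx], "political": self.y_data[idx]}

ds = ToyDataset([[1, 0, 0.17], [0, 1, 0.08]], [0, 1])
print(len(ds), ds[1]["political"])  # 2 1
```

Anything that satisfies this contract can be indexed by single positions or by lists of positions, which is the shape a batching loader relies on.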
Using a Dataset in a DataLoader
The demo program creates a relatively simple DataLoader object using just the Dataset object plus the batch_size and shuffle parameters:
The other eight DataLoader parameters are not used very often. These parameters and their default values are presented in the table in Figure 2.
In some situations, instead of using a DataLoader to consume the data in a Dataset, it's useful to iterate through a Dataset directly. For example:
def process_ds(model, ds):
# ds is an iterable Dataset of tensors
for i in range(len(ds)):
inpts = ds[i]['predictors']
trgt = ds[i]['target']
oupt = model(inpts)
# do something
return some_value
You can use this pattern to compute model accuracy or a custom error metric.
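A runnable, PyTorch-free sketch of that accuracy pattern; the model here is a plain function standing in for a trained network, and the dictionary keys follow the snippet above:

```python
def process_ds(model, ds):
    """Count how often the model's prediction matches the target."""
    correct = 0
    for i in range(len(ds)):
        inpts = ds[i]["predictors"]
        trgt = ds[i]["target"]
        if model(inpts) == trgt:
            correct += 1
    return correct / len(ds)

# toy dataset: target is just predictor mod 2
ds = [{"predictors": x, "target": x % 2} for x in range(10)]
acc = process_ds(lambda x: x % 2, ds)
print(acc)  # 1.0
```

Swapping in a different metric (per-class counts, a custom error) only changes the body of the loop, not the iteration pattern.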
Using Other DataLoaders
Once you understand how to create a custom Dataset and use it in a DataLoader, many of the built-in PyTorch library Dataset objects make more sense than they might otherwise. For example, the TorchVision module has data and functions that are useful for image processing. One of the Dataset classes in TorchVision holds the MNIST image data. There are 70,000 MNIST images. Each image is a handwritten digit from '0' to '9'. Each image has size 28 x 28 pixels and pixels are grayscale values from 0 to 255.
A Dataset class for the MNIST images is defined in the torchvision.datasets package and is named MNIST. You can create a Dataset for MNIST training images like so:
import torchvision as tv
tform = tv.transforms.Compose([tv.transforms.ToTensor()])
mnist_train_ds = tv.datasets.MNIST(root=".\\MNIST_Data",
train=True, transform=tform, target_transform=None,
download=True)
Instead of specifying the location of the source data, the download=True argument means the first time the Dataset object is created, the code reaches out to the Internet and downloads the MNIST data in compressed format from an invisible-to-you hard-coded URL, decompresses the data, and stores the data in a collection of directories and sub-directories in local root folder named "MNIST_Data", which you must first create on your machine. On subsequent calls to the MNIST Dataset constructor, the data is loaded from the local stored data cache instead of reloading from the internet, in spite of the somewhat misleading download=True argument.
The train=True argument instructs the constructor to fetch the training data rather than the test data. As you saw in the PeopleDataset example in this article, in most situations you want to transform the source data into PyTorch tensors. The MNIST Dataset does this by passing in a special built-in transform function named ToTensor().
After an MNIST Dataset object has been created, it can be used in a DataLoader as normal, for example:
mnist_train_dataldr =
T.utils.data.DataLoader(mnist_train_ds,
batch_size=2, shuffle=True)
for (batch_idx, batch) in enumerate(mnist_train_dataldr):
print("")
print(batch_idx)
print(batch)
input() # pause
To recap, there are many built-in Dataset classes defined in various PyTorch packages. They have different calling signatures, but they all read in data from some source (often a hard-coded URL), and have a way to convert their data to PyTorch tensors. After a built-in Dataset has been created, it can be processed by a DataLoader object using the enumerate() function, just as with a custom Dataset.
https://visualstudiomagazine.com/articles/2020/09/10/pytorch-dataloader.aspx
Converting an Instancer to Geometry #2
Type: Maya Python Script (py) Name: ark_instToGeo Version: 2.0 Released: 2016.12.17 Download (Save as...)
I have reworked my old utility for converting an instancer into geometry – it is now significantly faster and more stable, understands all possible instance rotation types (rotation and aim), shows progress, and prints final statistics to the script editor.

Usage:
- put the ark_instToGeo.py file into any folder Maya reads scripts from (the list can be obtained with the MEL command "getenv PYTHONPATH;");
- run the python command:

from ark_instToGeo import *; ark_instToGeo()

- select the instancers you want to convert and press Convert – the conversion will be performed, static channels will be deleted, and an euler filter will be applied.

Options:
Conversion type:
- Duplicate – the created objects will be independent duplicates;
- Hybrid – the created objects will be duplicates, but their shape stays connected to the source object (outMesh->inMesh; the connection can be broken manually to make a duplicate independent of the source object);
- Instance – the created objects will be instances (slower than the previous two);
Start from Current Frame – if enabled, the conversion starts from the current frame, regardless of the timeline or a manually set frame range;
Playback/Custom Range – in the first case the frame range is taken from the timeline, in the second you can enter start and end frame values;
Convert – starts the conversion;
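The euler filter mentioned in the usage notes prevents 359°-to-1° style flips in baked rotation curves; a pure-python sketch of the idea (not Maya's actual implementation, and the function name is made up):

```python
def euler_filter(angles):
    """Unwrap a sequence of angles in degrees so that consecutive
    frames never jump by more than 180 degrees."""
    out = [angles[0]]
    for a in angles[1:]:
        prev = out[-1]
        while a - prev > 180:    # shift down until the step is small
            a -= 360
        while a - prev < -180:   # shift up until the step is small
            a += 360
        out.append(a)
    return out

print(euler_filter([350, 10, 30]))  # [350, 370, 390]
```

The curve keeps the same physical orientation at every frame (angles differ only by multiples of 360°), but interpolation between keys no longer spins the long way around.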
Posted on December 18, 2016 at 17:22 by Ark · Permalink
In: FX · Tagged with: ark_instToGeo, instancer, maya, particles, python, sag_instancerToGeometry, script, tool, utility
on 9 January 2017 at 2:08
Permalink
Hi! Thank you very much for your utility. But it does seem to work only with the first (the 0) item on the instancer’ list. Is this limitation normal, by design, it’s a bug, it’s me?
Here’s the example file that does this (maya 2016 sp2):
on 9 January 2017 at 14:41
Permalink
Hi. You’re using ‘cycle’ option to randomize objects – I’ve never used it this way and haven’t accounted for that, so, yep, that’s a limitation and I will add it in the next version. But you can always use the ‘usual’ method – set ‘cycle’ to ‘none’ in instancer, create per-particle attribute with random number and plug it into ‘Object Index’. Check this scene:
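The suggested workaround can be sketched outside Maya in plain Python – no maya.cmds calls here, and `object_indices` is an invented name; this only shows the seeded-random idea behind baking a per-particle number into ‘Object Index’:

```python
import random

def object_indices(particle_ids, object_count, seed=0):
    """Pick a pseudo-random instanced-object index per particle.
    Seeding makes the choice repeatable, which is the point of baking
    the random number into a per-particle attribute instead of
    re-rolling it every evaluation."""
    rng = random.Random(seed)
    return {pid: rng.randrange(object_count) for pid in particle_ids}

indices = object_indices(range(5), object_count=3)
print(indices)
```

In Maya the same values would be written into a per-particle attribute and connected to the instancer's Object Index, as described in the reply above.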
on 9 January 2017 at 14:51
Permalink
Thank you very much for your great, fast answer and suggestion!
on 10 January 2017 at 17:44
Permalink
Awesome! Thanks 🙂
Might help if it was possible to stop the conversion with ESC
on 13 January 2017 at 1:56
Permalink
Yep, I’m looking into that 🙂
on 9 May 2017 at 6:04
Permalink
Hello Ark! just a quick question, does this works with maya 2017? Im trying to use it and my viewport freeze when I click on convert
on 12 May 2017 at 13:17
Permalink
Hi Daniel. It works in 2017. In this version I’ve disabled viewport update, since it’s much-much faster this way – you should see progress bars moving though, that means everything is working. If not – send me your scene and I’ll check what is going on.
on 15 May 2017 at 1:47
Permalink
Hi Ark, I created a lot of meshes using the duplicate option and it works great, but all the objects are hidden within their own group – is this working correctly? I mean, do I have to unhide them all manually?
What’s the purpose of the groups and the meshes being hidden by default?
I’m using Maya 2017 Update 3.
Thank You very much 🙂
on 15 May 2017 at 1:51
Permalink
Don’t worry, I ungrouped everything, my fault… thank you so much for your good work 🙂
on 5 June 2017 at 14:11
Permalink
Arkady, thank you, works better than on the Sawa.
on 8 June 2017 at 14:49
Permalink
This script is amazing! Thank you!
on 13 June 2017 at 5:26
Permalink
thanks for this script, the script its so cool 😀
on 14 July 2017 at 8:12
Permalink
I can’t convert an instancer to geometry when using a rotation expression.
It says the problem is at line 397 (list index out of range).
Everything else is fine.
Please help!
on 14 July 2017 at 13:50
Permalink
Rotation expressions should work. Please, drop me a simplified scene with this problem and I’ll take a look.
on 14 July 2017 at 18:28
Permalink
Getting an error on convert, is there something I am missing or need to try?
C:/Users/name/Documents/maya/2016/scripts\ark_instToGeo.py line 198: No object matches name: Dressing01ModelGeoPublish_v004:forest_2:rotation.instanceAttributeMapping #
on 14 July 2017 at 19:09
Permalink
Please, check if the sample scene works for you:
And email me your scene with the problem.
on 14 July 2017 at 20:25
Permalink
Ark, example scene works great!
on 14 July 2017 at 20:39
Permalink
“No object matches name: Dressing01ModelGeoPublish_v004:forest_2:rotation.instanceAttributeMapping #”
on 27 July 2017 at 3:49
Permalink
this is awesome, just what I needed great script.
on 8 August 2017 at 5:08
Permalink
Hi Ark, Congrats for developing a wonderful script.
I created a lot of meshes using the instance option and it works great, but all the objects are hidden within their own group. Do I have to unhide them all manually? What’s the purpose of the groups and the meshes being hidden by default?
How can I unhide all the meshes without manually unhiding them one by one? Is there any possibility to do that? It’s an urgent requirement for me.
Ungrouping all the meshes loses animation keys, as keys are applied to the group nodes. How can this be solved?
Any help would be highly appreciated.
Thanks a lot for a great script.
on 8 August 2017 at 5:47
Permalink
Got the solution unhiding all the meshes in groups.
Used Select > Hierarchy option to select all the meshes inside the group nodes and visibility turned on for all the meshes in a single click.
Wonderful script & working great.
Thanks a lot again.
on 8 August 2017 at 10:23
Permalink
Hi Srinivas. Yes, the script just takes whatever settings you have on your original objects, including animations and so on. So, to make duplicates/instances visible, you need to have original objects visible.
on 18 August 2017 at 14:17
Permalink
Just used your script, worked a treat.
Thank you very much
Brilliant Stuff !!!
on 16 September 2017 at 7:28
Permalink
Started a conversion on a scene with about 500,000 particles instanced by a small 4-poly pyramid, and set the range to one frame – it’s been running for the past 2 hours on a pretty fast system. Do you think it’s normal for it to take so long? I can’t see the status bar as I opened a window over Maya, and if I try to bring Maya back to the front it never refreshes, so I can’t see if it’s processing.
on 16 September 2017 at 9:01
Permalink
The problem is in the task you’re trying to accomplish – 500,000 separate objects in maya scene is a heavy thing, 4-poly or even empty groups. Instances are even slower. I haven’t done more than 100,000 (duplicates), it took about 10min to convert. Working with such a scene was very problematic though. So, try smaller particle count and duplicates mode, if it’s fast enough for you, switch to instances (if you need them, of course), then increase particle count.
Listing 3.1 outlines a basic servlet that handles GET requests. GET requests, for those unfamiliar with HTTP, are the usual type of browser requests for Web pages. A browser generates this request when the user enters a URL on the address line, follows a link from a Web page, or submits an HTML form that either does not specify a METHOD or specifies METHOD="GET". Servlets can also easily handle POST requests, which are generated when someone submits an HTML form that specifies METHOD="POST". For details on the use of HTML forms and the distinctions between GET and POST, see Chapter 19 (Creating and Processing HTML Forms).
import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class ServletTemplate extends HttpServlet {
  public void doGet(HttpServletRequest request,
                    HttpServletResponse response)
      throws ServletException, IOException {
    // Use "request" to read incoming HTTP headers
    // (e.g., cookies) and query data from HTML forms.
    // Use "response" to specify the HTTP response status
    // code and headers (e.g., the content type, cookies).
    PrintWriter out = response.getWriter();
    // Use "out" to send content to browser.
  }
}
Servlets typically extend HttpServlet and override doGet or doPost, depending on whether the data is being sent by GET or by POST. If you want a servlet to take the same action for both GET and POST requests, simply have doGet call doPost, or vice versa.
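The delegation trick is not specific to the servlet API. Purely to illustrate the pattern, here is the equivalent idea in Python's built-in http.server (not Java – the handler class and message text are invented for this sketch), where do_POST simply reuses do_GET:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class TemplateHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"same action for GET and POST"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    # Take the same action for both verbs by delegating, just as a
    # servlet's doPost can simply call doGet (or vice versa).
    do_POST = do_GET

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), TemplateHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_address[1]
with urllib.request.urlopen(url, data=b"form=data") as resp:  # a POST
    print(resp.read().decode())  # → same action for GET and POST
server.shutdown()
```

The POST request receives exactly the same response body as a GET would, because both verbs are routed through one handler method.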
Both doGet and doPost take two arguments: an HttpServletRequest and an HttpServletResponse. The HttpServletRequest lets you get at all of the incoming data; the class has methods by which you can find out about information such as form (query) data, HTTP request headers, and the client's hostname. The HttpServletResponse lets you specify outgoing information such as HTTP status codes (200, 404, etc.) and response headers (Content-Type, Set-Cookie, etc.). Most importantly, HttpServletResponse lets you obtain a PrintWriter that you use to send document content back to the client. For simple servlets, most of the effort is spent in println statements that generate the desired page. Form data, HTTP request headers, HTTP responses, and cookies are all discussed in the following chapters.
Since doGet and doPost throw two exceptions (ServletException and IOException), you are required to include them in the method declaration. Finally, you must import classes in java.io (for PrintWriter, etc.), javax.servlet (for HttpServlet, etc.), and javax.servlet.http (for HttpServletRequest and HttpServletResponse).
However, there is no need to memorize the method signature and import statements. Instead, simply download the preceding template from the source code archive at and use it as a starting point for your servlets.
Template
Consider a case where a Haunted site's users felt that they should make a repository of all haunted movies and share it with others. A movie, however, has a lot of information to go with it—producers, director, actors, release date, distributors, storyline, etc. It would be easy to create a page with all the information, but without a common format there would be no consistent summary a visitor can rely on when opening any movie's page. That is how Wikipedia shows a summary of every James Bond movie at the right side of the page. Since all movies have some common attributes but different values, it uses the same format for all the James Bond movie summaries—only the attributes' values change, not the attributes themselves. Can we use the same thing on our Haunted site?
MediaWiki has the solution, and it is known as a template. A template is a page that can be inserted into another page via a process called transclusion. Templates usually reside in the Template namespace in MediaWiki. Templates are useful for any text for which one wants a copy in two or more pages, and there is no need for each copy to be edited independently, to adapt it to the page it is in. Templates can also be parameterized—we can add parameters to a template in order to show different content based on the parameter. This lets a template act like a subroutine. Looking at it from other angle, a template can be thought of as being like the include file that we use in programming.
Creating our First Template
The syntax for insertion of the page Template:templatename is {{ templatename }}. This is called a template tag. If the page Template:templatename does not exist, then {{ templatename }} works as [[Template:templatename]], a link to a non-existing page, leading to the edit page for the template. Thus, one way of creating a template is putting in the tag first, and then following the link. Let's create our first template using this technique. Write down the following text in the URL section of the browser:
This will take us to an empty non-existent template page. We can edit the template and save it as our template. Let's make the movie summary information template for our movie section. It will contain the movie name, a poster, screenwriter, cast details, etc. Editing a template page is similar to editing a normal page. There is no difference at all, and so we can use wiki syntax in our template page. Let us add the following content in our template page for a movie named "The Haunting" and save it:
'''The Haunting''' <br>
[[Image:200px-The_Haunting_film.jpg]] <br>
'''The Haunting''' film poster <br>
'''Directed by''' Jan de Bont<br>
'''Produced by''' Donna Roth,<br>
Colin Wilson<br>
'''Written by''' Novel:<br>
Shirley Jackson <br>
'''Screenplay:'''<br>
David Self<br>
'''Starring''' Lili Taylor,<br>
Catherine Zeta-Jones,<br>
Owen Wilson,<br>
Liam Neeson<br>
'''Distributed by''' DreamWorks<br>
'''Release date(s)''' July 20, 1999<br>
'''Running time''' 113 minutes<br>
'''Language''' English<br>
'''Budget''' ~ US$80,000,000<br>
We can now call this template from any of our pages using a pair of double curly braces {{ }} with the name of the template between the braces. Assuming that we are creating a new page where we will show all stories, let's add the template to a story page. Open any of the story pages that we have created so far, and add the following line at the beginning of the edit page:
{{Movie_Summary}}
Now save the page and preview it in the browser. You will see true magic now; the content of the template is shown in the story page as follows:
We put the template tag at the start of the page, but you can always put it anywhere you want in the content page. We can use templates to create a header, a footer, the copyright information, special messages, etc., for our site. This is a very simple but powerful use of templates. Think about a situation where we have a lot of movie information available. What we did is just for a single movie, but we can use the same template for other movies with the same type of attributes. When we use templates, we don't have to worry about changing the summary attributes in each and every page. We will just change the template and all the pages will be updated, since pages include that template. We can do amazing things using templates. Also, since they are similar to normal pages, we can always create nice-looking templates using tables, images, links, etc.
Templates work on a project basis. So a template in one wiki will not work in another wiki site. In order to use the same template on another wiki site, we have to build the same template in that site. Also when we change a template, we must be careful about the impact of the changes in the pages where the template is actually used.
Parameterizing Templates
We already know that we can add parameters in our template to make it work like a subroutine. I hope all of you know what a subroutine means; if not you could visit the following URL:. Based on its parameters, a subroutine performs some task and shows results. We know templates are not subroutines, but they can be used as subroutines for performing different tasks.
Take the example of our movie summary template. We have hardcoded the name of the movie and other attributes, but we can use the same template for another movie by changing the attributes' values. So it is almost the same as adding the content to each page. However, if we can parameterize the template, it will definitely make our task easier.
What we can do is make the movie name, movie poster, writer, actors' names etc., into variables that will be set by parameters and passed from the calling page. All the template parameters are divided into two categories: named parameters and numbered parameters. In order to create a parameterized template, we need to perform the following two tasks every time:
- In the template page, declare parameters that will be changed based on the passed values.
- Call the template with proper values from the calling page.
Parameters are declared with three pairs of braces with the parameter name inside. {{{myVar}}} declares a parameter name myVar. So in the template, the parameter declaration will done as follows:
{{{parname1|default}}}, {{{parname2|default}}}
and in the tag or calling page, we have to write it as follows:
{{templatename|parname1=parvalue1|parname2=parvalue2}}
The default option in the parameter declaration is totally optional and can be different for each parameter. Here default stands for the default value of the parameter: it is used whenever no value has been provided for that parameter from the calling page.
You will see that we are using the parameter name in both the template definition and the calling page. This is known as a named parameter. There is another type of parameter as well, called a numbered parameter, which uses a number instead of a name. With numbered parameters, the declaration looks like this:
{{{1|default}}}, {{{2|default}}}
and in the calling page, we have to write down the tag as follows:
{{templatename|parvalue1|parvalue2}}
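To make the substitution rules concrete, here is a small Python sketch – not MediaWiki's real parser, and the `expand` function is invented for illustration – that expands both named and numbered parameter markers, including the optional default:

```python
import re

def expand(template: str, named=None, numbered=None) -> str:
    """Expand {{{name|default}}} / {{{1|default}}} markers roughly the
    way MediaWiki substitutes template parameters (simplified)."""
    named = named or {}
    numbered = numbered or []

    def sub(match):
        key, _, default = match.group(1).partition("|")
        if key.isdigit():                    # numbered parameter
            index = int(key) - 1
            if 0 <= index < len(numbered):
                return numbered[index]
        elif key in named:                   # named parameter
            return named[key]
        return default                       # fall back to the default

    return re.sub(r"\{\{\{([^{}]*)\}\}\}", sub, template)

print(expand("'''{{{name}}}''' directed by {{{director|unknown}}}",
             named={"name": "The Haunting"}))
# → '''The Haunting''' directed by unknown
print(expand("{{{1}}} / {{{2|n/a}}}", numbered=["The Haunting"]))
# → The Haunting / n/a
```

Note how the missing `director` value falls back to its declared default, exactly as described above.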
Now back to our movie summary example. We want to convert our movie summary template to a named parameterized template. We will use different parameters for different attributes of the template. We will also use a table to make the template look better. Here is the code for the template:
{|style="width:250px; " border="0"
|-
|width=100px|
|width=100px|
|-
| colspan="2" align="center" |'''{{{name}}}'''
|-
| colspan="2" align="center" |[[Image:{{{image}}}
|{{{image_size|200px}}}]]
|-
| colspan="2" align="center" |''{{{caption}}}''
|-
|'''Directed by'''||{{{director}}}
|-
|'''Produced by'''||{{{producer}}}
|-
|'''Written by'''||{{{writer}}}
|-
|'''Screenplay:'''||{{{screenplay}}}
|-
|'''Starring'''||{{{starring}}}
|-
|'''Distributed by''' ||{{{distributor}}}
|-
|'''Release date(s)'''||{{{released}}}
|-
|'''Running time'''|| {{{runtime}}}
|-
|'''Language'''|| {{{language}}}
|-
|'''Budget'''|| {{{budget}}}
|}
Now save the template and go to the "Haunted Movie" page, where we have included this template. We need to add parameters to the tag, and pass values to the parameters. Write the following tag at the top of the edit box:
{{Movie_Summary |
name = The Haunting |
image = 200px-The_Haunting_film.jpg |
caption = ''The Haunting'' film poster |
writer = '''Novel:'''<br>[[Shirley Jackson]] |
screenplay = [[David Self]] |
starring = [[Lili Taylor]],<BR>[[Catherine
Zeta-Jones]],<br>[[Owen Wilson]],<br>[[Liam Neeson]] |
director = [[Jan de Bont]] |
producer = [[Donna Roth]],<br>[[Colin Wilson]] |
distributor = [[DreamWorks]] |
released = [[July 20]], [[1999]] |
runtime = 113 minutes |
language = English |
budget = ~ US$80,000,000
}}
After saving the page, you will see that the page looks the same with parameterized values. Here is the page we will be shown on the screen:
Change the values of the parameters, and you will see the difference. We can easily create hundreds and thousands of movie pages with the help of this template. All we need to do is call the template with parameter values, and the template will do the rest.
The same task can be performed with numbered parameters. The numbered parameters will start from 1 and continue until all the parameters are numbered. For numbered parameters, the declaration in the template definition page will be as follows:
{|style="width:250px; " border="0"
|-
|width=100px|
|width=100px|
|-
| colspan="2" align="center" |'''{{{1}}}'''
|-
| colspan="2" align="center" |[[Image:{{{2}}}]]
|-
| colspan="2" align="center" |''{{{3}}}''
|-
|'''Directed by'''||{{{4}}}
|-
|'''Produced by'''||{{{5}}}
|-
|'''Written by'''||{{{6}}}
|-
|'''Screenplay:'''||{{{7}}}
|-
|'''Starring'''||{{{8}}}
|-
|'''Distributed by''' ||{{{9}}}
|-
|'''Release date(s)'''||{{{10}}}
|-
|'''Running time'''|| {{{11}}}
|-
|'''Language'''|| {{{12}}}
|-
|'''Budget'''|| {{{13}}}
|}
and the calling page tag will look like the following, with changed parameters to create a different movie summary with the same template:
{{Movie_Summary
|The Haunting
|200px-The_Haunting_Poster.jpg
|The Haunting DVD cover
|[[Robert Wise]]
|[[Robert Wise]]
|'''Novel:'''<br>[[Shirley Jackson]]
|[[Nelson Gidding]]
|[[Julie Harris]]<br />[[Richard Johnson]]<br />[[Claire Bloom]]
|[[MGM]]
|[[September 18]], [[1963]] ([[USA]])
|112 min.
|[[English language|English]]
|
}}
The output of the page will be:
So we can see that by changing the attributes' values we can use the same template in any page we want. This helps organize the content more effectively and also give a different identity to different types of content.
Named versus Numbered Parameters
Even though both named and numbered parameters can be used, there are some places where named parameters are better than numbered parameters, and vice versa. Here are some things to help you choose between named and numbered parameters.
Named parameters are used when you know which parameters exist and their exact names. So knowing the names of the parameters is a must for named parameters. Also, we can always mix the order of parameters in the parameter list, so no particular order is fixed; any order works as long as all the required parameters are in the list. Named parameters are also very easy to understand, since each parameter can be given a meaningful name and purpose.
For numbered parameters, on the other hand, we don't have to write the parameter's name followed by an assignment operator; we just pass the parameter separated by a pipe character. What is important here, however, is that the order of the parameters must be maintained, or else parameters will get wrong values. Numbers are international and they don't need translation for different languages or projects. So unlike named parameters, numbered parameters do not require any translation of the parameter list for using with multilingual sites.
Section
Sections are an efficient way of organizing content inside a page, since they allow us to generate a table of contents automatically, as well as let us edit section contents rather than the whole page at once. Each section in an article has a corresponding Edit link on the right side of the section. This link takes us to the article edit page, but with that particular section only. Isn't it amazing? Suppose an article has 100 sections and we want to edit one of them. Conventionally, we would have to go through a huge page of 100 sections in the edit box, which would not only look very cumbersome, but also be very difficult to trace and edit. Sections help us here by ensuring that we get only the relevant section's content while leaving other sections untouched, and that users don't get lost in the huge amount of content. If a page is large, we can also break it into different sub-pages, but that is not always the right thing to do. Let's see a comparison between using sections and creating separate pages for a big article.
Comparison between Sections and Separate Pages
Creating a Table of Contents Using Sections
We know that sections or headers can be used to create a table of contents for any article. If the article contains more than three sections, then a table of contents is automatically generated. We can also stop the automatic creation of a table of contents by one of the following methods:
- Turning it off in the user preference settings
- In the article edit box, making use of the magic word __NOTOC__.
We can also force the system to show a table of contents even when we have fewer than three sections in the article. This can be done by adding the __FORCETOC__ or __TOC__ magic words inside the article. If we use the __FORCETOC__ magic word, then the table of contents is placed before the first header in the article, but if we use the __TOC__ magic word in the article, the table of contents is placed at the position of the __TOC__ word. This gives us great flexibility in moving the table of contents to a desired position, such as to the right, center, or left, or inside a table, and in choosing the number of times we want to show the table of contents in the article.
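The automatic table of contents is derived from the page's heading lines. The collection step can be sketched in Python – again an illustrative sketch, not MediaWiki's actual code, and `build_toc` is an invented name:

```python
import re

def build_toc(wikitext: str) -> list:
    """Collect (level, title) pairs from ==Heading== lines, the raw
    data a table of contents is generated from (simplified)."""
    toc = []
    for line in wikitext.splitlines():
        # 2-6 equals signs on each side mark a heading; the count
        # determines the nesting level.
        match = re.match(r"^(={2,6})\s*(.*?)\s*\1\s*$", line)
        if match:
            toc.append((len(match.group(1)) - 1, match.group(2)))
    return toc

page = """== Plot ==
text
=== Cast ===
more text
== Reception =="""
print(build_toc(page))  # → [(1, 'Plot'), (2, 'Cast'), (1, 'Reception')]
```

A renderer would then number and indent these entries to produce the visible table of contents.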
Redirection
As the name suggests, redirection is the process in which users are redirected to a particular page based on the setup or action defined by someone. Most of us have some familiarity with redirection—when we visit a website that has been moved, we see a little redirection message, and in a few seconds we are moved to a new page. There are a lot of other places where you would find redirection useful. When a page in MediaWiki is moved or renamed, a redirection is created automatically by the system.
A redirect is a page with only the following kind of content:
#REDIRECT [[link in internal link style]]
So far we know that a redirect is used for page movement and renaming in MediaWiki. However, redirects can be used for other purposes too.
- Finding a page.
- Conveniently going to a page.
- Linking indirectly to a page, without the need for a piped link.
In order to create a redirect, we have to create a new page or use an existing page from the site. If the page is new, then add the following line at the beginning of the edit box:
#REDIRECT [[A night in the jungle]]
Suppose the redirect page we have just created is named "Story". Now, whenever someone tries to access the "Story" page, he or she will be redirected to the "A night in the jungle" page. When the user is redirected to the page, he or she will see a small caption at the top of the page, citing details of the page from which they have been redirected:
Editing a redirect is as simple as creating it. Click the redirect page's name from any of the pages where you see the text Redirected from …. This will take you to the redirect page. Edit the page as you would a normal page, and save it.
We can add additional text after the redirect tag and the link. This can be used as an explanation when we visit the redirect page itself. Extra lines of text are automatically deleted when saving the redirect. The page will not redirect if there is anything on the page before the redirect. Also, there must be no spaces between the # and the REDIRECT. We also have to remember that interwiki redirects and special page redirects are not possible with the current features, and that redirecting to an anchor is not possible.
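The rules above – nothing before the tag, and no space between the # and REDIRECT – are easy to check mechanically. A small Python sketch (the `redirect_target` helper is invented for illustration, not part of MediaWiki):

```python
import re

def redirect_target(page_text: str):
    """Return the target of a #REDIRECT page, or None if the page is
    not a valid redirect. Mirrors the rules in the text: nothing may
    precede the tag, and '#' must touch 'REDIRECT' directly."""
    match = re.match(r"\s*#REDIRECT\s*\[\[([^\]|]+)",
                     page_text, re.IGNORECASE)
    return match.group(1) if match else None

print(redirect_target("#REDIRECT [[A night in the jungle]]"))
# → A night in the jungle
print(redirect_target("intro text\n#REDIRECT [[Story]]"))  # → None
print(redirect_target("# REDIRECT [[Story]]"))             # → None
```

The two failing cases show exactly why a redirect silently stops working when extra content precedes the tag or a space sneaks in after the #.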
Summary
We have seen MediaWiki Content Organizing Features like Templates, Sections, and Redirection.
If you have read this article you may be interested to view :
We will build an API that allows people to text a Twilio phone number and receive picture messages with photos from a cloud storage account.
Twilio makes it Easier
We’re going to build this solution by doing three things:
- Purchasing a Twilio phone number with SMS and MMS capabilities on Twilio.com
- Creating an API to send the photos from a cloud storage account using Visual Studio 2017, C#, and the latest version of the Twilio Nuget package
- Connecting our API to our Twilio number using ngrok before sharing the number and taking more pictures
Let’s get started.
Purchasing Your Twilio Number
To purchase a Twilio phone number that can send MMS messages, log in to twilio.com, scroll down to the Super Network section of the console, and then click on Phone Numbers. You will next click on the Purchase a Number link (or the ‘+’ symbol if you already have other numbers already purchased), enter in an area code of your choice, select the SMS & MMS check boxes, and then click on the Search button. When eligible numbers are returned, select one of your choosing and then click on the Buy button. The same steps are shown in our animated gif below:
Keep in mind, this will only work with US based phone numbers. If you’re considering sending messages from or to non US based phone numbers, check out Twilio’s MMS Converter. This tool enables Twilio to send a URL in place of the picture message in countries where MMS support is limited. With our Twilio number purchased, Step 1 is complete!
Setting Up Your Visual Studio Web API Project
Let’s start Visual Studio and create an ASP.NET Web Application targeting the .NET Framework as shown in our animated GIF below. We will then create an empty Web API project. If you are unfamiliar with Visual Studio and would like more detailed steps, check out Create a Web API Project at docs.microsoft.com.
Next we will add the Twilio and JSON.Net Nuget packages to our project. The Twilio package contains the Twilio helper library, which provides objects to interact with Twilio’s APIs. The JSON.Net package allows us to do some JSON parsing in our code.
In the Solution Explorer window, right click on the References folder and from the context menu select Manage Nuget Packages. That should open up the Package manager window where you can search for Twilio followed by Newtonsoft.Json and then install the latest stable versions available.
Time to Write Some Code
Because this API is being built to return photos, we’re setting up our API to respond to HTTP GET requests to this url:, where xxxxx is the port number the application runs on. To do that, we need a Controller class. So in the Solution Explorer window, right-click on the Controllers folder and select Add from the menu, followed by Controller. A new dialog window should open up. In there, let’s select Web API 2 Controller – Empty and name this controller PhotosController. At this point your project should have a PhotosController.cs file. Add the statement
using Twilio.TwiML; to the file. This makes objects from the Twilio SDK available to use in our code. After doing that, your code should look like:
With that complete, it’s time to create the method that will respond to incoming SMS messages. Copy the code below into your PhotosController class so that it looks like the following:
This method creates a Twilio response object
MessagingResponse and uses the
Body method to tell Twilio to respond to an incoming text message with this generic message. Then it converts our
MessagingResponse object to an XML string that Twilio can parse by calling
response.ToString(). Finally it returns the XML as the Content of the HTTP response.
Running the project will open a web browser on your computer with your app. Navigating to, (where you replace xxxxx with the port used on your machine) should give you the following:
Time to Include Photos
You are probably looking at the output and saying, “where’s my picture?” Twilio looks for a URL in the text of a Media element, which should be a child to the Message element. This is where we need to provide a URL to the photo that we want to include. With my son’s photos stored on Dropbox, I will walk through getting a URL for items stored there. You can use any repository you would like as long as the item can be downloaded via a publicly accessible URL.
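The blog builds this XML through the C# helper library. Purely to show the shape Twilio parses – a Media element nested under Message – here is the same structure assembled with Python's standard library (the photo URL is a placeholder):

```python
import xml.etree.ElementTree as ET

# Build the TwiML shape: <Response><Message><Body>…</Body>
# <Media>…</Media></Message></Response>. The URL is a placeholder,
# not a real hosted photo.
response = ET.Element("Response")
message = ET.SubElement(response, "Message")
ET.SubElement(message, "Body").text = "Here is your photo!"
ET.SubElement(message, "Media").text = "https://example.com/photo.jpg"

print(ET.tostring(response, encoding="unicode"))
```

The key detail is the nesting: Twilio only attaches the picture when Media is a child of Message, alongside the Body text.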
To access items from Dropbox, we need to
- Sign up for a dropbox account if you don’t already have one
- Create a new app on the Dropbox Developer portal. We will create an app folder that we will store our baby photos in
- Generate an access token used for authorization purposes
- Manually upload some photos to the newly created app folder
- Select a photo from our new folder and generate a publicly accessible URL for it
Steps 1 – 4 are pretty simple. In order to connect to Dropbox, we need to create an app on the Dropbox Developer portal. Once we navigate there and click on the Create app button in the top right corner of the page, a new page opens that allows us configure a Dropbox app. We will want to use the Dropbox API, set up an App Folder, and give our app a name as shown in the screenshot below.
After filling in those details and clicking the Create app button on the bottom of the page, we are taken to a configuration page for our Dropbox app. To create the access token we need, we will want to click on the Generate button in the OAuth2 section of the app creation page, as shown in the photo below.
Once you click that, you should save the token somewhere as we’ll be using it again.
Before we do anything else, let’s upload some photos to our new app folder. The easiest way to do this is to navigate to this url:, where you replace UPDATE_YOUR_APP_NAME with your app’s name, and then upload one or two photos there.
Time to cover step 5. Back in our API project, let's create a DropboxHelper class by right-clicking on the project name in the Solution Explorer window –> Add –> Class. In the dialog box, we will name our class DropboxHelper. We can replace all of the code in that file with the code below:
This code contains 2 methods: GetFilePaths returns a list of items that our app has access to; GetFileUri returns a publicly accessible link for any one of those items. Because I only have pictures stored in my folder, I will only receive URLs for those items but this solution can work with other file types as well.
To properly authenticate our requests, replace the string
UPDATE_YOUR_ACCESS_TOKEN_HERE with the access token generated in step number 2 above. Once we do that, our helper class should successfully return a URL for items in the Dropbox app folder.
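For reference, temporary links come from Dropbox's v2 HTTP endpoint `/2/files/get_temporary_link`, which the C# helper presumably wraps. Here is a Python sketch that builds – but does not send – that authorized request; the token and file path are placeholders:

```python
import json
import urllib.request

ACCESS_TOKEN = "UPDATE_YOUR_ACCESS_TOKEN_HERE"  # placeholder token

def temporary_link_request(path: str) -> urllib.request.Request:
    """Build (without sending) the POST request for a temporary,
    publicly accessible link, per Dropbox's v2 HTTP API."""
    return urllib.request.Request(
        "https://api.dropboxapi.com/2/files/get_temporary_link",
        data=json.dumps({"path": path}).encode(),
        headers={
            "Authorization": "Bearer " + ACCESS_TOKEN,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = temporary_link_request("/photo1.jpg")  # hypothetical file path
print(req.full_url)
print(req.get_method())  # → POST
```

Sending the request (e.g. with `urllib.request.urlopen`) returns JSON containing a short-lived `link` URL, which is what gets handed to Twilio as the Media URL.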
Time to Put It All Together
To connect our API to our DropboxHelper, let’s copy the code below into our PhotosController class. We’re going to put it underneath the Get method
GetPhotoUrl connects to Dropbox, gets all of the items, and then grabs a temporary publicly accessible URL for a randomly selected item. To finish off, we’ll update the Get method to replace the Message object creation with the below:
If we now run our app and browse to our API, we should see something similar to the below:
Step 2 complete!
Connect Our Code to Twilio
To allow Twilio to reach our API, we will use one of my favorite tools, ngrok, to expose our application running locally to the internet. If you need to download it, grab it from here. Once you have it downloaded, start up a command prompt and navigate to the directory that you saved the download to in your command prompt window. Then use the following command to start it:
ngrok http -host-header="localhost:xxxxx" xxxxx
where you’ll replace the xxxxx with your port number from the previous step.
With ngrok up and running, let’s head over to twilio.com/console/phone-numbers, open up our recently purchased phone number and change the URL configuration for incoming SMS messages so that it uses an HTTP GET request pointed to. When I made the change in my code and sent a message to my Twilio phone number, I received:
Achievement Unlocked!
Let’s recap what we’ve done. We built an API that will reply to incoming text messages on my Twilio number with updated photos from Dropbox. This API can be improved to grab data from another source, or prevent recipients from getting duplicate photos, or you could even build a website in front of this API to help manage it better. If you’d like to download the completed code, grab it from Github. Any issues with the code? Log an issue or drop a comment below.
For any other questions, message me on Twitter at @CoreyLWeathers, or email me at corey@twilio.com. I can’t wait to see what you build. For now, it’s time to go join the newly expanded family.
https://www.twilio.com/blog/2017/07/sharing-my-sons-birth-with-twilio-dropbox-and-asp-net.html
Raising Better Events For Fun And Profit
I use the snot out of events. I use them to future-proof my controls and components. They are my secret weapon when I am untangling spaghetti code. I even sprinkle them in my soup.
In fact, I am reasonably sure that developers who don’t use the snot out of events are the number one cause of spaghetti code. Straight out of the snippet box, however, coding an event in .NET is a bit of an anemic experience. So today we’re going to pull our socks up and spice up the Event coding process, so you too can stop referencing that Form instance in your business logic.
The Chef Boyardee Version
For literary irony‘s sake, let’s say we have a class named Spaghetti, with a method named Plate(). An instance of Spaghetti is going to be Plated by a different component than the one that is going to consume it, so we want the Spaghetti to raise an event to indicate that it has been Plated. We whip out the quickest and dirtiest of events, like this:
public class Spaghetti
{
    public event EventHandler Plated;

    public void Plate()
    {
        // TODO: plate the spaghetti!
    }
}
At the end of the Plate() method’s code, we raise it like this:
public void Plate()
{
    // TODO: plate the spaghetti!

    // raise the plated event
    if (this.Plated != null)
        this.Plated(this, new EventArgs());
}
Over in the HungryDude class, we can now attach a handler method for this event, so we know when our spaghetti dinner is Plated:
public class HungryDude
{
    public HungryDude()
    {
        // instantiate my dinner
        Spaghetti dinner = new Spaghetti();

        // listen for the dinner bell
        dinner.Plated += new EventHandler(dinner_Plated);
    }

    void dinner_Plated(object sender, EventArgs e)
    {
        // TODO: yum
    }
}
While this may mean that it’s Miller Time for the Spaghetti developer, it leaves the HungryDude developer a little high and dry. Granted, it’s way better than no event being raised at all, but by declaring the event with the standard System.EventHandler and System.EventArgs, we have provided no additional data to the handling method about what has happened. Also, by raising the event with a direct call to the event itself, the Spaghetti developer has pretty well missed the bus on enabling extensibility.
Allow me to demonstrate by upgrading our simple event to a richer model.
The Louis’ Italian American Restaurant Upgrade
The first issue we should address is that we're not providing any level of detail about the event to the handler. While this is not always necessary, it's better to have and not need than to need and not have. That goes double for distributed development teams, and triple for component developers who get hit by buses, presumably while trying not to miss them.
The best way to ensure that developers who consume our components have enough event-related data long after we have shuffled off this mortal coil is by extending the System.EventArgs class, and adding our own custom properties and a constructor to match:
public class PlatedEventArgs : EventArgs
{
    public bool IsCold { get; set; }

    public PlatedEventArgs(bool isCold)
    {
        this.IsCold = isCold;
    }
}
At this point we could get away with passing an instance of PlatedEventArgs to our event's 'e' argument. However, any code handling the event would only see the EventArgs part unless it did an ugly cast up to PlatedEventArgs. In order to make sure that the handler always presents our custom arguments to consuming code, we need to declare a custom handler delegate, too.
public delegate void PlatedHandler(object sender, PlatedEventArgs e);
Our original event was declared as an EventHandler, so now we must change it to be declared as our new handler’s type:
public event PlatedHandler Plated;
The handler code for our event now takes on the signature of our custom EventHandler, including our custom EventArgs:
void dinner_Plated(object sender, Spaghetti.PlatedEventArgs e)
{
    if (e.IsCold)
    {
        // TODO: send it back
    }
    else
    {
        // TODO: yum
    }
}
The next upgrade we should make is to provide a more extensible component by wrapping up our event-raising code in an overridable method:
protected virtual void OnPlated(PlatedEventArgs e)
{
    if (this.Plated != null)
        this.Plated(this, e);
}
Not only does this reduce the cumbersome null-checking code for raising the event to a single method call:
public void Plate()
{
    // TODO: plate the spaghetti!

    // raise the plated event
    OnPlated(new PlatedEventArgs(false));
}
…but it also directly exposes the raising of this event to any component that may inherit from ours in the future. For example, if the inheriting component does not want the event to be raised, it can override it with an empty method:
public class ExtremelyRunnySpaghetti : Spaghetti
{
    // ...

    protected override void OnPlated(PlatedEventArgs e)
    {
        // this spaghetti needs to be in a bowl,
        // and should never be plated!
        // base.OnPlated(e);
    }
}
Or, it can raise the event conditionally:
public class PartTimeNinjaSpaghetti : Spaghetti
{
    public bool NinjaMode { get; set; }

    // ...

    protected override void OnPlated(PlatedEventArgs e)
    {
        if (this.NinjaMode)
        {
            // quiet like the night
        }
        else
        {
            // raise the Plated event
            base.OnPlated(e);
        }
    }
}
In part 2 of this article, I will demonstrate how to take the heavy lifting out of all of this rich event madness by adding a rich event code snippet in Visual Studio.
Stay tuned!
https://code.jon.fazzaro.com/2009/03/18/broaden-your-event-horizons-part-1/
Computer Science Archive: Questions from February 21, 2010
- Anonymous asked:
Consider the following C program:
#include <sys/wait.h>
#include <stdio.h>
#include <stdlib.h>
int main() {
int status;
printf("%s\n", "Hello");
printf("%d\n", !fork());
if(waitpid(-1, &status, 0) != -1) {
printf("%d\n", WEXITSTATUS(status));
}
printf("%s\n", "Bye");
exit(2);
}
Recall the following:
- Function fork returns 0 to the child process and the child's process ID to the parent.
- Function waitpid returns -1 when there is an error, e.g., when the executing process has no child.
- Macro WEXITSTATUS extracts the exit status of the terminating process.
Which one of the following is not a valid output of this program? (Note: Output is shown without the newlines, and other outputs are possible.)
- Hello 0 1 Bye 2 Bye
- Hello 1 0 Bye Bye 2
- Hello 1 Bye 0 2 Bye
For this program:
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <sys/wait.h>
int counter = 0;
void handler(int sig)
{
counter++;
}
int main()
{
int i;
signal(SIGCHLD, handler);
for (i = 0; i < 5; i++){
if (fork() == 0){
exit(0);
}
}
while (waitpid(-1, NULL, 0) != -1)
{
}
printf("counter = %d\n", counter);
return 0;
}
which statement is correct?
There is only one possible output for this program and it iscounter = 0.
There are several possible outputs for this program and onepossible output is counter = 0.
There is only one possible output for this program and it iscounter = 5.
There are several possible outputs for this program and onepossible output is counter = 5.
Answer:
Problems 10 - 13 assume the following block contents:
An allocator uses an implicit free list. Each memory block, either allocated or free, has a size that is a multiple of eight bytes. Thus, only the 29 higher order bits in the header and footer are needed to record block size, which includes the header and footer and is represented in units of bytes. The usage of the remaining 3 lower order bits is as follows:
- bit 0 indicates the use of the current block: 1 for allocated, 0 for free.
- bit 1 indicates the use of the previous adjacent block: 1 for allocated, 0 for free.
- bit 2 indicates the use of the next adjacent block: 1 for allocated, 0 for free.
Helper routines are defined to facilitate the implementation of free(void *p). The functionality of each routine is explained in the comment above the function definition.
For each problem, fill in the blank with the statement - a, b, or c - that correctly implements the function.
For example, the function header is passed a pointer to the payload of a block and returns a pointer to its header:
/* given a pointer p to an allocated block, i.e., p is a
pointer returned by some previous malloc()/realloc() call;
returns the pointer to the header of the block*/
void * header(void* p)
{
void *ptr;
ptr = (void *) ((int *)p - 1);
return ptr;
}
Another example: the function size is passed a pointer to the header or footer of a block. It returns the size of the block (including the header and footer).
/* given a pointer to a valid block header or footer,
returns the size of the block */
int size(void *hp)
{
int result;
result = (*(int *)hp) & ~0x7;
return result;
}
These functions are used in some of the choices in the following problems.
/* given a pointer p to the payload of a block, this function
returns a pointer to the payload of the next block*/
void * nextBlock(void* p)
{
void *ptr;
;
return ptr;
}
- ptr = p + size(p);
- ptr = (void *) ((char *)p + size(p));
- ptr = (void *) ((char *)p + size(header(p));
/* given a pointer p to the payload of a block, this function
returns a pointer to the payload of the previous block*/
void * previousBlock(void* p)
{
void *ptr;
;
return ptr;
}
- ptr = (char *)p - size((char *)p - 4);
- ptr = (char *)p - size((char *)header(p) - 4);
- ptr = (char *)p - size((char *)header(p)) - 4);
/* given a pointer to a valid block header or footer,
returns the usage of the next block,
1 for allocated, 0 for free */
int isNextFree(void *hp)
{
int result;
;
return result;
}
- result=((*(int *)hp) >> 2) & 1
- result=(*(int *hp) & 2) >> 1
- result=(*(int *hp) & 4) >> 1
/* given a pointer to a valid block header or footer,
returns the usage of the previous block,
1 for allocated, 0 for free */
int isPreviousFree(void *hp)
{
int result;
;
return result;
}
- result=((*(int *)hp) >> 2) & 1
- result=(*(int *hp) & 2) >> 1
- result=(*(int *hp) & 4) >> 1
(0 answers)
- Anonymous asked (0 answers)
- SilverCabbage8363 asked (2 answers)
- MammothMachine9955 asked: Don't really know where to begin, but here's what information they give and what I have so far. I guess what my question is, is what am I supposed to do and how am I supposed to do it? It says I need to use iteration, so that means "for x =" then an "if -> else if -> else if -> else -> end" format, right? So could someone help me start? (1 answer)
- Anonymous asked: Please help me. I want to draw a context diagram of an electronic cheque system, so if you have any ideas please help if possible. I just need help with the context diagram of this e-cheque system:
1. The consumer accesses the merchant server.
2. The merchant may validate the electronic cheque with his/her bank for payment authorization.
3. Assuming the cheque is validated, the merchant closes the transaction with the consumer.
4. The merchant forwards the cheque to his/her bank electronically.
5. The merchant's bank forwards the electronic cheque to the clearing house for being cashed.
6. The clearing house works with the consumer's bank, clears the cheque, and transfers money to the merchant's bank, which updates the merchant's account.
7. At a later time, the consumer's bank updates the consumer with the withdrawal information.
Summary: the consumer is an external entity; access is a data flow; e-cheque is process 0; validate is a data flow; the merchant's bank is an external entity; forward is a data flow; the clearing house is an external entity; the consumer's bank is an external entity. Just help me with how to draw that context diagram. I will rate. (0 answers)
- Anonymous asked:
Write a function that returns the index of the maximum element in the array:
int MaxIndex(int arr[], int size)
Use the above function to implement selectionSort, i.e.,
void selectionSort(int arr[], int size)
This is an example of how selection sort works:
- Take the array and find the maximum in the array: arr = 3 4 5 9 1 (maximum = 9)
- Place the maximum value at the last index: arr = 3 4 5 1 9 (9 is at correct place)
- Find the maximum in the array of size 4: arr = 3 4 5 1 9 (maximum = 5)
- Place the maximum at last index - 1: arr = 3 4 1 5 9 (5 and 9 are at correct place)
- Find the maximum in the array of size 3: arr = 3 4 1 5 9 (maximum = 4)
- Place the maximum at last index - 2: arr = 3 1 4 5 9 (4, 5, 9 are at correct place)
- Keep repeating the above steps till the array is sorted.
Also write the main function to test your program. (1 answer)
- Anonymous asked:
Exercise 1.4
Prove that if e ∈ <expression>, then there are the same number of left and right parentheses in e (where <expression> is defined as in Section 1.1.2). (0 answers)
- Anonymous asked:
Exercise 1.3
Write a syntactic derivation that proves ( a "mixed" #( bag ( of . data))) is a datum, using either the grammar in the book or the revised grammar from the preceding exercise. What is wrong with ( a . b . c )? (0 answers)
- Anonymous asked (0 answers)
- Anonymous asked: Suppose an 8-bit data word stored in memory is 11000010. Using the Hamming algorithm, determine what check bits would be stored in memory with the data word. Show how you got your answer. (1 answer)
- Anonymous asked: For the 8-bit word 00111001, the check bits stored with it would be 0111. Suppose when the word is r... (1 answer)
- Anonymous asked: How many check bits are needed if the Hamming error correction code is used to detect single-bit errors in a 1024-bit data word? (1 answer)
- Anonymous asked:
QUESTION 3
Write a function that implements the 'union' and 'intersection' functions as applied on sets. Here you have to assume that members of set1 are stored in a string s1, and members of set2 are stored in a string s2. You have to store the union of the two strings in string s3. All strings should be NULL-terminated character arrays and you are NOT allowed to use the string object. As an example, for union s1 could be "hello" and s2 could be "yellow"; then s3 (union of s1 and s2) should be "heloyw". Note that a set union does not have repeated characters. As an example of intersection, s1 could be "hello" and s2 could be "yellow"; then s3 (intersection of s1 and s2) should be "elo". Note that a set intersection does not have repeated characters. (0 answers)
- Anonymous asked: I have done this project but I have many errors. I need help to correct the mistakes. Just correct the errors, please.
#include <cstdlib>
#include <iostream>
#include <fstream>
#include <string>
#include <iomanip>
using namespace std;
int main()
{
int process_employee(int& id, double& hourWage, int& hoursWorked, ifstream& inFile);
string filename = "Employee_Information.txt";
ofstream outFile;
outFile.open(filename.c_str());
if (outFile.fail())
{
cout <<"the file wsa not successfully opened" << endl;
exit(1);
}
outFile.open("Output1.txt");
cout << "\n The file named " << "Output.txt was not successfully opened"
<< "\n Please check that the file currently exists."
<< "\n This is dumb"
<< endl;
exit(1);
}
int process_payroll(int id, int hourWage, int hoursWorked, ifstream inFile,ofstream outFile); //function call
inFile.close(); //closing inputfile
outFile.close(); //closing outputflie
cout << "\n\n\n\nEnd of The Program \n\n";
return 0;
}
//This function reads the employee information from the input file and return them through referenced variables
//if the employee information is read successfully it returns 0 otherwise it returns 1.
int process_employee(int& id, double& hourWage, int& hoursWorked, ifstream& inFile)
{
//Local Declaration
//Statements
if(inFile >> id >> hourWage >> hoursWorked)
return 1;
else
return 0;
}
//First, process_payroll() function prints a header messageand the heading for the columns (Employee Hours, Rate, etc.)
//Second, it calls process_employee() function till 0 is retunred
//Third, it calculates the employees (gross pay, net pay, etc)
//Fourth, it call print_summary() function
void process_payroll(int& id, double& hourWage, int& hoursWorked, ifstream& inFile, ofstream& outFile)
{
//Local Declaration
double gross = 0, fed = 0, state = 0 , socsec =0, net = 0;
double tGross = 0, tFed = 0, tState = 0, tSocsec = 0, tNet = 0;
double overTime = 0;
double extraWage = 0;
char flag = '\0';
cout << "\n\n Employee Payroll \n" << endl << endl; //This message will be
cout << "Employee Hours Rate Gross Net" //printed on the screen
<< " Fed State Soc Sec\n"
<< "-------- ----- ---- ----- ---"
<< " --- ----- -------"
<< endl;
outFile << "\n\n Employee Payroll \n" << endl << endl; //This message will be
outFile << "Employee Hours Rate Gross Net" //printed on the file
<< " Fed State Soc Sec\n"
<< "-------- ----- ---- ----- ---"
<< " --- ----- -------"
<< endl;
while(process_employee(id, hourWage, hoursWorked, inFile))
{
if(hoursWorked <= 35)
{
extraWage = hourWage + 0.15; //exraWage variable is used instead of modifiying the hourWage variable
gross = hoursWorked * extraWage; //calculates gross
flag = '*'; //flag '*' is used to indicates the employees who worked less than 40 hours
}
else if(hoursWorked > 40)
{
overTime = (hoursWorked - 40) * 1.5; //calculates over time hours and multiply them by 1.5
gross = (40 + overTime) * hourWage; //calculates gross
flag = '$'; //flag '$' is used to indicates the employees who worked more than 40 hours
}
else
gross = hoursWorked * hourWage;
fed = gross * FED_TAX;
state = gross * STATE_TAX;
socsec = gross * SOCSEC_TAX;
net = gross - (fed + state + socsec);
tGross += gross; //calculates the total gross
tFed += fed; //calculates the total fed tax
tState += state; //calculates the total state tax
tSocsec += socsec; //calculates the total soc sec tax
tNet += net; //calculates the total net pay
cout << setiosflags(ios::fixed) //output format settings
<< setiosflags(ios::showpoint)
<< setprecision(2);
cout << setw(6) << id << setw(10) << hoursWorked << flag
<< setw(9) << hourWage << setw(11) << gross
<< setw(9) << net << setw(9) << fed
<< setw(11) << state << setw(12) << socsec
<< endl;
outFile << setiosflags(ios::fixed) //output file format settings
<< setiosflags(ios::showpoint)
<< setprecision(2);
outFile << setw(6) << id << setw(10) << hoursWorked << flag
<< setw(9) << hourWage << setw(11) << gross
<< setw(9) << net << setw(9) << fed
<< setw(11) << state << setw(12) << socsec
<< endl;
}
print_summary(tGross, tFed, tState, tSocsec, tNet, outFile); //function call
return;
}
//First print_summmary() function prints "Summary" message
//Second, it prints the column heading (Total Gross, Total Net, etc)
//Third, it prints the values of (total gross, total net, total fed, etc)
void print_summary(double tGross, double tFed, double tState, double tSocsec, double tNet, ofstream& outFile)
{
//Local Declaration
//Statements
cout << "\n\n Summary \n\n";
cout << "Total Gross Total Net Total Fed Total State"
<< " Total Socsec\n";
cout << "----------- --------- --------- -----------"
<< " ------------\n";
cout << setiosflags(ios::fixed) //output format settings
<< setiosflags(ios::showpoint)
<< setprecision(2);
cout << setw(11) << tGross << setw(14) << tNet
<< setw(14) << tFed << setw(16) << tState
<< setw(17) << tSocsec;
cout << endl;
outFile << "\n\n Summary \n\n";
outFile << "Total Gross Total Net Total Fed Total State"
<< " Total Socsec\n";
outFile << "----------- --------- --------- -----------"
<< " ------------\n";
outFile << setiosflags(ios::fixed) //output file format settings
<< setiosflags(ios::showpoint)
<< setprecision(2);
outFile << setw(11) << tGross << setw(14) << tNet
<< setw(14) << tFed << setw(16) << tState
<< setw(17) << tSocsec;
outFile << endl;
return;
}
(2 answers)
- Anonymous asked:
True / False
1. <perhaps one of the 5 justifications given in book for OOD> is a justification for OOD.
2. Objects in programs must always represent tangible entities.
3. Not all real-world entities are modeled by objects in a program, just those necessary for solution of the problem.
4. If two variables for object, X and ,Y both have the same pointer value, they stand for the same object.
5. Object names in UML object diagrams are underlined.
6. Generally, attributes of an object are hidden inside the object and can be accessed only through operations.
7. Encapsulation protects programmers from depending on the attributes of an object.
8. Encapsulation prevents programmers from changing the attributes of objects.
9. <name of an object similar to those shown in Activity 1, p. 23> is related to <name of an object similar to those shown in Activity 1, p. 23> by association.
10. <name of an object similar to those shown in Activity 1, p. 23> is related to <name of an object similar to those shown in Activity 1, p. 23> by aggregation.
11. All <graphs, trees> are <trees, graphs>.
12. By conventions, object names start with <lowercase, uppercase>
13. By conventions, class names start with <lowercase, uppercase>
14. Although objects generally receive messages, occasionally good design principles require an object that does not receive messages.
15. In theory, objects are assumed to always carry out messages without error, but in reality, they sometimes don’t because < a reason, perhaps one listed in table 2.1, p. 29>.
16. Message is synonymous with operation.
17. A message is the way an operation is invoked.
18. Messages can be both command and information.
19. Generally, it is not good design style for a message to be both a command message and an information message.
20. A client-supplier design uses 2-way messages.
21. A client-supplier design is more likely to gives objects that are reusable.
22. Generally, it is possible to transform 2-way message designs into client-supplier designs.
23. In a client-supplier design, messages generally flow from <client, supplier> to <supplier, client>.
24. In a client-supplier design, messages generally flow either way between client and supplier.
25. OOPs are essentially objects communicating with each other.
26. A Java program acts like an object instantiated by the operating system.
27. OOPs tend to have lots of code in their main program unit.
28. Generally, objects and classes appear on the same UML diagrams.
29. A <class, object> is an instance of a <object, class>.
30. By convention, class names are singular.
31. By convention, class names begin with <lowercase, uppercase>.
32. <Parent, child, superclass, subclass, base class, derived class> means the same thing as <Parent, child, superclass, subclass, base class, derived class>.
33. <setter operations, getter operations> are more apt to have parameters than <getter operations, setter operations>.
34. Some languages allow classes to have messages, methods, and fields.
35. Generally, it is <good, poor> design to use class messages, methods, and fields.
36. Generally, language primitives <can, can’t> accept messages.
37. Generally, variables for primitive values contain < the value, a pointer to the value>.
38. Generally, variables for objects contain < the object, a pointer to the object>.
39. An attribute of an object is always implemented in OOPs as a field of the of the class.
40. Some books use the term object protocol to refer to all of an object’s messages.
41. Field is synonymous with attribute.
42. Classes with the bare minimum of code tend to be more reusable than those with extra code for extra functionality.
43. Classes where messages either get information or give commands but not both tend to be more reusable than classes with messages that both get information and give commands.
Short Answer
44. How can one tell if two objects are identical?
45. If x is an object, y is one of its attributes, and z is one of its operations, draw an UML object diagram for x.
46. What does encapsulation mean?
47. The object <name of object> is related to <name of object> by association. Draw a UML diagram showing the relationship.
48. The object <name of object> is related to <name of object> by aggregation. Draw a UML diagram showing the relationship.
49. What do <arrows, tadpoles> represent in message diagrams?
50. What does <an information, a command> message do?
51. What is the name of the process that automatically deallocates memory for no longer needed objects?
52. Draw a class diagram showing the class <name of a class> inheriting from <name of a class>.
53. When is the constructor of a class invoked?
54. Give an example of a primitive data type in Java.
55. What is the definition of <a term in table 2.3>?
56. What is one advantage in reusing code?
57. What is a framework?
58. What does it mean that a class has high cohesion?
(0 answers)
- Anonymous asked:
Problem 2
The famous Match and Hit game is played by a computer and a human player as follows. First, the computer selects a random 4-digit number N = a·1000 + b·100 + c·10 + d, where a, b, c, d are distinct non-zero digits - that is, a, b, c, d are distinct elements of the set {1,2,3,4,5,6,7,8,9}. Let us call numbers like this valid. The human player then tries to deduce N from a sequence of queries to the computer. Each query consists of a valid 4-digit number M = x·1000 + y·100 + z·10 + w. The computer responds to each query with the number of matches and the number of hits. A match is a digit of N that appears in M at the same position (thus each of x=a, y=b, z=c or w=d counts as one match). A hit is a digit of N that appears in M, but not at the same position. For example, if N = 5167, then the queries 2934, 1687, 7165, 5167 will result in the following numbers of matches and hits:
2 9 3 4 - no matches and no hits
1 6 8 7 - one match and two hits
7 1 6 5 - two matches and one hit
5 1 6 7 - four matches and no hits
where matches and hits are each denoted by their own marker. The play continues until the human player either wins or loses the game, as follows:
- The human player wins if he submits a query with 4 matches (that is, M = N).
- The human player loses if he submits 12 queries, none of them with 4 matches.
Write a C program, called match_and_hit.c, that implements (the computer part of ) the Match & Hit game. In this program, you are required to declare, define, and call the following functions:
The function isvalid(int n) that accepts as a parameter an arbitrary integer. The function should return 1 if the integer is valid, and 0 otherwise. Recall that an integer is valid if it is positive, consists of exactly 4 decimal digits, and all these digits are nonzero and distinct.
The function choose_N(void) that returns, as an int, a uniformly random choice of a valid integer N. The function should call both the rand library function and the function isvalid. It should keep calling rand until the number generated thereby is valid. Recall that before the function rand is invoked, the random seed should be initialized using srand(time(0)).
The function matches(int N, int M) that accepts as parameters two integers N and M and returns, as an int, the number of matches. You can assume than both N and M are valid.
The function hits (int N, int M)that accepts as parameters two integers N and M, then returns as an int, the number of hits. you can assume that both N and M are valid.
Here is a sample run of this program, assuming that the executable file is called match_and_hit. This run illustrates a situation where the human player wins the game. User input is underlined.
*** Welcome to the MATCH and HIT game***
The computer has selected a 4-digit number.
Try to deduce it in 12 rounds of queries.
Round#1
Please enter your query (4 digits): 5341
-> 2 matches and 1 hit
Round#2
Please enter your query (4 digits): 1235
-> 1 match and 2 hits
Round#3
Please enter your query (4 digits): 2345
-> 4 matches and 0 hits
***********************************
CONGRATULATIONS! You won the game!
***********************************
Notice that a singular form (match, hit) is used with the number 1. You can easily achieve this functionality with the ?: conditional expression, as shown in class. The next sample run illustrates a situation where the human player loses the game. Again, user input is underlined.
***Welcome to the MATCH and HIT game***
The computer has selected a 4-digit number.
Try to deduce it in 12 rounds of queries.
Round# 1
Please enter your query (4 digits): 1234
-> 0 matches and 3 hits
. .
. .
. .
Round #12
Please enter your query (4 digits): 2435
-> 2 matches and 2 hits
*************************
Sorry, out of queries. Game over!
*************************
Notes
If the user enters a non-integer, print the message "Invalid query. Please try again!" Notice that if the user enters an integer followed by non-integer characters, these characters should be simply ignored (check the value returned by the scanf function). If the user enters an integer that is not valid, print the message "Invalid number. Please try again!" In both of these cases, you should then prompt the user to enter another query, and keep doing so until the user provides a valid input. The following sample run illustrates all this.
***Welcome to the MATCH and HIT game***
The computer has selected a 4-digit number.
Try to deduce it in 12 rounds of queries.
Round #1
Please enter your query (4 digits): ##
Invalid query. Please try again!
Please enter your query (4 digits): 0##
Invalid number. Please try again!
Please enter your query (4 digits): -5341
Invalid number. Please try again!
Please enter your query (4 digits): 123456789
Invalid number. Please try again!
Please enter your query (4 digits): 5340
Invalid number. Please try again!
Please enter your query (4 digits): 5344
Invalid number. Please try again!
Please enter your query (4 digits): 5341#OK?
-> 2 matches and 1 hit
Round #2
Please enter your query (4 digits): 1235
-> 1 match and 2 hits
Round #3
Please enter your query (4 digits): 2345
-> 4 matches and 0 hits
***********************************
CONGRATULATIONS! You won the game!
***********************************
The maximum number of queries (which we have, so far, assumed to be 12) should be introduced as a symbolic constant MAX_QUERIES in the beginning of the program, using the #define compiler directive. The number of digits in a valid integer (which we have, so far, assumed to be 4) should also be introduced as a symbolic constant called N_DIGITS. You can assume that N_DIGITS is one of 1, 2, 3, 4 and MAX_QUERIES is an integer in the range 1, 2, ..., 24. The program should keep working correctly for all such values of N_DIGITS and MAX_QUERIES.
You can always win the game with at most 12 queries, if you query cleverly. Try it, it's fun! Can you guarantee a win with less than 12 queries? If so, what is the minimum number of queries you need? You do not need to answer these questions, but you are welcome to consider them. (1 answer)
- Anonymous asked:
N.B. (! means not)
Simplify the following to the least amount of variables.
question 1
AB+C(D!+B!)+B(C+(D!+B)!)
Question 2
w(y+w!z)!+(x+x!y)
2 answers
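Since these are finite Boolean expressions, any candidate simplification can be brute-force checked over all input combinations. The sketch below verifies the reductions that the standard identities appear to yield, AB + C for question 1 and w + x + y for question 2 (my own working, so treat the truth-table check, not the claim, as the authority):

```python
from itertools import product

def q1(A, B, C, D):
    # AB + C(D' + B') + B(C + (D' + B)')
    return (A and B) or (C and ((not D) or (not B))) or \
           (B and (C or not ((not D) or B)))

def q1_simplified(A, B, C, D):
    return (A and B) or C          # candidate: AB + C

def q2(w, x, y, z):
    # w(y + w'z)' + (x + x'y)
    return (w and not (y or ((not w) and z))) or (x or ((not x) and y))

def q2_simplified(w, x, y, z):
    return w or x or y             # candidate: w + x + y

# Exhaustive check over all 16 input combinations for each expression.
assert all(q1(*v) == q1_simplified(*v) for v in product([False, True], repeat=4))
assert all(q2(*v) == q2_simplified(*v) for v in product([False, True], repeat=4))
print("both simplifications verified")
```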
- Anonymous asked:
Using the Java programming language, I have to write a class named Slide. For this class, provide a constructor, appropriate accessor/mutator methods for obtaining a specific line of text, and finally an implementation of the toString() method which simply returns a String that represents the text of the slide as it would appear when printed on the screen. The text of the slide should be stored in a String array with its size being represented by a final variable called LINES_PER_SLIDE, which should be equal to 5. The accessor method for the slide's text should take a parameter of type int that is between 1 and 5, inclusive, to determine which line of the slide's text is being requested (i.e., if the parameter is 1 the return value should be the first line of text, 2 should return the second line of text, and so on and so forth). The mutator method should accept two parameters, with one representing the new text that will replace a given line and the second one representing which line is going to be changed (once again, an int between 1 and 5, inclusive). These methods will require error checking to make sure that the line number being requested is within the valid range. The toString() method should return a result composed of the lines of text put together in the correct order with any newline characters such that, when the method's result is printed, the lines of text would each appear on its own line.
I did this:
String slideText[] = new String[LINES_PER_SLIDE];
but that just gives me an array that's capable of holding 5 strings. How can I create an array and store 5 lines per box in each array?
1 answer
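One possible shape for the class described above, with illustrative method names (the assignment does not fix them):

```java
// Sketch of the Slide class: a fixed-size String array holds the lines,
// and the 1-based line numbering from the assignment is mapped onto
// 0-based array indices internally.
public class Slide {
    public static final int LINES_PER_SLIDE = 5;
    private final String[] text = new String[LINES_PER_SLIDE];

    public Slide() {
        for (int i = 0; i < LINES_PER_SLIDE; i++) {
            text[i] = "";                      // start every line empty
        }
    }

    // Accessor: lineNumber is 1-based, as the assignment requires.
    public String getLine(int lineNumber) {
        checkRange(lineNumber);
        return text[lineNumber - 1];
    }

    // Mutator: replaces one line of text.
    public void setLine(String newText, int lineNumber) {
        checkRange(lineNumber);
        text[lineNumber - 1] = newText;
    }

    private void checkRange(int lineNumber) {
        if (lineNumber < 1 || lineNumber > LINES_PER_SLIDE) {
            throw new IllegalArgumentException("line must be 1.." + LINES_PER_SLIDE);
        }
    }

    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder();
        for (String line : text) {
            sb.append(line).append('\n');      // each line on its own row
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Slide s = new Slide();
        s.setLine("Hello", 1);
        s.setLine("World", 2);
        System.out.print(s);
    }
}
```

The array does not need to hold "5 lines per box"; each array element is itself one full line of text, and toString() joins them with newlines.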
- Anonymous asked:
A manufacturer advertises that its color bit-map terminal can display 2^24 different colors. Yet the hardware only has 1 byte for each pixel. How can this be done?
So 1 byte is 8 bits, which is 2^8 different color combos, somehow Ineed to get that 8 to 24, factor of 3. I think this mighthave to do with Truecolor, but after reading the wiki article I donot understand how it works exactly. Does it rapidly strobe 3phases (shades of red, green, blue) to mix the colors of a pixeltogether?
Thanks for any help
0 answers
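The usual textbook answer here is indexed color: the byte per pixel is not a color itself but an index into a 256-entry color lookup table (palette), and each table entry can be any of the 2^24 possible colors. At any instant only 256 distinct colors can be on screen, but the palette can be reprogrammed. A toy sketch with made-up palette values:

```python
# Color lookup table sketch: each 1-byte pixel value indexes a table of
# 256 entries; each entry is a full 24-bit (R, G, B) color chosen from
# the 2**24 possibilities. The palette values below are arbitrary.
palette = [(i * 41 % 256, i * 83 % 256, i * 199 % 256) for i in range(256)]

framebuffer = bytes([0, 17, 255])          # one byte per pixel

# What the display hardware actually shows: the looked-up 24-bit colors.
displayed = [palette[p] for p in framebuffer]
print(displayed)
```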
- Anonymous asked:
I am trying to implement BufferPool as a Java generic. The buffer pool must use the pure LRU replacement policy. If anyone could help me with this I am having large troubles figuring out the BufferPool constructor and the Get method. I promise to rate if you can please help me figure this out.

package MinorP2.DS;
import java.util.LinkedList;
public class BufferPool<T> {
  class Buffer {
    long offset; // file offset of this data record
    T data;      // buffered data record
    /**
     * @param offset file offset of buffered record
     * @param data buffered record
     */
    public Buffer(long offset, T data) { // TODO
    }
  }
  /**
   * @param Capacity size limit of the pool
   * @param dbFront client-supplied mediator for reading records
   */
  public BufferPool(int Capacity, dbParser dbFront) { ... }
  /**
   * @param offset file offset of requested record
   * @return (reference to) requested record, or null
   */
  public T Get(long offset) { // TODO!!
  }
  LinkedList<Buffer> Pool; // stores elems in MRU to LRU order
  int Capacity;
  dbParser<T> dbFront;
}
0 answers
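A minimal sketch of the pure-LRU logic, using java.util.function.LongFunction as a stand-in for the poster's dbParser mediator (whose real interface isn't shown in the question):

```java
import java.util.Iterator;
import java.util.LinkedList;
import java.util.function.LongFunction;

// LRU buffer pool sketch: the linked list keeps buffers in MRU-to-LRU
// order, so a hit moves the buffer to the front and eviction removes
// the last element.
public class LruBufferPool<T> {
    private static final class Buffer<T> {
        final long offset;
        final T data;
        Buffer(long offset, T data) { this.offset = offset; this.data = data; }
    }

    private final LinkedList<Buffer<T>> pool = new LinkedList<>(); // MRU first
    private final int capacity;
    private final LongFunction<T> reader;   // stand-in for dbParser

    public LruBufferPool(int capacity, LongFunction<T> reader) {
        this.capacity = capacity;
        this.reader = reader;
    }

    public T get(long offset) {
        // Hit: unlink the buffer and move it to the MRU position.
        Iterator<Buffer<T>> it = pool.iterator();
        while (it.hasNext()) {
            Buffer<T> b = it.next();
            if (b.offset == offset) {
                it.remove();
                pool.addFirst(b);
                return b.data;
            }
        }
        // Miss: read the record; evict the LRU entry if the pool is full.
        T data = reader.apply(offset);
        if (data == null) return null;
        if (pool.size() == capacity) pool.removeLast();
        pool.addFirst(new Buffer<T>(offset, data));
        return data;
    }

    public static void main(String[] args) {
        LruBufferPool<String> p = new LruBufferPool<>(2, off -> "rec@" + off);
        System.out.println(p.get(10)); // miss: loads rec@10
        System.out.println(p.get(20)); // miss: loads rec@20
        System.out.println(p.get(10)); // hit: 10 becomes MRU again
        p.get(30);                     // miss: evicts 20, the LRU entry
    }
}
```

The linear scan keeps the sketch short; a HashMap from offset to list node would make lookups O(1) at the cost of more bookkeeping.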
- Anonymous asked:
Here's one of the questions that I simply cannot seem to figure out whatsoever. I don't even know how to begin the problem. The question seems to be worded in a way I've never seen, and the professor never went over these types of problems, only assigned them. Anyway, any help is appreciated.
Write a C expression to generate the bit patterns that follow, where a^k represents k repetitions of symbol a. Assume a w-bit data type. Your code may contain references to parameters j and k, representing values of j and k, but not a parameter representing w.
A. 1^(w-k) 0^k
B. 0^(w-k) 1^k 0^j
1 answer
- John786 asked:
In a Java Servlet, how do I show a "Welcome" message if someone visits for the first time, and otherwise say... More »

String filename = FileChooser.getMediaPath("robot.jpg");
Picture butterfly = new Picture(filename);
filename = FileChooser.getMediaPath("butterfly.jpg");
Picture robot = new Picture(filename);
// declare the source and target pixel variables
Pixel sourcePixel = null;
Pixel targetPixel = null;
//);
targetPicture.copyRobot();
targetPicture.mirrorVertical();
targetPicture.repaint();
targetPicture.getPixel(0,100);
targetPicture.show();
- Anonymous asked:
I am trying to make a BST. The remove method must not be a lazy remove. Here is what I have so far but it is not correct. Is there anyone out there that can guide me? I will rate.

/**
 * remove item.
 * @param x item to remove.
 * @param t root.
 * @return new root.
 */
private BinaryNode remove(T x, BinaryNode t) {
  if (t == null) {
    return null; // item not found
  }
  if (x.compareTo(t.element) < 0) {
    t.left = remove(x, t.left);
  } else if (x.compareTo(t.element) > 0) {
    t.right = remove(x, t.right);
  } else if (t.left != null && t.right != null) { // Two children
    BinaryNode rMin = findMin(t.right);
    BinaryNode detachedNode = rMin;
    if (rMin.left == null && rMin.right == null) {
      t = new BinaryNode(rMin.element, t.right, t.left);
      rMin = null;
      if (t.element == root.element) root = t;
      return t;
    }
    // rMin has one node
    else {
      if (rMin.left != null && rMin.right != null) {
        t = new BinaryNode(rMin.element, t.right, t.left);
        if (t.element == root.element) root = t;
        rMin = remove(rMin.element, rMin);
      }
      if (rMin.left != null) {
        t = new BinaryNode(rMin.element, t.right, t.left);
        if (t.element == root.element) root = t;
        rMin = rMin.left;
        return t;
      }
      if (rMin.right != null) {
        t = new BinaryNode(rMin.element, t.right, t.left);
        if (t.element == root.element) root = t;
        rMin = rMin.right;
        return t;
      }
    }
  }
0 answers
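For comparison, the standard non-lazy removal is much shorter than the attempt above: on a two-child node, copy up the minimum of the right subtree and then delete that minimum from the right subtree. A self-contained sketch (class and method names are mine, not the poster's):

```java
// Generic BST with true (non-lazy) removal. The two-child case replaces
// the node's element with its in-order successor, then removes the
// successor from the right subtree; 0- and 1-child cases just splice.
class Bst<T extends Comparable<T>> {
    static final class Node<T> {
        T element;
        Node<T> left, right;
        Node(T e) { element = e; }
    }

    Node<T> root;

    void insert(T x) { root = insert(x, root); }
    void remove(T x) { root = remove(x, root); }

    boolean contains(T x) {
        Node<T> t = root;
        while (t != null) {
            int c = x.compareTo(t.element);
            if (c == 0) return true;
            t = c < 0 ? t.left : t.right;
        }
        return false;
    }

    private Node<T> insert(T x, Node<T> t) {
        if (t == null) return new Node<>(x);
        int c = x.compareTo(t.element);
        if (c < 0) t.left = insert(x, t.left);
        else if (c > 0) t.right = insert(x, t.right);
        return t;
    }

    private Node<T> remove(T x, Node<T> t) {
        if (t == null) return null;                  // item not found
        int c = x.compareTo(t.element);
        if (c < 0) t.left = remove(x, t.left);
        else if (c > 0) t.right = remove(x, t.right);
        else if (t.left != null && t.right != null) {
            t.element = findMin(t.right).element;    // copy successor up
            t.right = remove(t.element, t.right);    // then delete the copy
        } else {
            t = (t.left != null) ? t.left : t.right; // 0 or 1 child
        }
        return t;
    }

    private Node<T> findMin(Node<T> t) {
        while (t.left != null) t = t.left;
        return t;
    }
}
```

Updating subtrees through the returned node (`t.left = remove(...)`) is what removes the need for the special-case `root` bookkeeping in the original attempt.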
- Anonymous asked:
Problem 3
The deoxyribonucleic acid (DNA) is a molecule that contains the genetic instructions required for the development and functioning of all known living organisms. The basic double-helix structure of the DNA was co-discovered by Prof. Francis Crick. The DNA molecule consists of a long sequence of four nucleotide bases: adenine (A), cytosine (C), guanine (G) and thymine (T). Since this molecule contains all the genetic information of a living organism, geneticists are interested in understanding the roles of the various DNA sequence patterns that are continuously being discovered worldwide. One of the most common methods to identify the role of a DNA sequence is to compare it with other DNA sequences, whose functionality is already known. The more similar such DNA sequences are, the more likely it is that they will function similarly.
Your task is to write a C program, called dna.c, that reads three DNA sequences from a file called dna_input.dat and prints the results of a comparison between each pair of sequences to the file dna_output.dat. The input file dna_input.dat consists of three lines. Each line is a single sequence of characters from the set {A,C,G,T}, that appear without spaces in some order, terminated by the end of line character \n. You can assume that the three lines contain the same number of characters:
ACGTTTTAAGGGCTTAGAGCTTATGCTAATCGCGCGCGTATATCCTCGATCGATCATTCTCTCTAGACGTTTTAAGGGCTAAGGCGCGTAATTATCGTTTGAAGGGCTTAGTTAGTTAGTTCATCGGCGGCGTATATCCTCGATCGATCATTCTCTCTAGACGTTTTAAGGGCTGAGCCGGTCAGTTA
Each of the three lines (shown with wrap-around above) consists of 95 characters: the 94 letters from {A,C,G,T} and the character \n (not shown). The output file dna_output.dat must be structured as follows. For each pair of sequences #i and #j, with i, j in
{1,2,3} and i > j, you should print:
A single line, saying " Comparison between sequence #i and sequence #j:"
The entire sequence #i in the first row, and the entire sequence #j in the third row.
The comparison between the two sequences in the second (middle) row. This should be printed as follows. For each position, if the two bases are the same in both sequences then the corresponding base letter (one of A, C, G, T) should be printed; otherwise a blank " " should be printed.
A single line, saying "the overlap percentage is %x" where x is a floating-point number which measures the percentage of letters that match in the two sequences. This number should be printed with a single digit of precision after the decimal point:
Comparison between sequence #1 and sequence #2:
ACGTTTTAAGGGCTGAGCTAGTCAGTTCATCGCGCGCGTATATCCTCGATCGATCATTCTACGTTTTAAGGGCT AG T G T ATCGCGCGCGTATATCCTCGATCGATCATTCTACGTTTTAAGGGCTT AGAGCTTATGCTAATCGCGCGCGTATATCCTCGATCGATCATTCTCTCTAGACGTTTTAAGGGCTGAGCTAGTCAGTTCCTCTAGACGTTTTAAGGGCT AG A TTCTCTAGACGTTTTAAGGGCTAAGGCGCGTAATTA
The overlap percentage is 80.9%
Comparison between sequence #1 and sequence #3:
ACGTTTTAAGGGCTGAGCTAGTCAGTTCATCGCGCGCGTATATCCTCGATCGATCATTCTCGTTT AAGGGCT AG TAGT AGTTCATCG GCGTATATCCTCGATCGATCATTCTTCGTTTGAAGGGCTT AGT TAGTTAGTTCATCGGCGGCGTATATCCTCGATCGATCATTCTCTCTAGACGTTTTAAGGGCTGAGCTAGTCAGTTCCTCTAGACGTTTTAAGGGCTGAGC GTCAGTTCTCTAGACGTTTTAAGGGCTGAGCCGGTCAGTTA
The overlap percentage is 88.3%
Comparison between sequence #2 and sequence #3:
ACGTTTTAAGGGCTTAGAGCTTATGCTAATCGCGCGCGTATATCCTCGATCGATCATTCTCGTTT AAGGGCTTAG T G T ATCG GCGTATATCCTCGATCGATCATTCTTCGTTTGAAGGGCTTAGTTAGTTAGTTCATCGGCGGCGTATATCCTCGATCGATCATTCTCTCTAGACGTTTTAAGGGCTAAGGCGCGTAATTACTCTAGACGTTTTAAGGGCT AG CG A TTACTCTAGACGTTTTAAGGGCTGAGCCGGTCAGTTA
The overlap percentage is 79.8%
Notes: As part of the solution, you are required to declare, define, and call the following functions. In these functions, you can assume that input and output are global variables of type FILE *.
The function read_DNA(char sequence[]) that reads a DNA sequence from input, stores it in the array sequence[], and returns the number of letters read, as an int.
The function compare_DNA(char seq1[], char seq2[], char seq3[], int n) that stores in the array seq3[] the comparison sequence of the two DNA sequences stored in seq1[] and seq2[]. The length of these DNA sequences is assumed to be n. The function returns, as a double, the percentage of overlap between the two DNA sequences.
The function print_DNA(char seq1[], char seq2[], char seq3[], int n) that prints to output the DNA sequences stored in seq1[] and seq2[], as well as their comparison sequence stored in seq3[], according to the rules explained above. The length of all these sequences is assumed to be n. The function does not return a value.
The numbers 241 and 60, used above, should be defined as symbolic constants MAX_IN_LENGTH and OUT_LENGTH, using the #define compiler directive. The program should keep working correctly if the values of these symbolic constants are changed (within a reasonable range).
1 answer
- foreverdeployed asked2 answers
- Anonymous asked:
Write a C++ program that calculates and displays a student's grade-point average (GPA) after taking three courses. The program reads a three-lined text data file, each line containing a course ID (a string without whitespace), the number of credit hours for the course (an integer), and the grade for the course (an upper- or lowercase character).
A GPA is calculated by dividing the total amount of grade-points by the total number of credit hours. "Grade-points" for a course is the number of credit hours multiplied by a numeric value based on the letter grade: 4 for an 'A', 3 for a 'B', 2 for a 'C', 1 for a 'D', and 0 for an 'F'.
For example, suppose a student earned the following:
Course ID Hours Grade
CISS241 4 'A'
MTH200 3 'B'
HST105 3 'C'
Then, total credit hours is 4 + 3 + 3 = 10, and the total amount of grade-points is 4 * 4 (A) + 3 * 3 (B) + 3 * 2 (C) = 16 + 9 + 6 = 31.
Therefore, GPA = total grade points / total credit hours = 31 / 10 = 3.1.
You can make up your own data file using Notepad. To make it convenient for me to run and evaluate your program, name your input file CISS241_Makeup.dat.
Your program should display in a nice and pretty tabular form all input data along with the calculated grade-points for each course and, of course, the calculated GPA.
1 answer
- dorsty asked:
Below is the text file I read into my program, but I need to store and use the values in different scopes. I'm thinking about using an array but I can't figure out how I can store and display them so a user can choose the one he/she wants. Please see the part I highlighted in red. Thanks.
// This is the dataFile.txt
Maverick_Apartments
7 3 7 11
101 895 2 1.5 true 695.00
102 900 2 2 false 1000.00
103 775 1 1 true 800.00
104 650 1 1 true 725.00
105 825 2 1.5 false 850.00
106 800 2 2 false 800.00
107 750 1 1.5 false 725.00
/** Purpose: This program uses ReadData class toread data from a text file to initialize
* the number of apartments, employees, carports,garages, etc.
* It also initializes the objects for theApartment class based on the input
* from the data file
*/
import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;
public class Proj2Read implements proj2Constants
{
private static Scanner input;
/**
* @param args
*
*/
public static void main(String[] args)
{
int lineCounter = 0;
String aptComplexName;
int maxNumApartments;
int maxNumEmployees;
int maxNumCarports;
int maxNumGarages;
int numAptDetails;
if (args.length < 1){
System.err.println("usage: java classname(e.g.,TestAptComple)" +
" <file name in the same dir>");
System.exit(ABNORMAL_EXIT);
}
try{
input = new Scanner(new File(args[ZEROI]));
}
catch(FileNotFoundException FNFE){
System.err.printf("Could not find the input file %s\n", args[ZEROI]);
System.exit(ABNORMAL_EXIT);
}
aptComplexName = input.nextLine(); // reading a line
System.out.println(aptComplexName);
// call method of AptComplex to assign the name
maxNumApartments = input.nextInt();
maxNumEmployees = input.nextInt();
maxNumCarports = input.nextInt();
maxNumGarages = input.nextInt(); // reading 4 values in 1 line
System.out.printf("%s, %d, %d, %d, %d\n",
aptComplexName, maxNumApartments, maxNumEmployees, maxNumCarports, maxNumGarages);
// call appropriate methods to set the above in the AptComplex class
String comment = input.nextLine(); // to go to next line in input
int i = 1;
while ((i <= maxNumApartments) && input.hasNext())
{ // I WANT TO USE THESE IN A DIFFERENT SCOPE
int aptNumber;
int aptArea;
int aptBedrooms;
double aptBathrooms;
boolean aptPatio;
double monthly_rent;
aptNumber = input.nextInt();
aptArea = input.nextInt();
aptBedrooms = input.nextInt();
aptBathrooms = input.nextDouble();
aptPatio = input.nextBoolean();
monthly_rent = input.nextDouble(); // read all apt fields from one line
comment = input.nextLine();
// call apartment constructor to create an object
System.out.printf("%d, %d, %d, %f, %b, %f\n",
aptNumber, aptArea, aptBedrooms, aptBathrooms, aptPatio, monthly_rent);
i += 1;
}
maxNumApartments = i - 1; // will take care even if the input ends earlier
}
}
public interface proj2Constants
{
int ABNORMAL_EXIT =1;
int BASE_INDEX =0;
String DEFAULT_APT_COMPLEX_NAME ="Maverick Apartments";
int DEFAULT_APT_NUMBER =0;
int DEFAULT_CPORT_NUMBER =0;
int DEFAULT_GARAGE_NUMBER =0;
String DEFAULT_FIRST_NAME ="John";
String DEFAULT_LAST_NAME ="Doe";
int DEFAULT_LEASE_PERIOD = 11;//in months
int DEFAULT_ID =999;
double DEFAULT_SALARY =1500.00;
double MAX_SALARY =10000.00;
int MAX_APARTMENTS =5;
int MAX_TENANTS =5;
double ZEROD =0.000;
int ZEROI = 0;
}
0 answers
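One way to address the poster's highlighted concern, making the per-apartment values usable outside the loop body's scope, is to collect them into objects in a list. A sketch with field names mirroring the post (values are hard-coded here where the real program would use the Scanner):

```java
import java.util.ArrayList;
import java.util.List;

// Each loop iteration's six local variables become one Apartment object;
// the list outlives the loop, so any later scope can display or pick one.
public class ApartmentList {
    static final class Apartment {
        final int number, area, bedrooms;
        final double bathrooms, monthlyRent;
        final boolean patio;

        Apartment(int number, int area, int bedrooms,
                  double bathrooms, boolean patio, double monthlyRent) {
            this.number = number;
            this.area = area;
            this.bedrooms = bedrooms;
            this.bathrooms = bathrooms;
            this.patio = patio;
            this.monthlyRent = monthlyRent;
        }
    }

    public static void main(String[] args) {
        List<Apartment> apartments = new ArrayList<>();
        // In the real program these values come from the Scanner loop.
        apartments.add(new Apartment(101, 895, 2, 1.5, true, 695.00));
        apartments.add(new Apartment(102, 900, 2, 2.0, false, 1000.00));
        // Later, in a different scope, the user can pick one by index:
        Apartment chosen = apartments.get(0);
        System.out.println("Apartment " + chosen.number
                           + " rents for " + chosen.monthlyRent);
    }
}
```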
- Anonymous asked:
Create a class RationalNumbers (fractions) with the following capabilities:
a) Create a constructor that prevents a 0 denominator in a fraction, reduces or simplifies fractions that are not in reduced form, and avoids negative denominators.
b) Overload the addition, subtraction, multiplication, and division operators for this class.
c) Overload the relational and equality operators for this class.
1 answer
- ImpoliteLeopard6758 asked:
Write a program that generates 100 random numbers between 0 and 1000. Display the number of even values generated as well as the smallest, the largest, and the range of values. I know this site does not cover any C# books for programming, but I need some help doing this program in C#.
0 answers
- ImpoliteLeopard6758 asked:
Another program I need for C#: Prompt the user for the length of three line segments as integers. If... More »
0 answers
- Anonymous asked:
int numberDays(int employees)
{
  int total = 0, leaves;
  for (int i = 0; i < employees; ++i) {
    cout << "Enter the leaves for employee " << i + 1;
    cin >> leaves;
    total = total + leaves;
  }
  return total;
}
1 answer
- Anonymous asked:
String filename = FileChooser.getMediaPath("butterfly.jpg");
Picture butterfly = new Picture(filename);
filename = FileChooser.getMediaPath("robot.jpg");
Picture robot = new Picture(filename);
You have been asked to write 3 programs centered around theequation:
distance = 1/2*(acceleration/gravity) * time^2 + velocity *time.
Or, more simply,
d = ½ * g*t^2 + vt <== this equation will not work in C!
The first program should calculate the distance given a time anda velocity.
The second program should calculate the time given a distance and velocity.
The third program should calculate the velocity given a time and distance.
I need it to work on Linux and gcc.
2 answers
- hardcourgymnast asked:
Design and implement a class called Book that contains instance data for the title, author, publisher, and copyright date. Define a Book constructor to accept and initialize this data. Include setter and getter methods for all instance data. Include a toString method that returns a nicely formatted, multi-line description of the book. Create a driver class called Bookshelf whose main method instantiates and updates several Book objects.
THANKS SO MUCH!
1 answer
- BlueSpring21 asked:
Help with the sorted structures for constructor and driver classes:

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;

Collections.sort(arrayList, comparator);

public void sort() {
  for (int j = index; j < length; j++)
    if (array[index] > array[j])
    {
      temp = array[index];
      System.out.println("Sorting ArrayList: " + arrayList);
    }
}
2 answers
- Christy89 asked:
Write a computer program to solve systems of linear congruences of the form
x ≡ ai (mod mi), where (mi, mj) = 1 for i ≠ j.
0 answers
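A system of this form, with pairwise-coprime moduli, is exactly what the Chinese Remainder Theorem solves. A short sketch (Python 3.8+ for the pow(..., -1, m) modular inverse):

```python
from math import gcd

def crt(residues, moduli):
    """Solve x ≡ a_i (mod m_i) for pairwise-coprime moduli m_i.

    Returns the unique solution modulo the product of the moduli.
    """
    # The coprimality condition (m_i, m_j) = 1 from the problem statement.
    assert all(gcd(m1, m2) == 1
               for i, m1 in enumerate(moduli) for m2 in moduli[i + 1:])
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for a, m in zip(residues, moduli):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse of Mi modulo m.
        x += a * Mi * pow(Mi, -1, m)
    return x % M

print(crt([2, 3, 2], [3, 5, 7]))  # the classic example: 23
```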
- Anonymous asked:
Given the equation f(x) = x - exp(-x). How many roots does f(x) = 0 have?
Will the iteration x(n+1) = exp(-x(n)) converge? If not, propose 2 iterations that converge linearly to the root.
1 answer
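A quick numerical check is worth doing before answering: near the fixed point x* (where x* = exp(-x*), about 0.567), the iteration function g(x) = exp(-x) has |g'(x*)| = exp(-x*) = x* < 1, so the iteration does in fact converge linearly:

```python
from math import exp

# Iterate x_{n+1} = exp(-x_n) from an arbitrary starting point.
x = 1.0
for _ in range(100):
    x = exp(-x)

print(x)  # about 0.56714329 (the root of x - exp(-x) = 0)
assert abs(x - exp(-x)) < 1e-12   # x is (numerically) a fixed point
```

Since f(x) = x - exp(-x) is strictly increasing (f'(x) = 1 + exp(-x) > 0) with f(0) < 0 < f(1), this is also its only root.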
- Anonymous asked:
I did all the other questions, I just need help with these 3 problems.
Any help would be appreciated.
Thanks
Most of us have been in a situation where multiple test cases are the same but the input data isn't, or where the same input data is used across multiple test cases.
In cases like these we can use a table-driven approach: a way of testing that allows you to write minimal test code with increased readability.
Let's understand this with an example. Imagine you want to test this Shape class:
class Shape {

  def calculateArea(shape: String, sideOne: Int, sideTwo: Int): Int = {

    def calculateShapeArea(sideOne: Int, sideTwo: Int, area: (Int, Int) => Int) =
      area(sideOne, sideTwo)

    shape.toLowerCase match {
      case "rectangle" | "parallelogram" => calculateShapeArea(sideOne, sideTwo, (a, b) => a * b)
      case "rhombus"                     => calculateShapeArea(sideOne, sideTwo, (a, b) => a * b / 2)
      case _                             => -1
    }
  }
}
Here we can use table-driven approach, it will make our code minimal and also save time.
Let’s understand this approach in scala –
ScalaTest's trait TableDrivenPropertyChecks (or its companion object) can be used. It has two methods:
forAll: checks properties against the rows of a table
whenever: used to indicate that a property need only hold whenever some condition is true
It allows you to create tables with between 1 and 22 columns and any number of rows. You create a table by passing tuples to one of the factory methods of object Table. Each tuple must have the same arity (number of members). The first tuple you pass must contain only strings, because it defines the names of the columns. Subsequent tuples define the data. After the initial tuple that contains the string column names, all tuples must have the same type. For example, if the first tuple after the column names contains two Ints, all subsequent tuples must contain two Ints (i.e., have type Tuple2[Int, Int]).
Testing of Shape Class using TableDrivenPropertyChecks
import org.scalatest.prop.TableDrivenPropertyChecks._
import org.scalatest.{Matchers, WordSpec}

class ShapeSpec extends WordSpec with Matchers {

  val shape: Shape = new Shape()

  val inputData = Table(
    ("testCase", "shapeType", "result"),
    ("shape is rectangle", "rectangle", 36),
    ("shape is rhombus", "rhombus", 18),
    ("shape is triangle", "triangle", -1)
  )

  "calculate Area" should {
    forAll(inputData) { (testCase, shapeType, area) =>
      testCase in {
        val result = shape.calculateArea(shapeType, 6, 6)
        result shouldBe area
      }
    }
  }
}
Each forAll method takes two parameter lists, as shown in the example. The first parameter list is a table. The second parameter list is a function whose argument types and number match those of the tuples in the table. Since our tuples contain a String, a String, and an Int, the function supplied to forAll must take three parameters: a String, a String, and an Int. The forAll method will pass each row of data to the function, and generate a TableDrivenPropertyCheckFailedException if the function completes abruptly for any row of data with an exception that would normally cause a test to fail in ScalaTest.
A DiscardedEvaluationException is thrown by the whenever method (also defined in this trait) to indicate that a condition required by the property function was not met by a row of passed data; this simply causes forAll to skip that row of data.
Hopefully you liked this approach to testing, and it helps you write better test code.
I hacked up this useful wrapper around the Python command-line shell to allow editing of the last typed-in lines of code in an external editor.
Discussion
When working in the Python command shell, I often end up writing 10+ lines of indented code before making a stupid typo. This got irritating enough for me to do something about it. So, here's an 'Interactive Console with an editable buffer'. Hope that people find it useful. To load it up automatically, put it in your Python startup file [ ].
Notes: a) This code works with Python 2.3 only due to its use of tempfile.mkstemp(). Maybe somebody could modify it to work with other Python versions. b) The editor is selected using the EDITOR environment variable and defaults to 'vim' if it has not been defined. c) The command to invoke the editor is '\e' by default. It may be customized by changing the 'EDIT_CMD' variable.
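The recipe's original source is not reproduced on this page; the sketch below is a Python 3-flavored reconstruction of the idea (all names illustrative): a code.InteractiveConsole subclass whose raw_input intercepts the \e command, round-trips the buffered lines through $EDITOR via tempfile.mkstemp(), and pushes the edited lines back into the console.

```python
import code
import os
import tempfile

EDIT_CMD = r"\e"   # command that opens the external editor

class EditableBufferInteractiveConsole(code.InteractiveConsole):
    """Interactive console where typing \\e edits recent lines in $EDITOR."""

    def __init__(self, *args, **kwargs):
        code.InteractiveConsole.__init__(self, *args, **kwargs)
        self.last_buffer = []          # lines available for re-editing

    def raw_input(self, prompt=""):
        line = code.InteractiveConsole.raw_input(self, prompt)
        if line == EDIT_CMD:
            # Write the buffered lines to a temp file and open the editor.
            fd, path = tempfile.mkstemp(suffix=".py")
            os.write(fd, ("\n".join(self.last_buffer)).encode())
            os.close(fd)
            editor = os.environ.get("EDITOR", "vim")
            os.system("%s %s" % (editor, path))
            with open(path) as f:
                lines = f.read().splitlines()
            os.unlink(path)
            # All but the last edited line go straight to the interpreter;
            # the last one is returned as this call's input.
            for l in lines[:-1]:
                self.push(l)
            line = lines[-1] if lines else ""
        self.last_buffer.append(line)
        return line

if __name__ == "__main__":
    EditableBufferInteractiveConsole().interact()
```

The "push all but the last line" step is exactly the extension the multi-line-editing comment below describes: raw_input can only return one line, so the rest are handed to InteractiveConsole.push().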
Use ipython. Get ipython at .
Editing multiple lines. Works fine, even with gvim.exe. However, for editing multiple lines like:
a small extension is required before 'return line'. Only the last line entered via the editor is returned by raw_input(). The other lines are handed over to InteractiveConsole.push(), e.g.
Python file extension. Awesome tip! This brings the standard Python shell to 90% of IPython's functionality (for me).
If you change the following line, the temporary file will have a .py file extension which might be helpful::
Make namespace available to the sub-shell. If you add kwargs to __init__ you can easily pass the local namespace to the InteractiveConsole.
then
JAVA Tutorials
Welcome to the Java Tutorials. The objective of these tutorials is to provide in depth understand of java from basic questions like what is java tutorial, core java, where it is used, what type of applications are created in java and why use java.
In addition to free Java Tutorials, you can find interview questions, how to tutorials and issues and their resolutions of Java.
JAVA Introduction
Java is a programming language that was created by James Gosling at Sun Microsystems (Sun) in 1991 and first made publicly available in 1995; Sun Microsystems was later acquired by Oracle. The platform was originally designed for interactive television, but it was ahead of the digital cable television industry's technology and design at the time. Today, Java remains an open-source programming language that falls under the GPL (General Public License).
The language derives much of its syntax from C and C++, but demands less of the user than those languages (less manual control, more simplicity). For example, tasks such as garbage collection (the process of reclaiming memory no longer used by the program) are automated in Java.
Five principles were used in the creation of the Java programming language:
- It must be simple, object-oriented, and familiar
- It must be robust and secure
- It must be architecture-neutral and portable
- It must execute with high performance
- It must be interpreted, threaded, and dynamic
Java was built as an exclusively object-oriented programming language—which doesn’t mean much right now, but will later in this guide. For now, suffice it to say that object-oriented programming allows for the creation of efficient, organized, and powerful code. Simply put, Java is a multithreaded, object-oriented, platform-independent programming language. This means that Java programs can perform multiple tasks using object-oriented concepts that can work across all platforms and operating systems. It is the most important factor distinguishing Java from other languages.
Java helps us to develop normal desktop applications, mobile applications, and web applications using separate packages such as the J2ME package for mobile application development and the J2EE package for web application development.
In this guide, we are going to learn the basics of object-oriented concepts as they apply to Java programming. We have two different types of application development concepts in Java: console-based application and GUI application development. Let’s see how to develop these types of applications using Java.
What is Java?
Java is a programming language that is supported by all devices, whether it is an Android phone, a Windows computer, or an Apple product. Java’s flexibility has made it one of the most popular programming languages around the globe. Java can be used to create web applications, games, Windows applications, database systems, Android apps, and much more.
Java’s combined simplicity and power makes it different from other programming languages. Java is simple in that it doesn’t expect too much from the user in terms of memory management or dealing with a vast and complex hive of intricate classes extending from each other. Although this doesn’t make much sense right now, it will once we start learning about inheritance in Java.
A Java program is run through a Java Virtual Machine (JVM), which is essentially a software implementation of an operating system that is used to execute Java programs. The compiler (process of converting code into readable instructions for the computer) analyzes the Java code and converts it into byte code, which then allows the computer to understand the instructions issued by the programmer and execute them in the appropriate manner.
The distribution of the Java platform comes in two packages: the Java Runtime Environment (JRE) and the Java Development Kit (JDK). The JRE is essentially the Java Virtual Machine (JVM) that runs Java programs. The JDK, on the other hand, is a fully featured software development kit that includes the JRE, compilers, tools, etc.
A casual user who only wants to run Java programs on their machine would only need to install the JRE, as it contains the JVM that allows Java programs to be executed. However, a Java programmer must download the JDK. We will explore these concepts in greater detail in the next part. As previously stated, Java programming creates an object-oriented and platform-independent program because the Java compiler creates a .class file instead of an .exe file. This .class file is an intermediate file that has byte code, and this is the reason why Java programs are platform independent. However, there are also disadvantages: Java programs take more time to complete their execution because the .class file must first load in the JVM before they are able to run in the OS.
We can develop all kinds of applications using Java, but we need to use separate packages for separate application developments. For example, if you want develop a desktop application, then you need to use JDK; if you want to develop an Android application, then you need to use Android SDK, because they have different sets of classes.
Java Language Structure
We will now use this sample code as an example to start the Java learning process. This code should make it easy to understand the basic structure of a Java program.
import java.util.Scanner;
public class ThisIsAClass {
public static void main (String args[]) {
int x = 5;
System.out.println(x);
}
}
The first line, import java.util.Scanner;, uses the special keyword import, which allows the programmer to bring in tools from the Java library that aren't included by default when starting a new class. After the keyword import, the programmer specifies a specific directory within the Java library for the Scanner tool by typing java.util.Scanner, which first accesses the Java library's Utilities package before accessing the specific tool needed, which in this case is "Scanner."
By default, when starting a Java class, only the bare minimum tools and functions from the Java library that are needed for any basic program will be provided. For example, you don’t need to import packages for a simple program. If the programmer wants to use more than just the basic functionalities, they must use the “Import” keyword to give themselves more tools to work with.
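Note that the sample above imports Scanner without actually using it; the import is there purely to illustrate the keyword. For completeness, here is a slightly extended variant (not part of the original tutorial) that uses the imported tool to read a number:

```java
import java.util.Scanner;

public class ScannerDemo {
    // Reads the first int from the given Scanner; factored out so the
    // same logic works with System.in or with a test string.
    static int readNumber(Scanner in) {
        return in.nextInt();
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in); // standard input
        int x = readNumber(in);              // the imported tool in action
        System.out.println(x);
    }
}
```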
You will also start to notice that there is a semicolon “;” after each statement. This semicolon functions as a period does for an English sentence. When the compiler is going through the program to prepare it for its execution, it will check for semicolons, so it knows where a specific statement ends and a new one starts.
The next thing you’ll notice in the sample code is: public class ThisIsAClass. There are three elements to this line that are very important to understand.
public—Defines the scope of the class and whether or not other classes have access to the class. This may not make sense now, but you will gain a better understanding of what this means when we learn about “Inheritance and Polymorphism.”
class—A class can be thought of as a “section” of code.
For example:
Section {everything here is the content of the section}
Again, you will gain a better understanding of how classes can be useful when we learn about Inheritance and Polymorphism.
ThisIsAClass—This third and final element of this important line is “ThisIsAClass,” which is simply a custom name that the user can define. You can call this anything, as all it does is give the “Section” or “Class” a name.
Section: Name {
Essay/Contents
}
In code, it would be presented:
class ThisIsAClass {
Code
}
Another thing you may be scratching your head over is the pair of curly braces: "{" and "}." All these characters do is tell the compiler where a specific section starts and ends. In English, this could be thought of as starting a sentence with a capital letter or indenting a new paragraph.
One thing to note is that the spacing in code does not affect whether or not it works. However, it is conventional and efficient to write properly spaced code so that you and other programmers can read and understand it more easily. You will learn how to space as you read more sample code. Eventually, you will start to notice trends in how conventional spacing works in programming.
public static void main (String args[]) {
int x = 5;
System.out.println("The number is: " + x);
}
}
As shown in the above sample code, the outermost pair of curly braces belongs together: whatever is written between them is the "Section" or "Class" named "ThisIsAClass." The inner pair of curly braces is a separate section divider for a section nested within the parent section. The parent section is the class, whereas the nested section within it is, by its formal name, a method.
A method is essentially a named sub-section with its own special elements, similar to a class but more specific. A method declaration contains four elements: a scope (public/private/protected), a return type, a name, and parameters. The scope, return type, and parameters are things you will understand better when we learn about Methods, along with Inheritance and Polymorphism.
- public—Scope.
- static—Means the method belongs to the class itself rather than to any object; the main method is required to be static.
- void—Return type.
- main—Name of the method (you can call this anything, but in this case, the main method must always exist in every Java program because it is the starting point of the Java program).
This method is special because it is named "main," so it will be the first method called: the Java runtime always looks for a method named "main" when the program starts. The main method must be declared "public static void" and take "String args[]" as its parameter. Each class may define its own main method, but only one serves as the entry point of a running program. You can think of it as the home base for calling other methods and traveling to other classes (you will learn what this means later). For the remainder of this guide, your programs should be written inside this main method, as it is the special method the runtime will always be looking for.
The contents of this method will also be introduced when learning about “Variables” later on in this guide. For now, this brief explanation should be enough for you to understand what is going on:
int x = 5;—A variable is being declared (a container) that is a type of integer (whole number), and it holds the value of five. The “=” is used to assign values to variables.
System.out.println—This is a classic line that every beginner learns; all it does is print a line of text to the console.
The console is where the program's output is placed so that the programmer can test his or her program. The System.out.println line contains parentheses and a string of text within those parentheses. The string of text must be surrounded by quotation marks in order for the program to know that it has to print out text instead of an actual variable like x. The reason the user can't just write System.out.println("The number is: x"); is that the program won't know whether x is a variable or not. The program reads anything within quotation marks as a piece of text instead of a container holding a value. Therefore, the x must be outside of the quotation marks.
Then again, you can't write System.out.println("The number is: " x); because the program needs a specific symbol to know that the piece of text and the variable are two entities that need to be joined, or "concatenated," together. The + symbol shows that the string of text and the variable x are to be connected; this allows the program to output The number is: 5 (without the quotation marks). That is why the line is: System.out.println("The number is: " + x);
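The concatenation rule above can be checked with a small runnable example — the class and method names here are illustrative, not from the original:

```java
public class Concat {
    // The + operator converts the number to text and joins it onto the string.
    public static String describe(int x) {
        return "The number is: " + x;
    }

    public static void main(String[] args) {
        System.out.println(describe(5));
    }
}
```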
Commenting:
Commenting is the final fundamental concept to understand in a programming language. Although it is not necessary, it can greatly help you and other programmers around you if you ever forget how the program works.
This concept is used by programmers as an opportunity to explain their code using common English. A compiler will always ignore comments, as they aren’t actual code. They are simply explanations that were written to help people understand what was programmed. In order to write a comment, you must use “//” or “/*” and “*/.”
The “//” symbol is used to comment on one line. For example, if you were to explain what a certain variable was going to be used for, you would do the following:
int x = 5; // this variable is going to be used to find the total money
The compiler will ignore the comment, but will still process int x = 5;. However, you can't use the "//" symbol like this:

int x = 5; // this variable is going to be used
to find the total money

This is because the "//" symbol only comments out a single line, and here the comment spills onto a second line, which the compiler will try to read as code. You could instead do:

int x = 5; // this variable is going to be used
           // to find the total money

As long as each line of the comment starts with the symbol, it is fine. Anything on a line after that symbol is considered a comment.
The other technique does the same thing as “//” except it supports multi-line commenting. You must start the comment with “/*” and end it with “*/.”
Example:
int x = 5; /* this variable is going to be used to find the total money */
Variables
What is a Variable?
A variable is essentially a storage unit that holds a certain type of data. It is named by the programmer and used to identify the data it stores. A variable can usually be accessed or changed at any time. You can write information to it, take information from it, and even copy the information to store in another variable.
Variable Types
In Java, a variable has a specific type that determines what size it is and the layout of its memory. The Java language defines three types of variables.
Instance Variables
Instance variables are declared within a class but outside of any method, constructor, or block. They are created when an object is created with the keyword "new" and destroyed when the object is destroyed. An instance variable is visible to all methods, constructors, and blocks within the class where it is declared. Giving a variable the keyword "public" allows it to be accessed from other classes as well; however, it is recommended to set a variable's access modifier to "private" when possible. All instance variables have a default value, unlike local variables, which do not.
Example:
public class Test
{
public int num; // integer named num
public void dog()
{
num = 3;
}
}
In this example, we declare the integer "num" outside of any method, and we can easily access it within any method inside the class "Test."
Class/Static Variables
Static variables are declared within a class but outside of any method, constructor, or block, and are marked with the keyword "static." If a variable is static, only one copy of that variable exists, regardless of how many instances of the class are created. Static variables are rarely used except for constant values — variables that never change.
Example:
public class Test
{
public static final int num = 3; // integer named num
public void dog()
{
System.out.println(num);
}
}
In this example, we declare an int called "num" and set it to be public, static, and final. This means it can be accessed from other classes, only one copy of it can ever exist, and its value can never be changed.
Local Variables
Local variables are declared only within methods, blocks, or constructors. They are created when the method, block, or constructor is entered and destroyed as soon as it exits. You can only access a local variable within the method, block, or constructor where it is declared; it is not visible outside. Local variables have no default value.
Example:
public void cat(){ // method named cat
int x = 0; // int with value of 0
}
In this example, a variable named "x" is declared within the method "cat." That variable only exists within the context of cat, and it cannot be accessed outside of that method.
Data Types
Variables are used to store data. Java has multiple built-in data types that are used to store predefined types of data. These data types are called primitive data types, and are the most basic data types in the Java programming language. You can also create your own data types, which we will go over later. Java has eight primitive data types.
Byte
The byte data type is an 8-bit signed two’s complement integer. It has a default value of zero when it is declared, a maximum value of 127, and a minimum value of -128. A byte is useful when you want to save memory space, especially in large arrays.
Example
byte b = 1; // has a value of one
Short
The short data type is a 16-bit signed two's complement integer. Its maximum value is 32,767 and its minimum value is -32,768. A short is used to save memory or to clarify your code.
Example:
short s = 1; // has a value of one
Int
The int data type is a 32-bit signed two's complement integer. Its maximum value is 2,147,483,647 (2^31 - 1) and its minimum value is -2,147,483,648 (-2^31). Int is the most commonly used data type for integral numbers, unless memory is a concern.
Example:
int i = 1; // has a value of one
Long
The long data type is a 64-bit signed two's complement integer. Its maximum value is 9,223,372,036,854,775,807 (2^63 - 1) and its minimum value is -9,223,372,036,854,775,808 (-2^63). This data type is used when a larger number is required than an int can hold.
Example:
long l = 1; // has a value of one
Float
The float data type is a single-precision 32-bit IEEE 754 floating point. The min and max range is too large to discuss here. A float should never be used when exact precision is necessary, such as when dealing with currency.
Example:
float f = 1200.5f; //value of one thousand two hundred, and a half
Double
The double data type is a double-precision 64-bit IEEE 754 floating point. It is often the data type of choice for decimal numbers. The min and max range is too large to discuss here. Like float, a double should never be used when exact precision is necessary, such as when dealing with currency.
Example:
double d = 1200.5d; // value of one thousand two hundred, and a half
Boolean
A boolean data type represents one bit of data. It can contain two values: true or false. It is used for simple flags to track true or false conditions.
Example:
boolean b = true; // has a value of true
Char
The char data type is a single 16-bit Unicode character. Its minimum value is '\u0000' (or 0) and its maximum value is '\uffff' (or 65,535 inclusive).
Example:
char c = 'w'; // returns the letter "w"
String
Another commonly used data type is called String. String is not a primitive data type, but it is very commonly used and stores a collection of characters, or text.
Example:
String cat = "meow"; // sets value of cat to "meow"
This example gets a String named “cat” and sets it to the string of characters that spell out “meow.”
Declaring a Variable
The declaration of a variable has three parts: the data type, the variable name, and the stored value. Note that there is a specific convention and set of rules used when naming variables. A variable name can be any length of Unicode letters and numbers, but it must start with a letter, the dollar sign "$," or an underscore "_," or the compiler will report an error. It is also common naming convention to start the name with a lowercase letter, with each subsequent word starting with a capital letter. For example, in the variable named "theName," the first word "the" starts with a lowercase letter, and each following word, in this case "Name," starts with a capital letter.
Example:
int theName = 123; // value of 123
In the example above, the data type is int and the name of the variable is theName. The value stored inside that variable is 123. You can also declare a variable without storing a value in it.
Example:
int name; // no value
You can do this if you choose to declare it later.
Using a Variable
After a variable is declared, you can read its value or change it. From then on, you reference it only by its name; the data type is stated only once, when the variable is declared.
Example:
name = 2; // sets the int "name" to a value of 2
The example above sets the value of name to 2. Notice how I never restated the data type.
Example:
System.out.println(name); // prints the value of name to the console
This example reads the value of “name” and writes it to the console.
Variables can also be added together.
Example:
int a; // no value
int b = 1; // value of one
int c = 2; // value of two
a = b + c; // sets a to the value of b + c
In the example above, we set the value of a to equal the value of b and c added together. The addition sign is known as an operator, which we are going to learn about in the following section. It is also possible, to a certain extent, to combine values of variables of different data types.
Example:
int a; // no value
float b = 1; // value of one
int c = 2; // value of two
a = (int) (b + c); // casts b + c to an int and stores it in a
This example is just like the one before, except we have changed the data type of b from int to float. The only difference when adding them together is that we had to include something called a “cast” in the equation. What a cast does is simply let the compiler know that the value of (b + c) should be of the data type int. Note that for this example, if the value of b + c were to equal a decimal number (for example 3.2), the value of a would not be 3.2 but rather 3, because int does not support decimals.
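The truncation behaviour described above can be checked directly; this sketch (the names are my own) casts a float sum down to an int:

```java
public class CastDemo {
    // The (int) cast tells the compiler to treat the float sum as an int,
    // dropping (not rounding) the decimal part.
    public static int truncatedSum(float b, int c) {
        return (int) (b + c);
    }

    public static void main(String[] args) {
        System.out.println(truncatedSum(1.2f, 2)); // 3.2f is truncated to 3
    }
}
```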
Assignment
Using what we have learned about variables, we can now create a simple calculator to add numbers together for us. The first thing we will want to do is declare three variables: one to store the value, one to represent the first number we want to add, and one to represent the second number we want to add. We will declare these variables as double so that we can add decimal numbers:
double a = 0; // stores value of addition
double b = 3.55; // first number to add
double c = 52.6; // second number to add
Next, we will simply set the value of a to equal the value of b and c combined, and then print out the value of a.
a = b + c;
System.out.println(a);
If you run this program, it will print out 56.15. Now you have created a very simple calculator. I highly encourage you to play around with this and test things for yourself. Change the data type of the variables, add more numbers together, and experiment to understand how things work.
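Putting the assignment together, the whole program might look like this — wrapping the addition in a small method is my own choice, so it can be reused:

```java
public class Calculator {
    // Add two decimal numbers and return the result.
    public static double add(double b, double c) {
        return b + c;
    }

    public static void main(String[] args) {
        double b = 3.55;      // first number to add
        double c = 52.6;      // second number to add
        double a = add(b, c); // stores value of addition
        System.out.println(a);
    }
}
```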
With the introduction of LINQ and its many flavours, developers started using this ORM technology to perform queries against an xml document, an sql server database, a in-memory collection of objects.
They were fascinated by its general purpose and its unified approach which can be summed up as "Learn one API-one model and use it against various data sources."
I tell people who are still new to LINQ, or who do not know exactly why LINQ works the way it does, to have a look at the enhancements applied to the C# language in version 3.0.
By that I mean collection initialisers, object initialisers, extension methods, auto-implemented properties, anonymous methods, anonymous types, and the "var" keyword. Search the net for information on those.
Make sure before you go on implementing LINQ based applications, that you have all this knowledge under your belt.
In this post I will explore the issue that puzzles most developers: lambda expressions.
I have seen people using LINQ with the query syntax which is more T-SQL like and more familiar to most developers.
Lambdas are used extensively in LINQ queries which are by nature pretty functional. So we must understand Lambdas if we want to use LINQ more efficiently.
Lambdas are a shorthand (shortcut) for anonymous methods/delegates. We all know that a delegate allows us to create a variable that points to a method.
Through that variable we can invoke the method at any time. So, just to make sure that everyone follows along: lambdas are essential in LINQ method syntax.
I will demonstrate that with a hands-on example using an ASP.NET web site and C#. I will use a simple array of strings to do that.
1) Fire up Visual Studio 2008/2010. Express editions will work fine.
2) Create an empty web site. Give it the name of your choice.
3) Add a web form item to your site. Leave the default name
4) We will try something first with delegates and then we will move on to show the same example using Lambdas
5) Let's just say that we want to find all names from a list of names that contain the letter "A" and letter "B". We can solve that problem without using delegates but in this example I will use delegates.
6) Type this code inside the _Default class (the first line in the code below is just for reference)
public partial class _Default : System.Web.UI.Page
{
    public delegate bool MyUsefulStringFunctions(string myS);

    public static bool ContainsA(string s)
    {
        return s.Contains("A");
    }

    public static bool ContainsB(string s)
    {
        return s.Contains("B");
    }

    public static string[] ManipulateArray(string[] theStrings,
        MyUsefulStringFunctions theFunction)
    {
        ArrayList theList = new ArrayList();
        foreach (string s in theStrings)
        {
            if (theFunction(s))
            {
                theList.Add(s);
            }
        }
        return (string[])theList.ToArray(typeof(string));
    }
7) First I create a delegate that returns bool and takes an input parameter of type string. Remember, declaring a delegate is like defining an object type, but what we actually write is a method signature for a function.
public delegate bool MyUsefulStringFunctions(string myS);
8) Then I create two instances of that delegate. Those two methods return bool and have an input parameter of type string.
public static bool ContainsA(string s)
{
    return s.Contains("A");
}

public static bool ContainsB(string s)
{
    return s.Contains("B");
}
9) Now we can pass these methods as an input parameter to another method I have created.
public static string[] ManipulateArray(string[] theStrings,
    MyUsefulStringFunctions theFunction)
{
    ArrayList theList = new ArrayList();
    foreach (string s in theStrings)
    {
        if (theFunction(s))
        {
            theList.Add(s);
        }
    }
    return (string[])theList.ToArray(typeof(string));
}
This method accepts an array of strings and accepts an instance of the delegate.Then for each string in the array of strings that I pass to this method I will get this delegate (algorithm) applied to them.
All those strings that satisfy the algorithm (e.g contain the letter A) will be added to a new array of strings and returned back from the method.
10) In the Page_Load event handler of the Default.aspx page, type:
string[] names = { "Aidan", "George", "Bryony", "Mary", "Joy", "Alastair",
    "John", "Oliver", "Ken", "Jessy", "Joddie", "Helen", "Wesley", "Elvis" };

string[] mynamesA = ManipulateArray(names, ContainsA);
string[] mynamesB = ManipulateArray(names, ContainsB);

foreach (string s in mynamesA)
{
    Response.Write(s);
    Response.Write("<br/>");
}
11) I define an array of strings (names). Then I call the ManipulateArray method, passing the array of strings (names) and, as a second parameter, ContainsA, which is an instance of our delegate declaration.
I save the return results of my method to another array (mynamesA, mynamesB) and just loop through it.
Run your application and see the names containing the letter "A" printed out on the screen. Make sure you add breakpoints so you can see the flow of the execution. Obviously we could achieve that end result without using delegates; I just wanted to tell/remind you what delegates are and how we use them.
12) Now I am going to rewrite my small application by using anonymous methods
//public static bool ContainsA(string s)
//{
//    return s.Contains("A");
//}
//public static bool ContainsB(string s)
//{
//    return s.Contains("B");
//}
string[] names = { "Aidan", "George", "Bryony", "Mary", "Joy", "Alastair",
    "John", "Oliver", "Ken", "Jessy", "Joddie", "Helen", "Wesley", "Elvis" };

string[] mynamesA = ManipulateArray(names, delegate(string theS)
{
    return theS.Contains("A");
});

foreach (string s in mynamesA)
{
    Response.Write(s);
    Response.Write("<br/>");
}
Make sure your application runs as expected. We pass to the ManipulateArray method an array of strings like before and, yes, you guessed it, a delegate as a second input parameter. A delegate that, if you look carefully, is of type
public delegate bool MyUsefulStringFunctions(string myS);
because it returns a boolean and accepts a string. Well, you must be wondering when you are going to learn about lambdas... First of all, I am going to write more posts on LINQ quantifiers and operators using both the query syntax and the method syntax. In this example I am going to change our small application to include lambdas. As I said, lambdas are like a shorthand definition of anonymous methods. They are anonymous methods in disguise.
I want you to just change this line of code
string[] mynamesA = ManipulateArray(names, delegate(string theS)
{
    return theS.Contains("A");
});
with this line of code.
string[] mynamesA = ManipulateArray(names, theS => theS.Contains("A"));
Run your application again. It will work.
theS is the input parameter, => is the separator, and theS.Contains("A") is the algorithm applied to every string in the names array.
We basically say "Evaluate every string we give you, and if it contains the letter A,return true otherwise return false."
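For comparison — this is an aside, not part of the original C# walkthrough — the same delegate-to-lambda idea exists in Java, where a lambda implements a functional interface such as Predicate:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class LambdaDemo {
    // Analogous to ManipulateArray: keep only the names the predicate accepts.
    public static List<String> filter(List<String> names, Predicate<String> p) {
        return names.stream().filter(p).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("Aidan", "George", "Mary", "Alastair");
        // s -> s.contains("A") plays the role of the C# lambda theS => theS.Contains("A").
        System.out.println(filter(names, s -> s.contains("A")));
    }
}
```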
Email me if you need the source code. Stay tuned for more posts with lambda expressions.
Hope it helps!!!!!
Nice comparison.
Do you have any more info on this?
|
Question
Calculate the NPV of the hybrid model, using the annual fuel savings as the annual cash inflow for the 10 years you would own the car.
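As a sketch of the underlying formula — the discount rate, price premium, and savings figures below are placeholders, since the question's actual numbers are not given in this excerpt — NPV discounts each year's fuel savings back to the present:

```java
public class Npv {
    // NPV = -initialCost + sum over t = 1..years of inflow / (1 + rate)^t
    public static double npv(double rate, double initialCost, double annualInflow, int years) {
        double total = -initialCost;
        for (int t = 1; t <= years; t++) {
            total += annualInflow / Math.pow(1.0 + rate, t);
        }
        return total;
    }

    public static void main(String[] args) {
        // Placeholder figures: $5,000 hybrid price premium, $600 annual fuel savings,
        // 6% discount rate, 10-year ownership period.
        System.out.println(npv(0.06, 5000.0, 600.0, 10));
    }
}
```

If the result is positive, the hybrid's discounted fuel savings more than pay back the price premium at that rate.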
|
Check out and open as a project group (...Required Projects). There are various error and warning badges, as expected. F11 to build, and then Resolve Problems on all projects which show the warning badge to download any other dependencies locally. There are still Java error badges on some projects, and they persist after a restart. Deleting $userdir/var/cache/index/ and restarting makes the badges all disappear, meaning that the Java indexer cache is to blame.
Dev build. Opened karaf/shell/console from inside and saw a bunch of errors - to be expected, since most dependencies are not available yet. Did Build with Dependencies, which succeeds. Most of the errors disappeared, yet one on src/main/java/org/apache/felix/karaf/shell/console/jline/Console.java remained:
method addCompletor in class jline.ConsoleReader cannot be applied to given types
required: jline.Completor
found: org.apache.felix.karaf.shell.console.jline.CompleterAsCompletor
Rewriting to
jline.Completor completor = new CompleterAsCompletor(...)
shows an "incompatible types" error. Jumping to the neighboring source for CompleterAsCompletor shows
...
import jline.Completor;
...
public class CompleterAsCompletor implements Completor {
...
with no error badges.
This same basic problem has happened for a long time (not a recent regression), and is very common when working with Maven projects. The Java parser tries to refresh after the dependencies become available, yet it still seems to cache some information about the type hierarchy which never gets corrected. The errors interfere with regular work, and restarting the IDE does not help, so there is no satisfactory workaround besides deleting the parser cache -> P2.
Moreover, Maven 3.0 now requires that you specify versions for all plugins, even for predefined ones like compiler or jar. However, NB reports a warning that we override the version number for those plugins when it shouldn't.
(In reply to comment #2)
> NB reports a warning that we
> override the version number for the plugins where it shouldn't.
Nothing to do with this issue. If you find any bugs in the Maven support, please file them under projects/maven with complete steps to reproduce.
Aside from the two bugs linked in depends on, I have found two bugs in the java indexing:
1. order of rebuilding supplementary files: even though the JavaCustomIndexer sends supplementary roots to rebuild in the dependency order, due to absorbing of the works inside RepositoryUpdater, the files/roots may actually be rebuilt in wrong order, leading to wrong error badges. I will attach a test case, consisting of four projects. To reproduce:
-unpack, hg up -r 0
-start the IDE
-open all projects, wait for scan
-stop the IDE
-hg up -r 1
-start the IDE
2. (not 100% sure how this happens yet) consider three source roots (sr1, sr2, sr3). All of them are indexed, sr2 and sr3 contain some errors. During initial scan, the following happens:
-sr1 is parsed, contains a new class that fixes sr2, supplementary indexing is scheduled (but will happen after the initial scan finishes)
-sr2 is parsed, but does not have any direct changes (so the errors remain there)
-sr3 (depends on sr2) the source path for sr3 changes, the indexing may fix some error and introduces new ones (due to errors in sr2)
-supplementary indexing runs, sr2 is fixed, but sr3 is not reindexed anymore
I think I know how to fix 1., not yet sure about 2 (might be possible to merge the supplementary indexing of sr2 to its initial scan, but I did not succeed so far).
Created attachment 106460 [details]
Testcase for problem 1.
There are still open bugs on related topics in the Maven component which are not going to be touched for 7.0; maybe for 7.0.1.
Integrated into 'main-golden', will be available in build *201103110400* on (upload may still be in progress)
Changeset:
User: Jan Lahoda <jlahoda@netbeans.org>
Log: #188323: ensuring that follow-up works are processed in the dependency order.
The above fix and fixes for bug #196554 fix the known problems in java.source. I use steps similar to these:
-clean userdir, no $HOME/.m2, cleaned (i.e. not built) checkout of maven:
$ svn info
Path: .
URL:
Repository Root:
Repository UUID: 13f79535-47bb-0310-9956-ffa450edef68
Revision: 1074467
Node Kind: directory
Schedule: normal
Last Changed Author: bentmann
Last Changed Rev: 1074306
Last Changed Date: 2011-02-24 22:31:36 +0100 (Thu, 24 Feb 2011)
-start the IDE, open the checkout, and open all its modules
-wait until scan is finished, build the main project
-after that, click on each project and resolve problems (not very user-friendly, BTW)
-there are still many projects with errors (I believe this is at least partially caused by the maven support reporting wrong source path)
-restart the IDE, wait until the scan finishes
-there shouldn't be any error badges in any of the projects
Would be good to test both with and without:
-J-Dorg.netbeans.modules.java.source.indexing.JavaCustomInxer.no.one.pass.compile.worker=true
(In reply to comment #9)
> click on each project and resolve problems (not very user-friendly, BTW)
No, it's not. Bug #189442
> maven support reporting wrong source path
Bug #190852 you mean?
I have tested the fix in the trunk build number 201103140400 (Ubuntu, Jdk6u24). I agree with integration of the fix to NB7.0. Honzo, please integrate if there is no objection from Jesse. Thanks
Verified in the trunk.
The patch seems good to me.
(In reply to comment #11)
> please integrate if there is no objection from Jesse
I do not know enough about the patch to either approve of it or object to it.
Transplanted to release70:
Verified in the following 70 build:
Product Version: NetBeans IDE 7.0 RC1 (Build 201103230000)
Java: 1.6.0_24; Java HotSpot(TM) Client VM 19.1-b02
System: Linux version 2.6.35-22-generic running on i386; UTF-8; en_US (nb)
|
Does C++ have a header for this function:
sleep()?
I really need help.
I'm on Unix , and if I include < unistd.h >
i can use sleep(int) where int is how many seconds to sleep.
Its <windows.h>, and make sure you spell it Sleep(int), note the Uppercase S, and this has been posted before.
Code:
#include <windows.h>

// void Sleep(DWORD dwMilliseconds);  // already declared in windows.h

int main()
{
    Sleep(1000); // pauses for 1000 milliseconds; note the capital S
    return 0;
}
thank you so much
I saw someone use it but misspell it, and then it didn't work.
One more question:
is there a header for delay() and gotoxy(int,int)?
I have a headache of a homework... newbie though:
I have to write a C++ program to scroll a user-input message across the screen (like Java does), and make it blink.
It is fun but gives a headache after all, so I need to find resources and help.
If you use Dev-C++ to compile, clrscr(); will help to make it flash — use conio. But if you use MSVC++, use cout << endl; just before system("cls"); as it clears the buffer (windows.h for this one). That should help you with most of it; then you just have to place it in an appropriate loop.
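Since the homework mentions scrolling a message "like Java does", here is one minimal way to sketch the idea in Java — the rotation helper and the timing are my own assumptions, not a standard recipe:

```java
public class Marquee {
    // Rotate the message left by n characters: one "frame" of the scroll.
    public static String shift(String msg, int n) {
        int k = n % msg.length();
        return msg.substring(k) + msg.substring(0, k);
    }

    public static void main(String[] args) throws InterruptedException {
        String msg = "HELLO WORLD ";
        for (int i = 0; i < 3; i++) {
            System.out.println(shift(msg, i));
            Thread.sleep(200); // pause between frames, like Sleep() in the thread above
        }
    }
}
```

Printing each frame on a new line is a simplification; clearing the screen between frames (as discussed above) gives the real scrolling effect.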
|
10-22-2009 04:28 AM
Ydaraishy, I'm not sure I fully understand your comment about using "field width, not the character width * number of characters". Can you clarify?
I think I need to use a dynamic value that depends on the text. In my terminology, the "field width" is constant, in that I want to draw a background behind the text, that does not resize. I actually wasn't going to use one character's width, times the number of characters, as posted in code previously. I was just going to use getAdvance(getText()). However, I'm finding that this has some problems, too.
Anyway, I think I have an acceptable solution. It's not pretty, not very flexible, and has plenty of other downsides. However, it does conform to my functional definition of "right justified". The code is below.
By the way, this field is one-line only, so I'm not concerned about newlines. Also, as stated before, I don't use a label for this EditField, so the code below does not handle labels.
For the benefit of others who might be unfortunate enough to need to use this code, the whole reason I override paint() is to draw a custom, partly transparent, rounded-rectangle background. If you don't need that, you can ignore everything about alpha, background color, and paint(). I also allow setting of a custom font in this class, which is a separate issue from the text-alignment. Anyway, here's what I have so far.
import net.rim.device.api.ui.Graphics;
import net.rim.device.api.ui.component.BasicEditField;
import net.rim.device.api.ui.Manager;
import net.rim.device.api.ui.Font;
import net.rim.device.api.ui.Color;

/**
 * CustomEditField provides a BasicEditField, with a custom rounded rectangle background, and
 * text that can be left, right, or center-justified with FIELD_LEFT, FIELD_RIGHT, or FIELD_HCENTER.
 */
public class CustomEditField extends Manager {

    private int _fieldHeight = 0;
    private int _fieldWidth = 0;
    private int _bgColor = Color.WHITE;
    private int _alpha = 0xFF;
    private long _alignment;
    private Font _font = Font.getDefault();
    private AlignableEditField _editField;

    /**
     * Create a new instance of CustomEditField
     * @param initialValue Initial text to show in the field.
     * @param maxNumChars Maximum number of characters this field can hold.
     * @param style Styles for this field (see Field for usable styles).
     */
    public CustomEditField(String initialValue, int maxNumChars, long style) {
        super(NO_VERTICAL_SCROLL | NO_VERTICAL_SCROLLBAR);
        _alignment = (style & BasicEditField.FIELD_HALIGN_MASK);
        _editField = new AlignableEditField(null, initialValue, maxNumChars, style);
        add(_editField);
    }

    public String getText() {
        return _editField.getText();
    }

    public void setFont(Font value) {
        _font = value;
        _editField.setFont(_font);
    }

    /**
     * Set the field's background alpha value
     * @param value - the alpha value (0 to 255) to use for the field's background opacity.
     */
    public void setAlpha(int value) {
        _alpha = value;
    }

    /**
     * Set the field's background color.
     * @param color - the RGB color value for the field's background.
     */
    public void setBackgroundColor(int color) {
        _bgColor = color;
    }

    protected void sublayout(int width, int height) {
        if (_fieldWidth == 0) {
            // we only set these once
            _fieldHeight = height;
            _fieldWidth = width;
            setExtent(width, height);
        }
        // add 17 pixels of padding on the right, so the cursor is always visible
        int textWidth = _editField.getPreferredWidth() + 17;
        layoutChild(_editField, textWidth, height);
        if (_alignment == FIELD_RIGHT) {
            setPositionChild(_editField, width - textWidth, 0);
        } else if (_alignment == FIELD_HCENTER) {
            setPositionChild(_editField, (width - textWidth) / 2, 0);
        } else { // FIELD_LEFT
            setPositionChild(_editField, 0, 0);
        }
    }

    protected void paint(Graphics g) {
        int oldColor = g.getColor();
        int oldAlpha = g.getGlobalAlpha();
        g.setColor(_bgColor);
        g.setGlobalAlpha(_alpha);
        // paint a rounded rectangle background
        g.fillRoundRect(0, 0, _fieldWidth, _fieldHeight, 15, 15);
        // reset the graphics state to where it was
        g.setColor(oldColor);
        g.setGlobalAlpha(oldAlpha);
        super.paint(g);
    }

    public int getPreferredWidth() {
        return _fieldWidth;
    }

    public int getPreferredHeight() {
        return _fieldHeight;
    }

    private void layoutEditField() {
        // force this Manager to recalculate field layout based on current text width
        sublayout(_fieldWidth, _fieldHeight);
    }

    /**
     * The AlignableEditField provides a BasicEditField whose preferred
     * width depends on the field's current (text) contents. This appears
     * to be required for right, or center-justified text.
     */
    private class AlignableEditField extends BasicEditField {

        public AlignableEditField(String label, String initialValue,
                                  int maxNumChars, long style) {
            super(label, initialValue, maxNumChars, style);
        }

        public int getPreferredWidth() {
            return _font.getAdvance(getText());
        }

        protected boolean keyChar(char key, int status, int time) {
            boolean result = super.keyChar(key, status, time);
            // changes in the field's text require a new layout (width change)
            layoutEditField();
            return result;
        }
    }
}
This code creates one of these things:
_rightJustifiedField = new CustomEditField("000000", 7,
    Field.FIELD_RIGHT | BasicEditField.FILTER_INTEGER);
_rightJustifiedField.setFont(fieldFont);
_rightJustifiedField.setBackgroundColor(Color.WHITE);
_rightJustifiedField.setAlpha(0x88);
10-22-2009 04:57 AM
A couple quick mods, after running this on the 9630 and seeing some bugs.
I made the padding on the right side of the text dynamic, as opposed to hard-coded (17px before).
So, AlignableEditField.getPreferredWidth() becomes
public int getPreferredWidth() {
    // add some padding on the right, so the cursor is always visible
    return Math.max(_font.getAdvance(getText()), _font.getAdvance('0'))
            + _font.getAdvance(' ');
}
and CustomEditField.sublayout simplifies to
protected void sublayout(int width, int height) {
    if (_fieldWidth == 0) {
        // we only set these once
        _fieldHeight = height;
        _fieldWidth = width;
        setExtent(width, height);
    }
    int textWidth = _editField.getPreferredWidth();
    layoutChild(_editField, textWidth, height);
    if (_alignment == FIELD_RIGHT) {
        setPositionChild(_editField, width - textWidth, 0);
    } else if (_alignment == FIELD_HCENTER) {
        setPositionChild(_editField, (width - textWidth) / 2, 0);
    } else { // FIELD_LEFT
        setPositionChild(_editField, 0, 0);
    }
}
10-22-2009 05:36 AM
If you've manually set the field width to be X so that it paints a background, you should do that separately, and draw your field on top of that (the manager could do this for you, if you so wish). This means that the field should resize automatically and look seamless on the background.
This also has the advantage that the field placement is not dependent on the font details or content but merely the size of the field.
(I could be misunderstanding, however.)
10-22-2009 06:26 AM
Please note my update to this post:
My suggestions made previously do not work; apologies for misdirecting you.
When I have time, I will work on this some more.
10-24-2009 03:42 AM - edited 10-24-2009 03:45 AM
No worries, Peter. If it was easy, I probably wouldn't have had to ask
I'm still not sure I'm on the same page with Ydaraishy. The implementation I just posted is based on the assumption, confirmed by Peter's comments, that an EditField WILL draw the text on the left side of its boundary. Therefore, in order to make it appear that the text is right-justified, it is required to adjust the left boundary of the EditField based on the EditField's CURRENT text. That means taking into account the width of the current text, in the current Font. I would love to not have to do that, but I don't see how I can avoid it, and achieve true right-justification. Please provide some specific details, if anyone knows how else to achieve this. I'm certainly not happy with my implementation, but it's the only functional solution I've been able to code so far.
To summarize my "solution", I am dynamically adjusting the X position of the EditField inside a Manager (my CustomEditField class). In order to encapsulate that crazy behaviour, I wrap the EditField inside this CustomEditField class, whose boundaries do not have to change. So, from the perspective of the enclosing Manager that will contain a CustomEditField, it doesn't have to pull any strange tricks. The tricks are hidden in CustomEditField.
Unfortunately, one downside of this implementation is that it also hides the API of the underlying EditField. As you can see in my implementation, if the enclosing Manager of the CustomEditField wants access to the EditField's text, I have to code a wrapper function, getText(), to expose the EditField.getText() method. I will have to do this for every method in the EditField API that I need to use, so it's a maintenance problem.
But, again, I can't figure out another way to accomplish this.
10-24-2009 05:41 AM
I'm sorry I haven't been as clear as I could have been -- I probably don't see the whole picture as clearly as you who is mired in this problem at the moment
I think also I haven't needed to play with the mechanics of the EditField very closely before.
But your solution sounds about right as to what I was thinking.
As to your "pass-through" problem, if you can expose the underlying EditField through your custom manager, you could say something along the lines of CustomEditField.editField().getText() instead of having to add methods to your CustomEditField, perhaps.
10-24-2009 01:38 PM
Ok, gotcha.
I certainly could add one method to allow access to the underlying (Basic)EditField, which would prevent me from having to add wrappers for everything in the EditField API I want to use externally. But, then, I've broken encapsulation, and the Manager that contains my new Field is made aware of the fact that this CustomEditField thing is not really an EditField itself.
It would be ideal if I could treat my CustomEditField as if it were a subclass of EditField, that provides everything an EditField does, and also adds the ability to center or right-justify text. But, I'll admit that I'm satisfied to just have something that functions!
This is one of those times where I actually could produce a more useful solution, with much less code, in a language that supported full multiple-inheritance. But, Java does not, so I have to either break encapsulation, or maintain a bunch of wrapper code.
I probably will go with your solution after I go through the trouble to wrap one more EditField API call. It certainly will be less code in the long run. Thanks.
06-24-2010 09:43 AM
06-30-2010 06:42 AM
Any answer related to focus on CustomEditFields?
08-21-2010 06:58 PM
Hello all,
when I try to use the CustomEditField shown here it works OK, but after I add the custom field to the screen,
any field after the custom field is ignored when I run my app in the BB simulator.
Why??
I need your help.
|
https://supportforums.blackberry.com/t5/Java-Development/Right-aligning-Text-in-Basic-EditField/m-p/361819
|
CC-MAIN-2017-13
|
refinedweb
| 1,720
| 53.71
|
I'm just starting to teach myself C++, and have begun learning about integer overflow. Out of curiosity I wrote some tests just to see what occurs with certain integer values.
Here's my program:
#include <iostream>
int main()
{
int x(0);
std::cout << x << std::endl;
x = x + 2147483647;
std::cout << x << std::endl;
x = x + 1;
std::cout << x << std::endl;
std::cout << std::endl;
unsigned int y(0);
std::cout << y << std::endl;
y = y + 4294967295;
std::cout << y << std::endl;
y = y + 1;
std::cout << y << std::endl;
}
0
2147483647
-2147483648
0
4294967295
0
Integers (generally) take a 32-bit representation. If you have 32 bits, the non-negative signed values run from 0 to 2^31 - 1, i.e.,

00000000000000000000000000000000
00000000000000000000000000000001
. . .
01111111111111111111111111111111
^------------------------------- sign bit

0 indicates a positive number, 1 indicates a negative number.
If you add 1 to
01111111111111111111111111111111, you get
10000000000000000000000000000000, which is -2147483648 in decimal.
Using an unsigned integer, there's no sign bit and, ipso facto, you can have a number twice as large as your largest signed integer. However, when the number rolls over again (i.e.,
11111111111111111111111111111111 +
00000000000000000000000000000001), you simply roll back to
00000000000000000000000000000000.
For a more in-depth understanding, you can look at two's complement, which is how integers are represented in computers. (Note that in standard C++, overflowing a signed integer is technically undefined behavior; the wraparound shown above is simply what you typically observe on two's-complement hardware.)
|
https://codedump.io/share/8RPJzVD00lsR/1/c-integer-overflow
|
CC-MAIN-2017-47
|
refinedweb
| 211
| 69.41
|
You can subscribe to this list here.
Showing
1
results of 1
Stefan Monnier writes:
> Here is a patch that attempts to clean up part of the EIEIO namespace.
> If you like it, please install it upstream, so it will get merged into
> Emacs later on. If you prefer, I can install it into Emacs directly and
> you'll merge it later on upstream, of course.
Thank you for the patch. I know it cannot have been fun. :-)
I didn't have time yet to look at your patch in detail. I did apply it
to CEDET trunk and got byte-compile errors because there's
eieio-class-parent as well as eieio--class-parent.
As for the general direction of the cleanup: We did discuss this a bit
in the bug report you opened for it some time ago, and Eric stated that
he'd at least like to keep the CLOS-compatible names without having to
prefix everything with 'eieio-'. Your suggestion was to use the shorter
'cl-' prefix instead, and at least I think that is a good compromise. So
instead of using 'eieio-class-name', for instance, we'd rather use
'cl-class-name'. I don't know enough CLOS to see which other names are
affected by this (but I could easily look it up, of course).
|
http://sourceforge.net/p/cedet/mailman/cedet-eieio/?viewmonth=201302&viewday=12
|
CC-MAIN-2015-06
|
refinedweb
| 222
| 79.4
|
#include <synch.h> (or #include <thread.h>) int rw_unlock(rwlock_t *rwlp);
Use rw_unlock(3T) to unlock a read-write lock pointed to by rwlp. The read-write lock must be locked and the calling thread must hold the lock either for reading or writing. When any other threads are waiting for the read-write lock to become available, one of them is unblocked. (For POSIX threads, see "pthread_rwlock_unlock(3T)".)
rw_unlock() returns zero after completing successfully. Any other returned value indicates that an error occurred. When any of the following conditions occur, the function fails and returns the corresponding value.
EINVAL
Invalid argument.
EFAULT
rwlp points to an illegal address.
|
http://docs.oracle.com/cd/E19620-01/805-5080/sthreads-74628/index.html
|
CC-MAIN-2016-30
|
refinedweb
| 109
| 61.22
|
HISTOGRAMS IN NUMPY
In this tutorial, we will learn about creating histograms in numpy along with using matplotlib library followed by a handy and simple example:
What is a Histogram?
A histogram is a kind of bar chart used for representing statistical information: each bar's height reflects the frequency of a class interval (a grouped range of data), with class intervals on the x-axis and frequencies on the y-axis. Histograms also have bins, which are the widths between consecutive bin edges. If the class intervals are equal, the bins are the same size; otherwise different intervals have different bin sizes.
What is Histograms in Numpy?
NumPy has a numpy.histogram() function that computes the frequency distribution of data, but we will be drawing histograms with matplotlib to keep things simple. In a histogram, the total range from the min to the max value is divided into equal-width parts. These parts are known as bins or class intervals.
import numpy as np

a = np.array([21,22,23,24,25,26,28,30,32,33,34,35,40,41,42,43,44,50,51,52,55,56,56])
hist, bins = np.histogram(a, bins=[0, 20, 40, 60, 80, 100])
print(hist)
print(bins)
Output:
[ 0 12 11 0 0]
[ 0 20 40 60 80 100]
Let’s look at a quick example of how to use histogram with matplotlib in numpy:
import numpy as np
import matplotlib.pyplot as plt

x = np.random.randn(100)
plt.hist(x, color='b', bins=5, rwidth=0.8)  # rwidth sets the relative bar width
plt.title("Random Histogram")
plt.xlabel("x axis")
plt.ylabel("y axis")
plt.show()
Output:
We will learn about using histograms in detail in our matplotlib section.
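For intuition about the binning rule np.histogram applies (each bin is half-open, except the last, which also includes its right edge), the logic can be sketched in another language. This C++ version is an illustration only and reproduces the counts from the integer example above:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Count how many data points fall into each bin defined by consecutive
// edges. Bin i covers [edges[i], edges[i+1]); the last bin also includes
// its right edge, matching numpy.histogram's documented behaviour.
std::vector<int> histogram(const std::vector<double>& data,
                           const std::vector<double>& edges)
{
    std::vector<int> counts(edges.size() - 1, 0);
    for (double x : data) {
        for (std::size_t i = 0; i + 1 < edges.size(); ++i) {
            bool last = (i + 2 == edges.size());
            if (x >= edges[i] &&
                (x < edges[i + 1] || (last && x == edges[i + 1]))) {
                ++counts[i];
                break;
            }
        }
    }
    return counts;
}
```

With the data and edges from the NumPy example, this yields the counts 0, 12, 11, 0, 0.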
|
https://python-tricks.com/histograms-in-numpy/
|
CC-MAIN-2021-39
|
refinedweb
| 311
| 58.08
|
[Token tables from the W3C XQuery/XPath grammar applet; only the lexical-state descriptions are recoverable:]

One lexical state occurs inside a namespace declaration, and is needed to recognize an NCName that is to be used as the prefix, as opposed to allowing a QName to occur (otherwise, the difference between NCName and QName is ambiguous). Another state occurs at places where the keyword "namespace" is expected, which would otherwise be ambiguous compared to a QName; QNames cannot occur in this state. Another occurs at places where the keywords "preserve" and "strip" are expected, to support "declare xmlspace"; QNames cannot occur in this state either. One state distinguishes tokens that can occur only inside the ItemType production. Another allows XML-like content, without these characters being misinterpreted as expressions; the character "{" marks a transition to the DEFAULT state. When an end tag is terminated, the state is popped to the state that was pushed at the start of the corresponding start tag. The "<!--" token marks the beginning of an XML comment, and the "-->" token marks the end; this allows no special interpretation of other characters in this state. Within attribute content, "{" marks the start of an embedded expression (a transition to the DEFAULT state) and "}" pops back to the original state; to allow curly braces to be used as character content, a double left or right curly brace is interpreted as a single curly brace character. Quote-delimited attribute content allows apostrophes without escaping, with an unescaped quote marking the end of the state; apostrophe-delimited content is the mirror image, in which quotes are allowed and an unescaped apostrophe marks the end of the state.
|
http://www.w3.org/2002/11/xquery-xpath-applets/xpath-grammar.xml
|
CC-MAIN-2016-44
|
refinedweb
| 412
| 52.9
|
When building a player, you sometimes want to modify the built player in some way. For example you might want to add a custom icon, copy some documentation next to the player or build an Installer. You can do this via editor scripting using BuildPipeline.BuildPlayer to run the build and then follow it with whatever postprocessing code you need:-
// C# example.
using UnityEditor;
using System.Diagnostics;

public class ScriptBatch
{
    [MenuItem("MyTools/Windows Build With Postprocess")]
    public static void BuildGame()
    {
        // Get filename.
        string path = EditorUtility.SaveFolderPanel("Choose Location of Built Game", "", "");
        string[] levels = new string[] { "Assets/Scene1.unity", "Assets/Scene2.unity" };

        // Build player.
        BuildPipeline.BuildPlayer(levels, path + "/BuiltGame.exe", BuildTarget.StandaloneWindows, BuildOptions.None);

        // Copy a file from the project folder to the build folder, alongside the built game.
        FileUtil.CopyFileOrDirectory("Assets/Templates", path + "/Templates");
    }
}

You can also run scripts with the Process class from these methods, as shown in the last section. This parameter is used to sort the build methods from lower to higher, and you can assign any negative or positive value to it.
|
https://docs.unity3d.com/Manual/BuildPlayerPipeline.html
|
CC-MAIN-2019-30
|
refinedweb
| 215
| 50.53
|
The attached patch against 2.6.5-udm4 fixes persistent mirror reactivation:
- The count_bits() fxn was broken. It went into an infinite loop because n was never incremented.
- The size parameter passed into the find_*_bit fxns from count_bits() was also wrong: it needs to be in bits, not bytes.
Fix persistent mirror reactivation
- count_bits() fxn was broken. It went into an infinite loop because n was never incremented.
- the size parameter passed into the find_*_bit fxns from count_bits() was also wrong - it needs to be in bits, not bytes.

--- diff/drivers/md/dm-log.c	2004-04-14 16:22:54.111484595 -0500
+++ source/drivers/md/dm-log.c	2004-04-15 13:06:31.323250244 -0500
@@ -426,23 +426,16 @@
 static int count_bits(unsigned long *addr, unsigned size)
 {
-	/* FIXME: test this */
-#if 1
 	int n, count = 0;
+	unsigned long bitsize = size << 3;
 
-	n = find_first_bit(addr, size);
-	while (n < size) {
+	n = find_first_bit(addr, bitsize);
+	while (n < bitsize) {
 		count++;
-		find_next_bit(addr, size, n + 1);
+		n = find_next_bit(addr, bitsize, n + 1);
 	}
 
 	return count;
-#else
-	int count = 0;
-	for (i = 0; i < lc->region_count; i++)
-		count += log_test_bit(lc->sync_bits, i);
-	return count;
-#endif
 }
 
 static int disk_resume(struct dirty_log *log)
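To see why the two fixes matter, here is a user-space sketch of the corrected loop. The find_next_bit below is a simplified stand-in for the kernel helper (an assumption for illustration, not the kernel code); the important points are that its return value must be assigned back to n, and that the size it receives must be in bits:

```cpp
#include <cassert>

// Simplified stand-in for the kernel's find_next_bit: return the index of
// the first set bit at or after `offset`, or `size` if none is set.
static unsigned long find_next_bit(const unsigned long *addr,
                                   unsigned long size, unsigned long offset)
{
    const unsigned long bits_per_word = 8 * sizeof(unsigned long);
    for (unsigned long n = offset; n < size; ++n)
        if (addr[n / bits_per_word] & (1UL << (n % bits_per_word)))
            return n;
    return size;
}

static int count_bits(const unsigned long *addr, unsigned size_bytes)
{
    unsigned long bitsize = (unsigned long)size_bytes << 3; // bytes -> bits
    int count = 0;
    unsigned long n = find_next_bit(addr, bitsize, 0);
    while (n < bitsize) {
        count++;
        // Without the assignment here, n never advances: that is the
        // infinite loop the patch fixes.
        n = find_next_bit(addr, bitsize, n + 1);
    }
    return count;
}
```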
|
https://www.redhat.com/archives/dm-devel/2004-April/msg00035.html
|
CC-MAIN-2015-22
|
refinedweb
| 221
| 65.52
|
Blog Map
[Blog Map] [Table of Contents] [Next Topic]
Perhaps the best way to compare and contrast the imperative (stateful) coding style and the functional coding style is to present examples that are coded in both approaches.
This blog is inactive. New blog: EricWhite.com/blog. This example will use some of the syntactic constructs that are presented in detail further on in this tutorial. Don't worry if this example contains code that you don't understand; it is presented so that you can see the big picture of the comparison of the two styles. Later, after you have read through the rest of the tutorial, if necessary return to these examples and review them. In this topic, we're more concerned with seeing the big picture.
The example will consist of two separate transformations. The problem that we want to solve is to first increase the contrast of an image, and then lighten it. So we want to first brighten the brighter pixels, and darken the darker pixels. Then, after increasing contrast, we want to increase the value of each pixel by a fixed amount. (I'm artificially dividing this problem into two phases. Of course, in a real world situation, you would solve this in a single transformation, or perhaps using a transform specified with a matrix).
To further simplify the mechanics of the transform, for the purposes of this example, we'll use a single floating point number to represent each pixel. And we'll write our code to manipulate pixels in an array, and disregard the mechanics of dealing with image formats.
So, in this first example, our problem is that we have an array of 10 floating point numbers. We'll define that black is 0, and pure white is 10.0.
When coding in a traditional, imperative style, it would be a common approach to modify the array in place, so that is how the following example is coded. The example prints the pixel values to the console three times – unmodified, after the first transformation, and after the second transformation.
The following code is attached to this page (Example #1).
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

static class Program
{
    private static double Limit(double pixel)
    {
        if (pixel > 10.0)
            return 10.0;
        if (pixel < 0.0)
            return 0.0;
        return pixel;
    }

    private static void Print(IEnumerable<double> pixels)
    {
        foreach (var p in pixels)
            Console.Write(String.Format("{0:F2}", p).PadRight(6));
        Console.WriteLine();
    }

    public static void Main(string[] args)
    {
        double[] pixels = new[] { 3.0, 4.0, 6.0, 5.0, 7.0, 7.0, 6.0, 7.0, 8.0, 9.0 };
        Print(pixels);
        for (int i = 0; i < pixels.Length; ++i)
        {
            if (pixels[i] > 5.0)
                pixels[i] = Limit((pixels[i] - 5.0) * 1.5 + 5.0);
            else
                pixels[i] = Limit(5.0 - (5.0 - pixels[i]) * 1.5);
        }
        Print(pixels);
        for (int i = 0; i < pixels.Length; ++i)
            pixels[i] = Limit(pixels[i] + 1.2);
        Print(pixels);
    }
}
This example produces the following output:
3.00  4.00  6.00  5.00  7.00  7.00  6.00  7.00  8.00  9.00
2.00  3.50  6.50  5.00  8.00  8.00  6.50  8.00  9.50  10.00
3.20  4.70  7.70  6.20  9.20  9.20  7.70  9.20  10.00 10.00
Here is the same example, presented using queries. The following code is attached to this page. (Example #2):
double[] pixels = new[] { 3.0, 4.0, 6.0, 5.0, 7.0, 7.0, 6.0, 7.0, 8.0, 9.0 };

Print(pixels);

IEnumerable<double> query1 =
    from p in pixels
    select p > 5.0 ?
        Limit((p - 5.0) * 1.5 + 5.0) :
        Limit(5.0 - (5.0 - p) * 1.5);
Print(query1);
IEnumerable<double> query2 = from p in query1 select Limit(p + 1.2);
Print(query2);
This example produces the same output as the previous one.
However, there are significant differences. In the second example, we did not modify the original array. Instead, we defined a couple of queries for the transformation. Also, in the second example, we never actually produced a new array that contained the modified values. The queries operate in a lazy fashion, and until the code iterated over the results of the query, nothing was computed.
Here is the same example, presented using queries that are written using method syntax (Example #3):

IEnumerable<double> query1 = pixels.Select(
    p =>
    {
        if (p > 5.0)
            return Limit((p - 5.0) * 1.5 + 5.0);
        else
            return Limit(5.0 - (5.0 - p) * 1.5);
    }
);
Print(query1);

IEnumerable<double> query2 = query1.Select(p => Limit(p + 1.2));

Print(query2);
Because the second query operates on the results of the first query, we could tack the Select on the previous call to Select (Example #4):

IEnumerable<double> query1 = pixels.Select(
    p =>
    {
        if (p > 5.0)
            return Limit((p - 5.0) * 1.5 + 5.0);
        else
            return Limit(5.0 - (5.0 - p) * 1.5);
    }
).Select(p => Limit(p + 1.2));
Print(query1);
This ability to just tack the second Select on the end of the first one is an example of composability. Another name for composability is malleability. How much can we add/remove/inject/surround code with other code without encountering brittleness? Malleability allows us to shape the results of our query.
All three of the above approaches that were implemented using queries have the same semantics, and same performance profile. The code that the compiler generates for all three is basically the same.
In your example:
if (p > 5.0)
return Limit((p - 5.0) * 1.5 + 5.0);
else
return Limit(5.0 - (5.0 - p) * 1.5);
"(p-5.0)*1.5+5.0" and "5.0-(5.0-p)*1.5" are perfectly equivalent!
'All three of the above approaches that were implemented using queries have the same semantics, and same performance profile. The code that the compiler generates for all three is basically the same.'
I beg to differ.
The 'algorithmic' code would look similar or the same, but the underlying metaphor between the first example and the next two is large. By requiring that copies be made of the initial array, there is an underlying churn being induced in the heap. A churn which requires garbage collection of those self-same constructs. In this meager example those affects are relatively minor, but as things scale in complexity and size, those affects can become major. That makes the difference larger than it might first appear.
Memory churn scales with the number of compositions and the size of the intermediate result sets. This is not unlike some of the issues exhibited by SQLs during execution, which have some marked (and quite nasty) side effects.
@R King:
I think that maybe you misunderstood which queries I was referring to. Example #1 is the algorithmic approach, which is has a completely different performance profile from examples #2, #3, and #4, which are implemented via queries. (I've labeled the examples above so that what I'm referring to is clear.) #2 is implemented with query expressions, which are translated by the compiler into calls to extension methods. Example #3 is the same query expressed in method syntax. Example #4 is the same as #3, except it has the last Select tacked onto the end. #2, #3, and #4 have the same performance profile.
It is true that the queries induce a larger number of short-lived objects on the heap. The garbage collector is optimized for handling many short-lived objects. I have regularly used code similar to the final results of this tutorial on a set of documents that are fairly large: > 200 documents, each approx 50K in size. The query code executes for all 200 of the documents in about 2 seconds. The performance is very good.
Regarding #2, #3, and #4, they don't create intermediate result sets as such, due to lazy evaluation. So even if the source array was extremely large, the amount of long-term memory used doesn't increase.
I absolutely agree that there are certain scenarios where intruducing a large number of objects on the heap would result in unacceptible perf. But in those scenarios, you might choose another technology, such as C or C++. If you were processing XML and need good perf on extremely large XML documents, you may use a streaming parser such as SAX or XmlLite.
One of the ideas behind LINQ is that we have these incredibly powerful computers, and in many circumstances, we can use the power of the computer to make the developer's job easier. We don't care whether the resulting code runs in .02 seconds or in 2 seconds, if the developer was able to write the code much faster.
Does this make sense?
-Eric
Eric,
It does indeed make sense.
Let me give you a little of my background.. I've worked for 25+ years, many of them doing very large scalable systems. In the last 10 years or so I've been involved in hiring engineers, and I've run into a very large number of engineers that don't take these issues into account, even when performance of websites and such depends on such things. I've worked in C++, Java, and for the last year C#. At one level or another all these systems suffer from heap churn if you don't pay attention to what you are doing. Its why I pointed out the issue.
Thanks again for your continued thoughtful responses, and your contribution to making the .NET environment and its underlying environment the easy thing to use that it is. I frequent your blog and find it most illuminating.. :)
Regards,
RK
[Table of Contents] [Next Topic] To do functional programming in C#, it is important that we have a base
It should be mentioned that deferred execution makes the last example the most efficient. Printing the result before and after the last Select generates two iterations through the array, while the last example will scan the array only once.
Deferred execution creates another trap for novices: there is no need to follow the style presented in the third example. Removing the intermediate printing from the second example will create code equivalent to the third one.
Shorter code! :D
double[] pixels = new[] { 3.0, 4.0, 6.0, 5.0, 7.0, 7.0, 6.0, 7.0, 8.0, 9.0 };
Print(pixels);
IEnumerable<double> query1 =
pixels.Select(
p => (p > 5.0) ? Limit((p - 5.0) * 1.5 + 5.0) : Limit(5.0 - (5.0 - p) * 1.5)
).Select(p => Limit(p + 1.2));
http://blogs.msdn.com/b/ericwhite/archive/2008/04/22/an-example-presented-in-both-coding-styles.aspx
The EventGenerator class manages a whole event generator run.

#include <EventGenerator.h>

An AnalysisHandler can be added to this event generator during the initialization phase (in the doinit() function).

Some member functions are typically used by objects which need to introduce other Interfaced objects depending on the parameters of the StandardModel object used. Note that objects which use these functions MUST override the preInitialize() function to return true, otherwise the whole initialization procedure may be corrupted.

Member function documentation (summaries):

- Check if there has been an interrupt signal from the OS; if so, finalize() is called.
- Make a simple clone of this object. Implements ThePEG::InterfacedBase.
- Finalize this object. Called in the run phase just after a run has ended; used e.g. to write out statistics. Reimplemented from ThePEG::InterfacedBase.
- Finish generating an event constructed from the outside. Is called by generateEvent(tEventPtr).
- Finish generating an event starting from a Step constructed from the outside. Is called by generateEvent(tStepPtr).
- Run this EventGenerator session. Is called from go(long,long,bool); calls the virtual method doGo().
- Initialize this object after the setup phase, before saving an EventGenerator to disk.
- Initialize this generator. Is called from initialize(). This is done automatically if go() is used; calls the virtual method doInitialize().
- Initialize this object. Called in the run phase just before a run begins.
- Generate one event. Is called from shoot(); calls the virtual method doShoot().
- The base filename used in this run. The actual files are called filename.run, filename.dump, filename.out, filename.log and filename.tex for the input configuration file, output dump file, output file, log file, and reference file respectively. The filename is constructed from path() and runName(). Definition at line 297 of file EventGenerator.h.
- Indicate that the run has ended and call finish() for all objects including this one. Note that finish() should not be called directly.
- Find a decay mode given a decay tag.
- Find a matcher in this run given its name.
- Find a particle in this run, using its PDG name.
- Make a clone of this object, possibly modifying the cloned object to make it sane.
- Finish generating an event which has already been partially constructed from the outside. Calls the virtual method doGenerateEvent().
- Histogram scale: a histogram bin which has been filled with the weights associated with the Event objects should be scaled by this factor to give the correct cross section.
- Dynamically load the Main class in the given file, making it run its Init() method where it may use this EventGenerator. Also call the initialize() function before and the finish() function afterwards.
- Access the special particles used in this generator. Not relevant in the run phase. Definition at line 557 of file EventGenerator.h. References theLocalParticles.
- Function used to read in the object persistently.
- Function used to write out the object persistently.
- Create a new Interfaced object to be used in the run being initialized.
- Create a decay mode according to the given tag.
- Manipulate an interface of an Interfaced object.
- Manipulate an interface of vector type (RefVector or ParVector) of an Interfaced object.
- Register a new object to be included in the run currently being initialized.
- Return a reference to the stream connected to the file for references from used objects.
- Register a given object as used. Only objects registered in this way will be included in the file with model references. Definition at line 930 of file EventGenerator.h.
https://thepeg.hepforge.org/doxygen/classThePEG_1_1EventGenerator.html
q-table rows drag and drop
- claudiofbezerra last edited by
q-table rows drag and drop? I’m trying without success so far. Has anyone ever got it?
I once tried it with q-table without success and would like to know if it’s possible, too.
My plan B was to use a standard <table> and to wrap the <tr> elements.
That was before Quasar v1, so nowadays I would use QMarkupTable.
But it would be nicer to be able to use it in a QTable, of course.
I’m not saying it is impossible - I just gave up quickly…
I just tried again and I still can’t do it. I think I have an explanation why it doesn’t work… Let me show you my approach:
As a starting point I looked at the example at:
<q-table ... >
  <template v-slot:body="props">
    <q-tr :props="props">
      <q-td ... > ... </q-td>
      ...
    </q-tr>
  </template>
</q-table>
So I thought I could just replace the <q-tr> with the <draggable> tag from the drag-&-drop library and configure it to render as a <q-tr>. Like this:
<q-table ... >
  <template v-slot:body="props">
    <draggable ... >
      <q-td ... > ... </q-td>
      ...
    </draggable>
  </template>
</q-table>
Now here is why this won’t work:
The <q-tr> and the body slot don’t correspond to the <tr> and <tbody> elements that are rendered in the DOM in the end.
The drag-functionality actually gets attached to the cells instead of the rows!
The only way I see that this could work is if we somehow can render the individual rows (the actual <tr> elements) with a v-for by ourselves.
- coopersamuel last edited by
This is a bummer - any thoughts on this from the quasar folks?
- lucasfernog last edited by
- coopersamuel last edited by
I need the features of QTable
There is a way, but it requires some work…
The library I like to use for drag & drop (vuedraggable) is actually just a wrapper for Sortable.js to make it work with vue.js.
You could simply use Sortable.js (or something similar) and make it work with vue.js in your own way.
<q-table id="myTable" ... >
import Sortable from "sortablejs";

mounted() {
  // grab the element containing the <tr> elements
  const element = document.querySelector("#myTable tbody");
  const sortable = Sortable.create(element, {
    onEnd(event) {
      // gets called when dragging ended.
      // Sortable.js only swaps the elements in the DOM,
      // so we need to swap the elements in the table data using the
      // indexes (event.oldIndex and event.newIndex) or probably better
      // by using ids if you have pagination
    }
  });
}
And then you also need to somehow notify Sortable.js when your data changes (after sorting, on pagination, etc.).
I don’t see an update() or refresh() function in the documentation, so you might have to call destroy() and re-initialize Sortable again every time, but idk.
… something like that.
I recently found this very promising vue library to add drag and drop functionality:
It seems very easy and powerful.
Just throwing it out here.
https://forum.quasar-framework.org/topic/4595/q-table-rows-drag-and-drop/3
Created attachment 23228 [details]
example code
I can set a print area and read it back just fine, but it does not get saved when the document is written to disk. I have example code and files (both input and output.)
Created attachment 23229 [details]
example input
Created attachment 23230 [details]
example output
I've tried poi-3.2-final, poi-3.5-final, and poi-3.6
This problem does not happen in poi-3.2,
but it does happen in poi-3.5-final and poi-3.6.
Any chance you could create 4 files:
* a simple document, but with enough data to make the print area make sense
* that document, as saved by excel having set the print area
* that document, as saved by POI 3.2 having set the print area
* that document, as saved by a recent POI svn checkout, having set the print area
If you're also able to use org.apache.poi.hssf.dev.BiffViewer to spot the differences between the 4, that'd be handy, as that'll help us narrow in on which bits have gone wrong
Created attachment 25545 [details]
TestPoiPrintArea.java
Created attachment 25546 [details]
Generated Excel File when TestPoiPrintArea.java is run with poi-3.6.jar
Created attachment 25547 [details]
Generated Excel File when TestPoiPrintArea.java is run with poi-3.2-FINAL.jar
Sorry,
I'm not able to use BiffViewer.java.
I also do not have good enough internet connection to checkout from poi svn.
I've attached 3 files:
1. TestPoiPrintArea.java
2. test_poi_36.xls: Generated Excel file when TestPoiPrintArea.java is being run with poi-3.6.jar (set print area failed)
3. test_poi_32.xls: Generated Excel file when TestPoiPrintArea.java is being run with poi-3.2-FINAL.jar (set print area success)
Do you need another document ?
Set print area has not worked at all since poi-3.5-FINAL.jar; this is not an intermittent bug or one specific to a certain case.
Thank you very much..
Could you please generate a file with the same data and print area in excel? That way we can also check that what we do matches what excel does once fixed, and will also be a useful one for a read unit test
Created attachment 25548 [details]
same document + set print area written manually in Excel (not using POI)
Created attachment 25552 [details]
Generated Excel File when TestPoiPrintArea.java is run with poi-3.7-SNAPSHOT-20100608.jar
Set print area is still not working in poi-3.7-snapshot.
Fixed in r953180. It seems that some versions of excel are pickier than others about the TabIdRecord when processing the print rules, and we'd stopped updating it for new sheets....
(In reply to comment #12)
> Fixed in r953180. It seems that some versions of excel are pickier than others
> about the TabIdRecord when processing the print rules, and we'd stopped
> updating it for new sheets....
Thanks a lot..
I will try when poi-3.7-SNAPSHOT-20100610.jar is available on
I've run TestPoiPrintArea.java using poi-3.7-SNAPSHOT-20100610.jar and the print area is still not set (bug still not solved).
Does this snapshot contain fixed version (r953180) ?
Created attachment 25577 [details]
Generated Excel File when TestPoiPrintArea.java is run with poi-3.7-SNAPSHOT-20100610.jar
Hmm, maybe it wasn't just the TabID problem
The only other difference I can spot is in the print area name record:
.NameIsMultibyte = false
.Name (Unicode text) = Print_Area
.Formula (nTokens=1):
- org.apache.poi.hssf.record.formula.Area3DPtg [sheetIx=0 ! $A$1:$C$1]R
+ org.apache.poi.hssf.record.formula.Area3DPtg [sheetIx=0 ! $A$1:$C$1]V
(- is for the 3.2 version, + for the 3.7 one)
It seems to have been switched from a by-reference to by-value definition
I've no idea why, or what this'll do to excel (well, other than confuse yours but not mine). I guess one for Josh (our resident formula guru)
Is this topic dead? I'm noticing the same bug in the POI 3.7 release. If there is any information I can provide tell me and I'll do what I can.
Best Regards
(In reply to comment #17)
> Is this topic dead? I'm noticing the same bug in the POI 3.7 release. If there
> is any information I can provide tell me and I'll do what I can.
I'd suggest you try hacking the Ptg on the definition to be Reference not Value, and see if that fixes it. (It'll probably mean some low level fiddling about in the record layer)
If it does, we can make the change in POI. If not, more work will be needed to identify the problem.
I found a difference between HSSFWorkbook and XSSFWorkbook.
In XSSFWorkbook#setPrintArea(), FormulaParser#parse() is called as follows.
FormulaParser.parse (formulaText, fpb, FormulaType.NAMEDRANGE, getSheetIndex ());
The third argument is FormulaType.NAMEDRANGE.
However, in HSSFWorkbook#setPrintArea(), third argument is FormulaType.CELL as follows.
HSSFFormulaParser.parse (sb.toString (), this, FormulaType.CELL, sheetIndex)
In fact, if I create a following class, setPrintArea works fine.
--------------------------------------------------------------------------------
import java.lang.reflect.Field;
import java.util.regex.Pattern;
import org.apache.poi.hssf.model.HSSFFormulaParser;
import org.apache.poi.hssf.model.Workbook;
import org.apache.poi.hssf.record.NameRecord;
import org.apache.poi.hssf.record.formula.SheetNameFormatter;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;
import org.apache.poi.ss.formula.FormulaType;
public class MyHSSFWorkbook extends HSSFWorkbook {
private static final Pattern COMMA_PATTERN = Pattern.compile(",");
private Workbook workbook;
public MyHSSFWorkbook() throws NoSuchFieldException, IllegalAccessException {
this.workbook = getLowLevelWorkbook();
}
private Workbook getLowLevelWorkbook() throws NoSuchFieldException, IllegalAccessException {
Field field = HSSFWorkbook.class.getDeclaredField("workbook");
field.setAccessible(true);
return (Workbook) field.get(this);
}
@Override
public void setPrintArea(int sheetIndex, String reference) {
NameRecord name =
workbook.getSpecificBuiltinRecord(NameRecord.BUILTIN_PRINT_AREA, sheetIndex + 1);
if (name == null) {
name = workbook.createBuiltInName(NameRecord.BUILTIN_PRINT_AREA, sheetIndex + 1);
// adding one here because 0 indicates a global named region; doesn't make sense for print areas
}
String[] parts = COMMA_PATTERN.split(reference);
StringBuffer sb = new StringBuffer(32);
for (int i = 0; i < parts.length; i++) {
if (i > 0) {
sb.append(",");
}
SheetNameFormatter.appendFormat(sb, getSheetName(sheetIndex));
sb.append("!");
sb.append(parts[i]);
}
// FormulaType.CELL -> FormulaType.NAMEDRANGE
name.setNameDefinition(HSSFFormulaParser.parse(sb.toString(), this, FormulaType.NAMEDRANGE,
sheetIndex));
}
}
--------------------------------------------------------------------------------
Is this a bug?
Thanks for tracking that down. I've changed HSSFWorkbook to set the correct type, and added a unit test for it. Committed in r1069780.
https://bz.apache.org/bugzilla/show_bug.cgi?id=46664
Using Methods With Value and Reference Types
In this post I will show you the practical usage of methods with value types and reference types in C#. I have used a console application to run the code I posted here. It might be helpful if you can also create your own console application so you can try the code samples yourself.
Value and Reference Types in C#
Here is the list of all value types in C#:
- Structs (ex: System.DateTime)
- Enums (ex: System.Data.CommandType)
- Integral types (char, sbyte, byte, short, ushort, int, uint, long, ulong)
- float
- decimal
- double
- bool
And here are the reference types:
- class
- interface
- delegate
- dynamic
- object
- string
Using Methods With Value Types
Consider the following method:
static void AddTwo(int x) { x = x + 2; }
Which is used in the following code fragment:
int myInt = 5; AddTwo(myInt); Console.WriteLine("Value of myInt: {0}", myInt);
What do you think will be displayed as the value of 'myInt'?
You might think that the answer is 7. However, this is not the correct answer. Instead, the value would still be 5. Inside the method, it will seem as if the value of the parameter changes. But once the method is finished, you will find that the original value of the variable passed to the method remains unaffected.
This is the behavior of value types. In the AddTwo method, we say that the parameter x is passed by value. There is a way for the changes to persist even after the method finishes. But for now, let's talk about using methods with reference types.
Using Methods With Reference Types
In the previous example, you saw that the original parameter remains unaffected. In other words, any changes made inside the method were not persisted once the method block finishes.
When using reference types, there are some types of changes that will persist, and some that won't. To demonstrate these scenarios, let's create an employee class:
public class Employee { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } }
And let's create a new John Smith instance:
Employee john = new Employee { Id = 1, FirstName = "John", LastName = "Smith" };
Now that we have an employee set up, we can create methods that will make changes to it and we can see if the changes will persist or not.
The first method that we will create changes the name of the employee:
static void ChangeName(Employee employee) { employee.FirstName = "Jane"; }
Now let's run the method:
ChangeName(john); Console.WriteLine("Employee first name after ChangeName: {0}", john.FirstName);
You will see that the name was successfully changed even after the method has finished. Changing property values of classes will persist even after the method finishes.
Now let's create another method. This time, the employee parameter will be set to null:
static void Nullify(Employee employee) { employee = null; }
And let's call the method this way:
Nullify(john); Console.WriteLine("Employee is null after Nullify? {0}", john == null);
Now, will john be null?
This time, the answer will be false. The changes to the parameter did not persist after the method block finished. Similarly for the following two methods:
static void Newify(Employee employee) { employee = new Employee { Id = 5, FirstName = "Jane", LastName = "Doe" }; } static void Assignify(Employee employee) { Employee anotherEmployee = new Employee { Id = 5, FirstName = "Jane", LastName = "Doe" }; employee = anotherEmployee; }
In the Newify method, the employee parameter is assigned to a new Employee object. In the Assignify method, the employee parameter is assigned to a different Employee object. Do you think the changes will persist?
Newify(john); Console.WriteLine("Employee first name after Newify: {0}", john.FirstName); Assignify(john); Console.WriteLine("Employee first name after Assignify: {0}", john.FirstName);
The answer is no to both questions: the changes will not persist. Even after calling Newify and Assignify, the first name of the john object will not change. Changes to the reference of a reference type (ex: assigning to null, assigning to a new object, assigning to an existing object) will not be persisted outside of the method.
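The same point can be sketched in Java, where every parameter is likewise passed by value; the Employee class and method names below are hypothetical, mirroring the C# ones above. Mutating the object a parameter refers to persists, while reassigning the parameter only changes a local copy of the reference.

```java
// Hypothetical Employee class mirroring the C# one above.
class Employee {
    String firstName;
    Employee(String firstName) { this.firstName = firstName; }
}

public class PassByValueDemo {
    // Mutating the object the parameter points to persists for the caller...
    static void changeName(Employee e) { e.firstName = "Jane"; }

    // ...but reassigning the parameter only changes the method's local
    // copy of the reference, so the caller's variable is untouched.
    static void newify(Employee e) { e = new Employee("Ghost"); }

    public static void main(String[] args) {
        Employee john = new Employee("John");
        changeName(john);
        System.out.println(john.firstName); // Jane
        newify(john);
        System.out.println(john.firstName); // still Jane
    }
}
```

The reference itself is copied into the method, which is exactly why Nullify, Newify, and Assignify have no visible effect on john.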
Using the Ref and Out Parameters
We have talked about changes that will not persist outside of the method. For value types, any change in the method does not persist. For reference types, assigning the parameter to something else will also not persist. How do we change that?
The answer is by using the ref and out keywords. Let's modify the AddTwo method to read the following:
static void NewAddTwo(ref int x) { x = x + 2; }
Notice that now, the ref keyword is added to the parameter. And we now call the method in this way:
int myInt = 5; NewAddTwo(ref myInt); Console.WriteLine("Value of myInt: {0}", myInt);
Notice that the ref keyword is also added at the method invocation. If you run the code, you will find that the answer is now 7: the changes to the variable in the method have persisted even after the method block finishes.
You can do the same for reference types. If you use the ref keyword on the Nullify, Newify, or Assignify methods, you will notice that the changes will persist after the methods have been called.
In place of the ref keyword, you can also use the out keyword to achieve the same effect. The difference is that when using the ref keyword, the variable has to be assigned before using it as a parameter in the method.
static void SetToSeven(ref int x) { x = 7; } static void Main(string[] args) { int myOtherInt; SetToSeven(ref myOtherInt); // error: myOtherInt has to be assigned }
When using the out keyword, there is no need to initialize the variable.
static void SetToSeven(out int x) { x = 7; } static void Main(string[] args) { int myOtherInt; SetToSeven(out myOtherInt); // valid }
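For contrast, here is a minimal Java sketch (the names are made up): Java has neither ref nor out, so the usual substitutes are returning the new value, or passing a mutable holder such as a one-element array.

```java
public class OutDemo {
    // Java has no ref/out; returning the new value is the idiomatic substitute.
    static int addTwo(int x) { return x + 2; }

    // A one-element array acts as a crude "ref" cell when a return
    // value is not an option.
    static void addTwoRef(int[] cell) { cell[0] += 2; }

    public static void main(String[] args) {
        int myInt = 5;
        myInt = addTwo(myInt);          // caller explicitly stores the result
        System.out.println(myInt);      // 7

        int[] box = {5};
        addTwoRef(box);                 // mutation through the holder persists
        System.out.println(box[0]);     // 7
    }
}
```

This is why C#'s ref and out keywords are genuinely additive: they let a method write through to the caller's variable without such workarounds.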
That's it! I hope you enjoyed this post and good luck with your project.
http://www.ojdevelops.com/2013/04/using-methods-with-value-and-reference.html
Well, I was reading here
how to begin. I created a workspace called CS201,
a project called hw1, and a source file called problem1_1.
Now, do I have to use these names (CS201 for the workspace, hw1, hw2, hw3 for projects, and problem1_1, problem1_2... for source files)?
OK, that isn't that important, but I got some problems when I started to use it.
- first of all, after I close Visual Studio, open it again and open the workspace, when I try to build the program (create hw1.exe) it says:
--------------------Configuration: hw1 - Win32 Debug--------------------
Linking...
LIBCD.lib(crt0.obj) : error LNK2001: unresolved external symbol _main
Debug/hw1.exe : fatal error LNK1120: 1 unresolved externals
Error executing link.exe.
hw1.exe - 2 error(s), 0 warning(s)
what does that mean?
- why, when I run my code, does it disappear after a sec? how do I keep it?
- when i type
#include <iostream.h>
int main (void)
{
cout << "Hello World";
}
it doesn't show me the "Hello World". Why?
I'll be glad to get help. Thanks.
https://cboard.cprogramming.com/cplusplus-programming/53795-starting-programming-cplus-visual-studio-6-0-a.html
Sun and Eclipse Squabble 423
gbjbaanb writes "CNET news is reporting on a potential spat between Sun and Eclipse: 'Sun Microsystems has sent a letter to members of Eclipse, urging the increasingly influential open-source project to unify rather than fragment the Java-based development tool market.' Although Sun's letter says it wants interoperability, and a 'broad base' for java tools, it then insists Eclipse should push to be a 'unifying force for Java technology'. Competing tools is a good thing, but it sounds like Sun just wants everything to work its way."
Eclipse will take out Sun (Score:5, Funny)
let's see sun invents java, ibm, makes a tool ... (Score:5, Insightful)
Re:let's see sun invents java, ibm, makes a tool . (Score:5, Funny)
Re:let's see sun invents java, ibm, makes a tool . (Score:2, Informative)
Re:let's see sun invents java, ibm, makes a tool . (Score:2)
Re:let's see sun invents java, ibm, makes a tool . (Score:3, Funny)
KFG
Re:let's see sun invents java, ibm, makes a tool . (Score:3, Interesting)
a) my company pays for it
b) my company also bought me a 2.6GHz P4 box with a gig of RAM
I have tried Eclipse and netbeans (and AnyJ), but didn't really get on with them. That was probably mostly due to being used to JBuilder, though, rather than through any real failing of the alternatives.
I really see no connection to SCO here.. (Score:3, Interesting)
Sun has their own, free (Mozilla public license derrived) Java IDE.Netbeans [netbeans.org]
Re:let's see sun invents java, ibm, makes a tool . (Score:5, Insightful)
Eclipse is light years ahead of NetBeans, and gaining developers everyday.
Eclipse has NEVER crashed on me, not once in about a year. nor have I found any bugs. not a one.
Also note that IBM/Eclipse has SWT. SWT is a set of graphical tools that allow you to code once, but run on any OS and look/feel/run "native" to that OS. This sort of replaces AWT/Swing but it ties you to SWT.
Furthermore, there is now Eclipse RCP, or Rich Client Platform. This allows you to use Eclipse as your underlying application architecture (sort of like MFC), and end users can't even tell.
There's also "eclipse.exe" and not eclipse.jar.
Sun's problem is that IBM is doing to Java what Sun initially sought to do to Java. IBM is going to steal Java away from Sun within 5 years.
I should mention that whining wont change anything Sun...
Re:let's see sun invents java, ibm, makes a tool . (Score:3, Informative)
That's not strictly true. The GUI widgets in SWT are provided by a shared library compiled for the local platform and linked to Java code with JNI.
This means you need a shared library compiled and tested for your platform. To see what platforms are currently supported and the status of those platforms, check out the port status [eclipse.org] section of the eclipse homepage.
My impression of SWT is it's more feature rich than AWT, faster
Re:let's see sun invents java, ibm, makes a tool . (Score:5, Informative)
And it does it pretty well. This is what AWT should have been. The fact that it actually uses the underlying environment effectively means they don't have to update their look and feel every time one of their platforms releases a new UI. As a result, applications look like other native apps, including "themes" and such.
Re:let's see sun invents java, ibm, makes a tool . (Score:3, Informative)
Re:let's see sun invents java, ibm, makes a tool . (Score:5, Insightful)
A sane company who's trying to beat everyone's favorite convicted monopolist [microsoft.com] at gathering developers around their campfire for the next big platform of application development (i.e. this Internet thing). Can you name more than 3 IDE's for Windows development? No fair using Google....
What I'm saying is that I think that Sun wants to have "... all the wood behind one arrowhead " when Java &
Anyway, my prediction is that IBM will have a good laugh about this whole thing. They'll ignore it, continue to make gobs of $$$ off of their services division, and not worry about fighting Microsoft directly. It's worked well for them for 20 years... why stop now?
--Mid
Re:let's see sun invents java, ibm, makes a tool . (Score:4, Insightful)
Visual Studio
Delphi
C++Builder
MinGW Developer Studio
Dev C++
Re:let's see sun invents java, ibm, makes a tool (Score:3, Interesting)
Some form of unification wouldn't be all that bad - but unification should not be misread as "only one IDE".
Much as Sun created "the same bytecode runs on all platforms" - and much the same way XML data is portable between platforms - we would need some unification in the "project properties" files. If you really WANT competition to happen, what we need is a way that the same project can be opened with a number of IDEs, but b
Re:let's see sun invents java, ibm, makes a tool . (Score:4, Insightful)
As if to make things worse, SWT is not part of the standard Java package, so you have to make sure it's available for the platform you want to run an SWT-based program on.
Sun might do people a few favours by adopting it.
Interestingly, there's a bigger, more glaring example of an IDE that encourages the use of a non-bundled API, and that API covers way more than UIs: Apple's Xcode (and before that Project Builder), which is based around Cocoa. Now, theoretically, there's a Java port of GNUStep which is portable, but that's not entirely compatible with Cocoa out-of-the-box (different .nib formats for starters), and it's very much a beta still.
As far as I'm aware, Sun isn't complaining about it.
Re:let's see sun invents java, ibm, makes a tool . (Score:3, Informative)
Speaking from professional experience, one needs only include an swt.jar and set of binary libraries in your distribution for the platforms which you are targeting. You can explicitly specify the swt library to be part of your libraries when you start up the VM for your java application, and then you're done.
The pain attached to using SWT is all but irrelevant considering the advantages of having the platform native widget set at your disposal through a homogenous API. If you love MDI then you won't enjoy
Re:let's see sun invents java, ibm, makes a tool . (Score:3, Informative)
Re:let's see sun invents java, ibm, makes a tool . (Score:3, Interesting)
Funny how when it's an open source group doing the fragmenting it somehow becomes a good thing.
There's a big difference between what Microsoft was trying to do and what IBM is doing. Eclipse works completely within the current language constructs. Since everything I've seen in SWT is just done through JNI, it's just another library, so anything made in Eclipse can be run in Netbeans and vice-versa. You may need to port your project files and fix your classpath, but none of the actual code needs to be
A lesson from Microsoft (Score:3, Informative)
Opensource is the opposite of this. I would be pissed too if I were Sun. How can we sell Forte for $2000 and give java away for free to sell more copies of forte?
It goes against their business model, and Java is the only thing keeping them afloat since their hardware sales are losing to wintel/lintel.
Re:A lesson from Microsoft (Score:5, Interesting)
Re:A lesson from Microsoft (Score:2, Insightful)
in general just think this sort of competition is counter-productive in this type of setting. competition is useful in driving innovation, but in an open-source system, if the end users are pissed off about slow progress or missing features, they can alwa
Re:A lesson from Microsoft (Score:4, Insightful)
It requires very little effort to identify the reasons why Eclipse is better than Forte. Any fool can see this, so I won't waste time on it.
[IBM] used their own proprietary GUI API so the two projects could never interoperate.
They created an entirely new GUI API because Swing sucks. A better GUI for Java was desperately needed. Swing does not approach the results of a native GUI application, while SWT does. The SWT GUI in Eclipse is better than the GUI provided by the native OS in most cases.
Eclipse and Forte aren't even in the same ballpark. The phrase "universal tools platform" actually means something with Eclipse.
The battle is over. Eclipse won. The result isn't due to some IBM conspiracy against Sun. It's due to Eclipse being a better product.
they named their product as a way of snubbing Sun
The character of your rival says much about you. Sun and IBM are competing rivals. Nothing more ugly than that. It's a credit to Sun than IBM should name their work in such a way. It's Sun's job to remain worthy of that credit.
Re:A lesson from Microsoft (Score:5, Funny)
Indeed. Sun should feel honored to have such a noble and gallant competeing rival pissing on its shoes in public.
KFG
Mod parent up! (Score:2)
Re:A lesson from Microsoft (Score:2)
Who moderated this troll insightful? (Score:2, Informative)
Eclipse and Netbeans are different things (Score:3, Insightful)
Mysteriously, Eclipse has no built-in support for client-side GUI development. For a product that was supposed to be pushing IBMs SWT GUI library, this is a serious weakness. You can get rather second-rate plugins for Eclipse to do this, but in contrast netbeans has a first-rate Swing GUI designer tool. (For those who don't think Swing is a useful GUI, look at its integration into MacOS/X).
Re:A lesson from Microsoft (Score:2, Informative)
This is not true. You obviously haven't used Netbeans or Eclipse, since there is a huge difference between both. Netbeans is built on top of Swing. Theoretically, Swing is a really nice GUI library that is very flexible. In the real world, Swing made Netbeans too slow to be usable, not to mention the metal UI made it look ugly too. SWT, the GUI library Eclipse uses, doesn't have all th
Re:A lesson from Microsoft (Score:4, Informative)
You've got to be kidding. SWT is entirely non-proprietary and open source--you can implement it freely, you can change it, you can use the code, whatever.
That is in sharp contrast to Swing. Not only are there no open source implementations of Swing, you can't even implement it without satisfying a boatload of legal requirements imposed on you by Sun.
Hats off to Sun's PR department: they have lots of people like you thinking that black is white.
Re:A lesson from Microsoft (Score:3, Informative)
Hardly. Netbeans just does more than Eclipse by default. As a result it hogs tons of memory. (Not a big deal on developer machines with 512 MB.) Eclipse is quickly matching Netbeans' bloat as more and more features are added.
Re:A lesson from Microsoft (Score:2, Insightful)
NetBeans is dead, Sun needs to deal with it.
[And yes, I've used both, though I admit I haven't touched NetBeans for like a year and a half.]
Re:A lesson from Microsoft (Score:2)
You might want to retry it, or at least stop complaining about the speed of Netbeans.
I've been using it for the last two years, and its performance has gotten better over time.
Re:A lesson from Microsoft (Score:3, Interesting)
I'm afraid it is true. I use both Netbeans and Eclipse on a daily basis (even today... you should try Netbeans 3.5.1. It's quite different from the last time you used it, when it was probably Forte 1.0). Eclipse out of the box is really fast to start up. Netbeans is not.
But then, out of the box I can edit XML, JSP, Servlets, have a Tomcat server, do Swing visual editing, have automatic code completion and a bunch of other stuff with Netbeans. Eclipse is not much more than Wordpad with synta
Re:A lesson from Microsoft (Score:3, Insightful)
512MB is for grandma's E-machine. Give me 2 gigs for a dev box any day.
Re:A lesson from Microsoft (Score:2)
Is your problem that forte isn't for free for people developing commercial applications with it?
Re:A lesson from Microsoft (Score:2, Insightful)
As usual... (Score:3, Informative)
Java... (Score:3, Insightful)
Re:Java... (Score:5, Interesting)
They wouldn't have to accept any changes they didn't like. They could still enforce exactly what they wanted with the Java trademark. They could put the source in the public domain with the simple stipulation that non-strictly-compliant implementations couldn't be called Java(tm).
Not having it free software certainly didn't slow Microsoft down one bit from extending it without their approval. In fact, the result was a freshly-designed competitor (C#/.Net).
They don't even seem to be making a profit on the language itself, why this obsessive desire to control it with an iron fist?
As for the people-might-use-it question, it would certainly make all the difference to this developer. I know there are free Java implementations, but until I see a solid crossplatform GUI kit, I'll probably continue to look elsewhere.
I don't think it's so nefarious. (Score:5, Insightful)
1: Sun develops Java. We all owe them for that. Let's face it. Love it or hate it, Sun has created a widely used language. They control what goes into the language.
2: Eclipse, as a development platform, is gaining ground all the time. Great. I'm all for diversity.
But, Sun's position is understandable. The presence of programming tools, in this corporate climate, can make or break a language. It seems like Sun, more or less, is looking to have a more formal place in Eclipse's management. Conspiracy theories, of course, abound... except,
JAVA IS SUN'S LANGUAGE. Imagine, if Sun had more of a voice in Eclipse development, think of what is possible!!! What a concept? The language developers and the IDE developers working together?
Sorry for my smart-assed comments. What my point is, this has just as much potential to be a good thing for Eclipse. Sun is certainly capable of providing constructive agreement, and the Eclipse foundation doesn't actually need to listen to Sun. I just think that there's a lot of potential for cooperation.
Re:I don't think it's so nefarious. (Score:4, Insightful)
No, they want more influence that is, IMHO, rather over-reaching. This paragraph shows it:
So, what Sun essentially wants is a unified plugin system -- which I think should be up to each IDE developer rather than a forced plugin standard. Sun sees Eclipse as a prospective unifier.
I speculate that this would have something to do with the Java beans -- which were designed to be the definitive plugin standard for Java IDEs. Unfortunately, Java beans are so poorly designed that all developers would need to extend the basic features by a whole lot. Eclipse did that and succeeded. Moreover, hordes of open source programmers backed it up and it became the de facto standard.
What I see is that Sun wanted to get the momentum to recoup the control it has lost.
Re:I don't think it's so nefarious. (Score:4, Insightful)
JavaBeans are not about IDE plugins. It was developed as a programming model to allow one to create visual components that could be easily modified and controlled in a GUI builder (as such, tables, textfields, trees,
... are all javabeans in Swing).
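The JavaBeans model the comment describes boils down to a convention: a public no-arg constructor plus matching getter/setter pairs, which tools discover through introspection. A minimal sketch (the `LabelBean` class and its `text` property are hypothetical, chosen for illustration):

```java
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

// Hypothetical bean following the JavaBeans convention: a public no-arg
// constructor plus matching getter/setter pairs defining a "text" property.
public class LabelBean {
    private String text = "";

    public LabelBean() {}                        // required no-arg constructor

    public String getText() { return text; }     // read accessor -> "text"
    public void setText(String t) { text = t; }  // write accessor

    public static void main(String[] args) throws Exception {
        // A GUI builder finds properties the same way: via introspection.
        BeanInfo info = Introspector.getBeanInfo(LabelBean.class, Object.class);
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            System.out.println(pd.getName());
        }
    }
}
```

The point of the convention is exactly what the comment says: a GUI builder can modify and control any object that follows it, without knowing the class in advance.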
Re:I don't think it's so nefarious. (Score:2, Insightful)
Re:I don't think it's so nefarious. (Score:3, Interesting)
Competition will be better in the long run... (Score:3, Insightful)
I'm just happy there is a real alternative to JBuilder now... don't get me wrong, I love JBuilder but there is no way I could afford it at the prices they are charging.
Re:Competition will be better in the long run... (Score:5, Insightful)
Come on. (Score:4, Insightful)
Honestly, Sun has been a perpetual source of sub-standard implementations of their own technologies for almost 10 years. What is the most trusted Java JVM for Linux or BSD systems? IBM JVM 1.3.1 "Black down". Increasingly this is no longer the case, as Sun continues to revise the Java API faster than a decent implementation can be produced. I ask: Sun wants their NetBeans IDE to be "The One". Why?
It's not as if they have done a great job implementing their own technologies in the past. In fact Sun is responsible for a day to day lack of leadership of the Java Platform as a whole. Take for example the great mess of XSLT and XML parsers. Sun's "reference implementations" of such things are infamous in the developer community. Incomplete implementations and low performance drive developers to find other tools, which may or may not do things the way that sun wants - more importantly it creates an environment where developers must use different tools to get the same job done, creating incompatibility and complexity in an environment that carries compatibility as a flag of independence.
IBM has finally rallied around the notion of Linux and Java as a common platform - and Sun in usual fashion tries to "gain control". I ask the community: what has Sun's control *REALLY* gotten us besides a mess of different APIs, frameworks and "reference implementations"?
Re:Come on. (Score:2, Funny)
...which will promptly fail, probably, because its LDAP client will need FUCKING PATCHING RIGHT OUT OF THE GOD DAMNED BOX...
...I mean, it's going to be released non-compliant and b0rked, like Sol8 and Sol9 were.
Re:Come on. (Score:5, Informative)
Excuse me? You must be confusing the IBM JVM with the Blackdown JVM from blackdown.org [blackdown.org], which is a specialised port of the Sun JVM to Linux.
Faster than a decent implementation can be produced? You're really exaggerating now:
Java has gone from 1.0 (January 1996) to 1.4.2 (June 2003, which was 9 months later than 1.4.1, September 2002) to 1.5 (alpha available now, not sure when it's scheduled for release, I thought the end of this year).
At this moment I can choose between installing Sun 1.4.2, Blackdown 1.4.1 and IBM 1.4.1 on my Gentoo box. Then there are also JVMs like JRockit, which is also at 1.4.2.
There are also no major API changes between the point releases (1.4.1 for example added support for Web Start, 1.4.2 added WinXP and GTK look and feel), the rest are only bugfixes.
Eclipse invited Sun... (Score:5, Interesting)
Personally, I like the direction that Eclipse is going. I tried Forte once and it just didn't feel right. Eclipse however, has been fantastic since I found it and started using it as my work IDE. (My whole project team adopted it as well.) It has made coding Java a pleasure as no other IDE (in any language) has, and has led to me using Java as a development language for personal projects where I otherwise would have used C or C++. I've largely given over using XEmacs for coding Java. I'm also impressed by the speed of the Eclipse development cycle with new milestones coming out approximately every month. I always get this kid-in-the-candy-shop feeling checking out the New and Noteworthy page with each new milestone.
Re:Eclipse invited Sun... (Score:2, Funny)
Re:Eclipse invited Sun... (Score:2)
NetBeans gives Swing a bad name. The code in NetBeans is so slow and crap that everybody feels they need to blame something, and most people point the finger at the Swing API even though there are other IDEs written in Swing that work even faster than Eclipse.
IMHO, Eclipse didn't need to be written on a whole new widget toolkit. If you want native widgets, write a set of UI delegates that use the native widgets. If you want a crappy API without garbage collection, use C...
Re:Eclipse invited Sun... (Score:2)
Suppose you want native widgets on 14 different OS's? Are you supposed to custom-build your widgets on each of those 14 OS's - even though you don't understand half of their API's?
The whole idea of SWT is that somebody else has already "writ[ten] a set of UI delegates that use the native widgets". Why go and write another set?
I haven't seen too many swin
Re:Eclipse invited Sun... (Score:3, Insightful)
What I was saying was that 'somebody else' could have written them all as native UI delegates, and still had the Swing API on the top, instead of having to invent a whole new, worse, API.
Then you could easily have your gnome app lookalike contest. Windows is already taken care of if you have -Dswing.defaultlaf=com.sun.java.swing.plaf.windows.WindowsLookAndFeel set as default in your Java installation, or if the equivalent is done in the code.
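The "equivalent ... done in the code" mentioned above is Swing's real `UIManager` API. A minimal sketch (the `LafSwitch` class name is made up; the `UIManager` calls are the actual Swing API):

```java
import javax.swing.UIManager;

// Sketch of switching Swing's look and feel at startup, instead of passing
// the -Dswing.defaultlaf system property on the command line.
public class LafSwitch {
    // Real Swing API: class name of the L&F matching the host platform
    // (Windows L&F on Windows, GTK or Metal elsewhere).
    public static String systemLaf() {
        return UIManager.getSystemLookAndFeelClassName();
    }

    public static void main(String[] args) {
        try {
            // The "one line of code" the thread keeps referring to.
            UIManager.setLookAndFeel(systemLaf());
            System.out.println("Installed: " + UIManager.getLookAndFeel().getName());
        } catch (Exception e) {
            // Headless or unsupported platforms keep the default (Metal).
            System.out.println("Keeping default L&F: " + e);
        }
    }
}
```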
Re:Eclipse invited Sun... (Score:2)
I hope this squabble starts pushing the team a little, as numerous others and I have been waiting on a "Folding" implementation [eclipse.org] for a very long time.
Oh, well. Another pointless PR ping-pong match. (Score:5, Funny)
Eclipse (with exaggerated innocence): Moi? Whatever do you mean?
Sun: You know.
Eclipse: Actually, no, I don't.
Sun: Don't be coy!
Eclipse: YAWN. Do you have something to say or what?
Sun: You know damn well we're working on Swing, and Netbeans, and all that, and here you come out with SWT and start going off on weird tangents, I mean, hell, who's in charge here? I thought you were going to be cool about this.
Eclipse: I am. People really dig java, and they're having a blast using Eclipse to work on it.
Sun: Yeah, thanks a lot, poor Forte...
Eclipse: I didn't tell you to charge so much for it.
Sun: I didn't tell you to be free!
Eclipse: No, that was my idea. But it's cool anyway. Anyway, you've got problems of your own. It's like, make up your mind already.
Sun: What the hell are you talking about???
Eclipse: Java 1.1.8, then Java 1.2, then Java 1.3, then 1.4, and every five minutes you "deprecate" something, driving your developers nuts...
Sun: You... How can you... You...
Eclipse: And then there's AWT, no, it's Swing, no, it's going to be some kind of weird beany scheme...
Sun: You... OOOOH you make me SO MAD! Swing was a good idea! So were the beans!
Eclipse: Well, so's SWT. Deal.
Sun: It's not the same thing!
Eclipse: Sure it is.
Sun: Is not!
Eclipse: Is too!
Sun: Is not!
Eclipse: Is too! Anyway, what's the difference? SWT is based on AWT, so it works everywhere, doesn't it? You should really dig it.
Sun: (Sulks)
Eclipse: Aw, come on, join the board of directors. You know you want to. You can even keep your Netbeans. I promise.
Sun: I'll think about it...
Eclipse: Yep. I know.
Re:Oh, well. Another pointless PR ping-pong match. (Score:3, Funny)
Sun is just pissed (Score:5, Insightful)
Re:Sun is just pissed (Score:4, Informative)
Maybe not that bad, but not good. We use swing across the board at our company and I can't tell you how hideous each window is. And they look different on every machine. A layout that looks good on my system has buttons cramped in the corner on somebody else's.
And everything runs slow as hell.
Not saying that doing the stuff in C++ would be any easier, but Java's GUI packages are all sorts of shady.
Re:Sun is just pissed (Score:3, Informative)
Sounds like you're not using layout managers correctly (or to put it differently, to their full potential).
Re:Sun is just pissed (Score:2)
IDEA [intellij.com] from the folks at intellij just about blows all other java IDEs out of the water IMHO, and its Swing.
It does suffer from the occasional slow down (during garbage collection) but so does eclipse.
Whats more, the look and feel is miles ahead of eclipse. It is commercial, but its worth every penny if you spend long enough infront of it.
GNOME vs. KDE (Score:2)
Sun and Eclipse will work together eventually, just like we now have freedesktop.org. Just cut the politics and "the community needs this and that" and keep doing what makes sense technology-wise.
User Interface (Score:5, Insightful)
Re:User Interface (Score:2)
Re:User Interface (Score:3, Insightful)
I don't. I want it to look the same on different platforms, and I don't care about "native" performance, at least on my machines netbeans is more than fast enough. Now, why is Sun (or rather the netbeans.org people) supposed to do what you want, anyway?
Re:User Interface (Score:4, Insightful)
They don't have to - any more than MS has to listen to my needs when coding the next version of Internet Explorer.
However, when somebody does come along and listen to the needs of their customers, you'll see them flocking away in droves.
If Sun wants to be the official creators of a substandard version of Java they should feel free to do so, but they shouldn't be surprised when people are publishing hacks left and right to make it actually work the way developers want it to work. Sure, the hack might not be the "one true way" in Sun's mind, and it would be better if Sun and IBM cooperated to get SWT integrated into Java rather than working in opposition. However, enough developers prefer the IBM way to the Sun way, to a degree that Sun is having trouble controlling their own language despite the fact that they have worked hard to keep much of it proprietary.
They should just do what other have suggested and open source the language. They could take the UNIX(tm) approach and tell those who package up JDK's and JRE's that they can only use the "Java" trademark if they meet certain requirements.
Re:User Interface (Score:3, Insightful)
Then you are a fool, I pity the users of your software.
As a user I expect apps running on Windows to have a L&F that is consistent with other apps on my platform, ditto for GTK, and OSX. IBM recognised this simple fact with SWT. Sun didn't quite get it with Swing but then tried to correct their mistake by reimplementing native L&F over their cross-platform widget set - which is nuts.
It is just amazing that some developers are still s
Eclipse Forte (Score:5, Informative)
I've tried the 2 of them and they both are pretty decent IMHO. The big difference, and I mean big, is how responsive each are on a fairly moderate system. After starting forte, I can go have a coffee and a smoke and maybe even take a quick nap...at which point forte should be running when I get back and I can then get to work.
Eclipse on the other hand is really fast. When I first tried it I couldn't believe that it was a Java program. It even looks good, rather than that ancient, dull look that most Java apps have.
Since then, I've upgraded to a P4 with 1G ram and they both run pretty good (although Eclipse is still much faster). I do like both of them but Sun and IBM and anyone else interested in furthering Java should collaborate on 1 killer IDE that puts any MS tools to shame, and allows lazy programmers (like me!) to be more productive in less time :)
As Eclipse appears superior to Forte and probably has the largest installed base (don't know how it compares to JBuilder), Sun would probably get a lot more respect from developers.
-Pat
Dissenting opinion (Score:2, Interesting)
I repeat.
SWT GTK is unusable under Linux and Eclipse devs do not know what is wrong and cannot fix the bug, even after much screaming on bugzilla!
This shows a clear inferiority of SWT to me. It's not crossplatform in a workable way.
AWT may be ugly, but it works! It may not be the fastest, but it is f
Re:Dissenting opinion (Score:4, Insightful)
First, I don't think it's realistic to cripple a UI's features for cross-compatibility. Second, looks do count, or most people wouldn't switch from Swing's nasty ass metal look.
"IDEA uses Swing and it's fast enough. JEdit uses Swing and it is fast enough."
And no, JEdit is not fast enough. That's like saying Netbeans is fast enough. Neither can handle Eclipse's cool coding features on a crappy computer, and neither responds to me faster than I can think (using a crappy under $1000 computer).
"It's not crossplatform in a workable way."
It is, that's why Eclipse is super popular.
Re:Dissenting opinion (Score:4, Informative)
First, I don't think it's realistic to cripple a UI's features for cross-compatibility. Second, looks do count, or most people wouldn't switch from Swing's nasty ass metal look.
As opposed to SWT's nasty ass Windows 2000 look..
This is complete bullshit. IntelliJ IDEA runs fine on a PII-333 laptop with 256Mb of RAM, whereas Eclipse runs like complete shit on the same box. Since I don't have $3000 for the new laptop with specs high enough to run Eclipse, I won't be buying up in order to use it any time soon.
And no, JEdit is not fast enough. That's like saying Netbeans is fast enough. Neither can handle Eclipse's cool coding features on a crappy computer,
Well you're right there, at least, JEdit and NetBeans both stink.
Re:Dissenting opinion (Score:3, Interesting)
The Windows XP look in Swing is 10 times better than the Windows 2000 look in SWT. Metal doesn't enter into it when one line of code can set it to Windows look and feel. Now I'm waiting for the GTK look and feel to actually use the current style...
And no, I always ran the current EAP version of IDEA. And yes, it did say those requirements were minimum for some reason, but it worked fast enough to use on the PII-333, which is much more than I could say for Eclipse.
Re:Dissenting opinion (Score:2)
Who's buying IDEA? I'm using the EAP, continuously updating every 30 days. That's for the last 4 months or so, before that I was using my work's license, but unfortunately I haven't managed to wean the current employer off Visual Studio.NET yet.
This also means I'm on the current version, not this "older version" of which you speak.
Even if I did buy IDEA, the laptop I could buy with the money would be hardly any better than the last one. The situation would probably be the same, fast IDEA vs. slower Ecl
Re:Dissenting opinion (Score:2)
As far as Visual Studio.NET goes, the complete lack of refactoring makes it a non-option. I recently asked a colleague how to do it, and he had been on VS.NET for so long he didn't even know what refactoring meant. Apparently it's become common practice to use "find and replace in files" to perform refactoring (find next match, check if it's the right kind of object, rename if it is, don't if it isn't, repeat until the refactoring job is complete 2 hours later.)
Eclipse and IDEA can both do it, and both h
Re:Dissenting opinion (Score:2)
Re:Dissenting opinion (Score:2, Interesting)
I beg to differ: it's very usable for me.
More importantly (in a text editor), it has excellent font support, thanks to GTK+'s fontconfig/freetype support. AWT/Swing basically only supports the quite unreadable Lucida fonts that are included in the JRE -- and no sub-pixel anti-aliasing.
That hurts readability a lot, especially on an LCD monitor.
Re:Dissenting opinion (Score:2)
I can't speak for SWT coding, but SWT GTK runs fine in linux, so does eclipse. I wouldn't have been using it as my main editor for the last 2 years otherwise. I've had the motif and GTK versions running on sub 500mhz machines and they're still plenty useable enough to develop on. Thats even on different distro's with different versions of java installed. Sounds like a local issue rather than a problem with SWT.
SWT works great
Re:Dissenting opinion (Score:3, Funny)
Sure, maybe if you don't need silly controls like tables or trees, then awt is great.
A Company of Dilberts (Score:5, Interesting)
Why care about this? (Score:2)
In the end, you, as a developer need to figure out what tool you want to use. I think it's great there are so many choices. On the project I'm working
New name for Sun -- indian giver (Score:2, Interesting)
It seems Sun has a problem understanding GPL, and similar Free Software/Open Source Software type licenses and projects today.
Yeah.
Unix will be back. Really, it will. Customers will return to Solaris one day! After all
I don't care what you say about Microsoft... (Score:2, Informative)
Eclipse is really not very good (Score:3, Interesting)
What's with SWT? It's horrible to code with. It gives you no real control over look and feel. You have to dispose of everything explicitly (à la C++), which completely goes against Java's garbage collection paradigm.
I write an app in SWT and it looks one way on Windows and another way on Gnome (usually a complete mess on one).
Don't get me wrong, I think Forte and Sun One are pretty awful too. The only sensible choice in the IDE market right now is Intellij (no, I don't work for them). However, this IDE is not open or free (unfortunately).
Personally I don't think Sun or IBM are particularly good at writing software and should stick to their Hardware and Consulting (IBM) core competencies.
Re:Eclipse is really not very good (Score:2)
Eclipse is really very good. (Score:3, Insightful)
Re:Eclipse is really not very good (Score:3, Informative)
GC was not made to clean up (native) resource allocations, but only to reclaim memory. You should bear that in mind.
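That distinction is the core of the dispose-vs-GC argument: the collector reclaims the Java object's memory eventually, but only an explicit call releases the native allocation deterministically. A sketch with a hypothetical `NativeHandle` class standing in for an SWT widget (the class and its fields are made up for illustration):

```java
// Hypothetical stand-in for an object wrapping a native resource (an SWT
// widget, a file handle, ...). The GC would reclaim this Java object's
// memory, but only dispose() releases the underlying native allocation.
public class NativeHandle {
    private boolean disposed = false;

    public void dispose() { disposed = true; }     // would free the native side
    public boolean isDisposed() { return disposed; }

    public static void main(String[] args) {
        NativeHandle h = new NativeHandle();
        try {
            System.out.println("using handle, disposed=" + h.isDisposed());
        } finally {
            h.dispose();                            // deterministic cleanup,
        }                                           // not left to the collector
        System.out.println("after finally, disposed=" + h.isDisposed());
    }
}
```

The try/finally pattern is exactly what SWT code of the era relied on: memory management stays garbage-collected, resource management stays explicit.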
Re:Eclipse is really not very good (Score:3, Insightful)
large open source project open-ness "a sham". (Score:2, Insightful)
according to the article, IBM is basically going to maintain control of this project. it is also hinted, in the article, that the project is not going to accept code contributions from outside of the group of people who are members of the project.
in other words, it is possible to obtain the source code, but the open-ness of the project is a complete sham.
that's fine by me, because at least the code is available.
Sensitive to Business Interests? Why? (Score:2)
I won't speak for Eclipse, but if that question were put to me, I would answer along the lines of: "No. We are technologists. We will focus on technology. It is the responsibility of businesses to focus on business interests. Agile businesses will adapt to new and changing technology, or they will die."
I don't want to *need* any tools. (Score:3, Interesting)
When I code in C, I use Emacs and Make, and I don't think I'm at much of a disadvantage with respect to people who are using C IDEs. In an ideal world, when I code in Java, I'd like to use Emacs and Ant, and I'd like to be at not much of a disadvantage with respect to people using Eclipse and NetBeans.
I actually have high hopes for Java 1.5 in this regard. The whole "metadata" thing could totally revolutionize Java development, making it pretty simple to do fairly complicated things. My hopes are that once that's in place, the tools are much less necessary.
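The "metadata" feature this comment is hoping for shipped as annotations: declarations you attach to code and read back at runtime via reflection, so tools no longer need to track the information themselves. A minimal sketch (the `@Note` annotation and `MetadataDemo` class are hypothetical examples, not part of any library):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

// Sketch of Java 1.5 "metadata": a hypothetical @Note annotation retained
// at runtime, so a tool can discover it reflectively.
public class MetadataDemo {
    @Retention(RetentionPolicy.RUNTIME)
    @interface Note { String value(); }

    @Note("candidate for code generation")
    public static void handler() {}

    // Looks up the @Note text on a named public method, or null if absent.
    public static String noteOn(String methodName) {
        try {
            Method m = MetadataDemo.class.getMethod(methodName);
            Note n = m.getAnnotation(Note.class);
            return n == null ? null : n.value();
        } catch (NoSuchMethodException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(noteOn("handler"));
    }
}
```

This is the mechanism that later let plain build tools and editors offer features that previously required an IDE's private project metadata.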
Too late, fix swing instead (Score:3, Insightful)
And even worse, swing was full of bugs. Up until j2se 1.4.x swing didn't support European keyboards, and some characters commonly used in many programming languages can't be typed using various European locales on some platforms. This bug has been around since the days of jdk1.2 and there are numerous others that act as show stoppers for writing serious applications with a Java GUI. And they have been around for years.
This is very sad since the swing architecture is quite elegant. But somehow Sun decided that java was for the server side only.
Now they complain that a major app like an IDE isn't using their architecturally good, but in reality unsupported, GUI framework. Sun would do much better if they started to fix the bugs in swing, and perhaps used some profiling tool to find the worst performance bottlenecks, than to try to make development tools of their own.
That way people could actually use Java for creating cross platform GUI apps. This is what Java once was intended for. As it is today, you are probably better off using QT and C++ for cross platform work.
Today the developers have already chosen Eclipse. It has a good chance of replacing emacs as the swiss army knife of software development. Just like most people extending emacs didn't complain that they had to use lisp to extend their tool even if they normally didn't do their work in lisp, people extending Eclipse will not mind using swt.
As Eclipse is the dominating Java IDE of today, tool vendors will have to support it for a long period of time. A de facto standard is already set.
By creating an alternative standard, Sun is the one who is creating the fragmentation. And given Sun's long tradition of creating IDEs with low usability, a fragment is probably the only thing it will be.
The only OK development tool I have seen so far is Forte/Netbeans, and that was adopted by Sun in a quite mature state.
Instead, Sun should focus on fixing swing. That way people might start using it for their cross platform GUIs regardless of what IDE they prefer to use. If they don't, people might find out that swing in reality only sort of works on windows, and then having native swt library support for a few other platforms doesn't seem too bad.
Sun needs to join Eclipse, not the other way round (Score:4, Interesting)
Eclipse allows you to develop plugins for the IDE, and provides a powerful interface to do so. NetBeans allows for plugins as well. More people are doing plugins for Eclipse. Plugins help drive the market. Seems like Sun has plugin envy.
"Don't define 'interoperability' on your own terms, but rather work with other major players in the industry to achieve actual interoperability," the Sun letter told Eclipse members. "Push the organization to be a unifying force for Java technology."
Sun should take its own advice. I hope Eclipse doesn't try to fix what ain't broke. Sun should adopt Eclipse's model. It is clearly superior.
Eclipse is *not* a Java IDE (Score:3, Insightful)
Let's see: you want to build an IDE. You want to write it in a high-level language with garbage collection. You want high performance. You don't want to use a non-mainstream language like Smalltalk. There aren't so many options.
So you pick Java.
The GUI APIs suck. So you build a new one from scratch and create SWT.
The fact that Eclipse is written in Java is not supposed to be of interest to its users except the few power-users that write extensions. The fact that it can be used to write Java code is irrelevant, too. After all, you can write Java in Emacs or J# in Microsoft Visual Studio.
Sun, get off IBM's back.
astounding hypocrisy (Score:5, Insightful)
That is, no lock-in other than into Java itself, of course.
In particular, Sun warned that the new bylaws of Eclipse give the position of executive director, now held by an IBM employee, an "unusual amount of power" to dictate the work of the open-source group. Sun also questioned whether IBM employees will continue to make up the majority of project staffers.
Sun is one to talk. Eclipse is open source. Anybody can take it and fork it if they don't like what the Eclipse effort is doing.
That's in stark contrast to Sun's Java implementation: not only is it fully owned and controlled by Sun, Sun even owns the patents and copyrights related to the specifications. And Sun's "Java Community Effort" is run by numerous people from Sun. And because Sun is so afraid that people are going to run away in droves given a choice to do their own thing, they are refusing to open up their Java specs or implementation. They say there is "a risk of forking"--you bet there is, given how poor a job Sun has been doing.
So, what does that mean? IBM has a little influence over an open source effort to produce one of many development tools, an influence that only matters as long as Eclipse does a good job because the minute they stop, people will fork it. Sun, on the other hand, has sunk their teeth and claws into the Java standard and platform and isn't letting go. Sun has the entire industry by the throat and various other unmentionable parts.
Sun's hypocrisy is simply astounding. What I can't figure out is whether anybody at Sun actually believes the PR bullshit they are releasing or whether the entire company is in on it.
Real Programmers(tm) use a *text* editor (Score:3, Interesting)
For me, both are too intrusive on the development process. I have a file with some program, script, or data and I want to edit it. Maybe this file will be fed to some type of filter, or is in some form that the editor does not "know" about. Maybe it is from one of my "projects" or maybe is a random file that I want to edit or examine.
It seems like in these situations, the typical IDE wants to know what "project" this file belongs to, or wants to *copy* this file from its working directory to some IDE owned part of the filesystem. Like I've made some commitment to never use other editors again, so I won't mind that the "real" copy of this file will now live off of some IDE owned directory now. I don't understand why an IDE can't keep what ever type of metadata it wants its own namespace but let me keep my working file in whatever place suits me.
It also seems that the point of these IDE's is to enable people to program who need crutches to do so. It seems with the excess supply of labor, it is now possible to hire people who don't need this type of help. I would question the wisdom of hiring someone who cannot build a mental model of the system they are working on, or need "wizards" that insert boilerplate "hello world" programs to get you started. Yet, I've seen plenty of job postings that seem to suggest that knowing how to use a particular IDE is equivalent to knowing the language itself.
That is not to say that some automation, like completion, is not good. The less typing the better. But there is a difference between saving keystrokes and enabling people who don't know what they are doing. It is also interesting to me that the types of people who rely on their editor to know how to program are the same types who end up wasting more time navigating through a bunch of menus per line of code written.
It's like the person who uses some GUI file manager rather than a shell with file completion abilities. Witness the shell user change directories before the GUI user's hand even reaches the mouse. While a GUI file manager is a good tool to enable a secretary who doesn't care to learn how to use a computer, it is a sad statement when an IDE is used to enable a programmer who doesn't care to learn how to program.
Re:What does Sun have to do with Java? (Score:2, Interesting)
get real. I have os x, and I use Sun systems every day - no comparison. it makes me gag to read you making such a simplistic and ignorant comparison. os x can't touch solaris/sparc - sorry, game over, that's life. when os x can handle 70+ CPUs in ONE system - give me a call. Otherwise, take your little no-experience skinny 14 year old ass back to the farm.
Re:java technology. what's it all about? (Score:2)
Eclipse isn't a Java implementation, it's an IDE implementation. Personally I like IDEA better for developing in Java since it's a lot smoother than Eclipse, but to be fair, Eclipse can be used for more than Java.
On a related note, GCJ actually compiles SWT-laced code, and thus it may actually be possible to use it to compile Eclipse.
:-)
Re:Jesus. You people really don't get it. (Score:2, Insightful)
Java feels as if it has a new lease of life thanks to Eclipse and GCJ. Sun have done absolutely nothing on AWT to make it any better - making sure that everybody goes for Swing instead - whereas I would imagine that IBM would have been fine with Swing sitting on top of a better AWT.
At the end of the day, there is almost certainly a technical solution to this. Eclipse might well move to a swing like system that can sit
Re:C# (Score:3, Informative)
Not sure I'm with you on pass by value, it's already done with RMI/Serializable Objects for instance.
Big +1 on generics, I can't wait until 1.5! Also, Java 1.5 attributes will make themselves useful in some situations, but they are already here in several open source libraries.
I'd take C# as an option in a real solution (read: billable) only in two scenarios: 1. I have a MS only client (I do!) or 2. The open source community gets a lot more excited about it.
To elaborate, the fact that Java has such a ri
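The 1.4-to-1.5 generics difference the comment is excited about is concrete: a typed `List<String>` is checked at compile time, so the cast-and-pray idiom of raw collections disappears. A small sketch (the `GenericsDemo` class and `totalLength` method are made-up examples):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of Java 1.5 generics: the element type is part of the declaration,
// so there are no casts and no runtime ClassCastException, unlike the
// raw Lists of the 1.4 era.
public class GenericsDemo {
    public static int totalLength(List<String> words) {
        int total = 0;
        for (String w : words) {   // no (String) cast needed
            total += w.length();
        }
        return total;
    }

    public static void main(String[] args) {
        List<String> words = new ArrayList<String>();
        words.add("Eclipse");
        words.add("NetBeans");
        System.out.println(totalLength(words));  // prints 15
    }
}
```

With a raw 1.4 `List`, adding a non-String would compile and only fail at the cast inside the loop; with `List<String>` the mistake never gets past the compiler.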
Source: http://developers.slashdot.org/story/04/01/30/2140201/sun-and-eclipse-squabble