text stringlengths 20 1.01M | url stringlengths 14 1.25k | dump stringlengths 9 15 ⌀ | lang stringclasses 4
values | source stringclasses 4
values |
|---|---|---|---|---|
Event Hubs Management Library Sample
This sample uses the .NET Standard Event Hubs management library to show how users can dynamically create Event Hub namespaces as well as entities. The management library can be consumed by both the full .NET Framework and .NET Core applications. For more information on .NET Standard see .NET Platforms Support.
The management library allows Create, Read, Update, and Delete operations on the following:
- Namespaces
- Event Hub Entities
- Consumer Groups
Prerequisites
In order to get started using the Event Hub management libraries, you must authenticate with Azure Active Directory (AAD). AAD requires that you authenticate as a Service Principal which provides access to your Azure resources. For information on creating a Service Principal follow one of these articles:
- Use the Azure Portal to create Active Directory application and service principal that can access resources
- Use Azure PowerShell to create a service principal to access resources
- Use Azure CLI to create a service principal to access resources
The above tutorials will provide you with an
AppId (Client ID),
TenantId, and
ClientSecret (Authentication Key), all of which will be used to authenticate by the management libraries. You must have 'Owner' permissions under Role for the resource group that you wish to run the sample on. Finally, when creating your Active Directory application, if you do not have a sign-on URL to input in the create step, simply input any URL format string e.g..
You will also have to install DotNet Core in order to run the sample.
Running the sample
Populate the
appsettings.json file with the appropriate values obtained from Azure Active Directory, and run the app using Visual Studio or
dotnet run. You may find your SubscriptionId by clicking More services -> Subscriptions in the left hand nav of the Azure Portal.
{ "TenantId": "", "ClientId": "", "ClientSecret": "", "SubscriptionId": "", "DataCenterLocation": "East US" }
Required NuGet packages
In order to use the
Microsoft.Azure.Management.EventHub package, you will also need:
Microsoft.Azure.Management.ResourceManager- used to perform operations on resource groups, a required 'container' for Azure resources.
Microsoft.IdentityModel.Clients.ActiveDirectory- used to authenticate with Azure Active Directory.
Programming pattern
The pattern to manipulate any Event Hubs resource is similar and follows a common protocol:
Obtain a token from Azure Active Directory using the
Microsoft.IdentityModel.Clients.ActiveDirectorylibrary ```csharp var context = new AuthenticationContext($"{tenantId}");
var result = await context.AcquireTokenAsync( "", new ClientCredential(clientId, clientSecret) ); ```
Create the
EventHubManagementClientobject
csharp var creds = new TokenCredentials(token); var ehClient = new EventHubManagementClient(creds) { SubscriptionId = SettingsCache["SubscriptionId"] };
Set the CreateOrUpdate parameters to your specified values
csharp var ehParams = new EventHubCreateOrUpdateParameters() { Location = SettingsCache["DataCenterLocation"] };
Execute the call
csharp await ehClient.EventHubs.CreateOrUpdateAsync(resourceGroupName, namespaceName, EventHubName, ehParams); | https://azure.microsoft.com/zh-tw/resources/samples/event-hubs-dotnet-management/ | CC-MAIN-2018-17 | en | refinedweb |
Step 2: Now remove “Arduino Original IC” from Arduino board with the help of Screw Driver. And insert “New Atmega328 IC” into the Arduino board.
Step 3: Now open Arduino IDE and go to File -> example -> ArduinoISP and open it.
After opening ArduinoISP, select Arduino UNO board from Tools -> Board -> Arduino Uno.
Then select COM PORT from Tools -> Serial Port -> COM10
and then upload ArduinoISP Sketch.
.
Step 6: Now in Arduino IDE go to Tool and click on the Burn Bootloader.
Now you will see the Rx and Tx LED on the Arduino board is blinking randomly for some time. It means Bootloader is burning in new ATmega 328 IC. And Arduino IDE will show “Done burning bootloader”. Now you can use this ‘New IC’ in your Arduino board.
Step 7: Now Build your own Homemade Arduino Board on Zero PCB by soldering the components gathered in Step 1, following the Circuit Diagram below. Also check the Video below.
Insert the ‘New IC’ in this board and you are done.
You can also build it properly on PCB with a proper PCB layout and etching. Learn here to Make PCB at Home and convert Schematic into PCB layout using EasyEDA.
For LCD Interfacing, just connect your home made Arduino Board with Original Arduino Board using Rx, Tx, RST and GND pins of Original Arduino Board, as shown in below Fritzing Circuit or above Circuit Diagram. And upload the Below Given Code (Code section).
Please Remove ‘Arduino Original IC’ from the board, when you upload code in new Arduino IC on the bread board or Zero PCB. You can power your Arduino Board with the 5v Pin of Original Arduino Board, as we have done in above Fritzing Circuit.
#include <LiquidCrystal.h>
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);
void setup()
{
lcd.begin(16, 2);
lcd.print("HomeMade Arduino");
lcd.setCursor(0,1);
lcd.print("Circuit Digest");
}
void loop()
{
}
how to burn bootloader without original ic
because i can't able to upload any programme in original ic, my bootloader is crashed
i have a new ic, but i can't burn bootloader
pls tell any alternate idea
please you have not yet posted anything on cnc machine.i will be glad if anything like that is posted.thanks in advance
avrdude: stk500_getsync() attempt 8 of 10: not in sync: resp=0xe9
please what can i do am getting this kind of error message
try with arduino ide 1.6.7
Nice piece of work done, though I believe little more innovation is required to give it a proper shape. | https://circuitdigest.com/microcontroller-projects/how-to-make-arduino-board-diy | CC-MAIN-2018-17 | en | refinedweb |
Is Perl Better Than a Randomly Generated Programming Language? 538
First time accepted submitter QuantumMist writes "Researchers from Southern Illinois University have published a paper comparing Perl to Quorum(PDF) (their own statistically informed programming language) and Randomo (a programming language whose syntax is partially randomly generated). From the paper: 'Perl users were unable to write programs more accurately than those using a language designed by chance.' Reactions have been enthusiastic, and the authors have responded."
Better? (Score:5, Funny)
Better? How about we start with distinguishable?
Re:Better? (Score:5, Funny)
Indeed. This is the reason why the Obfuscated Perl Contest is run by the Department of Redundancy Department.
Re:Better? (Score:5, Informative)
Yet another ridiculous summary. The study wasn't which language was better, it was in which language can first-time users write a program more accurately. My guess is that Cobol would beat any of the three - it is designed from the ground up to be readable.
Quorum looks a lot like Pascal (Score:2)
My guess is that Cobol would beat any of the three - it is designed from the ground up to be readable.
So are Pascal and Python. In fact, Quorum looked a lot like Pascal from what I saw in the PDF.
Re:Quorum looks a lot like Pascal (Score:5, Insightful)
Languages that consider whitespace need to die.
Re:Quorum looks a lot like Pascal (Score:4, Insightful)
Well said.
If you want your code properly indented, just indent it. It's like the Python apologists are incapable of formatting their code properly unless the language forces its particular version of "properly" on you.
Before the trolls fire back: In the case of code written by others, run it through a pretty-printer. Problem solved. Oh, as a bonus, you can use that same tool to format code the way you prefer, and switch it back to whatever style your company requires at the press of a button. Why is this a bad thing?
Re: (Score:2)
Because in practice, the automated code cleaner results in almost every line of code in the file to have a difference highlighted by my company's source code repository diff generator. This obfuscates the true nature of the change to the logic in the code I am making in order to fix a bug or implement a feature. In turn, that makes it harder for people responsible for maintaining the code to determine what exactly changed from version to version.
Re: (Score:3)
I've always felt that version control systems should store syntax trees, but have never had the time to do the work to do that.
Re: (Score:3)
Just enforce a formatter on commit. If the formatted code is any different from the original file, abort the commit. Git makes this kind of thing easy. It also means the repository is always in a sane state. A simple script can reformat all changed files trivially before a commit operation.
Re: (Score:3)
No-one is saying that Python is good because it forces you to indent. Quite the opposite: all sane people indent their code anyway, whatever the language, so why not use that to indicate program structure?
Re: (Score:3)
Because not everyone uses the same indentation as everyone else. If indentation rules need to be worked out before starting a project, you're wasting more time than a language where indentation has no meaning.
Re: (Score:3)
My main objection to semantic indent can be summarized in this psuedo code example:
//end foo //end fubu
class fubu
function foo(bar)
start function
more code
all's well
console.log("debug message")
more real code
Having that debug console statement out of band with the rest of the functional indents makes it easy to notice when scanning code. Now you might say one should never debug th
Re: (Score:3)
Re: (Score:3)
Oh the fucking irony of it. I was trying to post the following using pre and code tags without success and just ended proving your point:
Sure. Because
def function(): if condition: while ok: do_something() end while end if end def Is much more readable than: def function(): if condition: while ok: do_something()
Re: (Score:3, Insightful)
Most languages consider whitespace. In most programming languages where both of the following are valid, they will have different semantics:
1: foo bar
2: foobar
Quite a lot of languages even distinguish between different types of whitespace, e.g., C where the following two constructs are different, despite differing only in which particular kind of whitespace:
1:
//
foo();
bar();
2:
// bar();
foo();
Python may be unusual in which differences in whitespace it considers
Re: (Score:2)
I had no trouble parsing that
:)
But, yes you are correct.
Re: (Score:3)
Really, Python's problem is that both spaces and tabs are legal - if the language required one or the other, it would be fine, modulo subjective readibility arguments about braces.
Re:Quorum looks a lot like Pascal (Score:5, Interesting)
Fortran (at least, IV and earlier) totally ignored white space, even in the middle of an identifier. Of course, this led to problems like
DO 10 I = 1.10
meaning "assign the floating point number 1.10 to variable DO10I", when the programmer meant to type
DO 10 I = 1,10
meaning "loop from here to label 10 varying I from 1 to 10".
An error something like this caused the Mariner II probe to Venus to go off course at launch and the Range Safety Officer hit the destruct.
Re:Quorum looks a lot like Pascal (Score:4, Funny)
Fortran is interesting, theologically - it considers God to be real unless declared integer.
Re:Quorum looks a lot like Pascal (Score:5, Insightful)
Re:Quorum looks a lot like Pascal (Score:5, Interesting)
If those punctuation marks (or keywords) make the code more readable, then they're not gratuitous are they? I, for one, find brace-less languages fantastically hard to read, Python especially.
I LUUUUURV Python so much that if it was legal I would marry it, but I completely agree. Curly braces to denote block starts and stops make the code easier to read and manage. I should not have to wonder whether a function or block continues past the bottom of the current screen's worth of code when it ends with a few lines of whitespace because I have to know the indentation level of the next line of code to know if it's in a different block context than the last line of code on the current page. I also should never have to wonder if I re-indented code correctly when cut/pasting or adding/removing a level of block nesting.
I don't care if Python wants to keep the indentation requirements. Forcing the code of awful programmers to be more readable in this way is a good thing. Forcing all code to be less readable in another way is a bad trade-off. Just add in the damn braces! Then I can use tools to auto-indent for additional readability.
Re: (Score:3)
Re:Quorum looks a lot like Pascal (Score:5, Insightful)
Personally, I find that curly braces make code easier to read on top of perfect indentation. In truth, though, it's not so much the braces, as it is the nearly-empty lines of code that are spend to put those braces (note: this specifically applies to ANSI-style brace layout only, not K&R style). It creates a kind of a visual box, clearly delimited, with body of the block in it - more so than plain indentation does by itself.
That said, I wouldn't call Python "fantastically hard to read", quite the opposite - it tends to be one of the easiest languages to read. Not because of indentation, but because its basic syntax is rather clean.
Re: (Score:3)
That is only one of the two syntactic roles assigned to parentheses. The other is to disambiguate priority. For instance, you have to write (a + b) * c if you don't want it to mean add(a, mult(b, c)). But you see, combining multiple lines is very exactly this: priority disambiguation. Consider "if (cond) a; b". Priority is such that the statement is parsed like "(if (cond) a); (b)", because the if statement doesn't eat up semicolons. If you want it to mean "(if (cond) ((a); (b)))", then you could just put p
Re: (Score:2)
I remember monospaced fonts. Punch cards with FORTRAN used them. Remington typewriters used them too.
Re: (Score:2)
Get a language that can be programmed using with any text editor.
Re: (Score:2)
Way to misunderstand what was being said on purpose. You must be a fun guy at parties.
indeed (Score:2)
me: My hovercraft (pantomimes puffing a cigarette)... is full of eels (pretends to strike a match).
them: Ahh, matches!
me: Ya! Ya! Ya! Ya! Do you waaaaant... do you waaaaaant... to come back to my place, bouncy bouncy?
Re: (Score:2)
Quorum looked a lot like BASIC to me. Only the keywords were different. The headline for the article is horrible (as usual). The headline (and summary) neglect to mention that this test was given to people who had no experience in programming.
We compared novices that were programming for the first time using each of these languages, testing how accurately they could write simple programs using common program constructs (e.g., loops, conditionals, functions, variables, parameters).
My takeaway from this "research" is that Perl is not a good language for beginners. If you already know the general concepts of programming, Perl is fairly easy to pick up.
Re: (Score:3)
COBOL is designed to be readable, but it's hardly writable.
(roughly 10 years of experience developing COBOL code).
Re: (Score:3)
No one writes COBOL anymore.
We just tweak it.
Re: (Score:3)
As I understand it, there was only one original COBOL program ever written. Everyone else copied & modified it for their purpose.
Re: (Score:3)
Indeed. vim is impossible for a first-time user. That does not mean it is a terrible editor. Over-emphasizing day 1 productivity is a bad thing when most of your days will not be 'day 1'.
It's the study participants. (Score:2)
You know, the "study" (which I didn't read, this being slashdot 8-) probably involved exposing the languages in question a hugely diverse and wide ranging number of College Undergrads That Fancy Themselves Programmers. As such, the fact that the quality of the code was not distinguishable despite the language chosen indites the programmers more than the languages.
The problem with most studies is that College Freshmen already know everything so any attempt to test them is doomed to fail.
Re: (Score:2)
Nonono, it's pseudorandom. They just used a very good function.
Next question (Score:2)
How does C++ fair? LOL
Re:Next question (Score:5, Funny)
How does C++ fair?
Farely average.
Re: (Score:2)
I know C++ fairly well, trust me, it's the most pointlessly complex language on the planet. And the boilerplate goes on forever.
C++ might have developed sanely if they'd introduced it's major features in reverse order, i.e. lambdas way back in 1983, templates a bit later, and class methods only during the last decade. As it stands, there are basically two types of C++ code : code that badly emulates functional programming styles, and code consisting entirely of calls to simple wrappers around extern "C" f
Re: (Score:3)
Unfortunately, C++ remains the only language with a full-featured yet concise RAII, which is its main advantage when compared to C. And templates, while messy, are also extremely efficient in terms of generated code - more so than similar mechanism (generics etc) in pretty much any other language I know of.
Re: (Score:2, Funny)
How does C++ fair? LOL
#%@$&#@^UGSOWDYRO&F@#L(EGFGP*$TW
This Script written in Perl computes the answer.
Re:Next question (Score:4, Funny)
I dunno. Since it's a comment on Perl, starting with a # would seem to be entirely accurate according to the syntax.
Re: (Score:2)
In a productivity study of experienced users, perl & python were best and C++ worst in both time to finish and lines of code. [connellybarnes.com]
Re: (Score:2)
I've used Python, and Perl is my "goto" language (sorry, bad pun) so I tend to suspect they would do better than C/C++ in these areas too, but
Re: (Score:3, Informative)
The study cited has several biases in favor of the scripted languages that are acknowledge by the author in the references of your supplied link.
Primarily:
- The non-scripted languages (C, C++, Java) were tested under formal conditions in 1997 / 1998 (Java 1.1 I assume), the script programmers wrote their programs at home and self reported their times (and in most cases spent several days thinking about the problem before starting work, time which was not included).
- The script programmers were told that the
Re: (Score:2)
Far more problems fall into what you seem to consider "easy". My guess is you don't know either language nor what a hard problem really is.
Re: (Score:2)
High end video games tend to involve some reasonably sticky problems...
Re: (Score:2)
I did not suggest they did not. Only that they were not the only source of such problems.
Most video games seem to cheat their way out of lots of problems, something programs used for business cannot often do. A classic example of such cheating is instant hit bullets.
Java? (Score:2)
How is Java better than C++?
Trick question? (Score:4, Funny)
I always thought Perl was a randomly generated programming language.
Re:Trick question? (Score:5, Funny)
Re: (Score:2)
Like everything else in Perl, the name is too long.
Pathological Rubbish would have been more apropos.
Re:Trick question? (Score:4, Informative)
Hence the name: Pathologically Eclectic Rubbish Lister.
Note for the ignorant... that REALLY IS what it stands for!
Re: (Score:2)
I think APL has the edge there. It went so far as to make up its own non-ASCII symbol set.
Perl Is way better (Score:5, Informative).
It's possible to write bad unreadable code in anything, but it's just so much easier in Perl that I shudder anytime I get asked to look at someone elses Perl code. That has NEVER been a good experience.
Re: (Score:2)
Sounds more like an issue of EBCAK.
You can make a program that's illegible, blaming Perl for the incompetence or sloth of the people that are writing the code is hardly a fair. What about all those C programs where code is being run from random other files without concern for organization or maintainability?
Re: (Score:2)
Yeah but wasn't this supposed to be measuring the efforts of "first time users".
Maintaining someone else's code is an entirely different problem.
Trying to sort out someone else's code is generally a scary experience across the board. You can make spaghetti out of any language.
Re: (Score:2)
Trying to sort out someone else's code is generally a scary experience across the board. You can make spaghetti out of any language.
IME it's easier to read Java code, even decompiled java code, than just about anything. C# sharp can be easy too, but a lot more regx use, linq and such ugliness drag it down.
Re: (Score:3)
Re: (Score:3)
use strict;
Learn it, live it, love it.
Re:Perl Is way better (Score:5, Insightful)
I would suggest that perhaps Perl is particularly effective in separating good from bad programmers. In other languages, restrictions allow bad programmers to write code that *looks* good.
But if you see readable, understandable Perl code, you know you've got a keeper.
"if you see readable, understandable Perl code" (Score:2)
One of these days that may happen to me.
Re: (Score:2)
I would suggest that perhaps Perl is particularly effective in separating good from bad programmers. In other languages, restrictions allow bad programmers to write code that *looks* good.
But if you see readable, understandable Perl code, you know you've got a keeper.
I've looked at Perl like I look at English. It's possible to write really well done English that uses some obscure structures for emphasis, or to increase clarity. It is however more likely that someone will piece together the most incoherent confusing material into an English essay, and you will have difficulty following it.
Illegible code in Perl is not a fault of the language, but rather a fault of the programmer. Whether the matter of Perl letting people write so hideously is a good or bad thing, it must
Re:Perl Is way better (Score:5, Insightful)
This!
Perl is a "beautiful" language -- in the same way some people talk about certain human languages (e.g. romance languages, Russian, or Sanskrit) being beautiful, as opposed to merely functional. Other people will disparage those same languages as being too this, or not enough that... the same kind of debate we see with programming languages, particularly with Perl, which is kind of interesting.
And for some of those human languages, you'll also hear people lament how horribly some non-native speakers butcher them, perhaps because those non-native speakers are using them merely as a "functional" language, rather than grasping the full depth of expression that is possible.
This analogy has at least some merit I think, since Perl is a language that was designed "linguistically" at least in some sense, in that it has the same kinds of patterns that natural languages have and is chock full of idioms and expressions, that some programmers (myself included) find not only useful from a functional perspective, but actually enhance the creative process that happens when one writes code. I think part of that is due to Larry Wall's now venerable "Programming Perl" -- which is one of the few truly valuable programming books that's also actually fun to read -- especially if you're one of those people that thinks at least a little like Larry, and enjoys a dry wit.
Anyway, so yes, I totally agree, programmers that need "restrictions" in a scripting language to have their code be readable are definitely a certain "kind" of programmer. Not that they are better or worse programmers, they just don't embrace the TMTOWTDI philosophy, which is something that the society at-large doesn't generally embrace, so it's no surprise that there seem to be a lot of people that shit all over Perl.
I've seen (in my own code and in others) truly beautiful and elegant Perl code that reads like a story, and also the "line noise" code people complain about. Which is really all about regular expressions. Some people really love 'em, perhaps a little too much. Though as has been pointed out probably a billion times, there's nothing wrong with one-off throwaway code that looks like line noise, but if you're building a giant system, then your code should be all pretty and commented and generally sociable.
It's both unfortunate (and I still hope... a mixed-blessing) that Perl 6 has taken so long to come about. In that PHP went and pretty much took over it's niche as a web-development and "glue" language. Though the Perl community is still strong, if small, and I have no doubt that it will remain a living language for a long time to come, if for no other reason than the fact that CPAN is awesome, and there are zillions of lines of code written in Perl that a lot of people depend on every day, and that when Perl6 matures, I think it will enjoy a resurgence, within the Perl community, and perhaps much further, simply because of the simple and powerful philosophies that it encodes.
Easy things should be easy and hard things possible.
Re: (Score:2)
Comments are supposed to tell you what's going on. In fact, Perl has a built-in self-documentation system that makes it a breeze to document and find the documentation you want.
You don't maintain perl code by trying to understand it and tweak it. You maintain it by replacing lines or blocks of code with better written code.
And if you're not man enough to write better code, wtf are you doing trying to maintain it in the first place?
Re: (Score:3)
I certainly hope not. Whenever i see comments in C++ or Java code i'm thinking "why did you not write this to be more ovious way in the first place, wtf needs explanations here".
There a few cases where code needs comments IMO, and class-level and function-level docs are perfectly OK. But comments within source are a sign of
a) something incredibly clever being done
b) sloppy design or poorly written code that needs explanation on whats going on
In 99% of
Re: (Score:3, Insightful)
Despite some of the ill founded comments in this discussion, natural language is not comparable to computer language. Programming is closer to mathematics then human language. In the same
Re: (Score:3).
One could - quite validly - say the same about the English Language.
Now, I'll grant programming and spoken/written languages don't overlap perfectly with one another. That's why languages like LISP have such elegance; what they're designed to express is something far more abstracted and formalised in nature. It's possible to conceive of a complex structure and accompanying set of behaviours and properties simply by scanning a screenful of LISP, but English is narrative in nature. You don't scan across; you scan from top to bottom.
It's possible to write bad unreadable code in anything, but it's just so much easier in Perl that I shudder anytime I get asked to look at someone elses Perl code. That has NEVER been a good experience.
Perl can be difficult to grok, but it can be elegant as well. I've experienced revulsion looking at Perl code before, but never so consistently as with ASP and PHP. These are languages (and I use that term loosely) that simply cannot be made pretty.
In the right hands, Perl can be as elegant and expressive (and opaque, and efficient) as Shakespearean English. Argue however you like, the same is not true of many other languages. Python has clarity and simplicity. It's truly an engineer's language. LISP, as I've said, is beautiful in the same way architecture can be beautiful: taken as a whole, rather than a story. I didn't understand the appeal of Ruby until I learned that its inventor is Japanese. Then it all became clear. What seemed like awkward, nearly backward syntactical constructions suddenly made sense.
In other words, horses for courses. But arguing that Perl is not readable in its very nature is like arguing that English in incomprehensible based entirely on watching Jersey Shore.
Re: (Score:3)
Again, this depends on the programmer who wrote the code, not the language.
Sure, but all the Perl documentation I've ever seen (Camel Book, etc.) encourages Perl coders to concentrate on the result foremost, even at the expense of the process. Thinking about how to write well-structured code seems to be actively discouraged in the Perl community. Once it works, you're done.
The Python community were among the first point this out: Sure, there may be "more than one way to do it," as the Perl hackers like to say, but there's probably one good way to do it. If you don't even bother to
Re: (Score:2)
It depends on both. I mentioned specifically that you can write bad unreadable code in anything because it's true.
But that's like saying you can kill people with a screwdriver. It's true, but it's an awful lot easier with a shotgun. Perl just seems to make it an awful lot easier to "be clever" and come up with something that nobody can understand later. I don't consider that a good thing.
Re: (Score:3)
Since it's an acronym, PERL is acceptable too.
Perl = name of language.
perl = name of compiler / interpreter
PERL = acronym for Practical Extension and Report Language
Novices learning from whom...? (Score:2)
Who taught them Perl? Where did they learn to call subroutines with an ampersand? A Perl 4 manual?
OK they're novices but even I didn't write loops using C-style loops as a novice Perl coder because I was reading that it was more readable to do for($a..$b) instead.
Re:Novices learning from whom...? (Score:5, Informative)
Yes it was Perl 4 [perl.org], which is one of the flaws in this study.
Re: (Score:2)
That's sort of the the point. I'm not a good programmer, but when I code, I tend to use Perl, I focus on making the code legible and typically don't take on much with it. Perl works well with that, but there's plenty of folks that use Perl for things that it's not really intended for and don't have any idea what maintainable code should look like.
Ultimately GIGO, you need more than a study like this to determine whether or not Perl is better than a randomly generated programming language. Ultimately, I woul
Re:Novices learning from whom...? (Score:5, Informative)
"we did not train participants on what each line of syntax actually did in the computer programs. Instead, participants attempted to derive the meaning of the computer code on their own."
They were not trained. They were just shown code samples with no explanation. The code samples had 1-letter variable names and no comments. The Perl sample uses $_[0} for getting the first sub argument instead of shift, and "for ($i = $a; $i = $b; $i++)" to do a for loop instead of "foreach $i ($a
.. $b)", so it is deliberately obfuscated Perl.
Re: (Score:3)
A shift would have been more intuitive?
No, but perhaps a "my ($a,$b,$c) = @_;" would have been. Since I'm a long-time Perl programmer, I can't really speak for the newbie. But the use of the numerous $_[n]-lines is probably unclear. In any case, it is considered bad code, since it is both hard to read and error prone.
Using a foreach, instead of the C-style for loop, is certainly easier and MUCH closer to the implementation used in Quorum and Randomo. So that, at least, was very poorly thought-out. And Randomo? Is it really random? Or is it
Re: (Score:3, Informative)
It in fact has three disadvantages: it bypasses any prototype coercions, it passes @_ unmodified by default, and it's unidiomatic.
All of these fencepost errors I've fixed argue otherwise.
Not so fast.... (Score:5, Informative)
Also... (Score:5, Insightful)
While Perl has never had a particular reputation for clarity, the fact that our data shows that there is only a 55.2 % (1 - p) chance that Perl affords more accurate performance amongst novices than Randomo, a language that even we, as the designers, nd excruciatingly difcult to understand, was very surprising.
This is a complete misunderstanding of what a p value [wikipedia.org] means in statistical inference. The p value is not, and should not be interpreted as, the chance that "Perl affords more [or less] accurate performance." The p value is the chance, given that there is no difference, of obtaining a difference as large or larger. This is covered in first-year statistics.
Re: (Score:2)
...of what a p value [wikipedia.org] means in statistical inference. The p value is not, and should not be interpreted as, [perl's divergence in] accurate performance." The p value is the chance, given that there is no difference, of obtaining a difference as large or larger.
And they say English can't be obfuscated like programming languages.
Re: (Score:3)
If p is the chance, given no difference, of obtaining a result that is larger, what would you interpret (1-p) to mean?
What are they trying to prove? (Score:3)
Re: (Score:2)
Re: (Score:2)
That whitespace is a good way to delimit blocks.
Re: (Score:2)
What bad habits would one learn from, say, Python?
Indentation as syntax.
Re: (Score:2)
Indentation is a good habit even if it's not necessary in a language's syntax.
Re: (Score:2)
Indentation is a good habit even if it's not necessary in a language's syntax.
Too much work I'm lazy. Right click auto format puts in all the appropriate indents.
Re: (Score:2)
Exactly like a Python IDE.
Re: (Score:2)
Sure but confusing it with syntax is a stupid idea.
Re: (Score:3)
First, a question: Why is it such a bad thing to use whitespace as syntax?
Second, the act of indenting your code is not in itself a bad thing (when we talk "normal" languages like C), so why is it suddenly a bad habit when they pick it up in Python?
In a hundred years we will see this as brilliant.. (Score:2)
I keep reading the full paper (+points for publishing the whole thing!) and have yet to hit upon the definition of the word "accurate" they are using to measure the results. Apparently that is contained inside their previous paper with no direct link. On page 3 though, Perl is described as "A well-known commercial programming language". Really? C# is a commercial language, Perl
Better is a strong word (Score:2)
The participants didn't know the languages before. If anything, the study only proved that Perl has a steep learning curve.
APL (Score:2)
Didn't APL prove this a long time ago?
Well written Perl (Score:4, Interesting)
Re: (Score:2)
Not each line. Only lines that need to be separated. There's no need for a semicolon if the next line is a closing curly, for example.
Some insert them anyhow, and I can see the rationale for doing so. But unfortunately, that also encourages cut/paste programming, which is especially bad for perl. I remove superfluous semicolons precisely so I will have to stop and think before doing a cut/paste job.
Southern Illinois University-Edwardsville (Score:2)
long term benefits (Score:2)
Perl and language (Score:3)
Perl is a language, just like Dutch, Swedish, English, German and most of the others. In just about any language there is, to paraphrase a well-known Perl motto, more than one way to say something. That is in many ways a good thing, especially when it comes to using the language creatively as a novelist or poet or similar type of wordsmith does.
It is true that this quality does tend to make Perl programs somewhat hard to grasp for the uninitiated in the programmers style of writing. That is another quality the Perl language shares with those other languages mentioned above - did you understand all of Finnegans Wake the first time you read it?
In other words, Perl is a writers' language. It is not an editors' language. Once you get into the right mood, Perl flows like your native language does. Done right, this can lead to great things. It can also lead to the sort of notes you made when attending those lectures you did not care about in the first place, and did not understand in the second. Use Perl for things you care about, and it will provide you the means to express yourself in just the right way (for you).
If you give someone Lisp, (Score:3) | https://developers.slashdot.org/story/11/10/27/213231/is-perl-better-than-a-randomly-generated-programming-language | CC-MAIN-2018-17 | en | refinedweb |
Can anyone help a novice C# beginner? I'm attempting to call an object created in the same class
but a different method. Both methods were called from another class. Below is simplied code.
I've left out other code that the methods perform. An error indicates the "listener" object isn't
recognized in the 2nd method. Thank you for any help your can offer.
Code:// this 1st class calls methods of a 2nd class public class Lru_operation { // create an object of the 2nd class public Lru_Listen LruListen = new Lru_Listen(); // this method calls two methods from other class public void LruOperation() { LruListen.ListenForAag(); // first method call LruListen.LruListenAccReq(); // second method call } } // this is the 2nd class public class Lru_Listen { // 1st method creates an object from a different class (HttpListener) public void ListenForAag() { HttpListener listener = new HttpListener(); } // 2nd method calls 1st method's object to perform // a method task from a different class public void LruListenAccReq() { HttpListenerContext context = listener.Getcontext(); } } | http://forums.devshed.com/programming/955768-call-object-method-last-post.html | CC-MAIN-2018-17 | en | refinedweb |
epoll_create, epoll_create1 - open an epoll file descriptor
Synopsis
Description
Errors
Versions
Conforming To
Notes
Colophon
#include <sys/epoll.h>
int epoll_create(int size); int epoll_create1(int flags);
epoll_create() creates an:
On success, these system calls return a nonnegative file descriptor. On error, -1 is returned, and errno is set to indicate the error.. call 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://manpages.sgvulcan.com/epoll_create.2.php | CC-MAIN-2018-17 | en | refinedweb |
sasl_server_userdb_setpass_t - UserDB Plaintext Password Setting Callback
Synopsis
Description
#include <sasl/sasl.h>
int sasl_server_userdb_setpass_t(sasl_conn_t *conn, void *context, const char const char *pass, unsigned passlen, struct propctx *propctx, unsigned flags)
sasl_server_userdb_setpass_t is used to store or change a plaintext password in the callback-suppliers.
SASL callback functions should return SASL return codes. See sasl.h for a complete list. SASL_OK indicates success.
sasl(3), sasl_callbacks(3), sasl_errors(3), sasl_server_userdb_checkpass_t(3), sasl_setpass(3 | http://manpages.sgvulcan.com/sasl_server_userdb_setpass_t.3.php | CC-MAIN-2018-17 | en | refinedweb |
It's not the same without you
Join the community to find out what other Atlassian users are discussing, debating and creating.
I'm trying to use a calculated number field to calculate "weight/urgency" using two custom(severity and priority) fields.
<!-- @@Formula:
int r1 = 0;
int r2 = 0;
String severity= issue.get("customfield_10705");
String priority= issue.get("customfield_10704");
if("S1".equals(severity))
r1 = 4;
else if("S2".equals(severity))
r1 = 3
else if("S3".equals(severity))
r1 = 2;
else if("S4".equals(severity))
r1 = 1;
else
r1 = 0;
if("P1".equals(priority))
r2 = 4;
else if("P2".equals(priority))
r2 = 3
else if("P3".equals(priority))
r2 = 2;
else if("P4".equals(priority))
r2 = 1;
else
r2 = 0;
return (r1 * r2);
-->
This is what I'm using and it doesn't seem to work.
Hi,
== doesn't work for strings. You need to use the equals method instead:
if ("P1".equals(priority))
Hey David, thanks for the answer.
It still doesn't seem to work.
Is there anything else I might be doing wrong?
Hey David, I've implemented the fix and it's still not working. Also, the field doesn't show up on the view screen.
What do you think I'm doing. | https://community.atlassian.com/t5/Jira-questions/Using-calculated-number-field/qaq-p/584214 | CC-MAIN-2018-17 | en | refinedweb |
HiI'm creating a plugin in order to quickly edit css (for now just css) like the Brackets IDE.I know that for know this does not look perfect but it is the beginning
here is the git project:
It looks good! Just a little suggestion, use the font specified in font_face in the popup too, it'll look better IMO.
font_face
Thinks a lot but the font_face is the same. Of course alternatively you could change the design very easely in the next release
I don't really understand your answer, maybe I didn't expimed myself really clearly: in the CSS, inject in place or before the font you already specified in font-family using python the value specified in the setting font_face.
font-family
So, if in my settings I say that I want the font_face to be Roboto Mono, it'll be the same in the popup...
Roboto Mono
I've quickly looked into the code, you should probably use BeautifulSoup to parse the HTML and find the style and link tag. It's fairly important because if I comment a link tag out for example, with a HTML parser, you won't beleive I want to load the stylesheet.
style
link
Furthermore, you should add try/except block when you open the css file, since if it doesn't exists, it will raise a FileNotFoundError and stop the execution... And by the way, here's a better way to open a file in python:
FileNotFoundError
# no
code = open(cwd+'/'+code).read()
# yes
with open(cwd + '/' + code) as my_file:
code = my_file.read()
# the file is automatically close here, even if there is an exception above
PS: Are you planning on adding this to Package Control?
Great Idea !!It would be nice if we could navigate to css files
one small issue it skip style if there is no space between classname and { for example
.test_test { // works good
color: red;
font-weight: bold;
}
.test_test{ // skip
color: green;
font-weight: bold;
}
i think if you make space optional it do the trick(?s).%s\s?{.*?} something like this
(?s).%s\s?{.*?}
Thank you @math2001 i will look up all those things and yes i already made a pull request to Package Control.And @unknown_user you are right i will add this too.
Thank you guys
how about linux support ?
I don't know about linux, i can't test it but i think it can work.Also my package is available in Package Control now guys! Check it out!
I already install this plugin on my linux machine and it works (it show styles but didn't show border)I can help with tests on linux !!
Thanks a lot for the test! I think that the border shows only if you have the 3140 build (or higher)
I try to import BeautifulSoup in order to match only the uncommented link and style but every time i try to import this module i keep getting this error:
ImportError: This package should not be accessible on Python 3. Either you are trying to run from the python-future src folder or your installation of python-future is corrupted.
any idea ?
Can you show the full traceback please? You need to require it in your dependencies and then it should work.
File "/Users/gamliel/Library/Application Support/Sublime Text 3/Packages/QuickEdit/QuickEdit.py", line 7, in <module>
import bs4
File "/usr/local/lib/python3.6/site-packages/bs4/__init__.py", line 35, in <module>
from .builder import builder_registry, ParserRejectedMarkup
File "/usr/local/lib/python3.6/site-packages/bs4/builder/__init__.py", line 320, in <module>
from . import _htmlparser
File "/usr/local/lib/python3.6/site-packages/bs4/builder/_htmlparser.py", line 10, in <module>
from html.parser import HTMLParser
File "./python3.3/html/parser.py", line 11, in <module>
File "/Library/Python/2.7/site-packages/_markupbase/__init__.py", line 8, in <module>
raise ImportError('This package should not be accessible on Python 3. '
ImportError: This package should not be accessible on Python 3. Either you are trying to run from the python-future src folder or your installation of python-future is corrupted.
here is the full traceback.I don't get how i can use the dependencies, i create a formatted folder with a json file in it ? | https://forum.sublimetext.com/t/quick-edit-like-brackets/29839 | CC-MAIN-2018-17 | en | refinedweb |
If
Having data stored in a single document wildly increasing is always wrong,
since the document has limit on its size (16MB).
I would recommend you to just reference user_id in activites collection,
like:
{
'_id': 'activity_id',
'user_id': 'user_id',
'time': ...,
'action: ...,
...
}
Make sure that index added for "user_id", and if you are always querying on
multiple fields, like {user_id: user_id, time: {$gt: blah blah}}, create
index on all the fields following the order in query, in this case:
(user_id: 1, time: -1) (if you always sort time DESCENDING)
EDIT
In your InvoicePdf class, change the barcode method to:
def barcode
barcode = Barby::Code39.new @order.order_number
barcode.annotate_pdf(self)
end
The annotate_pdf method takes a Prawn::Document as its argument, which is
self here.
Original Answer
If you want to create a new pdf with your barcode, you can just do:
def barcode
_barcode = Barby::Code39.new(@order.order_number)
outputter = Barby::PrawnOutputter.new(_barcode)
outputter.to_pdf
end
Note that you can specify pdf options (including height, margins, page
size, etc.) on the new PrawnOutputter or on the to_pdf call. See the
documentation for more details: and.
And if you want to write it to a file d")
I just realised that the easy way to solve this is via the use of queries.
I could have easily done the following.
var field = 'appearance.elementary_one_reading';
Word.find().where(field, xxx).exec(callback);
You can use the Chrome developer tools or Firebug (firefox) to see what
values are being returned.
In your "for" loop, write debugger; and the code will break as it iterates
over the collection, or use the dev tools to set a break point.
Well,.
When defining a schema you can specify options as a second parameter. Set
_id to false to disable auto _id.
var Embedded = new Schema({
some: String
}, {
_id: false
})
See the docs.]);
}
Doh.. changing
var componentSchema = new Schema({
name: String
, desc: String
}, { discriminatorKey : '_type' })
to
var componentSchema = new Schema({
name: String
, desc: String
}, { collection : 'components', discriminatorKey : '_type' })
Fixes the issue. Not sure why. don't think your notion of a single route that will either update an
existing user or create a new one is a good idea. Instead, consider
following the REST design:
app.post('/api/users', createNewUser);
app.put('/api/users/:userId', updateUser);
createNewUser should just make a new user object with the proper attributes
and call save on it. updateUser can do User.update({_id:
req.params.userId}, fieldsToChange.... See if you can make that work to
start and don't add complications like checking for already registered
users until a simplistic create/update pair works OK and you feel
confident. Then you can evolve your code to be more robust and realistic,
but go step by step.
This link would help you out , take a look Tuning node-mongodb-native
connection pool size.
It is difficult to predict the optimal poolsize for MongoDb , i use apache
benchmark tests (
ttp://
) to test the servers performance and response for different values of
poolSize and get an approximation for what suits the best for given number
of concurrent requests for your server .?)
collection.find({ $where : "this.a < this.b" })
This query is not performant. Or
While inserting the document, insert a boolean true/false based on (a <
b ) or ( b < a ) and query for that boolean..
//problem 1: `find` returns a list of results. You just need findById
var query = Category.findById(parentCategoryID);
query.select('name');
query.exec(function (err, parentCategory) {
//Problem 2: don't ignore errors. Handle them first and short-circuit
return
if (err) {
console.err(err);
return;
}
console.log("Fetched parentCategory: "+parentCategory+"..
parentCategory._id: "+parentCategory._id);
//problem 3: mongoose will do the right thing with your schema here
//all you need is
var category = new Category();
category.name = name;
category.parent = parentCategory;
//and don't forget
category.save(...callback....);
}
Also note if you have a schema, and you assign something that does not
match the schema, mongoose will just drop the
I have tested this feature and it works fine with 2.4.4 MongoDB.
Having taken a closer look at your indexes I realized the problem is a
small typo.
My TTL index which works:
{
"v" : 1,
"key" : {
"test_expira" : 1
},
"ns" : "test.usuarios",
"name" : "test_expira_1",
"expireAfterSeconds" : 120,
"background" : true,
"safe" : true
}
Your TTL index which does not work:
{
"v" : 1,
"key" : {
"test_expira" : 1
},
"ns" : "dbnamehere.usuarios",
"name" : "test_expira_1",
"expiresAfterSeconds" : 120,
"background" : true,
"safe" : true
}
Note the correct keyname for TTL index is "expireAfterSeconds" where yours
has an extra letter and is "expiresAfterSeconds"."};
});
You can't do it without callbacks, but you can use an async flow control
library like async to help manage the nest of callbacks. In this case you
probably want to use async.parallel.
Using that you can do something like:
users.get(base_URL, (req, res) => {
var data = {
title: "Yes, got it right"
};
async.parallel([
(callback) => {
UserModel.find({}, (err, docs) {
data.user_list = docs;
callback(err);
});
},
(callback) => {
// Other query that populates another field in data
}
], (err, results) => {
// Called after all parallel functions have called their callback
res.render('<some_jade_file_here>', data);
});
});
You are correct to want to use MongoDB's aggregation framework. Aggregation
will give you the output you are looking for if used correctly. If you are
looking for just a list of the _id's of all users' favorite workouts, then
I believe that you would need to add an additional $group operation to your
pipeline:
db.users.aggregate(
{ $unwind : "$favoriteWorkouts" },
{ $group : { _id : "$favoriteWorkouts", number : { $sum : 1 } } },
{ $sort : { number : -1 } },
{ $group : { _id : "oneDocumentWithWorkoutArray", hot : { $push :
"$_id" } } }
)
This will yield a document of the following form, with the workout ids
listed by popularity:
{
"_id" : "oneDocumentWithWorkoutArray",
"hot" : [
"workout6",
"workout1",
"workout5",
"workout4",
It is not a good idea to query for your image by the node.js Buffer that
contains the image data. You're right that it's probably an issue between
the BSON binary data type and a node Buffer, but does your application
really require such a comparison?
Instead, I'd add an imageID or slug field to your schema, add an index to
this field, and query on it instead of bin in your findOneAndUpdate call:
var imageSchema = new Schema({
imageID: { type: String, index: { unique: true }},
mime: String,
bin: Buffer,
uses : [{type: Schema.Types.ObjectId}]
});
Use a reference in your Notification schema, and then populate it, as per
the Mongoose Docs.
var mongoose = require('mongoose'),
Schema = mongoose.Schema,
ObjectId = Schema.ObjectId,
var notificationSchema = new Schema({
initiator: { type: ObjectId, ref: 'User' }
});
var Notification = mongoose.model('Notification', notificationSchema);
You can then use Mongoose's query populate method:
app.get('/notifications/:id', function(req, res) {
Notification
.find({ initiator: req.params.id })
.select('_id type initiatorId')
.populate('initiator')
.exec(function(err, notifications) {
if (err) return handleError(err);
// do something with notifications
});
});
However, I'm slightly confused why the id is a user id (and not a
notification id) –
Actually in your code you are passing a callback, which is never handled in
function userSchema.statics.list
You can try the following code:
userSchema.statics.list = function (calbck) {
this.find(function (err, users) {
if (!err) {
calbck(null, users); // this is firing the call back and first
parameter should be always error object (according to guidelines). Here no
error, so pass null (we can't skip)
} else {
return calbck(err, null); //here no result. But error object.
(Here second parameter is optional if skipped by default it will be
undefined in callback function)
}
});
}
Accordingly, you should change the callback which is passed to this
function. i.e.
exports.list = function (req, res){
UserModel.list(function(err, users
You don't have whitespace or funny characters in ' title', do you? They can
be defined if you've quoted identifiers into the object/map definition.
For example:
var problem = {
' title': 'Foo',
'content': 'Bar'
};
That might cause console.log(item) to display similar to what you're
expecting, but cause your undefined problem when you access the title
property without it's preceding space.).
I'm a little confused by your question, simply because the index you
provide ({ first_name: 1, archived: 1 }) is a compound index. All of the
following queries will make use of that compound index:
conditions = { archived: false, first_name: "Billy" };
conditions = { first_name: "Billy", archived: false };
conditions = { first_name: "Billy" };
Now, let's assume we have two separate indexes, { first_name: 1 } and {
archived: 1 }. In this case, MongoDB will do query optimization to
determine which index is the most efficient to use. You can read more about
the query optimization performed by MongoDB here.
The MongoDB query optimizer will thus likely use the same index for both of
the multicondition queries you provided:
conditions = { archived: false, first_name: "Billy" };
conditions.
My guess is that your index didn't stick, which you later fixed the syntax.
db.users.getIndexes()
Use that to verify that there are indexes on the collection next time. | http://www.w3hello.com/questions/Embedding-a-document-MongoDB-with-Mongoose | CC-MAIN-2018-17 | en | refinedweb |
Motivation
The motivation of this tutorial blog is to help (not limited to) SAP Business One Partners to:
- Tackle Customer’s pain points by Developing Apps quickly
- Build responsive SAP Fiori (SAPUI5) App > Role-based & Decomposed App Perspective
- Move away from (only) developing Traditional B1 add-ons to (more) Modern Loosely-Coupled App
- Improved User Experience with Mobile-first approach by using Enterprise-ready UX (SAPUI5)
- Achieve quick Hybrid App development and initial to market, ready to use
Here’s a minute of the outcome:
A simple app idea to get you quickly started. Refer here for more information about the App Use-Case.
Contents
1. What are You getting into?
2. What will You achieve?
3. Overview Diagram of Tools & Technologies
6. How do You Get Started?
7. Back-end: SAP HANA XS & SAP Business One HANA Service Layer
8. Front-end: SAP Cloud Platform Web IDE + Hybrid App Toolkit + HANA Studio + Xcode
9. Tools & Components Compatibility Versions
10. Next Steps
What are You getting into?
Objective
Utilize SAP tools to ease development efforts in packaging & deploying SAPUI5 applications on Mobile.
Show a very simple Loosely-Coupled end-to-end Hybrid App solution utilizing SAP Business One Service Layer
Duration
90 minutes (not including setting up prerequisites)
Difficulty Level
Advanced
Challenge
Setting up various plugins & IDEs & Connecting the dots.
What will You achieve?
Hybrid Application deployed on mobile devices (iOS / Android), packaged by SAP Hybrid Application Toolkit (SAP HAT)
Consuming SAP Business One Service Layer (RESTful Web API), ONLY available on SAP Business One version for HANA
Developing using SAP HANA XS Artifacts (XSJS / XSODATA)
Overview Diagram
Prerequisites
The following prerequisites are required.
Strictly follow the documentation on the provided links to install & achieve the following steps.
The screenshots are mainly for your reference.
SAP Cloud Platform Trial Account
… to utilize their services – SAP Web IDE & other Cloud Components.
SAP HANA Tools (Eclipse Plugin) or SAP HANA Studio or SAP HANA Web-based Development Workbench: Editor
… to support your SAP HANA XS Development with Application / Database artifacts. Either of the tools mentioned above will be fine, depends on your preference.
SAP HANA Web-based Development Workbench: Editor
Chrome Browser > http://<hana_ip>:8000/sap/hana/ide/editor
SAPUI5 – UI Development Toolkit for HTML5 (Eclipse Plugin)
… to develop modern enterprise-ready web apps, responsive to all devices and running on the browser of your choice.
Install SAP HANA Tools Plugin in Eclipse.
SAP Cloud Platform, Web IDE Personal Edition (Optional)
… can be installed and used to develop SAP Fiori Apps on your local workstation. Mainly to use the Layout Editor, which gives you a GUI to edit the UI components instead of code level. However, compared to SAP Cloud Platform’s Web IDE, it is less comprehensive.
After installing successfully, you should be able to run SAP Web IDE on your local machine.
It is very similar to SAP Cloud Platform Web IDE.
SAP Mobile Platform (SMP).
SAP Web IDE Hybrid App Toolkit add-on / SAP HAT Direct Download
… enables developers to develop hybrid mobile apps in SAP Web IDE. The apps can be previewed and deployed to mobile emulator and device in local development environment via Hybrid App Toolkit Connector.
Step-by-Step setting up SAP HAT.
After setting up successfully, you can run the Hybrid App Toolkit on your local machine before “hooking” it to SAP Cloud Platform’s Web IDE.
This is an important step required later.
In SAP Cloud Platform, you could test the connection to your SAP HAT. Next, you’ll be ready to package & deploy an app from SCP Web IDE into your device directly / packaged iOS or Android Project files locally.
SCP > Services > SAP Web IDE > Preferences > Plugins
Plugins > Enable Hybrid App Toolkit > Logout Browser Session & Relogin into SCP Web IDE
Now you should be able to see Hybrid Application Toolkit as one of the items in Preferences.
SAP Business One HANA Service Layer
Make sure you’ve installed Service Layer as part of the SAP Business One HANA Server Components installation. Once you’ve installed, you may check the following URLs & config files to verify that your Service Layer is setup successfully.
Navigate to your SAP Business One HANA Service Layer API endpoint (either HTTP/HTTPS):
https://<hana_ip>:50000/b1s/v1
http://<hana_ip>:50001/b1s/v1
Checking Service Layer’s version in your HANA server.
SAP Business One HANA Demo Database
For this demo shown, you will be using a demo company database – “SBODEMOSG”.
Specific IDEs (Xcode / Android Studio)
iOS app, you’d need Xcode.
Android app, you’d need Android Studio.
Use-Case
You may apply the following Design Thinking methodology to develop a specific Use-Case for your SAP Fiori Application.
For this demonstration, you will be using a specific Use-Case for a specific Persona.
This is just a simple app example and of course, you may feel free to elaborate further once you’re done.
Persona
Procurement Officer (end-customer)
Pain Point
- Lack of stock visibility
- Manual calls / emails
- No discounted rates due to order placed on non-optimal period (peak season)
- …
Feature
- View on Stock Availability
- Place Sales Order
- …
Application Overview
It will be a B2B Application.
Business could give this App for end-customer to view on specific stock availability & authorize the customer to create a direct-sales order into Business’ SAP Business One ERP System.
Stock Availability App for Customer’s Procurement Officer
Now, you will get started with the specific steps to get this App deployed. In the next section, you will be. provided with screenshots & code snippets for your reference to get this deployed from SAP Cloud Platform into your local environment.
Make sure you’ve completed the prerequisites section above before proceeding on.
How do You Get Started?
In the following sections, you will see in details how you can develop & deploy a Loosely-Coupled Hybrid App, consuming SAP Business One HANA Service Layer, packaged by SAP HAT & SAP Mobile Platform SDK into your device.
In the Back-end section, you’ll learn how to implement control flow logic using SAP HANA XS & Service Layer. It will be heavy (more manual hands-on) but it will benefit you a lot after understanding the mechanics.
In the Front-end section, you’ll learn how to exploit SAP Web IDE Fiori App Template, pointing to your OData-point and deploying it into your device directly or local server (exporting) for further development.
Back-end: SAP HANA XS & SAP Business One HANA Service Layer
Starting with the back-end topic, you will be creating a SAP HANA XS Project first and next, you will implement some of the XS Artifacts relating to SAP Business One HANA Server.
Basically, you will prepare the data level & semantic layer to contain the data, ready to push to the Front-end.
For those who are new to SAP HANA XS, you may refer to:
Tutorial to get started on how to create & deploy a SAP HANA XS Project.
Follow the first 3 tutorials in the link.
SAP HANA Extended Application Services (XS)
Create a New SAP HANA XS Project.
a. In SAP HANA Studio or Eclipse (with SAP HANA Tools installed), under Project Explorer > File > New > Other…
b. Select SAP HANA > Application Development > XS Project
c. Fill in Project Name (trial.stock) – this will be the namespace for the demo I’ll be showing.
d. Tick “Share project in SAP Repository” – your project will be shared in your HANA server repository, thus the project name (namespace) is very important.
e. Click Next > Repository Package (Project Name – namespace) will be in there. And by default, it will already navigate your project to be deployed in the root level of your workspace.
f. Thus, Untick “Add Project Folder as Subpackage”.
g. Click Next > Tick both “XS Application Access” & “XS Application Descriptor”.
h. Click Finish.
i. [IMPORTANT] Right Click Project > Team > Activate
This is to push the local files into your remote HANA server and make it active.
Make sure you activate the files frequently as some of the following steps might have a dependency of some files. e.g. .xsaccess pointing to .xssqlcc file. Thus your .xssqlcc file has to be active first.
This should be the output of your project, and of course, feel free to add additional XS components into your project to look like the screenshot.
In later parts, you will learn about the usage of each file & how to add them.
1. public.xssqlcc:
This is a SQL-connection configuration file specifies the details of a connection to the database that enables the execution of SQL statements from inside a server-side (XS) JavaScript application with credentials that are different to the credentials of the requesting user.
In short, you want to expose the application without any login prompt > anonymous_connection.
New > Other > SAP HANA > Application Development > SQL Configuration File > public.xssqlcc
{ "description" : "Public Open Connection" }
In your Chrome Browser > Go To > http://<hana_ip>:8000/sap/hana/xs/admin/
Navigate to public.xssqlcc > Edit the details as accordingly (Username & Password)
2. .xsaccess:
SAP HANA App Access File – Basically you can define authorization & authentication configuration in this file. It is the entry point of every XS App. Without this file + .xsapp, your App will not fire up in the browser / device.
Make sure the value of anonymous_connection is pointing to the correct location of your .xssqlcc file. And of course .xssqlcc has to be activated on the server first, otherwise by activating your .xsaccess file, it will give you an error.
By default, you should already have this.
New > Other > SAP HANA > Application Development > XS Application Access File > .xsaccess
{ "anonymous_connection": "trial.stock::public", "exposed" : true, "authentication" : null }
3. B1SL.xshttpdest:
HTTP Destination Config Syntax File – You can define specific destination host details in this file so that the information won’t be revealed publicly.
For this scenario, you will be defining your SAP Business One HANA Service Layer HTTP destination in this file.
NOTE: This demo you won’t be using HTTPS (SSL not setup for the demo environment). It will be HTTP instead. This might pose some security issues, thus it is not recommended for production systems.
Please edit the following code snippet to your HANA environment. e.g. host & port might differ from your environment.
Using an Internal IP Address is fine here.
By default, 50000 (HTTPS), 50001-50004 (HTTP).
New > Other > SAP HANA > Application Development > XS HTTP Destination Configuration > B1SL.xshttpdest
description = "B1 SL HTTP Connection"; host = "172.31.31.64"; port = 50001; proxyType = none; proxyHost = ""; proxyPort = 0; authType = none; useSSL = false; timeout = 0;
4. ITEM.calculationview:
Graphical Calculation View (Semantic Layer) – Here is a semantic view to wrap your Item data. Suitable for quick View Operations.
For this scenario, you will pull information from SAP Business One HANA Demo Database.
– Item Master Data: OITM & ITM1
– Item Group: OITB
New > Other > SAP HANA > Database Development > Modeler > Calculation View > ITEM.calculationview
– Fill in the details.
– Make sure Data Category is “DIMENSION“.
Add a Join Node > Select the OITM & ITM1 Tables into the Join & Select the Fields as listed below.Make sure you follow the properties too.
Make sure you follow the properties too.
Add another Join Node & connect Join_1 with OITB table.
Add a Projection Node to Add a Filter on the Price List.
SAP Business One manages a number of price list per item. Thus for simplicity of our app, you will just choose the base price list of the item = 1.
Add another Projection Node to Project the required fields to the Semantics.
Lastly, Select ItemCode as the Key.
Do not forget to Activate your Calculation View.
Right Click Calculation View > Team > Activate
5. stock.xsodata:
OData Service Definition – this is used to expose your data (direct data table / semantic view) in a commonly-used OData container.
NOTE: In this scenario, you will wrap your Calculation View (Semantic Layer) to expose your SAP Business One Demo Data, strictly for READ-ONLY purposes.
For other operations (CREATE / UPDATE / DELETE), you will be using SAP Business One Service Layer to manage those transactions, this is to prevent any data integrity issues.
New > Other > SAP HANA > Application Development > XS OData File > stock.xsodata
Here, you will point to your Calculation View.
service { "trial.stock::ITEM" as "Item" keys ("ItemCode"); }
SAP Business One HANA Service Layer
It is a ready-to-consume Web Service (RESTful API) for SAP Business One HANA, design as an Application Server that exposes System & Business Objects as a Service via HTTP Protocol.
To learn more about SAP Business One Service Layer, visit the following blogs:
B1 Service Layer: Sample Payload of CRUD and How to find help by Yatsea Li
B1 Service Layer: Entity CRUD – Update by Andy Bai
How to call SAP Business One Service Layer from HANA XS by Maria Trinidad MARTINEZ GEA
For your Stock Availability App scenario, you will be using the following API calls from Service Layer:
Login – POST /b1s/v1/Login
Stock Availability Check – GET /b1s/v1/Items(‘<ITEM_ID>’)/ItemWarehouseInfoCollection
Sales Order – POST /b1s/v1/Orders
You may use RESTful API Clients such as Postman or Advanced Rest Client to test your Service Layer Calls.
Those API calls will be elaborated in the Front-end section later.
For now, you will be implementing a server-side proxy to handle those calls easily. In future, you can reuse this file for other usages.
6. B1SL_Proxy.xsjs:
XS JavaScript – In the nutshell, it serves as a proxy between your front-end to service layer that defines the business logic required to serve client requests for data via HTTP.
New > Other > SAP HANA > Application Development > XS JavaScript File > B1SL_Proxy.xsjs
/** * Reference: * SAP Business One Solution Architect Team * * * */ function callServiceLayer(path, method, body, sessionID, routeID) { try { $.trace.debug("Service Layer Proxy -- callServiceLayer (path: " + path + ", method: " + method + ", body: " + body + ", sessionID: " + sessionID + ", routeID: " + routeID + ")"); /** * The following destination is defined in your .xshttpdest file to locate your Service Layer URL * For this example, we're using HTTP instead of HTTPS (Environment for SSL has not been setup yet) * * */ var destination = $.net.http.readDestination("trial.stock", "B1SL"); var client = new $.net.http.Client(); var header = ""; if (method === $.net.http.PATCH) { method = $.net.http.POST; header = "X-HTTP-Method-Override: PATCH"; } var req = new $.net.http.Request(method, path); if (header !== "") { req.headers.set("X-HTTP-Method-Override", "PATCH"); } if (body) { req.setBody(body); } if (sessionID) { req.cookies.set("B1SESSION", sessionID); } if (routeID) { req.cookies.set("ROUTEID", routeID); } var response = client.request(req,destination).getResponse(); var myCookies = [], myHeader = [], myBody = null; for (var c in response.cookies) { myCookies.push(response.cookies[c]); } for (var h in response.headers) { myHeader.push(response.headers[h]); } if (response.body) try { myBody = JSON.parse(response.body.asString()); } catch (e) { myBody = response.body.asString(); } $.response.contentType = "application/json"; $.response.status = response.status; $.response.setBody(JSON.stringify({ "status": response.status, "cookies": myCookies, "headers": myHeader, "body": myBody })); $.trace.debug("Service Layer Proxy -- callServiceLayer Response status: " + $.response.status); } catch (e) { $.trace.warning("Service Layer Proxy -- callServiceLayer Exception: " + e.message); $.response.contentType = "application/json"; $.response.setBody(JSON.stringify({ "error": e.message })); } } var B1SLAddress = "/b1s/v1/"; var aCmd = $.request.parameters.get('cmd'); var actionURI = $.request.parameters.get('actionUri'); var sessionID = $.request.parameters.get('sessionID'); var routeID = $.request.parameters.get('routeID'); var filter = $.request.parameters.get('filter'); var path = B1SLAddress + actionURI; var body = null; if ($.request.body) { body = $.request.body.asString(); } $.trace.debug("B1SL_Proxy cmd: " + aCmd); switch (aCmd) { case 'login': path = B1SLAddress + "Login"; /** * [POTENTIAL SECURITY BREACH] * Login credentials should be stored safely somewhere * * */ var loginInfo = {}; loginInfo.UserName = "manager"; loginInfo.Password = "manager"; loginInfo.CompanyDB = "SBODEMOSG"; body = JSON.stringify(loginInfo); callServiceLayer(path, $.net.http.POST, body, sessionID, routeID); break; case 'Add': callServiceLayer(path, $.net.http.POST, body, sessionID, routeID); break; case 'Update': callServiceLayer(path, $.net.http.PATCH, body, sessionID, routeID); break; case 'Delete': callServiceLayer(path, $.net.http.DEL, body, sessionID, routeID); break; case 'Get': callServiceLayer(path, $.net.http.GET, body, sessionID, routeID); break; case 'Action': callServiceLayer(path, $.net.http.POST, body, sessionID, routeID); break; default: $.trace.warning("Service Layer Proxy -- callServiceLayer Exception: " + e.message); $.response.status = $.net.http.INTERNAL_SERVER_ERROR; $.response.contentType = "application/json"; $.response.setBody(JSON.stringify({ "Unknown command": aCmd })); break; }
With that, you’re done preparing the Back-end. Now, test the XSOData point in the Browser.
In your Chrome Browser, Navigate to http://<hana_ip>:8000/trial/stock/stock.xsodata/Item
This will be required later.
Front-end: SAP Cloud Platform Web IDE + Hybrid App Toolkit + HANA Studio + Xcode / Android Studio
Now, this is the interesting part where you will use SCP + SAP Web IDE + SAP HAT + SMP SDK. In this section, you will develop a Fiori App with step-by-step template, pointing to your XSOData Point setup in the previous section.
In SAP Cloud Platform, you will setup a Destination with your XSOData and consume it immediately via SAP Web IDE from an App perspective. It will be interesting to see how quick an App then will be developed automatedly.
SAP Cloud Platform Web IDE
1. Login into SCP with your Trial Account > Click on Destination.
2. Add a New Destination of your XSOData point (Use your Public IP Address or Domain Name)
e.g. http://<hana_ip>://8000/trial/stock/stock.xsodata
Enter the following information.
Take note of the Description, it will be used later.
Add the following Additional Properties:
- WebIDEEnabled: true
- WebIDESystem: <give it a name>
- WebIDEUsage: odata_gen
3. SCP > Services > SAP Web IDE.
4. You should see the following Home page of SAP Web IDE.
Click on New Project from Template.
5. Select All Categories > SAPUI5 Master Detail Kapsel Application.
6. Project Name: StockQuery
7. Select Service URL > Select the Destination (Setup earlier) > Enter “/” > Click Test.
8. Enter & Select the following information. Take note of your Project Namespace.
Project Namespace: <name>.stock
9. Click Next > Finish.
10. Expand your Project. Workspace > StockQuery > view > Right Click Detail.view.xml > Open With Layout Editor.
11. Drag Button Control into Footer Bar & Edit the Icon & Text > Check.
12. Drag another Button Control & Edit the Icon & Text > Order.
13. Click Run.
14. Preview your App.
Now, let’s package & deploy it with SAP Hybrid App Toolkit (HAT).
Make sure you’ve completed Setting up SAP HAT in the Prerequisites section of this blog. Ensure that your SAP HAT is up & running.
15. SAP Web IDE > Tools > Preferences.
Plugins > Enable Hybrid App Toolkit > Logout Browser Session & Relogin into SCP Web IDE
16. Preferences > Hybrid Application Toolkit > Click Test Connection.
You should see a success message of, “The connection is available. The HAT Connector version is v1.28.2.”
17. Right Click on Project > StockQuery > Project Settings.
18. Fill in the required information & select the Build Options & Platforms.
For this demo, you’ll be using iOS.
Click Save.
19. Right Click Project > StockQuery > Run > Run on Local Device/Simulator > iOS Device.
20. Your StockQuery App is currently being built, packaged & deployed into your local machine by SAP HAT.
21. The Cordova Project is Ready.
22. Ignore Harmless Error in iOS Device Code Signing.
23. You should be able to see the Hybrid App is being packaged locally in the following structure.
At this point, you’ve successfully packaged the Hybrid App into your local machine via SAP HAT.
Next steps, you will modify the packaged Hybrid App with SAP HANA Studio or Eclipse & add in SAP Business One HANA Service Layer API calls, to complete the App Features.
Eclipse (with SAP HANA Tools plugin installed) or SAP HANA Studio
In this scenario, you will edit the packaged Fiori (SAPUI5) Project files to include the following functions:
- Global Variable
- Check Stock Availability
- Place Sales Order
1. Open SAP HANA Studio or Eclipse (see prerequisite section on how to install SAP HANA Tools in Eclipse)
2. File > New Project > SAPUI5 Application Development > Application Project
3. Project Name: StockQuery
Uncheck “Create an Initial View“.
4. Default Project Structure will be created in your Project Explorer.
5. In your System / Windows Explorer, navigate to the Hybrid App location.
SAPHybrid > StockQuery > hybrid > platforms > ios > www
Copy All Files & Folders into your newly SAPUI5 Project in SAP HANA Studio / Eclipse.
6. Paste Copied Files & Folders into WebContent.
7. You should achieve the following results.
8. Share Project into your SAP HANA Server Repository.
Right Click StockQuery > Team > Share Project…
9. Select SAP HANA Repository.
10. Click Browse… Select your SAP HANA XS Project > “trial.stock”
Tick “Add Project Folder as Subpackage“.
Click Finish.
11. StockQuery > Team > Activate.
Push (Activate) the updated files into your SAP HANA Server.
12. (OPTIONAL) You might face with this error on “Unsupported encoding ISO…”
Click Ok.
13. Under Problems Tab / Windows, locate the error.
Right Click > Quick Fix.
14. Select the following components to fix encoding.
Click Finish.
15. Activate Project.
16. StockQuery > WebContent > Component.js
Open Component.js.
Edit the serviceUrl to your XSOData Point (Public IP Address), as you will be accessing from your mobile app.
The following template is auto-generated by SCP Web IDE, thus such changes are required.
17. StockQuery > WebContent > index.html
Open index.html.
Modify SAPUI5 library src to the following remote SAPUI5 library source.
18. Add a New File in WebContent > global.js & update index.html to include the file.
global.js
Make sure you define your Public HANA IP Address / Domain Host Name.
var Protocol = "http"; var HANA_IP = "<public_hana_ip>"; // e.g. "54.254.198.40" var Port = "8000"; var XSJSUrl = Protocol + "://" + HANA_IP + ":" + Port + "/trial/stock/B1SL_Proxy.xsjs"; var XSODataPoint = Protocol + "://" + HANA_IP + ":" + Port + "/trial/stock/stock.xsodata"; var SLSessionID = "";
index.html
<!-- Define global file to store variables --> <script type="text/javascript" src="global.js"></script>
19. Modify Buttons > Press Function in your App.
StockQuery > WebContent > view > Detail.view.xml
Open Detail.view.xml
Modify the Button Function Accordingly.
<Button text="Check" width="100px" id="btnCheck" press="onCheckPress" icon="sap-icon://check-availability"/> <Button text="Order" width="100px" id="btnOrder" press="onOrderPress" icon="sap-icon://sales-order-item"/>
20. Add Button Function into Controller file.
StockQuery > WebContent > view > Detail.controller.js
Open Detail.controller.js
Add New Function for Check Stock Availability Button, onCheckPress method.
onCheck_Login = XSJSUrl + "?cmd=login"; // Service Layer Call URL to Login $.ajax({ type: "POST", url: SLCall_Login, success: function (result) { if (result.error) { alert(result.error); return; } else { SLSessionID = result.cookies[0].value; localStorage.setItem("b1session", SLSessionID); // Store B1 Service Layer SessionID into browser's LocalStorage var SLCall_CheckStock = XSJSUrl + "?cmd=Get&actionUri=Items('" + ItemID + "')/ItemWarehouseInfoCollection&sessionID=" + SLSessionID; // Service Layer Call URL to Check on Stock Level $.ajax({ type: "POST", url: SLCall_CheckStock, success: function (result) { var inStock = result.body.ItemWarehouseInfoCollection[0].InStock; var comStock = result.body.ItemWarehouseInfoCollection[0].Committed; var StockAvail = inStock - comStock; var resultDialog = new sap.m.Dialog({ title: 'Stock Availability', state: 'Success', content: new sap.m.Text({ text: 'Stock Item: ' + ItemID + "\nAvailability Level: " + StockAvail }), beginButton: new sap.m.Button({ text: 'Close', press: function () { resultDialog.close(); }.bind(this) }) }); resultDialog.open(); }, error: function (request, textStatus, errorThrown) { alert("SAPB1 Service Layer Call has failed: " + textStatus + " / " + errorThrown ); } }); } }, error: function (request, textStatus, errorThrown) { alert("SAPB1 Service Layer Call has failed: " + textStatus + " / " + errorThrown ); } }); }
21. Add Button Function into Controller file.
StockQuery > WebContent > view > Detail.controller.js
Open Detail.controller.js
Add New Function for Create Sales Order Button, onOrderPress method.
onOrder_Order = XSJSUrl + "?cmd=Add&actionUri=Orders&sessionID=" + localStorage.getItem("b1session"); // Service Layer Call URL to Login /** * [ORDER JSON Array] * Prepare Order object to be created * - to be further developed - * */ var orderInfo = {}; orderInfo.CardCode = "C99998"; orderInfo.DocDueDate = "2017-09-09"; orderInfo.Comments = "Added from StockQuery App."; orderInfo.DocumentLines = []; var orderItem = {}; orderItem.ItemCode = ItemID; orderItem.Quantity = "1"; orderInfo.DocumentLines.push(orderItem); $.ajax({ type: "POST", url: SLCall_Order, data: JSON.stringify(orderInfo), dataType: "json", success: function (result) { console.log(result); var resultDialog = new sap.m.Dialog({ title: 'Success Order', state: 'Success', content: new sap.m.Text({ text: 'Order placed for: ' + ItemID + "\nOrder Reference Number: " + result.body.DocNum }), beginButton: new sap.m.Button({ text: 'Close', press: function () { resultDialog.close(); }.bind(this) }) }); resultDialog.open(); }, error: function (request, textStatus, errorThrown) { alert("SAPB1 Service Layer Call has failed: " + textStatus + " / " + errorThrown ); } }); }
22. Activate your Project & Test with Public IP Address on Chrome Browser.
Navigate to your SAP HANA Repository Namespace of your Project.
e.g. http://<public_hana_ip>:8000/trial/stock/StockQuery/WebContent/index.html
[IMPORTANT] Please use your Public HANA IP Address throughout, otherwise you’ll face with cross-origin issues.
And you’re done with your Back-end & Front-end!
Next, you’ll deploy the App into your device.
Deployment
Before opening the Xcode / Android Project file for deployment, you’d need to update the old contents.
There will be 3 locations to replace the contents of the Updated Project:
- SAPHybrid > StockQuery > webapp
- SAPHybrid > StockQuery > hybrid > www
- SAPHybrid > StockQuery > hybrid > platforms > ios > www
1. Right Click Project & Open in System Explorer.
StockQuery > Show In > System Explorer.
2. Copy Contents from StockQuery > WebContent.
3. Paste Contents into Hybrid App Folder.
SAPHybrid > StockQuery > webapp
4. Paste Contents into Hybrid App Folder.
SAPHybrid > StockQuery > hybrid > www
5. Paste Contents into Hybrid App Folder.
SAPHybrid > StockQuery > hybrid > platforms > ios > www
6. Now, you can open the Xcode Project File.
Open Stock Availability Check.xcodeproj.
7. In Xcode, make the following changes:
Bundle Identifier: Give it a suitable identifier
Tick “Automatically manage signing”
Select a Team.
8. Open info.plist file for further configuration.
Stock Availability Check > Resources > Stock Availability Check-info.plist
Make sure you have the following configurations:
- UIRequiresFullScreen – NO
- Status bar is initially hidden – YES
- View controller-based status bar.. – NO
9. Select your Device > Run.
Tools & Components Compatibility Versions
Tools compatibility version dated as of 23rd September 2017.
SAP HAT
Version: 1.28.2
SAP Mobile Platform (SMP) SDK
Version: 3.15.0
SAP Web IDE
Version: 170330
SAP SAPUI5
Version 1.46.12
SAP HANA 1.0 SPS 12
1.00.122.05.1481577062
SAP Business One HANA
9.2 PL07
SAP Business One HANA Service Layer
9.2 PL07
Xcode version
8.3
Next Steps
How to build Fiori apps for SAP Business One on HANA by Mostafa SHARAF
In this blog, you’ll learn how to build a Sales Order Fiori App with SAP Business One Service Layer.
SAP Fiori for iOS – Build Your First Native Mobile App
In this course, you’ll learn how to build a native iOS Fiori App. From there, you can leverage on the native capabilities of your device that drives a better user experience.
Utilize SAP HANA Express Edition (HXE) to host your Solution on a HANA server, up to 32GB for FREE! From there you can deploy your XS artifacts and let it run in your server in parallel to SAP Business One HANA. With that, you can achieve a Loosely-Coupled Solution approach – Solution Utilizing HXE memory while your SAP Business One is Utilizing your own HANA Server Memory.
To successfully understand and build Enterprise-ready solution with SAPUI5, you’ll find this Toolkit with great examples and API Reference.
Now THAT is a detailed tutorial. Congrats, Jacob!
I liked the way you put the pieces together. As a suggestions I would get rid of the HANA Studio part.
We can create the SAPUI5 project in the SAP WEB IDE (on SAP CP) and import it in the HANA Onpremise system using the Web Based Development Workbench. 🙂
Thank you Ralph! for your encouragements & suggestions.
Ok, point taken!
Thought-process was to exploit the quick capability of copy & paste file-based straight from/into IDE.
Hello,
I try to do it follow your document,but when I try to open the “” ,There’s “404 – not found ” for me.
this is my image.May I get some help from you?
Hi 寇 超 ,
Your stock.xsodata is not activated in the server.
(Notice that “grey diamond” on that file)
Right click stock.xsodata > Team > Activate.
Then you’ll be able to see it.
Hope it helps.
Regards,
Jacob Tan
do we need to SMP to use and deploy HAT?
Hello Bilen Cekic,
Yes, you need to install SMP for that.
Part of the process of installing & setting up SAP HAT.
Hope it helps.
Thank you. | https://blogs.sap.com/2017/10/05/build-deploy-sap-fiori-app-on-mobile-devices-consuming-sap-business-one-hana-service-layer/ | CC-MAIN-2018-17 | en | refinedweb |
RectObj man page
RectObj — The RectObj widget class
Synopsis
#include <Xm/Xm.h>
Description
RectObj is never instantiated. Its sole purpose is as a supporting superclass for other widget classes.
Classes
RectObj inherits behavior and a resource from Object.
The class pointer is rectObjClass.
The class name is RectObj.ancestorSensitive
Specifies whether the immediate parent of the gadget receives input events. Use the function XtSetSensitive if you are changing the argument to preserve data integrity (see XmNsensitive). The default is the bitwise AND of the parent's XmNsensitive and XmNancestorSensitive resources.
- XmNborderWidth
Specifies the width of the border placed around the RectObj's rectangular display area.
- XmNheight
Specifies the inside height (excluding the border) of the RectObj's rectangular display area.
- XmNsensitive
Determines whether a RectObj receives input events. If a RectObj is sensitive, the parent dispatches to the gadget all keyboard, mouse button, motion, window enter/leave, and focus events. Insensitive gadgets do not receive these events. Use the function XtSetSensitive to change the sensitivity argument. Using XtSetSensitive ensures that if a parent widget has XmNsensitive set to False, the ancestor-sensitive flag of all its children is appropriately set.
- XmNwidth
Specifies the inside width (excluding the border) of the RectObj's rectangular display area.
- XmNx
Specifies the x-coordinate of the upper left outside corner of the RectObj's rectangular display area. The value is relative to the upper left inside corner of the parent window.
- XmNy
Specifies the y-coordinate of the upper left outside corner of the RectObj's rectangular display area. The value is relative to the upper left inside corner of the parent window.
Inherited Resources
RectObj inherits behavior and a resource from Object. For a description of this resource, refer to the Object reference page.
Translations
There are no translations for RectObj.
Related
Object(3).
Referenced By
Core(3), XmArrowButtonGadget(3), XmCascadeButtonGadget(3), XmGadget(3), XmLabelGadget(3), XmPushButtonGadget(3), XmSeparatorGadget(3), XmToggleButtonGadget(3). | https://www.mankier.com/3/RectObj | CC-MAIN-2018-17 | en | refinedweb |
Email.
Export outlook express to eml , outlook express recovery , export outlook express to thunderbird , dbx to outlook , migrate outlook express tool to perform dual process. DBX Converter tool That will work best for your device for installation minimum System Requirements Pentium II 400 MHz, 64 MB RAM, Minimum 10 MB Space. Try our demo version of DBX Converter to convert DBX to PST. Demo version will convert & save only 10 mails per folders (Inbox, Drafted, Sent Items, and Deleted Items) of Outlook Express. DBX Converter software is a powerful DBX File Conversion tool to convert DBX files & quickly. SysTools DBX Converter Tool Run on windows all platform with windows Upgrade version DBX Converter Tool support Outlook Express 5.0, 6.0 and MS Outlook 2000/2002/XP/2003/2007/2010 Our DBX converter software is user-friendly software, no any technical knowledge are requiring using this software but when facing any problem for importing DBX file to Outlook then contact our support department which available in 24X7 If you find the desired features then you can purchase our full licensed version at just $69 which is for the personal license and if you want to purchase its business license then you have to pay $199 only.
Import outlook express files to outlook 2010 , dbx converter , import outlook express data to outlook , dbx converter , import dbx to thunderbird
if you were using Mac/Entourage Mail and want ot switch emails from Mac/Entourage Mail to Windows Live Mail/Windows Mail/Outlook Express then please try MBOX to DBX Converter to successfully convert MBOX file emails of Mac/Entourage to EML/DBX. Software has simple GUI which makes its more effective and comprehensible tool to import MBOX into Outlook Express, Import MBOX into Windows Mail & Import ...MBOX into Windows Live Mail. The tool allows users to extract email-message from .mbox and .mbx file and perform MBOX-DBX and MBOX-EML conversion. To get more idea about SoftLay MBOX to DBX Converter please visit our website. We provide demo version of MBOX to DBX Converter free which will give an idea how software convert mbox-dbx-eml. After evaluate demo version you can buy our software online from our website.
Now you can convert a lot of DBX files with DBX Converter software to different file format (PST, MSG, RTF and EML) DBX Converter at SysTools Software to Import DBX Emails metadata to PST to convert Outlook Express (5.0, 6.0 and above) emails into the ANSI (MS Outlook 2000, 2002, XP) and UNICODE (MS Outlook 2003, 2007, 2010). DBX Converter also converts DBX files into the EML RTF, MSG, or Thunderbird ...format. The outcome of using SysTools DBX Converter is the bulk conversion of the Outlook Express mailbox items (inbox, outbox, drafts, etc) and its meta-data (to, cc, bcc, attachments, etc), effortlessly than ever. The DBX Converter is equally responsible for and is proficient at the conversion of default and orphan DBX files. To acquire a proper insight on the software, go ahead and try out the demo version of the DBX Converter tool, which is available on our website for free of any charges. The demo version will convert 10 items per mail folder. After trying out the demo version, if you are still keen on purchasing the DBX Converter software, you may apply for the personal license at $69 and the business license at $199. Having any problem regarding DBX Converter you can take help form SysTools Software Tech-support team any time of business day.
outlook express , import dbx emails to pst , outlook express recovery , import dbx files into outlook
If you want to convert all Outlook Express DBX files into Outlook. You can use SysTools DBX Converter software which is cost effective and easy to use. DBX converter changes all DBX data of Outlook Express into Outlook easily & quickly. You can convert DBX files to Outlook with the help of DBX to PST Converter software. A DBX file to Outlook conversion utility is an advance version of SysTools DBX ...converter software it works very smoothly. DBX converter is user friendly software even though technical knowledge is not required for using this utility. You can convert all Outlook Express DBX files (Either Inbox, Outbox, Attachments, Sent items etc.) into Outlook with in just few clicks. It is very beneficial tool for you because Outlook email client is an advance compare to Outlook Express. Outlook provides some extra features if you want to use those features you must export all Outlook Express email to Outlook with the help of DBX Converter software. Outlook Express Converter provides by SysTools. DBX to Outlook conversion software supports to run on every version of Windows 98/ME/2000/2003/XP/Vista/7. You can download trial version of SysTools DBX Converter which is free of cost. Demo version of DBX Converter software convert only first 15 items per folder from Outlook Express into Outlook you can purchase full License version of DBX to Outlook conversion software which is only $69 and you can convert your unlimited Outlook Express DBX data to Outlook.
Export outlook express email to outlook , outlook express recovery , import dbx to microsoft outlook , outlook express recovery , converter dbx to pst Ms-Outlook It converts DBX file(Local folder & all E-mail meta data)on all Windows Systems. If want to test DBX Converter Software feature download our Demo ...Version software form our site This Demo version Only Convert 10 file of Outlook Express to Outlook desired features are)
dbx converter , how to import emails in outlook , how to export outlook express to outlook , dbx converter
If you are look for popular (Outlook Express to Outlook) Conversion software which will easily Export Outlook Express Emails to Outlook or convert .DBX files to PST? SysTools software designed great software which easily and successfully converts whole unlimited Outlook Express Emails into Outlook (PST) or Thunderbird only in within few minutes, that is DBX Converter software. Our DBX file converter ...software allows all users to convert Outlook Express files into MS Outlook and RTF, MSG, EML file with high transfer results. Our DBX Converter Software is easy to use and not much technical knowledge is required for using this utility. Even then if you face any problem, chat with our support team who seat only for solving your conversion problem. DBX converter software is such a good conversion software, that this software easily and quickly convert OE emails into Microsoft Outlook or EML and If you want to test our DBX Converter software then download the free online Demo version which converts only 15 OE Emails into Outlook and if you want to convert unlimited Outlook Express Emails to Outlook, then Order full License version for Personal License $69 and Business License $199.
Outlook Express Email to Outlook to Import DBX Files into Outlook or import DBX files into EML Format. Apart from the Outlook Express into Outlook process, the conversion ability of this software extends to the conversion of DBX files into MSG, RTF and EML. As EML files are easily imported by the Thunderbird into its system this utility is also capable of converting DBX data into the Thunderbird. ...Our Outlook Express converter software easily converts multiple DBX emails into MS Outlook without any problem. DBX to PST tool also helps with the full conversion of orphan DBX emails into PST. Using DBX Converter user can convert default Outlook Express folders to PST & EML files or can choose any DBX files saved in any other location to export to the PST & EML files. SysTools DBX converter software converts your DBX Files into PST or EML. DBX Converter is 100% result-oriented DBX file reader tool to convert Outlook Express to MS Outlook. This utility is easy to use and no technical Knowledge is required for using this utility. Our DBX Converter Software easily and smoothly runs with all versions of Outlook Express (5.0, 6.0) and MS Outlook (2000/2002/XP/2003/2007/2010).
Are you in search of DBX Converter FREE? Catch-up Free DBX File Converter device to apply on few number of Outlook Express emails and for massive catalog this utility needs little pocket money ($69) and offers liberty to perform DBX to PST Conversion for long time. Need to break PST, it may need if your DBX files heavy so attain it PST SPLIT most valued feature which is raerely available in online ...Appliances. Convert DBX to PST Program is complete package which offers to users to Convert DBX Files to PST, EML, MSG because sometimes users require converting Outlook Express emails Client into Outlook format. Use this app in multiple cases, it also fix all bugs from files & interruption free export Email in other file format .MSG and .EML and stores by providing multiple naming conventions. Data preview also available in total 8 view. Free DBX File Converter is one of the most efficient utility file management software that helps all OE users to convert Outlook Express DBX files into numerous required formats according to user demand and will able to retain real database and properties as it is. Operate this s/w on any Win OS (old and new) here new app not needs Outlook configuration. After successful process, this app grant to open DBX frequently without losing any single information or changes into data. 24/4 support service for needy. To acquire demo version of DBX Converter Software go to our site verify all working process before purchase our software, because demonstration have limitation to convert initial 25 Email from Outlook Express to Outlook or other file format. Now get DBX Converter Software at very affordable price only 69 USD for personal license and 199 USD for Business license.
dbx converter free , convert dbx to pst program , free dbx file converter , outlook express to microsoft outlook , dbx to pst conversion , open dbx , dbx converter software , dbx to rtf converter
If you have problem with your Outlook Express data and convert ...into EML, MSG, MBOX, HTML & RTF etc. All consumers can extract DBX to HTML format with all data. DBX to PST converter is the relevant utility that provides easy way to convert DBX to PST file with each and every data of DBX file. Features of DBX Converter Tool * Supported entire versions of MS Outlook and Outlook Express with all Windows OS * It can save DBX file data to PST, EML, MSG, HTML, MBOX and RTF * Easy for converting DBX to MS Outlook file perfectly * Program gives you chance to convert OE emails to PST with every email Meta data * Here we are giving you full support online free
convert dbx to pst , dbx converter tool , dbx to html , dbx to eml , dbx to msg , dbx to mbox , dbx to rtf).
import dbx to pst , pst files , import dbx files into outlook , dbx recovery , importing dbx into outlook
Are you using fully featured DBX converter software and your DBX converter software works very slowly? If you want to convert Outlook Express to Outlook quickly & fast then try our SysTools DBX converter software which import Outlook Express DBX file to Outlook very fast and convert unlimited DBX data into Outlook PST or Thunderbird only few minutes. Our DBX to PST converter tool provides facility ...for you like convert DBX to PST, DBX to EML, DBX to MSG, and DBX to RTF. Outlook Express Export to Outlook program very helpful for all conversion of Outlook Express emails like Inbox, Outbox, Drafts, Personal folders etc. This utility supports all versions of Outlook Express (5.0, 6.0 & above) and MS Outlook 2000/2002/XP/2003/2007/2010. Our DBX converter software runs on all windows version like 98/ME/2000/2003/XP/Vista/7..
Export dbx to eml , outlook express , outlook express to outlook , outlook express recovery , outlook express export to outlook
Get done convert Express to Outlook by our fresh creation and download the splendid DBX Converter product from online which is really proficient to convert DBX to Thunderbird (EML), Outlook (PST), and MSG file format without any alteration. Each one has distinct application and no specification to use beside Win OS in the system prior initiating conversion. There is also naming conventions given to ...save MSG and EML file. DBX converter allows users for successful convert Outlook Express DBX files into another format like (EML, PST, WAB, and MSG) without any inaccuracy or restriction, DBX to Outlook converter enable users to respond effective conversion of Outlook Express to MS Outlook 2000/2007/2013/2010 versions. Free TRIAL version gives preview of database, able to split PST files according to size & supports 45 GB data hassle free. Analyze Free utility “how DBX converter works”, save and convert first 25 items of Outlook express folder (sent, draft, inbox, delete items). It helps users to understand the features & functionality of software. Download FULL edition of DBX to EML Converter you can convert whole DBX file into desired format according to requirements of clients. Users can convert all the DBX folders successfully at reasonable price with full version of the Outlook Express files to Outlook tool and after full satisfaction purchase the full licensed version at just $69(personal license), if users want to purchase for business purpose then they have to pay $199 only. If you faced any problem while installing or using our DBX converter tool, then we are there to provide guidance and technical support. Our Tech Support team is available to help at any time of business day for more info visit to our site.
dbx converter , dbx to Outlook converter , Outlook express files to Outlook , dbx to eml converter , convert dbx to thunderbird , convert express to Outlook
If you need an email conversion software to convert Outlook Express email into MS Outlook format? You definitely. DBX Converter software converts only 10 emails per folder to PST & EML files. To transfer all your Outlook Express files & its meta contents buy our Full Version of the software at only $69. DBX Converter Software converts DBX files of Outlook Express 5.0 and above. Software successfully runs on Windows 95/98/ME/2000/XP/2003/Vista & Windows 7 Operating System.
Convert DBX Files of Outlook Express to PST & EML Files with DBX Converter Software. Software successfully converts Outlook Express DBX files to PST files of MS Outlook to read DBX files in MS Outlook..
Convert outlook express , oe to outlook , outlook express , transfer outlook express to outlook component with email properties (from, to, cc, bcc, ...subject, sent & received date/time, message fields etc) to MS Outlook. Convert Outlook Express to Outlook software powered with advance features such as:
oe to outlook , outlook express backup , transfer outlook express to outlook
Now all users can use the superlative Solution Enstella DBX Converter Software to repair DBX file and recover all the data of Outlook Express file into outlook file. Software has simple to understand GUI process by which all users can use for Export DBX file to PST Outlook file. By taking help of outlook express converter Software you can extract all items of your DBX file in order to convert DBX ...file into PST Outlook file with emails and attachments. Software gives the facility to auto search DBX file location in order to scans all its data for smart conversion into PST Outlook file. Software will provide the complete preview of DBX conversion to PST Outlook file. You can also filter all the emails of your outlook Express file according to dates “From date” to “To date” to Export outlook Express messages to PST Outlook file.DBX Converter Software gives the facility to create single OR Separate PST File for every DBX File. Using outlook express email converter Software you can rename the emails by selecting multiple naming convention such as- subject, subject +date, subject +date+ from, from +date +subjects etc.DBX Converter Software not only convert DBX file to PST but also in some different formats such as-EML, MSG, TXT, MBOX and HTML. You can use the demo version of DBX Converter Software that will facility to restore 10 emails per folder of DBX file into each format at free of cost but If you want to restore complete data of DBX file in any format then you have to download the full version of the DBX file that available at affordable price.
dbx converter , outlook express converter , export dbx file , convert dbx , export outlook express messages , outlook express email converter
SysTools software company launches great features of its latest version in DBX converter software which will convert DBX data speedily in Outlook or many other formats like EML, MSG RTF, and Thunderbird at very short time. This utility is fabulous because fully conversion possible within few steps like Open DBX Converter software - Browse DBX files - Scan DBX files - Choose Format convert like (PST, ...EML, MSG, RTF and Thunderbird) - Choose Save Location ? Start Converting and result is all DBX files to PST and other format. SysTools DBX Converter software is compatible with all Windows version like Win 98 to Win7 as well as works with all Outlook Express application. export DBX file to Outlook software provides free of cost demo version this demo version show you conversion process but this demo version converts limited items like 15 items per folder and you want to convert unlimited Outlook Express DBX files to Outlook Then order full license version which available only $69 for personal license and $199 business license.
Export outlook express emails to outlook , dbx converter , export outlook express to outlook 2010 , dbx to outlook , export outlook express to thunderbird
Getting positive feedback from SysTools DBX Converter User now we can say on the behalf of user response that our importing technology is robust, latest and secure for importing DBX file. So now SysTools labs are launching his new version of DBX Converter 3.2. This latest version of DBX Converter import all Outlook Express DBX file (Local Folder) to Outlook PST file (Personal Folder) with all email ...metadata (ANSI or UNICODE format). An exclusive feature of this newly launched version of DBX Converter is that user can convert there all Outlook Express Email Metadata DBX file format to MS-Outlook email PST, RTF, MSG, EML or Thunderbird file format regarding their need. DBX Converter support Outlook Express 5.0, 6.0 all latest version and MS-Outlook 2000,2002,2003,2007,2010 windows all version and it take less space on System Hard Disk easy installation user friendly software. User can test our latest version of DBX Converter 3.2 while downloading our demo version This Trail version has all feature of the license version. having complete satisfaction from our Demo version user can purchase our product while just paying $69 for Personal use and $199 for business use have any problem user can take help of our Tech Support at any of working day.
dbx to outlook , outlook express converter to pst , import dbx to pst , export outlook express to thunderbird , outlook express import outlook
SysTools Newest version of DBX Converter 3.2 software converts all Outlooks (DBX file) to Outlook (PST file) format. This DBX to PST converter Software can perform numerous conversion tasks like .DBX to .RTF, .DBX to .MSG, .DBX to .EML or .DBX to Thunderbird converting large files. The DBX Converter has an attractive, easy-to-use graphic user interface and converts files very fast. This software offer ...you option to choose the importing format .PST, .RTF, .MSG, .EML or Thunderbird and reliability besides having the ability fast conversion from Outlook express DBX file to MS Outlook PST file format as well other file format also. And the advantage of using this robust conversion product are visible form the start (like import speed taking less space on hard disk (3.78 MB) on your system and support Windows all version) DBX Converter Software Support all version of Outlook Express well as all version of MS- Outlook User Can test Our DBX Converter Software while downloading Demo Version from our site. This Trial Version has all feature of license Version but our Demo version can import only 10 DBX File from Outlook Express to Outlook if you want to convert unlimited DBX files into Outlook format then purchase full licensed version which available only for personal license $69 and business license $199. User having any problem regarding DBX Converter software user take help of our Tech Support any time of business day.
Filter: All / Freeware only / Title
OS: Mac / Mobile / Linux
Sort by: Download / Rating / Update
systools dbx converter serial
dbx Converter
dbx Converter mac
online pst to dbx converter
dbx Converter Chinese
dbx Converter Full Torrent
sys Tools dbx Converter
dbx Converter Torrent
dbx Converter Code
dbx Converter Mbox mac
Systools dbx Converter
Systools dbx Converter Rapidshare
SysTools dbx Converter rar | http://freedownloadsapps.com/s/dbx-converter/ | CC-MAIN-2018-17 | en | refinedweb |
I want to control running process/program by script in Python.
I have a program linphonec (You can install: apt-get install linphonec).
My task is:
1. Run linphonec (I'm using subprocess at the moment)
2. When linphonec is running it has a many command to control this and I want to e.g use "proxy list" <- this is command in linphonec.
Simple flow:
test@ubuntu$ > linphonec
linphonec > proxy list
There are actually 2 ways to communicate:
Run your program with
myprogram.py | linphonec to pass everything you
Use subprocess.Popen with subprocess.PIPE in constructor via keywrod-args for stdin (propably stdout and stderr, too) and then communicate for a single command or use stdin and stdout (stderr) as files
import subprocess p=subprocess.Popen("linphonec", stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True) #this is for text communication p.stdin.write("proxy list\n") result_first_line=p.stdout.readline() | https://codedump.io/share/vko9xc0IsIuX/1/how-to-type-command-to-running-process-in-python | CC-MAIN-2018-17 | en | refinedweb |
Oskari bundle
A bundle is a component in an Oskari application. A bundle is a selection of Oskari classes which form a component that offers additional functionality for an application. A bundle can offer multiple implementations for a functionality which can then be divided into smaller packages for different application setups. Packages can be used to offer a multiple views for the same functionality for example search functionality as a small on-map textfield or a window-like UI (see Tile/Flyout) for the same functionality. For a short introduction see create your own bundle.
Directory structure
See here for info about structure and conventions.
Implementation
Bundles implementation files should be located under the
/bundles folder. If the bundle has a BundleInstance (ie. something that is started/instantiated when the bundle is played) it is usually defined in a file called
instance.js, but this is not enforced and any file referenced in bundle definition (
bundle.js) can be used. The bundle doesn't need to have an instance and can be just used to import dependency files that can be instantiated elsewhere. Usually you want to implement a BundleInstance since you can think of it as a starting point for your functionality which is triggered by just adding your bundle in an applications startup sequence.
A Bundle instance is an Oskari class which implements
Oskari.bundle.BundleInstance protocol. A Bundle instance is created as a result from a Bundle definitions (see above) create method. Bundle instance state and lifecycle is managed by Bundle Manager.
Bundle lifecycle methods
start- called by application to start any functionality that this bundle instance might have
update- called by Bundle Manager when Bundle Manager state is changed (to inform any changes in current 'bundlage')
stop- called by application to stop any functionality this bundle instance has Bundle instance is injected with a mediator object on startup with provides the bundle with its bundleid:
instance.mediator = { bundleId : bundleId }
Definition
The bundle definition (or package) should be located in
bundle.js file under the
/packages folder. The bundle package definition should not implement any actual functionality. It should only declare the JavaScript, CSS and localization resources (== files) and metadata if any. If the bundle package can be instantiated the package's
create method should create the bundle's instance. The bundle doesn't need to have an instance and can be used to import dependency files that can be instantiated elsewhere. In that case the create method should return the bundle class itself (
return this;).
Bundle should install itself to Oskari framework by calling
installBundleClass at the end of
bundle.js
Oskari.bundle_manager.installBundleClass("<bundle-identifier>", "Oskari.<mynamespace>.<bundle-identifier>.MyBundle");
Adding new bundle to view
In order to get bundle up and running in your map application, the bundle needs to be added to the database. There are two tables where it shoud be added:
- portti_bundle
- portti_view_bundle_seq
portti_bundle includes definitions of all available bundles. Definition of the new bundle should be added here to be able to use it in a view. It is recommended to use
flyway-scripts when making changes to database. Documentation can be found here and here.
Below is an example of an flyway-script (which is actually
SQL) adding new bundle to
portti_bundle table (Replace
--insert to portti_bundle table -- Add login bundle to portti_bundle table INSERT INTO portti_bundle ( name, startup ) VALUES ( 'login', '{ "bundlename":"login", "metadata": { "Import-Bundle": { "<bundle-identifier>": { "bundlePath":"/Oskari/packages/bundle/" } } } }' );
When bundle is added to
portti_bundle table, it can be added to
portti_view_bundle_seq table to be used in a view. Below is an example of an flyway-script adding new bundle to
portti_view_bundle_seq table.
-- Add login bundle to default view INSERT INTO portti_view_bundle_seq ( view_id, bundle_id, seqno, config, state, startup, bundleinstance ) VALUES ( (SELECT id FROM portti_view WHERE application='servlet' AND type='DEFAULT'), (SELECT id FROM portti_bundle WHERE name='login'), (SELECT max(seqno)+1 FROM portti_view_bundle_seq WHERE view_id=(SELECT id FROM portti_view WHERE application='servlet' AND type='DEFAULT')), (SELECT config FROM portti_bundle WHERE name='login'), (SELECT state FROM portti_bundle WHERE name='login'), (SELECT startup FROM portti_bundle WHERE name='login'), 'login' );
After these steps, and when bundle is defined correctly in front-end code, the bundle should be loaded when starting your map application. Great way to check if the bundle is loaded at start is to look at startupSequence in GetAppSetup in developer console.
Resources
Any additional CSS definitions or images the bundle needs are located under the bundle implementation
resources folder. Any image links should be relative paths.
Last modified: Tue Mar 07 2017 13:56:13 GMT+0200 (EET) | http://oskari.org/documentation/core-concepts/oskari-bundle | CC-MAIN-2018-17 | en | refinedweb |
No connection to Wipy 2 via FTP (Filezilla) either Telnet (Putty)
Yesterday my wipy 2 with expansion board finally arrived. I'm successfully updated the firmware to the actual version (os.uname() = "(sysname='WiPy', nodename='WiPy', release='1.4.0.b1', version='v1.8.6-398-g4a4a81e on 2017-01-20', machine='WiPy with ESP32')") and the installation of pymakr runs successfully to. With pymakr I can connect to the board and run code over the pycom console as you can see above (os.uname()...).
But if i try to connect using ftp (filezilla) or telnet (putty) the connection always will be refused.
Messages from filezilla:
Status: Verbinde mit 192.168.4.1:21...
Fehler: Zeitüberschreitung der Verbindung nach 20 Sekunden Inaktivität
Fehler: Herstellen der Verbindung zum Server fehlgeschlagen
And yes, my settings in the programs are all correct like discribed in the manual and also here in the forum. I'm completly helpless...
I've tryed out the tool "ampy" from adafruit like discribed in the adafruit mycropython tutorials by Tony D. With ampy I can read out files from the local filesytem on my wipy but writing files to it doesn't work.
I've also read that it should be possible with pymakr to write files on the board but I can't find no access to the local filesystem of my board within pymakr.
By the way, im using Windows 7 Professional 64bit, the newest version of Filezilla (3.24.0) and my wipy 2 board is connected with my computer via usb on port COM11, the drivers are up to date.
Thanks in advance for the help.
Best regards,
Corum
- DisasterPants
I also had a connection problem with Wipy 2.0 via FTP (FileZilla), which brought me to this page and none of these replies helped me. The German looking error you have seems to be similar to mine(prompt was "Connection timed out after 20 seconds of inactivity") and this is how I managed to solve the problem:
Pymakr: 1.0.0.b3
FileZille: 3.27.0.1
Windows: 10
Copy code into pymakr(save file for future use):
import network
import time
wlan = network.WLAN(mode=network.WLAN.STA)
wlan.connect(ssid='your_ssid', auth=
(network.WLAN.WPA2, 'your_password'))
while not wlan.isconnected():
time.sleep_ms(50)
print(wlan.ifconfig())
In Pymakr terminal type:
print(wlan.ifconfig()) # The first sequence of numbers is the IP address of your WiPy 2.0 for FileZilla
If your connection is successful, you will see on the left hand side of FileZilla a file directory which will include the files "boot.py" and "main.py" or a folder called "flash", click the folder. Next copy and paste this test code into Pymakr and save it as "blink":
from machine import Pin
import time
led = Pin('P9', mode=Pin.OUT)
def blinktest():
while 1:
led.value(0)
time.sleep(1)
led.value(1)
time.sleep(1)
"while 1:" should be indented and so should the the following lines be indented underneath "while 1:". The comment box isn't properly indenting the code.
Now drag this file into the file directory that contains "boot.py" and "main.py". The status bar on top of FileZilla will tell you if the transfer is successful. The last thing to do is type this into the Pymakr terminal which will blink the LED on the expansion board.
import blink
blink.blinktest()
I found that with my short amount of time with Pymakr is that it is very picky, where I would have to unplug the Wipy 2.0 and restart Pymakr and do everything in the right sequence in order for this to work. This includes uploading the wifi_config file again, otherwise the "timed out" error in FileZilla will come back to haunt you. I hope this helps others with getting pass this issue.
Finally I've modified my boot.py and the wipy connects now to my local router with an static ip.
I can now upload files with Filezilla and Telnet works also fine with Putty!
Thanks to @JMarcelino for hints and tipps and the kind support for a noob like me!
Thank you so far.
But in the manuals there isn't anywere a hint, that ftp and telnet will doesn't work if I only use an usb connection to my wipy!
There should be a hint that the wipy must be connected to the local Network to use FTP and Telnet and that the usb connection doesn't support these services.
Best regards
Corum
- jmarcelino
@Corum
The COM port connection is a simple serial terminal it doesn't know about IP protocol.
The only way to connect to 192.168.4.1 is by having your PC joining the WiPy's wlan as you did on the iPad.
Once you're in you can setup the WiPy as a WiFi station on your own wlan, read the link I posted below on how to do it. Then it'll get an IP assigned from your network.
I think Chrome on iOS doesn't understand ftp links, it works on Safari though.
Browsing doesn't work. The browser (Chrome for iOS) throws an error "ERR_CONNECTION_TIMED_OUT".
To test I install an ftp client on my ipad (FTP Client lite), enter the data for the wipy and... the connection works! I can browse the /flash directory.
It seems on my PC it can't dissolve the host address 192.168.4.1 to my com port (COM11) an otherwise...
Yes the 192.168.4.1 address is only available if you connect to the WiPy's own wlan network directly.
If you want the WiPy to join your home network (and get a 192.168.100.xxx addresss as you say) you need to configure the WiPy first. See
Also the WiPy is not running a web server so you can't connect to it directly via a browser.
You'll really need an FTP or telnet client on your iPad. But you can try browsing to as well (not very useful though)
Thanks you for the fast reply.
I've tryed to connect to the wipy wlan network with my ipad and it worked. I received from the dhcp on the wipy ip-address 192.168.4.2 for the ipad, but via browser 192.168.4.1 is not available.
My wlan router for my local network runs on a diffrent ip-range (192.168.100...).
@Corum said in No connection to Wipy 2 via FTP (Filezilla) either Telnet (Putty):
192.168.4.1
Just checking but have you connected your PC to the WiPy WiFi network? | https://forum.pycom.io/topic/560/no-connection-to-wipy-2-via-ftp-filezilla-either-telnet-putty | CC-MAIN-2018-17 | en | refinedweb |
table of contents
other versions
- jessie 3.74-1
- jessie-backports 4.10-2~bpo8+1
- stretch 4.10-2
- testing 4.15-1
- stretch-backports 4.15-1~bpo9+1
- unstable 4.15-1
NAME¶clearenv - clear the environment
SYNOPSIS¶
#include <stdlib.h>int clearenv(void);int clearenv(void);
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
DESCRIPTION¶The clearenv() function clears the environment of all name-value pairs and sets the value of the external variable environ to NULL.
RETURN VALUE¶The clearenv() function returns zero on success, and a nonzero value on failure.
VERSIONS¶Available since glibc 2.0.
CONFORMING TO¶Various¶Used. | https://manpages.debian.org/jessie/manpages-dev/clearenv.3.en.html | CC-MAIN-2018-17 | en | refinedweb |
Programmatic Expressive!
As an experiment, I took my own website's source code, and made a couple of tweaks:
- I imported the middleware pipeline from my
config/autoload/middleware-pipeline.global.phpfile into programmatic declarations inside my
public/index.php.
- I imported the routed middleware definitions from my
config/autoload/routes.global.phpfile into programmatic declarations inside my
public/index.php.
The bits and pieces to remember:
- Refer to your middleware using fully-qualified class names, just as you would in your configuration. This allows Expressive to pull them from the container, which you are still configuring!
- Order of operations is important when defining the pipeline and defining routes. The pipeline and routes can be defined separately, however, and I recommend doing so; that way you can look at the overall application pipeline separately from the routing definitions..
Here's what I ended up with.
First, my middleware pipeline configuration becomes only a list of dependencies, to ensure services are wired correctly:
// config/autoload/middleware-pipeline.php use Mwop\Auth\Middleware as AuthMiddleware; use Mwop\Auth\MiddlewareFactory as AuthMiddlewareFactory; use Mwop\Factory\Unauthorized as UnauthorizedFactory; use Mwop\Redirects; use Mwop\Unauthorized; use Mwop\XClacksOverhead; return [ 'dependencies' => [ 'invokables' => [ Redirects::class => Redirects::class, XClacksOverhead::class => XClacksOverhead::class, ], 'factories' => [ AuthMiddleware::class => AuthMiddlewareFactory::class, Helper\UrlHelperMiddleware::class => Helper\UrlHelperMiddlewareFactory::class, Unauthorized::class => UnauthorizedFactory::class, ], ], ];
Similarly, the routing configuration is also now only service configuration:
// config/autoload/routes.global.php use Mwop\Blog; use Mwop\ComicsPage; use Mwop\Contact; use Mwop\Factory; use Mwop\HomePage; use Mwop\Job; use Mwop\ResumePage; use Zend\Expressive\Helper\BodyParams\BodyParamsMiddleware; use Zend\Expressive\Router\RouterInterface; use Zend\Expressive\Router\FastRouteRouter; return [ 'dependencies' => [ 'delegators' => [ Blog\DisplayPostMiddleware::class => [ Blog\CachingDelegatorFactory::class, ], ], 'invokables' => [ Blog\FeedMiddleware::class => Blog\FeedMiddleware::class, Blog\Console\SeedBlogDatabase::class => Blog\Console\SeedBlogDatabase::class, BodyParamsMiddleware::class => BodyParamsMiddleware::class, RouterInterface::class => FastRouteRouter::class, ], 'factories' => [ Blog\DisplayPostMiddleware::class => Blog\DisplayPostMiddlewareFactory::class, Blog\ListPostsMiddleware::class => Blog\ListPostsMiddlewareFactory::class, Contact\LandingPage::class => Contact\LandingPageFactory::class, Contact\Process::class => Contact\ProcessFactory::class, Contact\ThankYouPage::class => Contact\ThankYouPageFactory::class, ComicsPage::class => Factory\ComicsPage::class, HomePage::class => Factory\PageFactory::class, Job\GithubFeed::class => Job\GithubFeedFactory::class, ResumePage::class => Factory\PageFactory::class, 'Mwop\OfflinePage' => Factory\PageFactory::class, ], ], ];
Finally, let's look at the
public/index.php. As noted earlier, Expressive
defines a similar API to other microframeworks. This means that you can call
things like
$app->get(),
$app->post(), etc. with a route, the middleware to
execute, and, in the case of Expressive, the route name (which is used for URI
generation within the application). Here's what it looks like when done:
// public/index.php namespace Mwop; use Zend\Expressive\Application; use Zend\Expressive\Helper; // Delegate static file requests back to the PHP built-in webserver if (php_sapi_name() === 'cli-server' && is_file(__DIR__ . parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH)) ) { return false; } chdir(dirname(__DIR__)); require_once 'vendor/autoload.php'; $container = require 'config/container.php'; $app = $container->get(Application::class); // Piped middleware $app->pipe(XClacksOverhead::class); $app->pipe(Redirects::class); $app->pipe('/auth', Auth\Middleware::class); $app->pipeRoutingMiddleware(); $app->pipe(Helper\UrlHelperMiddleware::class); $app->pipeDispatchMiddleware(); $app->pipe(Unauthorized::class); // Routed middleware // General pages $app->get('/', HomePage::class, 'home'); $app->get('/comics', ComicsPage::class, 'comics'); $app->get('/offline', OfflinePage::class, 'offline'); $app->get('/resume', ResumePage::class, 'resume'); // Blog $app->get('/blog[/]', Blog\ListPostsMiddleware::class, 'blog'); $app->get('/blog/{id:[^/]+}.html', Blog\DisplayPostMiddleware::class, 'blog.post'); $app->get('/blog/tag/{tag:php}.xml', Blog\FeedMiddleware::class, 'blog.feed.php'); $app->get('/blog/{tag:php}.xml', Blog\FeedMiddleware::class, 'blog.feed.php.also'); $app->get('/blog/tag/{tag:[^/]+}/{type:atom|rss}.xml', Blog\FeedMiddleware::class, 'blog.tag.feed'); $app->get('/blog/tag/{tag:[^/]+}', Blog\ListPostsMiddleware::class, 'blog.tag'); $app->get('/blog/{type:atom|rss}.xml', Blog\FeedMiddleware::class, 'blog.feed'); // Contact form $app->get('/contact[/]', Contact\LandingPage::class, 'contact'); $app->post('/contact/process', Contact\Process::class, 'contact.process'); $app->get('/contact/thank-you', Contact\ThankYouPage::class, 'contact.thank-your'); // Zend Server jobs $app->post('/jobs/clear-cache', Job\ClearCache::class, 'job.clear-cache'); $app->post('/jobs/comics', Job\Comics::class, 'job.comics'); $app->post('/jobs/github-feed', Job\GithubFeed::class, 'job.github-feed'); $app->run();
This approach provides a nice middleground between defining the middleware inline:
$app->get('/', function ($request, $response, $next) { // ... }, 'home');
and the straight configuration approach:
'routes' => [ [ 'path' => '/', 'middleware' => HomePage::class, 'allowed_methods' => ['GET'], 'name' => 'home', ], / * ... */
It loses, however, some flexibility: with the configuration-driven approach, we can easily define some routes or pipeline middleware that only execute in development, and ensure the order in which they occur — something not easy to do with the programmatic approach.
The main point in this exercise, however, is to demonstrate that Expressive allows you to choose your own approach, which is the guiding principle behind the project. | https://mwop.net/blog/2016-05-16-programmatic-expressive.html | CC-MAIN-2018-17 | en | refinedweb |
Turn on or turn off self-service site creation (SharePoint Server 2010)
Applies to: SharePoint Server 2010
Topic Last Modified: 2010-04-12
The self-service site creation feature in Microsoft SharePoint Server 2010 allows users who have the Use Self-Service Site Creation permission to create sites in defined URL namespaces. For more information about user and site permissions, see User permissions and permission levels (SharePoint Server 2010). To determine whether self-service site creation is a good practice for Web sites in your organization, see Plan sites and site collections (SharePoint Server 2010).
In this article:
To turn on or turn off self-service site creation by using Central Administration
To turn on or turn off self-service site creation by using the Stsadm command-line tool
Verify that the user account that is performing this task is a member of the Farm Administrators SharePoint group.
On the SharePoint Central Administration Web site, click Application Management.
On the Application Management page, click Manage Web Applications.
Click the Web application for which you want to turn on or turn off self-service site creation. The ribbon becomes active.
On the ribbon, click Self-Service Site Creation.
On the Self-Service Site Collection Management page, configure the following settings:
Specify whether self-service site creation is On (enabled) or Off (disabled) for the Web application. The default value is On.
To require users of self-service site creation to supply a secondary contact name on the sign-up page, select Require secondary contact.
Click OK to complete the operation.
Verify that the user account that you use to run the Stsadm command-line tool is a member of the Administrators group on the local computer and a member of the Farm Administrators group.
On the drive where SharePoint Server 2010 is installed, click Start, and then type command prompt into the text box. In the list of results, right-click Command Prompt, click Run as administrator, and then click OK.
At the command prompt, type the following command:
cd %CommonProgramFiles%\Microsoft Shared\Web server extensions\14\bin
To turn on self-service site creation, type the following command:
stsadm.exe -o enablessc -url <url> -requiresecondarycontact
Where <url> is the URL of the Web application.
This command turns on self-service site creation and requires a secondary contact.
To turn off self-service site creation, type the following command:
stsadm -o disablessc -url <url>
Where <url> is the URL of the Web application.
For more information, see Enablessc: Stsadm operation (Office SharePoint Server) and Disablessc: Stsadm operation (Office SharePoint Server). | https://technet.microsoft.com/en-us/library/cc261685.aspx | CC-MAIN-2018-17 | en | refinedweb |
In the previous chapter you were introduced to some basic object-oriented programming terms. This chapter will expand on these terms, and introduce you to some new ones, while concentrating on how they apply to the Objective-C language and the GNUstep base library. First let us look at some non OO additions that Objective-C makes to ANSI C.
Objective-C makes a few non OO additions to the syntax of the C programming language that include:
BOOL) capable of storing either of the values
YESor
NO.
BOOLis a scalar value and can be used like the familiar
intand
chardata types.
BOOLvalue of
NOis zero, while
YESis non-zero.
//) to mark text up to the end of the line as a comment.
#importpreprocessor directive was added; it directs the compiler to include a file only if it has not previously been included for the current compilation. This directive should only be used for Objective-C headers and not ordinary C headers, since the latter may actually rely on being included more than once in certain cases to support their functionality.
Object-oriented (OO) programming is based on the notion that a software system can be composed of objects that interact with each other in a manner that parallels the interaction of objects in the physical world.
This model makes it easier for the programmer to understand how software works since it makes programming more intuitive. The use of objects also makes it easier during program design: take a big problem and consider it in small pieces, the individual objects, and how they relate to each other.
Objects are like mini programs that can function on their own when requested by the program or even another object. An object can receive messages and then act on these messages to alter the state of itself (the size and position of a rectangle object in a drawing program for example).
In software an object consists of instance variables (data) that represent the state of the object, and methods (like C functions) that act on these variables in response to messages.
As a programmer creating an application or tool, all you need do is send messages to the appropriate objects rather than call functions that manipulate data as you would with a procedural program.
The syntax for sending a message to an object, as shown below, is one of the additions that Objective-C adds to ANSI C.
Note the use of the square [ ] brackets surrounding the name of the object and message.
Rather than 'calling' one of its methods, an object is said to 'perform' one of its methods in response to a message. The format that a message can take is discussed later in this section.
Objective-C defines a new type to identify an object:
id, a type that
points to an object's data (its instance variables). The following code
declares the variable '
button' as an object (as opposed to
'
button' being declared an integer, character or some other data type).
When the button object is eventually created the variable name '
button'
will point to the object's data, but before it is created the variable could
be assigned a special value to indicate to other code that the object does not
yet exist.
Objective-C defines a new keyword
nil for this assignment, where
nil is of type
id with an unassigned value. In the button
example, the assignment could look like this:
which assigns
nil in the declaration of the variable.
You can then test the value of an object to determine whether the object exists, perhaps before sending the object a message. If the test fails, then the object does not exist and your code can execute an alternative statement.
The header file
objc/objc.h defines
id,
nil, and other
basic types of the Objective-C language. It is automatically included in your
source code when you use the compiler directive
#include
<Foundation/Foundation.h> to include the GNUstep Base class definitions.
A message in Objective-C is the mechanism by which you pass instructions to objects. You may tell the object to do something for you, tell it to change its internal state, or ask it for information.
A message usually invokes a method, causing the receiving object to respond in some way. Objects and data are manipulated by sending messages to them. Like C-functions they have return types, but function specific to the object.
Objects respond to messages that make specific requests. Message expressions are enclosed in square brackets and include the receiver or object name and the message or method name along with any arguments.
To send a message to an object, use the syntax:
[receiver messagename];
where
receiver is the object.
The run-time system invokes object methods that are specified by messages. For example, to invoke the display method of the mySquare object the following message is used:
[mySquare display];
Messages may include arguments that are prefixed by colons, in which
case the colons are part of the message name, so the following message
is used to invoke the
setFrameOrigin:: method:
[button setFrameOrigin: 10.0 : 10.0];
Labels describing arguments precede colons:
[button setWidth: 20.0 height: 122.0];
invokes the method named
setWidth:height:
Messages that take a variable number of arguments are of the form:
[receiver makeList: list, argOne, argTwo, argThree];
A message to
nil does NOT crash the application (while in Java messages
to
null raise exceptions); the Objective-C application does nothing.
For example:
[nil display];
will do nothing.
If a message to
nil is supposed to return an object, it will return
nil. But if the method is supposed to return a primitive type such as
an
int, then the return value of that method when invoked on
nil, is undefined. The programmer therefore needs to avoid using the
return value in this instance.
Polymorphism refers to the fact that two different objects may respond differently to the same message. For example when client objects receive an alike message from a server object, they may respond differently. Using Dynamic Binding, the run-time system determines which code to execute according to the object type.
A class in Objective-C is a type of object, much like a structure definition in C except that in addition to variables, a class has code - method implementations - associated with it. When you create an instance of a class, also known as an object, memory for each of its variables is allocated, including a pointer to the class definition itself, which tells the Objective-C runtime where to find the method code, among other things. Whenever an object is sent a message, the runtime finds this code and executes it, using the variable values that are set for this object.
Most of the programmer's time is spent defining classes.
Inheritance helps reduce coding time by providing a convenient way of
reusing code.
For example, the
NSButton class defines data (or instance variables) and methods to create button objects of a certain type, so a subclass of
NSButton could be produced to create buttons of another type - which may perhaps have a different border colour. Equally
NSTextField can be used to define a subclass that perhaps draws a different border, by reusing definitions and data in the superclass.
Inheritance places all classes in a logical hierarchy or tree structure that
may have the
NSObject class at its root. (The root object may be
changed by the developer; in GNUstep it is
NSObject, but in "plain"
Objective-C it is a class called "
Object" supplied with the runtime.)
All classes may have subclasses, and all except the root class do have
superclasses. When a class object creates a new instance, the new object holds
the data for its class, superclass, and superclasses extending to the root
class (typically
NSObject). Additional data may be added to classes so
as to provide specific functions and application logic.
When a new object is created, it is allocated memory space and its data in the
form of its instance variables are initialised. Every object has at least one
instance variable (inherited from
NSObject) called
isa, which is
initialized to refer to the object's class. Through this reference, access is
also afforded to classes in the object's inheritance path.
In terms of source code, an Objective-C class definition has an:
Typically these entities are confined to separate files
with
.h and
.m extensions for Interface and Implementation files,
respectively. However they may be merged
into one file, and a single file may implement multiple classes.
Each new class inherits methods and instance variables from another class. This results in a class hierarchy with the root class at the core, and every class (except the root) has a superclass as its parent, and all classes may have numerous subclasses as their children. Each class therefore is a refinement of its superclass(es).
Objects may access methods defined for their class, superclass, superclass' superclass, extending to the root class. Classes may be defined with methods that overwrite their namesakes in ancestor classes. These new methods are then inherited by subclasses, but other methods in the new class can locate the overridden methods. Additionally redefined methods may include overridden methods.
Abstract classes or abstract superclasses such as
NSObject define
methods and instance variables used by multiple subclasses.
Their purpose is to reduce the development effort required to
create subclasses and application structures.
When we get technical, we make a distinction between a pure abstract
class whose methods are defined but instance variables are not,
and a semi-abstract class where instance variables are defined).
An abstract class is not expected to actually produce functional instances since crucial parts of the code are expected to be provided by subclasses. In practice, abstract classes may either stub out key methods with no-op implementations, or leave them unimplemented entirely. In the latter case, the compiler will produce a warning (but not an error).
Abstract classes reduce the development effort required to create subclasses and application structures.
A class cluster is an abstract base class, and a group of private, concrete subclasses. It is used to hide implementation details from the programmer (who is only allowed to use the interface provided by the abstract class), so that the actual design can be modified (probably optimised) at a later date, without breaking any code that uses the cluster.
Consider a scenario where it is necessary to create a class hierarchy to define objects holding different types including chars, ints, shorts, longs, floats and doubles. Of course, different types could be defined in the same class since it is possible to cast or change them from one to the next. Their allocated storage differs, however, so it would be inefficient to bundle them in the same class and to convert them in this way.
The solution to this problem is to use a class cluster: define an abstract superclass that specifies and declares components for subclasses, but does not declare instance variables. Rather this declaration is left to its subclasses, which share the programmatic interface that is declared by the abstract superclass.
When you create an object using a cluster interface, you are given an object of another class - from a concrete class in the cluster.
In GNUstep,
NSObject is a root class that provides a base
implementation for all objects, their interactions, and their integration in
the run-time system.
NSObject defines the
isa instance variable
that connects every object with its class.
In other Objective-C environments besides GNUstep,
NSObject will be
replaced by a different class. In many cases this will be a default class
provided with the Objective-C runtime. In the GNU runtime for example, the
base class is called
Object. Usually base classes define a similar set
of methods to what is described here for
NSObject, however there are
variations.
The most basic functions associated with the
NSObject class (and
inherited by all subclasses) are the following:
In addition,
NSObject supports the following functionality:
In fact, the
NSObject class is a bit more complicated than just
described. In reality, its method declarations are split into two components:
essential and ancillary. The essential methods are those that are needed by
any root class in the GNUstep/Objective-C environment. They are declared
in an "
NSObject protocol" which should be implemented by any other
root class you define (see Protocols). The ancillary
methods are those specific to the
NSObject class itself but need not be
implemented by any other root class. It is not important to know which
methods are of which type unless you actually intend to write an alternative
root class, something that is rarely done.
Recall that the
id type may be used to refer to any class of object.
While this provides for great runtime flexibility (so that, for example, a
generic
List class may contain objcts of any instance), it prevents the
compiler from checking whether objects implement the messages you send them.
To allow type checking to take place, Objective-C therefore also allows you to
use class names as variable types in code. In the following example, type
checking verifies that the
myString object is an appropriate type.
Note that objects are declared as pointers, unlike when
id is used.
This is because the pointer operator is implicit for
id. Also, when
the compiler performs type checking, a subclass is always permissible where
any ancestor class is expected, but not vice-versa.
Static typing is not always appropriate. For example, you may wish to store
objects of multiple types within a list or other container structure. In
these situations, you can still perform type-checking manually if you need to
send an untyped object a particular message. The
isMemberOfClass:
method defined in the
NSObject class verifies that the receiver is of a
specific class:
The test will return false if the object is a member of a subclass of the
specific class given - an exact match is required. If you are merely
interested in whether a given object descends from a particular class, the
isKindOfClass: method can be used instead:
There are other ways of determining whether an object responds to a particular method, as will be discussed in Advanced Messaging.
As you will see later, classes may define some or all of their instance
variables to be public if they wish. This means that any other object or
code block can access them using the standard "
->" structure access
operator from C. For this to work, the object must be statically typed (not
referred to by an
id variable).
In general, direct instance variable access from outside of a class is not recommended programming practice, aside from in exceptional cases where performance is at a premium. Instead, you should define special methods called accessors that provide the ability to retrieve or set instance variables if necessary:
While it is not shown here, accessors may perform arbitrary operations before returning or setting internal variable values, and there need not even be a direct correspondence between the two. Using accessor methods consistently allows this to take place when necessary for implementation reasons without external code being aware of it. This property of encapsulation makes large code bases easier to maintain.
Classes themselves are maintained internally as objects in their own right in Objective-C, however they do not possess the instance variables defined by the classes they represent, and they cannot be created or destroyed by user code. They do respond to class methods, as in the following:
Classes respond to the class methods their class defines, as well as those defined by their superclasses. However, it is not allowed to override an inherited class method.
You may obtain the class object corresponding to an instance object at runtime
by a method call; the class object is an instance of the "
Class"
class.
Classes may also define a version number (by overriding that defined in
NSObject):
int versionNumber = [NSString version];
This facility allows developers to access the benefits of versioning for classes if they so choose.
Class names are about the only names with global visibility in Objective-C.
If a class name is unknown at compilation but is available as a string at run
time, the GNUstep library
NSClassFromString function may be used to
return the class object:
The function returns
Nil if it is passed a string holding an invalid
class name. Class names, global variables and functions (but not methods)
exist in the same name space, so no two of these entities may share the same
name.
The following lists the full uniqueness constraints on names in Objective-C.
There are also a number of conventions used in practice. These help to make code more readable and also help avoid naming conflicts. Conventions are particularly important since Objective-C does not have any namespace partitioning facilities like Java or other languages.
Strings in GNUstep can be handled in one of two ways. The first way is the C
approach of using an array of
char. In this case you may use the
"
STR" type defined in Objective-C in place of
char[].
The second approach is to rely on the
NSString class and associated
subclasses in the GNUstep Base library, and compiler support for them. Using
this approach allows use of the methods in the
NSString API. In addition, the
NSString class provides the means to initialize strings using
printf-like formats.
The
NSString class defines objects holding raw Unicode character
streams or strings. Unicode is a 16-bit worldwide standard used to define
character sets for all spoken languages. In GNUstep parlance the Unicode
character is of type unichar.
A static instance is allocated at compile time. The creation of a static
instance of
NSString is achieved using the
@"..." construct
and a pointer:
Here,
w is a variable that refers to an
NSString object
representing the ASCII string "Brainstorm".
The class method
stringWithFormat: may also be used to create instances
of
NSString, and broadly echoes the
printf() function in the C
programming language.
stringWithFormat: accepts a list of arguments
whose processed result is placed in an
NSString that becomes a return
value as illustrated below:
The example will produce an
NSString called
gprsChannel
holding the string "The GPRS channel is 5".
stringWithFormat: recognises the
%@ conversion specification
that is used to specify an additional
NSString:
The example assigns the variable
two the string "Our trading name is
Brainstorm." The
%@ specification can be used to output an object's
description - as returned by the
NSObject
-description method),
which is useful when debugging, as in:
When a program needs to call a C library function it is useful to convert
between
NSStrings and standard ASCII C strings (not fixed at compile
time). To create an
NSString using the contents of the returned C
string (from the above example), use the
NSString class method
stringWithCString::
To convert an
NSString to a standard C ASCII string,
use the
cString method of the
NSString class:
NSStrings are immutable objects; meaning that once they are created,
they cannot be modified. This results in optimised
NSString code. To
modify a string, use the subclass of
NSString, called
NSMutableString. Use a
NSMutableString wherever a
NSString could be used.
An
NSMutableString responds to methods that modify the string directly -
which is not possible with a generic
NSString.
To create a
NSMutableStringuse
stringWithFormat::
While
NSString's implementation of
stringWithFormat: returns
a
NSString,
NSMutableString's implementation returns an
NSMutableString.
Note. Static strings created with the
@"..." construct are
always immutable.
NSMutableStrings are rarely used because to modify a string, you
normally create a new string derived from an existing one.
A useful method of the
NSMutableString class is
appendString:,
which takes an
NSString argument, and appends it to the receiver:
This code produces the same result as:
The the GNUstep Base library has numerous string manipulation features,
and among the most notable are those relating to writing/reading
strings to/from files. To write the contents of a string to a file,
use the
writeToFile:atomically: method:
writeToFile:atomically: returns YES for success, and NO for. This is a useful feature, which should be enabled.
To read the contents of a file into a string, use
stringWithContentsOfFile:, as shown in the following
example that reads
@"/home/Brainstorm/test":
This document was generated by Richard Frith-Macdonald on July, 26 2013 using texi2html 1.76. | https://www.gnu.org/software/gnustep/resources/documentation/Developer/Base/ProgrammingManual/manual_2.html | CC-MAIN-2015-35 | en | refinedweb |
A recursive method is a method that calls itself. An iterative method is a method that uses a loop to repeat an action. Anything that can be done iteratively can be done recursively, and vice versa. Iterative algorithms and methods are generally more efficient than recursive algorithms.
Rec.
Function call and return in Python )!
def fact ( n ): if ( n == 0 ): return 1 else: return n * fact ( n - 1 )
In mathematics there are recurrence relations that are defined recursively. A recurrence relation defines a term in a sequence as a function of one or more previous terms. One of the most famous of such recurrence sequences is the Fibonacci series. Other than the first two terms in this series, every term is defined as the sum of the previous two terms:
F(1) = 1 F(2) = 1 F(n) = F(n-1) + F(n-2) for n > 2 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ...Here is the Python code that generates this series:
def fib ( n ): if ((n == 1) or (n == 2)): return 1 else: return fib (n - 1) + fib (n - 2)Even though the series is defined recursively, the above code is extremely inefficient in determining the terms in a Fibonacci series (why?). An iterative solution works best in this case.
However, there are sorting algorithms that use recursion that are extremely efficient in what they do. One example of such a sorting algorithm is MergeSort. Let us say you have a list of numbers to sort. Then this algorithm can be stated as follows: Divide the list in half. Sort one half, sort the other half and then merge the two sorted halves. You keep dividing each half until you are down to one item. That item is sorted! You then merge that item with another single item and work backwards merging sorted sub-lists until you have the complete list. | http://www.cs.utexas.edu/~mitra/csSpring2013/cs313/lectures/recursion.html | CC-MAIN-2015-35 | en | refinedweb |
Re: BUG? OR NOT A BUG?
Discussion in 'ASP .Net' started by John, Sep Login control bug or SQL 2005 bug?RedEye, Dec 12, 2005, in forum: ASP .Net
- Replies:
- 2
- Views:
- 689
- Jason Kester
- Dec 13, 2005
Bug Parade Bug 4953793Michel Joly de Lotbiniere, Nov 30, 2003, in forum: Java
- Replies:
- 4
- Views:
- 741
- Michel
- Dec 2, 2003
XSLTC cannot compile a stylesheet. Bug or not bug?Andrea Desole, Aug 8, 2006, in forum: Java
- Replies:
- 2
- Views:
- 1,347
- Andrea Desole
- Aug 9, 2006
Syntax bug, in 1.8.5? return not (some expr) <-- syntax error vsreturn (not (some expr)) <-- fineGood Night Moon, Jul 22, 2007, in forum: Ruby
- Replies:
- 9
- Views:
- 412
- Rick DeNatale
- Jul 25, 2007
Bug or not a bug? array*=intKyle Schmitt, Oct 29, 2008, in forum: Ruby
- Replies:
- 6
- Views:
- 160
- Kyle Schmitt
- Oct 30, 2008 | http://www.thecodingforums.com/threads/re-bug-or-not-a-bug.110106/ | CC-MAIN-2015-35 | en | refinedweb |
Here's a little more info related to the authentication feature Dan is
championing.
The core logic of Application.dispatchRequest() is basically this part:
## ssPath = "server side path for the request's URL"
if ssPath is None:
self.handleBadURL(transaction)
elif isdir(ssPath) and noslash(request.pathInfo()): # (*) see below
self.handleDeficientDirectoryURL(transaction)
elif self.isSessionIdProblematic(request):
self.handleInvalidSession(transaction)
else:
self.handleGoodURL(transaction)
So we basically hit one of these four methods:
self.handleBadURL(transaction)
self.handleDeficientDirectoryURL(transaction)
self.handleInvalidSession(transaction)
self.handleGoodURL(transaction)
handleGoodURL() is literally:
def handleGoodURL(self, transaction):
self.createServletInTransaction(transaction)
self.awake(transaction)
self.respond(transaction)
self.sleep(transaction)
Perhaps it would be appropriate for the authentication plug-in to subclass
Application, override handleGoodURL() and do the right thing regarding
authentication, then call super.
Or, since authentication is a pretty common and core concept to serving, we
could explicitly provide a hook for it in handleGoodURL(), but the default
handler would be a no-op and WebKit would define a basic Authenticator
class to dictate the interface it expects.
Thoughts?
-Chuck
__________________________________________________
Do You Yahoo!?
Yahoo! Mail - Free email you can access from anywhere!
I've been meaning to write a "PlugInTemplate" which you copy and change to
fit your stuff. This sounds like a good situation to force me to do it. :-)
I also I have "write the plug-in author's guide" on my list, but you're not
likely to get that in the next 24 hrs. In lieu of that, you can use the
template and ask me all the questions you want.
In those cases where we want to do a plug-in, but the hooks aren't quite
there in WebKit, we will add those hooks. In fact, we already did this for
the upcoming Win32Kit. We'll keep the hooks fairly generic. For example,
the last hooks were the new initThread() and delThread() methods in
AppServer. You can't tell that Win32Kit prompted those.
It's interesting that on another list, someone mentioned that one of the
goals for the next AOLserver release was to make everything modular so
people could use the pieces they want, possibly outside of the server. I'm
surprised by how many open source projects just turn into monoliths which
later have to be painfully hacked apart.
I'm glad we avoided that mess. :-)
-Chuck
At 03:53 PM 8/24/00 -0700, Daniel Green wrote:
| http://sourceforge.net/p/webware/mailman/message/5307428/ | CC-MAIN-2015-35 | en | refinedweb |
2.1.28 Part 1 Section 15.2.5, Custom XML Data Storage Part
a. The standard does not specify that Office reserves namespaces for storing document properties.
Office reserves the following specific namespaces for storing document properties:
Furthermore, a Custom XML Data Storage part with root namespace of describes the properties of this document as defined by the xsd:schema children of Content Type Schemas (§3.5.2.1, Content Type Schemas Schema). The xsd:schema that bears the ma:root Root (§3.5.2.2.1, Shared Attributes) attribute describes the structure of the root element this Custom XML Data Storage part, with the other xsd:schema children collectively describing elements from other namespaces referenced.
Show: | https://msdn.microsoft.com/en-us/library/ff531468(v=office.12).aspx | CC-MAIN-2015-35 | en | refinedweb |
Why is this code only assigning the ascii value to the arrray. I want it to print out the letters ex:what Am i doing wrong.?what Am i doing wrong.?array element 0= a
array element 1 = b
Here are the two scripts i coded. I thought the second would work, but it didn't. I was so excited to wake up this morning cause i thought i had solved it but .....
Code:code 1 /* alpaarray.c intializes a 26 element array to the ltrs of the alphabet and * displays it */ #include<stdio.h> #define SIZE 26 int main(void) { int ndx; char alphabet[SIZE]; char ltr; /* outer loop to count to 26 */ for (ndx = 0; ndx < SIZE; ndx++) { for (ltr = 'a'; ltr <= ('a' + ndx); ltr ++) alphabet[ndx] = ltr;/* assigning only ascii value */ printf("Alphabet element number %3d is %3d\n", ndx, alphabet[ndx]); } getchar(); return (0); }Code:/* ltrarray2.c intialises a character array to ltr of the alphabet */ #include <stdio.h> #define SIZE 26 int main(void) { int row; char alpha[SIZE], ltr; /* set up rows */ for (row = 0, ltr = ('a' + row); row < SIZE; row++,ltr++) alpha[row] = ltr; /* print elements of row */ for (row = 0; row < SIZE; row++) printf("Element %3d of Alpha array is %3d\n", row, alpha[row]); getchar(); return (0); } | http://cboard.cprogramming.com/c-programming/49002-i%27m-officially-stuck.html | CC-MAIN-2015-35 | en | refinedweb |
SDK
The Software Development Kit (SDK) provides information intended for developers who are creating applications using any of the Microsoft® Visual Studio®-based BizTalk Server tools, the public application programming interfaces (APIs), or the samples and utilities provided in the BizTalk Server SDK. The following tips are provided to enhance your experience with the SDK documentation.
Using Help in a Developer Environment
Microsoft® BizTalk® Server 2004 Help contains features that you can use to display the developer documentation in your preferred language and to link between BizTalk Server 2004 Help and Visual Studio 2003 Help.
Using language filtering
The BizTalk BizTalk Server Help in Visual Studio
You can view BizTalk BizTalk Server 2004 Help. Select BizTalk Server Documentation from the Programs, Microsoft BizTalk Server 2004 menu.
If you are developing in Visual Studio .NET, either of the first two methods are recommended. Using either of these methods enables integration between BizTalk Server Help and Visual Studio .NET Help; this integration is extremely useful when navigating class relationships across the documentation sets. If you view BizTalk Server Help outside of Visual Studio .NET, read the following tip to properly render links to Visual Studio .NET.
Linking between BizTalk Server 2004 Help and Microsoft Visual Studio .NET Help
When a member is inherited from the .NET Framework Base Class Library, two links are provided, as shown in the following example:
If you are viewing BizTalk Server Help in Visual Studio .NET or in the Visual Studio .NET Documentation, the link in the left column goes to the exact member page. If you are viewing BizTalk Server 2004 Help outside of Visual Studio, use the link in the right column to go to the System namespace page in MSDN Library. Note that the link will not go to the specific member page.
This section contains:
- Programming Guide. Contains information on how to build applications using the Microsoft BizTalk Server developer tools.
- Programmer's Reference. Contains reference information for managed and unmanaged public APIs, as well as property reference information for maps, schemas, functoids, and the message context.
- Samples. Provides instructions for using the samples provided in the BizTalk Server 2004 SDK. The samples are installed in the SDK directory of the Microsoft BizTalk Server 2004 installation path.
- Utilities Provides instructions for using the utilities provided in the BizTalk Server 2004 SDK. The utilities are installed in the SDK directory of the Microsoft BizTalk Server 2004 installation path. | https://msdn.microsoft.com/en-us/library/ee277413(v=bts.10).aspx | CC-MAIN-2015-35 | en | refinedweb |
std::shared_ptr::shared_ptr
Constructs new
shared_ptr from a variety of pointer types that refer to an object to manage.
An optional deleter
d can be supplied that is later used to destroy the object when no
shared_ptr objects own it. By default, a delete-expression for type
Y is used as the deleter.
shared_ptrwith no managed object, i.e. empty
shared_ptr
shared_ptrwith
ptras the pointer to the managed object.
Ymust be a complete type and
ptrmust be convertible to
T*. Additionally:
das the deleter.
Deletermust be callable for the type
Y, i.e. d(ptr) must be well formed, have well-defined behavior and not throw any exceptions.
Deletermust be
CopyConstructible. The copy constructor and the destructor must not throw exceptions.
allocfor allocation of data for internal use.
Allocmust be a
Allocator. The copy constructor and destructor must not throw exceptions.
shared_ptrwith no managed object, i.e. empty
shared_ptr.
shared_ptrwhich shares ownership information with
r, but holds an unrelated and unmanaged pointer
ptr. Even if this
shared_ptris the last of the group to go out of scope, it will call the destructor for the object originally managed by
r. However, calling
get()on this will always return a copy of
ptr. It is the responsibility of the programmer to make sure that this
ptrremains valid as long as this shared_ptr exists, such as in the typical use cases where
ptris a member of the object managed by
ror is an alias (e.g., downcast) of
r.get()
shared_ptrwhich shares ownership of the object managed by
r. If
rmanages no object,
*thismanages no object too. This overload doesn't participate in overload resolution if
Y*is not implicitly convertible to
T*.
shared_ptrfrom
r. After the construction, *this contains a copy of the previous state of
r,
ris empty. This overload doesn't participate in overload resolution if
Y*is not implicitly convertible to
T*.
shared_ptrwhich shares ownership of the object managed by
r.
Y*must be convertible to
T*. Note that r.lock() may be used for the same purpose: the difference is that this constructor throws an exception if the argument is empty, while std::weak_ptr<T>::lock() constructs an empty
std::shared_ptrin that case.
shared_ptrthat stores and owns the object formerly owned by
r.
Y*must be convertible to
T*. After construction,
ris empty.
shared_ptrwhich manages the object currently managed by
r. The deleter associated to
ris stored for future deletion of the managed object.
rmanages no object after the call. If
Dis a reference type, equivalent to shared_ptr(r.release(), std::ref(r.get_deleter()). Otherwise, equivalent to shared_ptr(r.release(), r.get_deleter())
[edit] Notes
When constructing a
shared_ptr from a raw pointer to an object of a type derived from std::enable_shared_from_this, the constructors of
shared_ptr update the private
weak_ptr member of the std::enable_shared_from_this base so that future calls to shared_from_this() would share ownership with the
shared_ptr created by this raw pointer constructor.
Constructing a
shared_ptr using the raw pointer overload for an object that is already managed by a
shared_ptr leads to undefined behavior, even if the object is of a type derived from std::enable_shared_from_this (in other words, raw pointer overloads assume ownership of the pointed-to object).
[edit] Parameters
[edit] Exceptions
[edit] Example
#include <memory> #include <iostream> struct Foo { Foo() { std::cout << "Foo...\n"; } ~Foo() { std::cout << "~Foo...\n"; } }; struct D { void operator()(Foo* p) const { std::cout << "Call delete for Foo object...\n"; delete p; } }; int main() { { std::cout << "constructor with no managed object\n"; std::shared_ptr<Foo> sh1; } { std::cout << "constructor with object\n"; std::shared_ptr<Foo> sh2(new Foo); std::shared_ptr<Foo> sh3(sh2); std::cout << sh2.use_count() << '\n'; std::cout << sh3.use_count() << '\n'; } { std::cout << "constructor with object and deleter\n"; std::shared_ptr<Foo> sh4(new Foo, D()); } }
Output:
constructor with no managed object constructor with object Foo... 2 2 ~Foo... constructor with object and deleter Foo... Call delete for Foo object... ~Foo... | http://en.cppreference.com/w/cpp/memory/shared_ptr/shared_ptr | CC-MAIN-2015-35 | en | refinedweb |
Introduction to parallel MCMC for Bayesian inference, using C, MPI, the GSL and SPRNG
Introduction
This post is aimed at people who already know how to code up Markov Chain Monte Carlo (MCMC) algorithms in C, but are interested in how to parallelise their code to run on multi-core machines and HPC clusters. I discussed different languages for coding MCMC algorithms in a previous post. The advantage of C is that it is fast, standard and has excellent scientific library support. Ultimately, people pursuing this route will be interested in running their code on large clusters of fast servers, but for the purposes of development and testing, this really isn’t necessary. A single dual-core laptop, or similar, is absolutely fine. I develop and test on a dual-core laptop running Ubuntu linux, so that is what I will assume for the rest of this post.
There are several possible environments for parallel computing, but I will focus on the Message-Passing Interface (MPI). This is a well-established standard for parallel computing, there are many implementations, and it is by far the most commonly used high performance computing (HPC) framework today. Even if you are ultimately interested in writing code for novel architectures such as GPUs, learning the basics of parallel computation using MPI will be time well spent.
MPI
The whole point of MPI is that it is a standard, so code written for one implementation should run fine with any other. There are many implementations. On Linux platforms, the most popular are OpenMPI, LAM, and MPICH. There are various pros and cons associated with each implementation, and if installing on a powerful HPC cluster, serious consideration should be given to which will be the most beneficial. For basic development and testing, however, it really doesn’t matter which is used. I use OpenMPI on my Ubuntu laptop, which can be installed with a simple:
sudo apt-get install openmpi-bin libopenmpi-dev
That’s it! You’re ready to go… You can test your installation with a simple “Hello world” program such as:
#include <stdio.h> #include <mpi; }
You should be able to compile this with
mpicc -o helloworld helloworld.c
and run (on 2 processors) with
mpirun -np 2 helloworld
GSL
If you are writing non-trivial MCMC codes, you are almost certainly going to need to use a sophisticated math library and associated random number generation (RNG) routines. I typically use the GSL. On Ubuntu, the GSL can be installed with:
sudo apt-get install gsl-bin libgsl0-dev
A simple script to generate some non-uniform random numbers is given below.
#include <gsl/gsl_rng.h> #include <gsl/gsl_randist.h> int main(void) { int i; double z; gsl_rng *r; r = gsl_rng_alloc(gsl_rng_mt19937); gsl_rng_set(r,0); for (i=0;i<10;i++) { z = gsl_ran_gaussian(r,1.0); printf("z(%d) = %f\n",i,z); } exit(EXIT_SUCCESS); }
This can be compiled with a command like:
gcc gsl_ran_demo.c -o gsl_ran_demo -lgsl -lgslcblas
and run with
./gsl_ran_demo
SPRNG
When writing parallel Monte Carlo codes, it is important to be able to use independent streams of random numbers on each processor. Although it is tempting to “fudge” things by using a random number generator with a different seed on each processor, this does not guarantee independence of the streams, and an unfortunate choice of seeds could potentially lead to bad behaviour of your algorithm. The solution to this problem is to use a parallel random number generator (PRNG), designed specifically to give independent streams on different processors. Unfortunately the GSL does not have native support for such parallel random number generators, so an external library should be used. SPRNG 2.0 is a popular choice. SPRNG is designed so that it can be used with MPI, but also that it does not have to be. This is an issue, as the standard binary packages distributed with Ubuntu (libsprng2, libsprng2-dev) are compiled without MPI support. If you are going to be using SPRNG with MPI, things are simpler with MPI support, so it makes sense to download sprng2.0b.tar.gz from the SPRNG web site, and build it from source, carefully following the instructions for including MPI support. If you are not familiar with building libraries from source, you may need help from someone who is.
Once you have compiled SPRNG with MPI support, you can test it with the following code:
#include <stdio.h> #include <stdlib.h> #include <mpi.h> #define SIMPLE_SPRNG #define USE_MPI #include "sprng.h" int main(int argc,char *argv[]) { double rn; int i,k; MPI_Init(&argc,&argv); MPI_Comm_rank(MPI_COMM_WORLD,&k); init_sprng(DEFAULT_RNG_TYPE,0,SPRNG_DEFAULT); for (i=0;i<10;i++) { rn = sprng(); printf("Process %d, random number %d: %f\n", k, i+1, rn); } MPI_Finalize(); exit(EXIT_SUCCESS); }
which can be compiled with a command like:
mpicc -I/usr/local/src/sprng2.0/include -L/usr/local/src/sprng2.0/lib -o sprng_demo sprng_demo.c -lsprng -lgmp
You will need to edit paths here to match your installation. If if builds, it can be run on 2 processors with a command like:
mpirun -np 2 sprng_demo
If it doesn’t build, there are many possible reasons. Check the error messages carefully. However, if the compilation fails at the linking stage with obscure messages about not being able to find certain SPRNG MPI functions, one possibility is that the SPRNG library has not been compiled with MPI support.
The problem with SPRNG is that it only provides a uniform random number generator. Of course we would really like to be able to use the SPRNG generator in conjunction with all of the sophisticated GSL routines for non-uniform random number generation. Many years ago I wrote a small piece of code to accomplish this, gsl-sprng.h. Download this and put it in your include path for the following example:
#include <mpi.h> #include <gsl/gsl_rng.h> #include "gsl-sprng.h" #include <gsl/gsl_randist.h> int main(int argc,char *argv[]) { int i,k,po; gsl_rng *r; MPI_Init(&argc,&argv); MPI_Comm_rank(MPI_COMM_WORLD,&k); r=gsl_rng_alloc(gsl_rng_sprng20); for (i=0;i<10;i++) { po = gsl_ran_poisson(r,2.0); printf("Process %d, random number %d: %d\n", k, i+1, po); } MPI_Finalize(); exit(EXIT_SUCCESS); }
A new GSL RNG, gsl_rng_sprng20 is created, by including gsl-sprng.h immediately after gsl_rng.h. If you want to set a seed, do so in the usual GSL way, but make sure to set it to be the same on each processor. I have had several emails recently from people who claim that gsl-sprng.h “doesn’t work”. All I can say is that it still works for me! I suspect the problem is that people are attempting to use it with a version of SPRNG without MPI support. That won’t work… Check that the previous SPRNG example works, first.
I can compile and run the above code with
mpicc -I/usr/local/src/sprng2.0/include -L/usr/local/src/sprng2.0/lib -o gsl-sprng_demo gsl-sprng_demo.c -lsprng -lgmp -lgsl -lgslcblas mpirun -np 2 gsl-sprng_demo
Parallel Monte Carlo
Now we have parallel random number streams, we can think about how to do parallel Monte Carlo simulations. Here is a simple example:
#include <math.h> #include <mpi.h> #include <gsl/gsl_rng.h> #include "gsl-sprng.h" int main(int argc,char *argv[]) { int i,k,N; double u,ksum,Nsum; gsl_rng *r; MPI_Init(&argc,&argv); MPI_Comm_size(MPI_COMM_WORLD,&N); MPI_Comm_rank(MPI_COMM_WORLD,&k); r=gsl_rng_alloc(gsl_rng_sprng20); for (i=0;i<10000;i++) { u = gsl_rng_uniform(r); ksum += exp(-u*u); } MPI_Reduce(&ksum,&Nsum,1,MPI_DOUBLE,MPI_SUM,0,MPI_COMM_WORLD); if (k == 0) { printf("Monte carlo estimate is %f\n", (Nsum/10000)/N ); } MPI_Finalize(); exit(EXIT_SUCCESS); }
which calculates a Monte Carlo estimate of the integral
using 10k variates on each available processor. The MPI command MPI_Reduce is used to summarise the values obtained independently in each process. I compile and run with
mpicc -I/usr/local/src/sprng2.0/include -L/usr/local/src/sprng2.0/lib -o monte-carlo monte-carlo.c -lsprng -lgmp -lgsl -lgslcblas mpirun -np 2 monte-carlo
Parallel chains MCMC
Once parallel Monte Carlo has been mastered, it is time to move on to parallel MCMC. First it makes sense to understand how to run parallel MCMC chains in an MPI environment. I will illustrate this with a simple Metropolis-Hastings algorithm to sample a standard normal using uniform proposals, as discussed in a previous post. Here an independent chain is run on each processor, and the results are written into separate files.
#include <gsl/gsl_rng.h> #include "gsl-sprng.h" #include <gsl/gsl_randist.h> #include <mpi.h> int main(int argc,char *argv[]) { int k,i,iters; double x,can,a,alpha; gsl_rng *r; FILE *s; char filename[15]; MPI_Init(&argc,&argv); MPI_Comm_rank(MPI_COMM_WORLD,&k); if ((argc != 3)) { if (k == 0) fprintf(stderr,"Usage: %s <iters> <alpha>\n",argv[0]); MPI_Finalize(); return(EXIT_FAILURE); } iters=atoi(argv[1]); alpha=atof(argv[2]); r=gsl_rng_alloc(gsl_rng_sprng20); sprintf(filename,"chain-%03d.tab",k); s=fopen(filename,"w"); if (s==NULL) { perror("Failed open"); MPI_Finalize(); return(EXIT_FAILURE); } x = gsl_ran_flat(r,-20,20); fprintf(s,"Iter X\n"); for (i=0;i<iters;i++) { can = x + gsl_ran_flat(r,-alpha,alpha); a = gsl_ran_ugaussian_pdf(can) / gsl_ran_ugaussian_pdf(x); if (gsl_rng_uniform(r) < a) x = can; fprintf(s,"%d %f\n",i,x); } fclose(s); MPI_Finalize(); return(EXIT_SUCCESS); }
I can compile and run this with the following commands
mpicc -I/usr/local/src/sprng2.0/include -L/usr/local/src/sprng2.0/lib -o mcmc mcmc.c -lsprng -lgmp -lgsl -lgslcblas mpirun -np 2 mcmc 100000 1
Parallelising a single MCMC chain
The parallel chains approach turns out to be surprisingly effective in practice. Obviously the disadvantage of that approach is that “burn in” has to be repeated on every processor, which limits how much efficiency gain can be acheived by running across many processors. Consequently it is often desirable to try and parallelise a single MCMC chain. As MCMC algorithms are inherently sequential, parallelisation is not completely trivial, and most (but not all) approaches to parallelising a single MCMC chain focus on the parallelisation of each iteration. In order for this to be worthwhile, it is necessary that the problem being considered is non-trivial, having a large state space. The strategy is then to divide the state space into “chunks” which can be updated in parallel. I don’t have time to go through a real example in detail in this blog post, but fortunately I wrote a book chapter on this topic almost 10 years ago which is still valid and relevant today. The citation details are:
Wilkinson, D. J. (2005) Parallel Bayesian Computation, Chapter 16 in E. J. Kontoghiorghes (ed.) Handbook of Parallel Computing and Statistics, Marcel Dekker/CRC Press, 481-512.
The book was eventually published in 2005 after a long delay. The publisher which originally commisioned the handbook (Marcel Dekker) was taken over by CRC Press before publication, and the project lay dormant for a couple of years until the new publisher picked it up again and decided to proceed with publication. I have a draft of my original submission in PDF which I recommend reading for further information. The code examples used are also available for download, including several of the examples used in this post, as well as an extended case study on parallelisation of a single chain for Bayesian inference in a stochastic volatility model. Although the chapter is nearly 10 years old, the issues discussed are all still remarkably up-to-date, and the code examples all still work. I think that is a testament to the stability of the technology adopted (C, MPI, GSL). Some of the other handbook chapters have not stood the test of time so well.
For basic information on getting started with MPI and key MPI commands for implementing parallel MCMC algorithms, the above mentioned book chapter is a reasonable place to start. Read it all through carefully, run the examples, and carefully study the code for the parallel stochastic volatility example. Once that is understood, you should find it possible to start writing your own parallel MCMC algorithms. For further information about more sophisticated MPI usage and additional commands, I find the annotated specification: MPI – The complete reference to be as good a source as any.
18 thoughts on “Getting started with parallel MCMC”
Please please please stop using LAM/MPI.
We’ve wholly replaced it with Open MPI these days. :-)
Hi,
That’s a really informative blog. I am trying to parallelise MCMC in a bioinformatics software using OpenMPI. After deep analysis, I realised that within each iteration of the MCMC loop, the steps are sequential and hence I cannot parallelise a single MCMC chain. hence, I am trying to implement the multiple chain approach. However, I am not able to understand that how should the results be combined from the smaller multiple chains to obtain the final results?
Any help would be greatly appreciated. Thanks!
Essentially, after discarding burn-in from the start of each chain, all of your samples should be from the desired posterior, and hence can be pooled for analysis. However, it can be helpful to keep the chains separate for diagnostic purposes. The R “coda” package will read in parallel chains and conduct diagnostic analysis and inference for you.
Thanks for the reply. My query is how to ‘pool for analysis’ to obtain a single chain like behaviour from the multiple chains?
for eg, if i obtain the following chains:
chain1: x1,x2,x3…
chain2: y1,y2,y3…
chain3: z1,z2,z3…
How should i combine them to obtain a single chain?
In some sense it doesn’t matter. So, for example, you could concatenate the chains, as:
x1,x2,x3,…y1,y2,y3,…,z1,z2,z3,…
but I would strongly recommend that you take a look at the “coda” package mentioned above for methods to analyse the chains in parallel
thanks
Can you provide an example of how a bad parallel RNG affects the results of MC. Also can you add openMP into the mix as well since it is more likely that single computer owners will be using openMP rather than MPI which is meant for large clusters.
Thank you so munch for sharing, that’s very helpful.
Now, I’m wondering if it’s possible to proceed the same way using MKL in order to generate arrays of random numbers in parallel. I use both GSL and MKL on one node and found to get a bunch of random number at once very convenient. So, having the possibility to do this in parallel would be very great, like getting one random array per node.
I’ll definitely implement your code above first.
Thanks again. Eric. | https://darrenjw.wordpress.com/2010/12/14/getting-started-with-parallel-mcmc/ | CC-MAIN-2015-35 | en | refinedweb |
diff --git a/doc/mail/tutorial/smtpclient/smtpclient.xhtml b/doc/mail/tutorial/smtpclient/smtpclient.xhtml index aea44ba..9e89a84 100644 --- a/doc/mail/tutorial/smtpclient/smtpclient.xhtml +++ b/doc/mail/tutorial/smtpclient/smtpclient.xhtml @@ -29,18 +29,17 @@ files.
The first step is to create the most
-minimal
.tac file possible for use by
-
twistd.
.tacfile possible for use by
twistd.
from twisted.application import service-
The first line of the The first line of the
.tac file imports
-
twisted.application.service, a module which contains many
-of the basic service classes and helper functions available
-in Twisted. In particular, we will be using the
-
Application function to create a new application
+
.tac file
+imports
twisted.application.service, a module which
+contains many of the basic service classes and helper
+functions available in Twisted. In particular, we will be using
+the
Application function to create a new application
service. An application service simply acts as a
central object on which to store certain kinds of deployment
configuration.
The first line of the
The second line of the
.tac file creates a new
-application service and binds it to the local name
-
application.
twistd requires this local
-name in each
.tac file it runs. It uses various pieces
-of configuration on the object to determine its behavior. For
+
The second line of the
.tac file creates a
+new application service and binds it to the local
+name
application.
twistd requires this
+local name in each
.tac file it runs. It uses various
+pieces of configuration on the object to determine its behavior. For
example,
"SMTP Client Tutorial" will be used as the name
of the
.tap file into which to serialize application
state, should it be necessary to do so.
That does it for the first example. We now have enough of a
-
.tac file to pass to
twistd. If we run smtpclient-1.tac using the
-
twistd command line:
That does it for the first example. We now have enough of
+a
.tac file to pass to
twistd. If we
+run smtpclient-1.tac using
+the
twistd command line:
twistd -ny smtpclient-1.tac @@ -101,13 +100,13 @@ from twisted.application import internet from twisted.internet import protocol-
twisted.application.internet is another
-application service module. It provides services for
+
+servers, though we are not interested in those parts for the
+moment).
twisted.application.internet is
+another application service module. It provides services for
establishing outgoing connections (as well as creating network
-servers, though we are not interested in those parts for the moment).
-
twisted.internet.protocol provides base implementations
-of many of the core Twisted concepts, such as factories and
-protocols.
twisted.internet.protocolprovides base +implementations of many of the core Twisted concepts, such +as factories and protocols.
The next line of smtpclient-2.tac instantiates a new client factory.@@ -116,12 +115,12 @@ instantiates a new client factory. smtpClientFactory = protocol.ClientFactory() -
Client factories are responsible for constructing -protocol instances whenever connections are established. -They may be required to create just one instance, or many instances if -many different connections are established, or they may never be -required to create one at all, if no connection ever manages to be -established.+
Client factories are responsible for +constructing protocol instances whenever connections are +established. They may be required to create just one instance, or +many instances if many different connections are established, or they +may never be required to create one at all, if no connection ever +manages to be established.
Now that we have a client factory, we'll need to hook it up to the
network somehow. The next line of
smtpclient-2.tac does
@@ -131,16 +130,16 @@ just that:
We'll ignore the first two arguments to
-
internet.TCPClient for the moment and instead focus on
+
We'll ignore the first two arguments
+to
internet.TCPClient for the moment and instead focus on
the third.
TCPClient is one of those application
service classes. It creates TCP connections to a specified
address and then uses its third argument, a client factory,
to get a protocol instance. It then associates the TCP
connection with the protocol instance and gets out of the way.
We can try to run
smtpclient-2.tac the same way we ran
-
smtpclient-1.tac, but the results might be a little
+
We can try to run
smtpclient-2.tac the same way we
+ran
smtpclient-1.tac, but the results might be a little
disappointing:
@@ -198,11 +197,11 @@ something with a bit more meaning: smtpClientService = internet.TCPClient('localhost', 25, smtpClientFactory)-
This directs the client to connect to localhost on port -25. This isn't the address we want ultimately, but it's a -good place-holder for the time being. We can run smtpclient-3.tac and see what this change -gets us:+
This directs the client to connect to localhost on +port 25. This isn't the address we want ultimately, but it's +a good place-holder for the time being. We can +run smtpclient-3.tac and see what this +change gets us:
exarkun@boson:~/mail/tutorial/smtpclient$ twistd -ny smtpclient-3.tac @@ -246,9 +245,9 @@ exarkun@boson:~/mail/tutorial/smtpclient$
A meagre amount of progress, but the service still raises an -exception. This time, it's because we haven't specified a -protocol class for the factory to use. We'll do that in the -next example.+exception. This time, it's because we haven't specified +a protocol class for the factory to use. We'll do.
In smtpclient-5.tac, we will begin
to use Twisted's SMTP protocol implementation for the first time.
-We'll make the obvious change, simply swapping out
-
twisted.internet.protocol.Protocol in favor of
-
twisted.mail.smtp.ESMTPClient. Don't worry about the
-E in ESMTP. It indicates we're actually using a
-newer version of the SMTP protocol. There is an
-
SMTPClient in Twisted, but there's essentially no reason
-to ever use it.
twisted.internet.protocol.Protocolin favor +of
twisted.mail.smtp.ESMTPClient. Don't worry about +the E in ESMTP. It indicates we're actually using a +newer version of the SMTP protocol. There is +an
SMTPClientin Twisted, but there's essentially no +reason to ever use it.
smtpclient-5.tac adds a new import:@@ -314,10 +314,10 @@ to ever use it. from twisted.mail import smtp -
All of the mail related code in Twisted exists beneath the
-
twisted.mail package. More specifically, everything
-having to do with the SMTP protocol implementation is defined in the
-
twisted.mail.smtp module.
All of the mail related code in Twisted exists beneath
+the
twisted.mail package. More specifically, everything
+having to do with the SMTP protocol implementation is defined in
+the
twisted.mail.smtp module.
Next we remove a line we added in smtpclient-4.tac:@@ -379,17 +379,17 @@ exarkun@boson:~/doc/mail/tutorial/smtpclient$
Oops, back to getting a traceback. This time, the default
implementation of
buildProtocol seems no longer to be
-sufficient. It instantiates the protocol with no arguments, but
-
ESMTPClient wants at least one argument. In the next
+sufficient. It instantiates the protocol with no arguments,
+but
ESMTPClient wants at least one argument. In the next
version of the client, we'll override
buildProtocol to
fix this problem.
smtpclient-6.tac introduces a
-
twisted.internet.protocol.ClientFactory subclass with an
-overridden
buildProtocol method to overcome the problem
-encountered in the previous example.
smtpclient-6.tac introduces
+a
twisted.internet.protocol.ClientFactory subclass with
+an overridden
buildProtocol method to overcome the
+problem encountered in the previous example.
class SMTPClientFactory(protocol.ClientFactory): @@ -421,8 +421,8 @@ will now instantiate-
SMTPClientFactory: smtpClientFactory = SMTPClientFactory()
Running this version of the code, we observe that the code -still isn't quite traceback-free.+
Running this version of the code, we observe that the +code still isn't quite traceback-free.
exarkun@boson:~/doc/mail/tutorial/smtpclient$ twistd -ny smtpclient-6.tac @@ -484,11 +484,11 @@ provide that information to it. actually includes message data to transmit. For simplicity's sake, the message is defined as part of a new class. In a useful program which sent email, message data might be pulled in from the filesystem, -a database, or be generated based on user-input. smtpclient-7.tac, however, defines a new -class,
SMTPTutorialClient, with three class attributes -(
mailFrom,
mailTo, and -
mailData): +a database, or be generated based on +user-input. smtpclient-7.tac, however, +defines a new class,
SMTPTutorialClient, with three class +attributes (
mailFrom,
mailTo, +and
mailData):class SMTPTutorialClient(smtp.ESMTPClient): @@ -552,14 +552,14 @@ Twisted is
getMailData:
This one is quite simple as well: it returns a file or a file-like -object which contains the message contents. In our case, we return a -+object which contains the message contents. In our case, we return +a-automatically.automatically.
There is one more new callback method defined in smtpclient-7.tac. This one isn't for providing information about the messages to @@ -587,8 +587,8 @@ which starts up, connects to a (possibly) remote host, transmits some data, and disconnects. Notably missing, however, is application shutdown. Hitting ^C is fine during development, but it's not exactly a long-term solution. Fortunately, programmatic shutdown is extremely -simple. smtpclient-8.tac extends -+simple. smtpclient-8.tac +extends
sentMailwith these two lines:
sentMailwith these two lines:from twisted.internet import reactor @@ -667,11 +667,11 @@ def getMailExchange(host): return defer.succeed('localhost')-
++
defer.succeedis a function which creates a new -
Deferredwhich already has a result, in this case -
'localhost'. Now we need to adjust our -
TCPClient-constructing code to expect and properly handle -this
Deferred:
defer.succeedis a function which creates a +new
Deferredwhich already has a result, in this +case
'localhost'. Now we need to adjust +our
TCPClient-constructing code to expect and properly +handle this
Deferred:def cbMailExchange(exchange): @@ -687,18 +687,18 @@ getMailExchange('example.net').addCallback(cbMailExchange) scope of this document. For such a look, see the Deferred Reference. However, in brief, what this version of the code does is to delay the -creation of the
TCPClientuntil the -
Deferredreturned by
getMailExchangefires. -Once it does, we proceed normally through the creation of our -
SMTPClientFactoryand
TCPClient, as well as -set the
TCPClient's service parent, just as we did in the -previous examples. +creation of the
TCPClientuntil the
Deferred+returned by
getMailExchangefires. Once it does, we +proceed normally through the creation of +our
SMTPClientFactoryand
TCPClient, as well +as set the
TCPClient's service parent, just as we did in +the previous examples.
SMTP Client 11
At last we're ready to perform the mail exchange lookup. We do -this by calling on an object provided specifically for this task, -+this by calling on an object provided specifically for this +task,
twisted.mail.relaymanager.MXCalculator:
twisted.mail.relaymanager.MXCalculator:def getMailExchange(host): @@ -710,19 +710,14 @@ def getMailExchange(host):
Because- - --->
getMXreturns a
Record_MXobject rather than a string, we do a little bit of post-processing to get the results we want. We have already converted the rest of the tutorial -application to expect a
Deferredfrom -
getMailExchange, so no further changes are required. smtpclient-11.tac completes this tutorial -by being able to both look up the mail exchange host for the recipient -domain, connect to it, complete an SMTP transaction, report its -results, and finally shut down the reactor. | http://twistedmatrix.com/trac/raw-attachment/ticket/4572/smtpclient.xhtml.patch | CC-MAIN-2015-35 | en | refinedweb |
)
[WebMethod]
public GeoPoint GetGeoPoint(string address)
{
if( address.Length < 1 )
return null;
string uriString = String.Format("{0}&output=js", System.Web.HttpUtility.UrlEncode( address ) );
WebClient myWebClient = new WebClient();
string postData = "";
myWebClient.Headers.Add("Content-Type","application/x-www-form-urlencoded");
// Apply ASCII Encoding to obtain the string as a byte array.
byte[] byteArray = Encoding.ASCII.GetBytes(postData);
// Upload the input string using the HTTP 1.0 POST method.
byte[] responseArray = myWebClient.UploadData(uriString, "POST", byteArray);
// Decode and display the response.
string data = Encoding.ASCII.GetString( responseArray );
int index = data.IndexOf("<point", 0);
GeoPoint point = new GeoPoint();
if( index < 0 )
{
point.Data = data;
return point;
}
int endIdx = data.IndexOf("/>", index);
string responseData = data.Substring(index, (endIdx+2)-index);
// Regex search and replace
RegexOptions options = RegexOptions.None;
Regex regex = new Regex(@"^.*lat=""", options);
string input = responseData;
string replacement = @"";
string result = regex.Replace(input, replacement);
// Regex search and replace
options = RegexOptions.None;
regex = new Regex(@""".*$", options);
string lat = regex.Replace(result, replacement);
regex = new Regex(@"^.*lng=""", options);
result = regex.Replace(input, replacement);
regex = new Regex(@"""/.*$", options);
string lng = regex.Replace(result, replacement);
point.Data = responseData;
point.Latitude = Double.Parse( lat );
point.Longitude = Double.Parse( lng );
return point;
}
/// Represents a geographical location with a latitude and longitude.
[Serializable]
public class GeoPoint
{
private double m_Latitude;
private double m_Longitude;
private string m_Data;
private bool m_HasData = false;
public GeoPoint()
{
m_Latitude = 0.0;
m_Longitude = 0.0;
}
public GeoPoint(double latitude, double longitude)
{
m_Latitude = latitude;
m_Longitude = longitude;
}
public GeoPoint(double latitude, double longitude, string data)
{
m_Latitude = latitude;
m_Longitude = longitude;
m_Data = data;
}
public double Latitude
{
get { return m_Latitude; }
set
{
m_Latitude = value;
m_HasData = true;
}
}
public double Longitude
{
get { return m_Longitude; }
set
{
m_Longitude = value;
m_HasData = true;
}
}
public string Data
{
get { return m_Data; }
set { m_Data = value; }
}
public bool HasData
{
get
{
if( m_HasData && (m_Latitude != 0) && (m_Longitude != 0) )
return true;
return false;
}
}
}
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/11008/PinPoint-your-exact-location-and-show-the-world-wh?fid=199374&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Quick&spc=Relaxed&fr=11 | CC-MAIN-2015-35 | en | refinedweb |
Unanswered: dynamically add gridpanel from file to viewport
Hi,
Having a namespace: "Ext.namespace('Grid'); " is dangerous and not very useful.
If you want to autoLoad components, you need to use the Ext JS Loader system. I have a detailed writeup here:
Jay Garcia @ModusJesus || Modus Create co-founder
Ext JS in Action author
Sencha Touch in Action author
Get in touch for Ext JS & Sencha Touch Touch Training | https://www.sencha.com/forum/showthread.php?144583-dynamically-add-gridpanel-from-file-to-viewport | CC-MAIN-2015-35 | en | refinedweb |
Wrapper for the libKJB type Word_list (useful for globs). More...
#include <l_word_list.h>
Wrapper for the libKJB type Word_list (useful for globs).
This class is useful mostly because of the glob-related constructors. If you don't need globs, then consider using vector<string> instead. This container does not offer any deletion operations, at least currently. But it does at least offer input iterators.
When you dereference a valid const_iterator generated by this class, the result is like a const char*. For example, the following shows how to produce a very basic directory listing of regular filenames in the current working directory:
A glob(7) is what they call Unix filename pattern-matching expressions, such as "*.cpp" or "/tmp/????.pid" or the like. We wrap function kjb_c::kjb_glob in some of the ctors for this class – please see its man page.
Construct by searching for a Glob pattern, seeking files.
take non-negative word-count; generate that many empty entries.
construct a word list from an "argv" style array of arguments.
copy ctor (slow) creates a deep copy of an existing word list.
dtor releases memory of underlying C structure
insert a given string into the list (maybe increasing its size).
If the list contains empty entries, the given word will go into the first one encountered. Otherwise the list is reallocated and grown, and the given word goes in the first newly-created entry.
synonym for cbegin
Return the underlying C representation (as const)
generate, return const_iterator to front of list
generate, return const_iterator pointing one-past-last entry
return the number of non-empty entries in the list
synonym for cend
Convert the word list into a STL vector of STL strings.
concatenate two word lists
assignment operator deep-copies contents of given list.
Access an entry of the word list; this performs range checking.
Number of entries in the word list (could be empty, though).
A word list entry might point to a null-terminated string, or it might be empty, i.e., (char*)NULL. This counts all the entries, empty or not.
remove all empty entries from back (only) of the list. | http://kobus.ca/research/resources/doc/doxygen/classkjb_1_1Word__list.html | CC-MAIN-2022-21 | en | refinedweb |
Sensor Message Publishing#
Overview top#
One of the main functionalities of SVL Simulator is its ability to work with third-party tools, which include autonomous driving systems. Simulator provides output from virtual sensors that can be received and parsed by tools like Apollo or Autoware.Auto. Third-party systems can then provide instructions back to Simulator, which will execute them on virtual vehicle. The whole communication aims to be indistinguishable from its real-world equivalent, which means that each message sent and received by Simulator follows the data format expected by third party tools.
Since multiple bridge plugins are supported, all the sensor plugins have to be bridge-agnostic. Plugins can check if the bridge is connected and decide when and what to send, but do not know the bridge type, and therefore its expected format. This splits responsibility for publishing a message between two plugin types:
- Sensor prepares data in bridge-agnostic format and requests for it to be sent
- Bridge converts message to bridge-specific format and sends it
This article covers subjects of creating and dispatching bridge messages from sensor plugins. For details about creating new sensor plugins, see sensor plugins page.
If you're creating a bridge plugin and want to make sure that sensor data is properly handled, see sensor plugins page.
Sensor data types top#
Data types for sensors don't have to follow any particular guidelines, but next to the data that sensor produces, you should consider including fields that might be required by some bridge plugins. Often used examples include timestamp or sequential message number. You do not need to worry about how bridge plugin will consume this data - just make sure all the fields have public read accessors.
If your sensor produces small amount of data that's easy to convert, you can decide to call the publishing delegate directly and perform the whole operation synchronously. If instead you opt to use asynchronous approach (see asynchronous publishing), you have to be aware about including any reused, heap-allocated resources (e.g. single large buffer included in each message), since the order of updating and accessing data from multiple threads will not be defined.
Probably the most common examples that would call for reused resources are large array buffers. Of course it's possible to allocate completely new array for each message, but this would add unnecessary work for garbage collector and could affect performance. You might consider using pool of pre-allocated buffers, and while it's certainly possible to do so, you would have to depend on callbacks to track which buffers are actively in use. Fortunately, there is a built-in, simpler way to handle this kinds of cases.
If you want to reuse resources and use asynchronous publishing, your data type should be a class, provide parameterless constructor, and implement the
IThreadCachedBridgeData interface. This interface defines two methods:
-
CopyToCache - your implementation should perform a full deep copy of current instance data into the container of the same type, provided through parameter. Provided target instance is part of a pooling system that will make sure this data will not be accessed again until conversion and publishing on the bridge side are complete. This means only one thread will ever access it at a time, and you don't have to worry about thread safety.
-
GetHash - your implementation should return value that groups instances into compatible sub-pools. Returning one value for all instances is valid if you don't need sub-pools. If some instances are not compatible (e.g. they use pre-allocated buffers with different sizes), this should return value calculated based on incompatible parameters.
Implementing
IThreadCachedBridgeData interface is enough to enable thread-side caching functionality and no further changes are required on sensor side, as long as you're using
BridgeMessageDispatcher (see asynchronous publishing section for details).
The instance provided by
IThreadCachedBridgeData interface is persistent, pooled resource. Depending on previous usage, its fields may or may not be initialized - make sure to clear any optional fields. If you're using arrays, consider either storing valid data size in separate parameter (it would allow you to use larger arrays to store varying amounts of data without the need to reallocate them every single time) or using
GetHash method to split instances into separate sub-pools based on array size.
For an example of
IThreadCachedBridgeData interface usage, see
Simulator.Bridge.Data.PointCloudData type in Simulator repository. This example includes good practices described above.
Asynchronous publishing top#
When you expect the conversion process to take a long time, you might want to avoid occupying main thread of the application, and perform the process asynchronously. You might also consider using multiple threads to increase overall processing speed. Both of these cases, and more, are covered by the
BridgeMessageDispatcher class. Its default instance is created for each simulation and is accessible through the class' static field
Instance. Note that this instance only exists during simulation.
Interacting with the message dispatcher is simplified to a single method,
TryQueueTask. Its signature and detailed description can be found below.
public bool TryQueueTask<T>( Publisher<T> publisher, T data, Action<bool> callback = null, object exclusiveToken = null) where T : class, new()
Parameters:
publisher- Publisher delegate for active bridge, created by calling
AddPublisher<T>(Topic)on bridge instance. See bridge registration section for an example.
data- Sensor data to convert and send. If type of the data implements
IThreadCachedBridgeDatainterface, thread-exclusive cache will be used to store copy of the data. Original data will never be accessed by worker thread in such case. See sensor data types section for more details.
callback(optional) - Delegate that will be called after conversion and publishing process was resolved. This will always be executed, but if delegate boolean parameter is set to
false, the process has failed. See callback section for more information.
exclusiveToken(optional) - Token used to enforce exclusivity for given task. Only one publishing process with the same token can be active or queued at a time - new requests with the same token will be dropped. See exclusive publishing section for more details.
Returns:
bool- True if message was enqueued and is expected to be converted and sent. False if it was immediately dropped for any reason (see dropping messages section for a list of possible causes).
Multi-threading top#
BridgeMessageDispatcher internally manages a number of background threads that execute conversion and publishing code defined in bridge plugins. By default, only a single background thread is running, but dispatcher will automatically spawn new ones if the queue is not expected to be unloaded in a single cycle. Maximum number of worker threads is defined by logical core count of your CPU. If one or more threads are idle for a while, thread count will be reduced. This means that threads will be scaled up and down dynamically, based on current bridge data throughput.
It's possible for extreme situations to occur, in which thread count reached its limit and yet queue size keeps growing. This can happen if large messages requiring long processing are send very often. At this stage CPU is usually close to 100% utilization on all cores and not much can be done. In this situation, dispatcher will block main thread execution until one of worker threads is finished with its work. This will usually severely reduce framerate, so you should consider reducing bridge data throughput or using more powerful CPU at that stage. Warning is displayed when this situation occurs.
Dropping messages top#
There are situations in which calling
TryQueueTask will not publish the data.
BridgeMessageDispatcher considers message as published when publisher delegate for active bridge finishes execution without throwing any exception. Note that this doesn't always mean that message was properly received, or even sent. This is heavily dependent on bridge implementation. If you're implementing your own bridge, it's suggested to throw exception whenever something goes wrong with publishing process.
BridgeMessageDispatcher class will catch the exception, allow sensor to react to it, and display it in console. The exception will not be thrown again to prevent worker thread from crashing.
For standard causes of dropping messages (unrelated to bridge implementation), consult the list below. Most of them will occur when enqueuing attempt happens, in which case
TryQueueTask will return
false. If task was enqueued correctly, and problem occurred during delegate execution,
TryQueueTask will return true, but the problem still can be reacted to through callback.
Time is paused (
TryQueueTaskreturns
false, callback executes with
false)
If time in simulation is paused, every module should hold its execution. If sensor ignores time state and attempts to publish data during pause, message will be dropped.
Exclusive token is already in use (
TryQueueTaskreturns
false, callback executes with
false)
Message with the same token is either queued or being processed. See exclusive publishing for details about token usage.
Exception was thrown by publisher delegate (
TryQueueTaskreturns
true, callback executes with
false)
Reasons for exception occurring at this stage are dependent on bridge type. Possible causes include lack of compatible publisher, errors during conversion process or problems with connection. For more details check console (if you're running Simulator in Unity Editor) or application logs.
Callback top#
When attempting to enqueue bridge message, you can provide optional delegate (
callback) with a single boolean parameter. This delegate will always be invoked, even if message was dropped immediately and never enqueued. If bridge properly executed publisher delegate,
callback will be invoked with
true passed as its parameter. Details depends on bridge implementation, but this usually means that message was properly converted and sent. If message was dropped for any reason (see dropping messages), or if exception was thrown anywhere in publisher delegate code,
callback will be invoked with
false passed as its parameter.
Some uses of the callback include cleaning up resources after message is published, reacting to expected failures, or waiting for very expensive bridge operations to finish before continuing with sensor work.
Exclusive publishing top#
There are use cases in which multi-threaded character of the publishing process, mixed with unknown processing times, can have undesired implications. If, for example, you enqueue two subsequent messages with timestamps, older message might, in theory, finish publishing later. If message order is critical and you want to enforce it, there are a few options:
- publish messages synchronously - will block main thread
- publish using your own, single background thread - requires synchronization between main thread (that can access time) and background thread
- use callbacks, start new publishing process when previous finishes
- provide exclusive token
While all options are viable, providing exclusive token is probably the easiest solution if synchronous execution is undesired.
The exclusive token can be an instance of any object. Its purpose is simple - only one publish request with individual token can be active at a time. If you call
TryQueueTask and provide exclusive token, two things can happen:
- if another request using the same token is currently active, message is immediately dropped and
TryQueueTaskreturns
false
- if the token is not used by any of the active requests, message is processed as usual - if it's not dropped for different reason, message is enqueued, token becomes active and
TryQueueTaskreturns
true
The token becomes active immediately when request using it becomes enqueued, and becomes inactive when publish delegate execution either succeeds or crashes. If you provide the exclusive token, messages using it will always be published in chronological order, although their frequency depends on conversion time and queue load.
Please note that using the exclusive token will drop already prepared message. If the performance cost of preparing the message is significant, you should consider using callback instead to wait for previous message to finish publishing before preparing the next one.
Frequency-based publishing top#
Many sensors have predefined frequency at which they collect and publish data. If your sensor is supposed to work with a set frequency, the easiest way to achieve this is to use
FrequencySensorBase.
FrequencySensorBase is an abstract class that preserves all standard
SensorBase class behavior, but provides functionality that is supposed to simplify, or even completely remove any need for time tracking in your sensor code.
When you create sensor derived from
FrequencySensorBase class, you will need provide implementation for its abstract member,
UseFixedUpdate. It's boolean property with only a read accessor. Simply make it return
false if you want to use standard update loop, or
true if your use case depends on using Unity's
FixedUpdate loop. For vast majority of cases, using standard update loop should be enough.
In both cases sensor updates will happen with set frequency, but you shouldn't expect perfect intervals. Your code will be executed when either
Update (with intervals dependent on framerate) or
FixedUpdate (with fixed intervals, 10 ms default in SVL Simulator) is executed by Unity. In both cases, most of the updates in the loop are ignored, and the ones closest to sensor intervals are chosen. Your code will be executed at that point.
Sensor frequency is defined through
Frequency parameter. It's marked as sensor parameter, and therefore will be visible among sensor settings. See sensor plugins page for more information about sensor parameters. Frequency is based on simulation time, so sensor will handle pause or non-realtime mode properly.
Last thing that has to be done is overriding
SensorUpdate method. Code inside will be executed with set frequency, unless simulation is paused. If you need to use either
Update or
FixedUpdate method directly, you can override them too, but remember to call their base implementation before your code.
For an example of
FrequencySensorBase usage, see publishing with set frequency section.
Asynchronous GPU readback#
In some cases, sensors utilize GPU capabilities to process data. This doesn't only include rendering in camera-based sensors, but also cases where highly parallel computations need to be performed on large data sets. In both cases data is stored on GPU (as either textures or compute buffers), and then read back to process and publish on CPU. Reading back large buffers or textures can take significant time, and may result in latency spikes if main thread is blocked during this process. As usual with long operations, you might want to perform this process asynchronously and make sure that main thread is able to keep stable framerate.
You can see longer processing times in publishing frames on the frame time chart below. Upper row shows case with synchronous GPU readback, lower row shows the same setup with asynchronous GPU readback. Notice that latency spikes are significantly reduced.
Simulator provides an utility class (
GpuReadbackPool<TData, T>) that can help with pooling and tracking asynchronous GPU readbacks. It will internally handle some of the problems related to asynchronous reads:
- Multiple readbacks can be active at once - required resources will be pooled and tracked automatically
- Requests will always be finished in the same order they were started
- Timestamp of the readback start will be stored with the data
- You don't need to track state of active readbacks, callback will be called for each upon completion
To use this system, you have to create an instance of
GpuReadbackPool<TData, T> for your sensor. It requires explicit initialization through
Initialize method, which takes buffer size and delegate called on completion as parameters. If the buffer size ever changes (e.g. when output texture resolution for sensor is modified), you can change the buffer size through
Resize method.
TData type parameter is usually
GpuReadbackData<T>, but can also be a type derived from it.
T must be a struct compatible with
NativeArray<T>.
With this kind of setup, using asynchronous GPU readbacks is very simple - when operations on GPU are finished, you can simply call:
ReadbackPool.StartReadback(GpuResource);
When readback finishes, delegate declared in
Initialize method will be called. Instance of
GpuReadbackData<T> will be passed as a parameter, containing
gpuData (native array containing data read back from the GPU) and
captureTime (timestamp of the request start) fields. If you used your own type derived from
GpuReadbackData<T>, your custom fields will be accessible too.
StartReadback has multiple overloads that allow you to specify some of the readback parameters (like texture format, buffer offset etc.). It also returns instance of the
GpuReadbackData<T>, so you can populate it with any data at the time of readback start. This is especially useful if you want to include any custom fields.
Please note that
GpuReadbackPool<TData, T> is a passive class and requires explicit updates from outside. You should call
Process method whenever you want to trigger a check on pending readbacks. This is usually done from
Update method of sensor class.
For an example of
GpuReadbackPool<TData, T> usage, see reading GPU data asynchronously section.
Code examples top#
Snippets presented here are intended to explain parts of the sensor code responsible for message publishing. Large parts of the sensor code are omitted - see sensor plugins page for more complete examples.
Custom sensor data type top#
This example shows data type that supports thread-side caching through
IThreadCachedBridgeData interface (see sensor data types for details).
array field in this use case will contain reference to persistent buffer used by sensor. The same buffer is passed for all messages. For the sake of this example, let's assume that buffer size is constant, but data size is not.
dataSize field will contain size of valid data in the buffer. Check one of publishing examples below to see how data is prepared.
public class FloatArray : IThreadCachedBridgeData<FloatArray> { // Array reference passed from sensor. Shared across multiple messages. public float[] array; // Specifies size of valid data in array. public int dataSize; // Timestamp that is required by receiving side. public double timestamp; // Method defined by IThreadCachedBridgeData interface. // This will be called internally by BridgeMessageDispatcher before enqueuing message. // Parameter target contains instance provided by pooling system. // Implementation must perform deep copy from current instance to target. public void CopyToCache(FloatArray target) { // If array in target instance was not initialized or is too small, allocate new one. if (target.array == null || target.array.Length < dataSize) target.array = new float[dataSize]; // Copy all valid data to array in target instance. // Data beyond dataSize (if present) has undefined value, so it's skipped. Array.Copy(array, target.array, dataSize); // Value types can be simply assigned in target instance. target.dataSize = dataSize; target.timestamp = timestamp; } // Method defined by IThreadCachedBridgeData interface. // This should return hash that groups instances into compatible sub-pools. public int GetHash() { // No sub-pools required - return the same value for all instances. return 0; } }
Bridge registration top#
During initialization process, each sensor must define its expected interactions with bridge.
OnBridgeSetup is an abstract method defined in
SensorBase class, that is invoked once for each sensor if bridge instance is present. Each sensor must implement it. You can use it to get reference to the bridge instance and register publisher (or subscriber) of your data type. Delegate returned by
AddPublisher method is responsible for converting and publishing messages.
Data type used for this example (
FloatArray) is described in its own section. Note that bridge plugin must implement conversion between this type and its own, bridge-specific message format. You can find more information about bridge plugin implementation on the bridge plugins page.
//; // OnSetupBridge will be called during initialization if any bridge is present. public override void OnBridgeSetup(BridgeInstance bridge) { // Store reference to bridge instance so we can track its state. Bridge = bridge; // Register publisher of FloatArray data type for the bridge. // Returned delegate can be used to convert and publish messages. Publish = bridge.AddPublisher<FloatArray>(Topic); } // Visualization and sensor update code is omitted in this example. [...] }
Publishing using exclusive token top#
This example shows how custom data can be published using exclusive token - it assumes that message order is critical and enforces it by dropping any messages until previously enqueued one is published.
Since
array used as buffer is reused every frame, data type in this example utilizes thread-side caching. Exact time that conversion and publishing will take is unknown, so there's a risk that this data might be overwritten before conversion finishes. Using data type that supports caching is invisible from sensor code (as shown below), but all cross-thread access issues - like the one mentioned - are mitigated.
//; // Update is called every frame. private void Update() { //. Instance of this sensor is used as exclusive token. // If simulation is paused or token is already in use, this message will be dropped. // Sensor doesn't care whether message dropped or not, so returned value is ignored. BridgeMessageDispatcher.Instance.TryQueueTask(Publish, data, this); } } // Initialization and visualization code is omitted in this example. [...] }
Publishing with set frequency top#
This example shows how
FrequencySensorBase can be used to calculate and publish data with set interval. See frequency-based publishing section for details.
Similar to previous example, used data type utilizes thread-side caching to prevent simultaneous access from multiple threads to
array buffer.
// Declare sensor name and types of data that it will publish over the bridge. // Sensor type is derived from FrequencySensorBase instead of usual SensorBase. [SensorType("Float Array Sensor", new[] {typeof(FloatArray)})] public class FloatArraySensor : FrequencySensorBase { // Field used to store reference to the bridge instance. BridgeInstance Bridge; // Field used to store publisher delegate. Publisher<FloatArray> Publish; // Preallocated array used as a buffer. public float[] array; // FixedUpdate is not required - standard update loop will be used. protected override bool UseFixedUpdate => false; // SensorUpdate is executed with frequency defined by Frequency sensor parameter in base class. // Frequency is based on simulation time, so pause or non-realtime mode will be handled properly. private override void SensorUpdate() { //. This will always succeed, as both requirements are met: // - time is not paused (SensorUpdate wouldn't be called if it was) // - optional exclusive token is not provided BridgeMessageDispatcher.Instance.TryQueueTask(Publish, data); } } // Initialization and visualization code is omitted in this example. [...] }
Reading GPU data asynchronously top#
This example shows how
GpuReadbackPool<TData, T> can be used to perform GPU readbacks asynchronously.
//; // Buffer used for storing data on GPU side. private ComputeBuffer computeBuffer; // Asynchronous GPU readback pool. private GpuReadbackPool<GpuReadbackData<Vector4>, Vector4> readbackPool; // Initialization method - prepare required resources. protected override void Initialize() { // Assume 100 floats will be used per data packet. array = new float[100]; computeBuffer = new ComputeBuffer(100, sizeof(float)); // Initialize GPU readback pool with 100 floats capacity. // OnReadbackComplete delegate will be called for each completed readback. readbackPool = new GpuReadbackPool<GpuReadbackData<float>, float>(); readbackPool.Initialize(100, OnReadbackComplete); } // Deinitialization method - free any currently used resources. protected override void Deinitialize() { computeBuffer.Release(); readbackPool.Dispose(); } // Update is called every frame. private void Update() { // Explicitly trigger update of all currently pending readbacks. readbackPool.Process(); // Theoretical method that would calculate output for this sensor. // This is executed even if bridge is not connected, as data might be used inside Simulator. // Data is calculated and kept on the GPU, in the compute buffer. PrepareSensorData(computeBuffer); // Request asynchronous read of the GPU resource. // Completion will trigger OnReadbackComplete declared during initialization. readbackPool.StartReadback(computeBuffer); } // Delegate called when readback is completed private void OnReadbackComplete(GpuReadbackData<float> data) { // Check if bridge exists and is currently connected. if (Bridge is {Status: Status.Connected}) { // data.gpuData contains data read back from the GPU. // Perform copy from NativeArray to managed array, since our message type doesn't support it. data.gpuData.CopyTo(array); // Publish data. See previous examples for details. // Note that time passed is capture time, not current time. var data = new FloatArray {array = array, dataSize = 100, timestamp = data.captureTime}; BridgeMessageDispatcher.Instance.TryQueueTask(Publish, data); } } // Bridge setup and visualization code is omitted in this example. [...] } | https://www.svlsimulator.com/docs/archive/2021.3.1/system-under-test/sensor-message-publishing/ | CC-MAIN-2022-21 | en | refinedweb |
Mercurial > dropbear
view libtommath/bn_mp_mul_2.c @ 475:52a644e7b8e1 pubkey-options
* Patch from Frédéric Moulins adding options to authorized_keys. Needs review.
line source
#include <tommath.h> #ifdef BN_MP_MUL*2 */ int mp_mul_2(mp_int * a, mp_int * b) { int x, res, oldused; /* grow to accomodate result */ if (b->alloc < a->used + 1) { if ((res = mp_grow (b, a->used + 1)) != MP_OKAY) { return res; } } oldused = b->used; b->used = a->used; { register mp_digit r, rr, *tmpa, *tmpb; /* alias for source */ tmpa = a->dp; /* alias for dest */ tmpb = b->dp; /* carry */ r = 0; for (x = 0; x < a->used; x++) { /* get what will be the *next* carry bit from the * MSB of the current digit */ rr = *tmpa >> ((mp_digit)(DIGIT_BIT - 1)); /* now shift up this digit, add in the carry [from the previous] */ *tmpb++ = ((*tmpa++ << ((mp_digit)1)) | r) & MP_MASK; /* copy the carry that would be from the source * digit into the next iteration */ r = rr; } /* new leading digit? */ if (r != 0) { /* add a MSB which is always 1 at this point */ *tmpb = 1; ++(b->used); } /* now zero any excess digits on the destination * that we didn't write to */ tmpb = b->dp + b->used; for (x = b->used; x < oldused; x++) { *tmpb++ = 0; } } b->sign = a->sign; return MP_OKAY; } #endif /* $Source: /cvs/libtom/libtommath/bn_mp_mul_2.c,v $ */ /* $Revision: 1.3 $ */ /* $Date: 2006/03/31 14:18:44 $ */ | https://hg.ucc.asn.au/dropbear/file/52a644e7b8e1/libtommath/bn_mp_mul_2.c | CC-MAIN-2022-21 | en | refinedweb |
Sign out reference APISign out reference API
The
signOut method simply revokes the session for the user. It does not provide any UI for a Sign out button and does not do any redirect in your page on your behalf. Please make sure to implement those yourself.
Example:
import { signOut } from "supertokens-auth-react/recipe/emailpassword"; await signOut();
- signOut:
- Description:
signOutmethod called to revoke the current session. Under the hood, it calls the
revokeSessionfrom
supertokens-website. Note that this method is asynchronous and you should wait for it to return before considering it was successful.
- Output:
200: if successful | https://supertokens.com/docs/auth-react/0.7.X/emailpassword/sign-out | CC-MAIN-2022-21 | en | refinedweb |
You can create a free account with us by accessing the signup page.
To create and run A/B tests, sign in to the VWO dashboard and then select Mobile App A/B on the menu. If you are using the VWO A/B testing feature for the first time, click Start Mobile App A/B Testing to begin.
To create A/B tests for mobile apps:
Add the mobile app to be tested.
Define the variables you want to test.
Create A/B tests
Create an App
Registering your app on VWO is a one-time process.
Adding your app generates an
Api Key, which is used by VWO servers to recognize your app.
Select the Mobile App A/B option under the test menu.
Click Create, and then click Add App. Write a name for your app, and in the Platform option, select Android.
Note the api key generated by the system.
Add the mobile app to be tested
To add a new app for A/B testing, go to the Apps section on the page.
On the right side of the screen, click Create App.
Type the name of the app you want to add, and then click Create.
As you add an app, VWO generates API Keys for both the iOS and Android platforms. You can make a note of the API Key under the Settings section and are used during app initialization.
Defining the Variables You Want To Test
Test variables are elements or parameters of your mobile app. After you define a variable, you can run an unlimited number of A/B tests on the variable, without doing any code changes or redeployment. For example, you can create a string-type variable for testing different text versions in the app screen.
Under the Apps tab, select the mobile app for which you want to create test variables.
To add an element for testing, under Variables section, click Create Variable.
Assign a name to the variable, and then select its data type.
Type Default Value (current value or a value if there is no A/B test).
To add the variable, click Create. You can add multiple variables to an app.
Creating A/B Tests for Mobile Apps
On the Mobile App A/B testing screen, go to the Campaigns tab, and then click Create.
Choose the App you want to test. All mobile apps you have added to VWO are listed here.
Select a platform where the app is running.
Enter a unique identifier in the Define a test key field to filter your tests easily. The test key helps you execute custom logic, as explained in this iOS Code Blocks/Android Code Blocks section.
Select Next and click Add Variable. All the variables you have created for the test are displayed here. You can choose to Create Variable by adding a new test variable.
Select the variable you want to test, and then enter the variation values. You can test multiple variables in one test. In the example above, we have added speed variable, defined value as 20 for the variation. For control, the value is 10, which is the default value for the variable.
Based on the test key and variation names, VWO generates the code snippet that you can use in the mobile app.
To continue, click Next
Define Goals
In the next step, define at least one goal. The Goal is a conversion matrix that you may want to optimize.
To edit the goal name, click the corresponding Goal icon. In the box below the Goal* icon, select the drop-down menu to select an event that you want to track. Provide the relevant information in the Goal Identifier text box.
To define more goals, select Add Another Goal or select Next.
conversionGoal
Finalize
In the Finalize step, we need to specify the campaign name. Next, we can set the percentage of users that we want to include in the campaign.
Under Integrate With Third-Party Products, select the box corresponding to the product name with which you want to integrate the app.
Under Advanced Options, you can also target the campaign for specific user types, enable scheduling, customize traffic allocation for each variation, or make the user part of the campaign on app launch.
For quick setup, we can leave those settings to default.
Click Finish.
On the next screen, click Start Now to run the campaign.
Installing Library
Library can be installed through
npm
$ npm install --save vwo-react-native
iOS
- Add
pod 'VWO'Podfile file present in ios directory.
cd ios && pod install
- Open Xcode workspace, drag all the files from
node_modules/vwo-react-native/iOSto your project.
Android
- Link the vwo-react-native library.
$ react-native link vwo-react-native
- Add this to your
android/build.gradlefile.
allprojects { repositories { ... mavenCentral() ... } }
Manual installation
- Open
android/app/src/main/java/[...]/MainActivity.java
- Add
import com.vwo.VWOReactNativePackage;to the imports at the top of the file.
- Add
new VWOReactNativePackage()to the list returned by the
getPackages()method.
- Append the following lines to
android/settings.gradle:
include ':vwo-react-native' project(':vwo-react-native').projectDir = new File(rootProject.projectDir, '../node_modules/vwo-react-native/android')
- Insert the following lines inside the dependencies block in
android/app/build.gradle:
compile project(':vwo-react-native')
- Add this in your
android/build.gradlefile
allprojects { repositories { ... mavenCentral() ... } }
Enable the preview mode
Preview is by default enabled for debug build. To enable the preview mode in the release build. Shake your device 3-4 times.
Disable the preview mode
Preview mode can be disabled by setting the
disablePreview flag to true in your
config object.
var config = { disablePreview: true }
Code changes
Throughout our SDK in all callbacks, we use Node's convention to make the first parameter an error object (usually null when there is no error) and the rest are the results of the function.
1. Initialising the SDK
After installing the library, you would want to initialize it.
Import the Library as follows:
import VWO from 'vwo-react-native';
Library can be initialized in the following ways:
I. Launching VWO SDK
var config = { optOut: false, disablePreview: false, customVariables: {}} VWO.launch('YOUR_API_KEY', config).then(() => { console.log("Launch success " + key); });
Launch configuration
You can pass a
config object during the launch of the VWO SDK.
Config is a javascript object which can have following keys:
optOut: it can have a boolean value which tells the VWO SDK whether to initialize the SDK or not. It defaults to false.
disablePreview: Boolean value to turn on or off the preview mode. It defaults to false.
customVariables: Takes in a javascript object as its value. Check Targeting Visitor Groups / Targeting Visitor Groups for more details. It defaults to an empty object.
customDimensionKey: String value which is the unique key associated with a particular custom dimension made in the VWO application. Check Push Custom Dimension for more details. It defaults to an empty String.
customDimensionValue: String value which is the value you want to tag a custom dimension with. Check Push Custom Dimension for more details. It defaults to an empty String.
If you do not wish to pass any
config object, you can pass a
null.
var config = { optOut: false, disablePreview: true, customVariables: { user_type: "free" }, customDimensionKey: "CUSTOM_DIMENSION_KEY", customDimensionValue: "CUSTOM_DIMENSION_VALUE" }
You can set this config object as follows:
VWO.launch('YOUR_API_KEY', config).then(() => { console.log("Launch success " + key); });
2. Using campaign
To use the variation defined during campaign creation, use an of the following function to get the value for the campaign keys.
VWO.objectForKey(key, DEFAULT_VALUE).then((result) => { // Your Code here }); VWO.intForKey("key", 1).then((result) => { // Your code here }); VWO.stringForKey("key", "default_value").then((result) => { // Your code here }); VWO.floatForKey("key", 0.0).then((result) => { // Your code here }); VWO.boolForKey("key", false).then((result) => { // Your code here });
When these methods are invoked, the SDK checks if the targeting conditions hold true for the current user.
If targeting/segmentation conditions hold true, the user is made part of the campaign and visitor counts in the report section increments by one (once per user).
Test can also be created without variables. Campaign test key can be used to fetch the variation name. This variation name can be used to execute custom logic.
VWO.variationNameForTestKey("campaign_key").then((variationName) => { if (variationName == "Control") { // Control code } else if (variationName == "Variation-1") { // Variation 1 code } else { // Default code } });
NOTE
Can be called only after SDK initialisation. Otherwise, a null value is returned.
3. Triggering goals
We would track the effect of this campaign on our conversion metric.
Earlier we defined
conversionGoal as a goal.
We need to tell the VWO SDK when this conversion happens. Use the following code to trigger this goal.
var goal = "conversionGoal"; VWO.trackConversion(goal);
For triggering revenue goal use method
trackConversionWithValue(goal, value).
var goal = "conversionGoal"; VWO.trackConversionWithValue(goal, 133.25);
NOTE
Can be called only after SDK initialisation. Otherwise, the goal is not marked.
4. Push Custom Dimension
Pushes a custom dimension for a particular user to the VWO server. It is used for post-segmenting the data in the campaign reports.
Read here on how to create custom dimension in VWO
The API method accepts a custom dimension key - customDimensionKey and custom dimension value - customDimensionValue.
customDimensionKey is the unique key associated with a particular custom dimension made in the VWO application.
customDimensionValue is the value you want to tag a custom dimension with.
VWO.pushCustomDimension("CUSTOM_DIMENSION_KEY", "CUSTOM_DIMENSION_VALUE");
Logging
To enable logging in SDK, use
VWO.setLogLevel(level).
You can set different log levels depending upon the priority of logging as follows:
- logLevelDebug: Gives detailed logs.
- logLevelInfo: Informational logs
- logLevelWarning: Warning is a message level indicating a potential problem.
- logLevelError: Indicates Error
- logLevelOff: No logs are printed
The different methods set the log level of the message. VWO will only print messages with a log level that is greater to or equal to it's current log level setting. So a logger with a level of Warning will only output log messages with a level of Warning, or Error.
VWO.setLogLevel(VWO.logLevelDebug);
See Android Logging for verbose logging in Android SDK.
Opt out
To opt out of tracking by VWO, use
config object to set OptOut to true or false. This
config object is passed when
VWO.launch function is called.
var config = {optOut: false} VWO.launch('YOUR_API_KEY', config).then(() => { console.log("Launch success " + key); });
Reports
From the Mobile App A/B menu, select your campaign and click Detailed Reports to see reports of your campaign.
Source Code
VWO React-Native Library code is available on GitHub:
Next Steps
As a next step, take a look at:
Detailed iOS documentation: SDK Reference
Detailed Android documentation: SDK Reference
We would look forward to hear from you about any question or feedback at [email protected]. | https://developers.vwo.com/reference/react-native-guide | CC-MAIN-2022-21 | en | refinedweb |
Description
This guide gives you an introduction to developing a JavaScript SDK on desktop and mobile web in different platforms and browsers (<99.99% I might skip some browsers), for those developed for non-browser supports (hardware, embedded, node/io js) are excluded in this document and will be considered in the future.
Since I didn't find out a better documentation for the JavaScript SDK, I'm here to gather and note down the knowledge of my personal experiences. This document has been written for months, there is a change we should know, JavaScript-SDK-Design is not just about SDK only, it's the connection between human and browser machine. The more native we write, the more we think, we do care about the performances and differences between platforms and browsers.
Feel free to edit or you can drop me suggestions on the issue list.
READ IT ONLINE / PDF OR Tweet It
javascript-sdk-design alternatives and similar libraries
Based on the "SDK" category.
Alternatively, view javascript-sdk-design alternatives based on common mentions on social networks and blogs.
Spotify SDK1.8 0.0 L4 javascript-sdk-design VS Spotify SDKSpotify SDK | Entity and Collection oriented | Browser and Node support!
* Code Quality Rankings and insights are calculated and provided by Lumnify.
They vary from L1 to L5 with "L5" being the highest.
Do you think we are missing an alternative of javascript-sdk-design or a related project?
Popular Comparisons
README
JavaScript SDK Design Guide
Introduction
This guide provides an introduction to develop a JavaScript SDK.
The best one sentence to describe an SDK is: "The SDK is the connection bridging the gap between users and the (browser) machine."
By using this guide, the SDK will be able to run in browsers, desktop, mobile web and various other platforms capable of running JavaScript.
The target audience of this writeup excludes non-browser environments such as hardware, embedded and Node.js.
Suggest improvements by editing, or drop suggestions on the issue list. I owe you a beer :beers:
READ IT ONLINE
<!----> <!----> <!--Click to Tweet--> <!---->
Content
- What is an SDK
- Design Philosophy
- Scope
- Include the SDK
- SDK Versioning
- Changelog Document
- Namespace
- Storage Mechanism
- Event
- Request
- Component of URI
- Debugging
- Tips and Tricks
- Piggyback
- Page Visibility API
- Document Referrer
- Console Logs Polyfill
- EncodeURI or EncodeURIComponent
- YOU MIGHT NOT NEED JQUERY
- You Don't Need jQuery
- Load Script with Callback
- Once Function
- Pixel Ratio Density
- Get Style Value
- Check if Element in Viewport
- Check if Element is Visible
- Get Viewport Size
- User Tracking
- Opt Out
- WTF
- Template
- Book to Read
- Contributors
What is an SDK
This question is pretty ubiquitous, but here it is again.
"Short for software development kit, a programming package that enables a programmer to develop applications for a specific platform. Typically an SDK includes one or more APIs, programming tools, and documentation." - webopedia
Design Philosophy
Depending on the purpose of SDK's service and usage — common shared traits are, but not limited to be native, short, fast, clean, readable and testable.
The widely adopted good practice, is to write SDK with vanilla JavaScript. Languages compiling to JavaScript such as LiveScript, CoffeeScript, TypeScript and others are not recommended.
It is also recommended not to use libraries such as jQuery in SDK development. The exception is of course when it is really important. There are also other jQuery-like libraries, zepto.js etc to choose from, for the DOM manipulation purposes.
In event of HTTP ajax request requirements — there are native equivalent such as
window.fetch. It is light-weight, supported in ever growing platforms.
Backward compatibility is paramount. Every new SDK version released should be enabled with support of previous older versions. Likewise, current version should be designed to support future SDK versions. This is referred to as Forward compatibility.
Moreover, a good Documentation, well commented code, a healthy unit test coverage, as well as end-to-end (user) scenario are key to the success of SDK.
Scope
Based on the book Third-Party JavaScript
Three use cases worth considering while designing a JavaScript SDK:
- [Embedded widgets](./SCOPE.md#embedded-widgets) - Small interactive applications embedded on the publisher's web page (Disqus, Google Maps, Facebook Widget)
- [Analytics and metrics](./SCOPE.md#analytics-and-metrics) - For gathering intelligence about visitors and how they interact with the publisher's website (GA, Flurry, Mixpanel)
- [Web service API wrappers](./SCOPE.md#web-service-api-wrappers) - For developing client-side applications that communicate with external web services. (Facebook Graph API)
Suggest a case in which the use of an SDK in JavaScript environment is deemed important.
Include the SDK
To include the SDK in a user-facing environment, It is a good practice to use Asynchronous Syntax to load the scripts.
This helps to optimize the user experience on the website that are using the SDK. This approach reduces chances of the SDK library interfering with the hosting website.
Asynchronous Syntax
<script> (function () { var s = document.createElement('script'); s.type = 'text/javascript'; s.async = true; s.src = ' var x = document.getElementsByTagName('script')[0]; x.parentNode.insertBefore(s, x); })(); </script>
The
async syntax is used when targetting modern browsers.
<script async src="
Traditional Syntax
<script type="text/javascript" src="
Comparison
Here's the simple graph to show the difference between Asynchronous and Traditional Syntax.
Asynchronous:
|----A-----| |-----B-----------| |-------C------|
Synchronous:
|----A-----||-----B-----------||-------C------|
Asynchronous and deferred JavaScript execution explained
It is good practice to avoid, or minimize, the use of blocking JavaScript, especially external scripts that must be fetched before they can be executed. Scripts that are necessary to render page content can be inlined to avoid extra network requests, however the inlined content needs to be small and must execute quickly (non-blocking fashion) to deliver good performance. Scripts that are not critical to initial render should be made asynchronous or deferred until after the first render.
Problem of Asynchronous
When using an Asynchronous approach, It is ill-advised to execute SDK initialization functions before all libraries are loaded, parsed and executed in the hosting page.
Consider the following snippet as a visual example to the previous statement:
<script> (function () { var s = document.createElement('script'); s.type = 'text/javascript'; s.async = true; s.src = ' var x = document.getElementsByTagName('script')[0]; x.parentNode.insertBefore(s, x); })(); // execute your script immediately here SDKName('some arguments'); </script>
The end result of such initialization will lead to bugs.
The
SDKName() function, undefined at this point, executes before it becomes available in the environment's global variable. The script is not loaded yet.
To make it work, some tricks are necessary to make sure the script executes successfully. The event will (need to) be stored in the
SDKName.q queue array. The SDK should be able to handle and execute the
SDKName.q event and initialize the
SDKName namespace.
The following snippet depicts the statement in previous paragraph.
<script> (function () { // add a queue event here SDKName = SDKName || function () { (SDKName.q = SDKName.q || []).push(arguments); }; var s = document.createElement('script'); s.type = 'text/javascript'; s.async = true; s.src = ' var x = document.getElementsByTagName('script')[0]; x.parentNode.insertBefore(s, x); })(); // execute your script immediately here SDKName('some arguments'); </script>
Or using
[].push
<script> (function () { // add a queue event here SDKName = window.SDKName || (window.SDKName = []); var s = document.createElement('script'); s.type = 'text/javascript'; s.async = true; s.src = ' var x = document.getElementsByTagName('script')[0]; x.parentNode.insertBefore(s, x); })(); // execute your script immediately here SDKName.push(['some arguments']); </script>
Others
There are other different ways to include a script
Import in ES2015
import "your-sdk";
Modular include a Script
There is full source code — and this awesome tutorial "Loading JavaScript Modules" may help for in-depth understanding of concepts discussed above.
module('sdk.js',['sdk-track.js', 'sdk-beacon.js'],function(track, beacon) { // sdk definitions, split into local and global/exported definitions // local definitions // exports }); // you should contain this "module" method (function () { var modules = {}; // private record of module data // modules are functions with additional information function module(name,imports,mod) { // record module information window.console.log('found module '+name); modules[name] = {name:name, imports: imports, mod: mod}; // trigger loading of import dependencies for (var imp in imports) loadModule(imports[imp]); // check whether this was the last module to be loaded // in a given dependency group loadedModule(name); } // function loadModule // function loadedModule window.module = module; })();
SDK Versioning
It is not a good practice to use one of the following versioning styles:
brand-v<timestamp>.js
brand-v<datetime>.js
brand-v1-v2.js,
The reason is that it becomes confusing to track the lastest version. Therefore, previous styling does not help developers who use the SDK.
It is however a good practice to use Semantic Versioning, also known as SemVer, when versioning SDKs.
It has three main parts, each corresponding to importance of a release: "MAJOR.MINOR.PATCH".
Version in
v1.0.0
v1.5.0
v2.0.0 is easier to trace and track in changelog documentation, for instance.
Depending on service design, some of the ways SDK can be distributed (or tracked) by version are the following:
- Using Query String path —
- Using the Folder Naming —
- Using hostname (subdomain) —
Depending on Use Case, there are other environment dependent forms that are commonly advised to use:
- In
stableversion
- In
unstableversion
- In
alphaversion
- In
latestversion
- In
experimentalversion
Reading suggestion: Why use SemVer? on
npmblog.
Changelog Document
It's hard to notice when an SDK has updates (or is upgraded) when no announcement has been issued. It's good practice to write a Changelog to document major, minor and even bug-fix changes. Tracking changes in SDK APIs deliver good developer experience. - Keep a Changelog (Github Repo)
Each version should have:
[Added] for new features. [Changed] for changes in existing functionality. [Deprecated] for soon-to-be removed features. [Removed] for now removed features. [Fixed] for any bug fixes. [Security] in case of vulnerabilities.
In addition, commit-message-emoji uses an emoji to explain the commit's changes itself. Find the best suitable format or changelog generator tool for your project.
Namespace
To avoid collision with other libraries, it is better to define no more than one global SDK namespace. The naming should also avoid using the commonly used words and catch-phrases as namespaces.
As a quick example, SDK playground can well use
(function () { ... })() or ES6 Blocks
{ ... }to wrap all sources.
This is an increasingly common practice found in many popular JavaScript libraries such as (jQuery, Node.js, etc.). This technique creates a closure around the entire contents of the file which, perhaps most importantly, creates a private namespace and thereby helps avoid potential name clashes between different JavaScript modules and libraries. #
To avoid namespace collision
From Google Analytics, define the namespace by changing the value');
From OpenX experience, support a parameter to request the namespace.
<script src="
Storage Mechanism
Cookie
The domain scope of using cookies is quite complex while involving the
subdomain and
path.
For
path=/,
there is a cookie part
first=value1 in domain
and another cookie
second=value2 in domain
There is a cookie
first=value1 in domain
cookie
second=value2 in domain path
and cookie
third=value3 in domain
Check Cookie Writable
Given a domain (Default as current hostname), check whether the cookie is writable.
var checkCookieWritable = function(domain) { try { // Create cookie document.cookie = 'cookietest=1' + (domain ? '; domain=' + domain : ''); var ret = document.cookie.indexOf('cookietest=') != -1; // Delete cookie document.cookie = 'cookietest=1; expires=Thu, 01-Jan-1970 00:00:01 GMT' + (domain ? '; domain=' + domain : ''); return ret; } catch (e) { return false; } };
Check Third-Party Cookie Writable
It's impossible to check only using client-side JavaScript, but a server can help to achieve just that.
Write/Read/Remove Cookie Code
Code snippet for write/read/remove cookie script.
var cookie = { write: function(name, value, days, domain, path) { var date = new Date(); days = days || 730; // two years path = path || '/'; date.setTime(date.getTime() + (days * 24 * 60 * 60 * 1000)); var expires = '; expires=' + date.toGMTString(); var cookieValue = name + '=' + value + expires + '; path=' + path; if (domain) { cookieValue += '; domain=' + domain; } document.cookie = cookieValue; }, read: function(name) { var allCookie = '' + document.cookie; var index = allCookie.indexOf(name); if (name === undefined || name === '' || index === -1) return ''; var ind1 = allCookie.indexOf(';', index); if (ind1 == -1) ind1 = allCookie.length; return unescape(allCookie.substring(index + name.length + 1, ind1)); }, remove: function(name) { if (this.read(name)) { this.write(name, '', -1, '/'); } } };
Session
It's important to know that in JavaScript it is not possible to write a Session. That is the server responsibility. The server-side team should implement Session management related Use Cases.
A page session lasts for as long as the browser is open and survives over page reloads and restores. Opening a page in a new tab or window will cause a new session to be initiated.
LocalStorage
Stores data with no expiration date, storage limit is far larger (at least 5MB) and information is never transferred to the server.
It's good to know that each localStorage from
http and
in the same domain aren't shared.
Creating an iframe inside a website and using
postMessage to pass the value to others.
Check LocalStorage Writable
window.localStorage is not supported by all browsers, the SDK should check if it's available before using it.
var testCanLocalStorage = function() { var mod = 'modernizr'; try { localStorage.setItem(mod, mod); localStorage.removeItem(mod); return true; } catch (e) { return false; } };
Session Storage
Stores data for one session (data is lost when the tab is closed).
Check SessionStorage Writable
var checkCanSessionStorage = function() { var mod = 'modernizr'; try { sessionStorage.setItem(mod, mod); sessionStorage.removeItem(mod); return true; } catch (e) { return false; } }
Event
In client browser, there are events
load
unload
on
off
bind .... Here's some polyfills for you to handle all different platforms.
Document Ready
Please do make sure that the entire page is finished loading (ready) before starting execution of the SDK functions.
// handle IE8+ function ready (fn) { if (document.readyState != 'loading') { fn(); } else if (window.addEventListener) { // window.addEventListener('load', fn); window.addEventListener('DOMContentLoaded', fn); } else { window.attachEvent('onreadystatechange', function() { if (document.readyState != 'loading') fn(); }); } }
DOMContentLoaded - fired when the document has been completely loaded and parsed, without waiting for stylesheets, images, and subframes to finish loading
load event can be used to detect a fully-loaded page
Information from JS Tip -
element-ready from sindresorhus
Message Event
It's about the cross-origin communication between iframe and window, read the API documentation.
// in the iframe parent.postMessage("Hello"); // string // ========================================== // in the iframe's parent //) { // e.origin , check the message origin console.log('parent received message!: ',e.data); },false);
The Post message data should be String, for more advanced use in JSON, use JSON String. Although the modern browsers do support Structured Clone Algorithm on the parameter, not all browsers do.
Orientation Change
Detect device orientation change
window.addEventListener('orientationchange', fn);
Get Orientation Rotate Degree
window.orientation; // => 90, -90, 0
Screen portrait-primary, portrait-secondary, landscape-primary, landscape-secondary (Experimental)
// var orientation = screen.orientation || screen.mozOrientation || screen.msOrientation;
Disable Scroll
In web page, use CSS style
overflow: hidden, in some mobile webs, this CSS doesn't work, use JavaScript event.
document.addEventListener('touchstart', function(e){ e.preventDefault(); }); // or document.body.addEventListener('touchstart', function(e){ e.preventDefault(); }); // use move if you need some touch event document.addEventListener('touchmove', function(e){ e.preventDefault(); }); // target modern browser document.addEventListener('touchmove', function(e){ e.preventDefault(); }, { passive: false });
Request
The communication between our SDK and Server is using Ajax Request. Most common use cases leverage jQuery's ajax http request to communicate with the Server. The good news is that there is an even better solution to achieve that.
Image Beacon
Using the Image Beacon to ask the browser to perform a GET method
request to get an Image.
Ones should always remember to add timestamp (Cache Buster) to prevent caching in browser.
(new Image()).src = '
Some notice for GET Query String, there is the limit of length which is 2048 (Basically it depends on different browsers and server). The following trick helps to handle the case of exceeded length limit.
if (length > 2048) { // do Multiple Post (form) } else { // do Image Beacon }
There are well-known problems using
encodeURI or
encodeURIComponent. However, it is better to understand how these two approaches work. Reading details below.
For the image load success/error callback
var img = new Image(); img.src = ' img.onload = successCallback; img.onerror = errorCallback;
Single Post
it is possible to use the native form element POST method to send a key value.
var form = document.createElement('form'); var input = document.createElement('input'); form.style.display = 'none'; form.setAttribute('method', 'POST'); form.setAttribute('action', ' input.name = 'username'; input.value = 'attacker'; form.appendChild(input); document.getElementsByTagName('body')[0].appendChild(form); form.submit();
Multiple Posts
The Service is often complex, especially when needing to send more data through a POST method.
function requestWithoutAjax( url, params, method ){ params = params || {}; method = method || "post"; // function to remove the iframe var removeIframe = function( iframe ){ iframe.parentElement.removeChild(iframe); }; // make a iframe... var iframe = document.createElement('iframe'); iframe.style.display = 'none'; iframe.onload = function(){ var iframeDoc = this.contentWindow.document; // Make a invisible form var form = iframeDoc.createElement('form'); form.method = method; form.action = url; iframeDoc.body.appendChild(form); // pass the parameters for( var name in params ){ var input = iframeDoc.createElement('input'); input.type = 'hidden'; input.name = name; input.value = params[name]; form.appendChild(input); } form.submit(); // remove the iframe setTimeout( function(){ removeIframe(iframe); }, 500); }; document.body.appendChild(iframe); }
requestWithoutAjax('url/to', { id: 2, price: 2.5, lastname: 'Gamez'});
Iframe
Iframe embedded in html can always be used to cover the use case of generating content within the page.
var iframe = document.createElement('iframe'); var body = document.getElementsByTagName('body')[0]; iframe.style.display = 'none'; iframe.src = ' iframe.onreadystatechange = function () { if (iframe.readyState !== 'complete') { return; } }; iframe.onload = loadCallback; body.appendChild(iframe);
Remove extra margin from INSIDE an iframe
<iframe src="..." marginwidth="0" marginheight="0" hspace="0" vspace="0" frameborder="0" scrolling="no"></iframe>
Putting html content into an iframe
<iframe id="iframe"></iframe> <script> var html_string= "content <script>alert(location.href);</script>"; document.getElementById('iframe').src = "data:text/html;charset=utf-8," + escape(html_string); // alert data:text/html;charset=utf-8..... // access cookie get ERROR var doc = document.getElementById('iframe').contentWindow.document; doc.open(); doc.write('<body>Test<script>alert(location.href);</script></body>'); doc.close(); // alert "top window url" var iframe = document.createElement('iframe'); iframe.src = 'javascript:;\'' + encodeURI('<html><body><script>alert(location.href);</body></html>') + '\''; // iframe.src = 'javascript:;"' + encodeURI((html_tag).replace(/\"/g, '\\\"')) + '"'; document.body.appendChild(iframe); // alert "about:blank" </script>
Script jsonp
This is the case where your server needs to send a JavaScript
response and let the client browser execute it.
Just include the JS script link.
(function () { var s = document.createElement('script'); s.type = 'text/javascript'; s.async = true; s.src = '/yourscript?some=parameter&callback=jsonpCallback'; var x = document.getElementsByTagName('script')[0]; x.parentNode.insertBefore(s, x); })();
To learn more about jsonp
- JSONP only works in GET HTTP requests.
- JSONP lacks error handling, means you cannot detect cases in response status code 404, 500 and so on.
- JSONP requests are always asynchronous.
- Beware of CSRF attack.
- Cross domain communication. Script response side (server-side) don't need to care about CORS.
Navigator.sendBeacon()
Look at the documentation.
This method.
Send POST beacon through the API. It's cool.
navigator.sendBeacon("/log", analyticsData);
XMLHttpRequest
Writing XMLHttpRequest is not a good idea. I assume that you don't want to waste time on battling with IE or other browsers. Here are some polyfills or code you can try:
- window.fetch - A window.fetch JavaScript polyfill. (check also ky)
- got - Simplified HTTP/HTTPS requests
- microjs - list of ajax lib
- more
Fragment Identifier
Also known as hash mark
#. Remember that requests with hash mark at the end are not passed within http requests.
For example, you are in the page
// Sending a request with a parameter url which contains current url (new Image()).src = ' // actual request will be without # (new Image()).src = ' // Solution, encodeURIComponent(url): (new Image()).src = ' + encodeURIComponent('
Maximum Number of Connections
Check the maximum number of the browser's request connections. browserscope
Component of URI
It's important to know if the SDK needs to parse the location url.
authority __________|_________ / \ userinfo host resource __|___ ___|___ __________|___________ / \ / \ / \ username password hostname port path & segment query fragment __|___ __|__ ______|______ | __________|_________ ____|____ | / \ / \ / \ / \ / \ / \ / \ foo://username:[email protected]:123/hello/world/there.html?name=ferret#foo \_/ \ / \ \ / \__________/ \ \__/ | | \ | | \ | scheme subdomain \ tld directory \ suffix \____/ \___/ | | domain filename
Parsing URI
Here's a simple way using the native URL() Interface but it's not supported by all browsers. It is also not a standard yet.
var parser = new URL(' parser.hostname; // => "github.com"
The DOM 's
createElement('a') can be used in browsers that don't have the
URL() Interface yet.
var parser = document.createElement('a'); parser.href = " parser.hostname; // => "github.com"
Debugging
Simulating Multiple Domains
To simulate multiple domains, there is no need to register different domain names. Editing operating system's hosts file can do the trick.
$ sudo vim /etc/hosts
Add the following entries
# refer to localhost 127.0.0.1 publisher.net 127.0.0.1 sdk.net
Every website URL becomes accessible via
and
Developer Tools
Browsers come with debugging tools specific to every vendor. Obviously, these tools can be used to debug SDK JavaScript code -
Chrome Developer Tools
Safari Developer Tools
Firebug. Developer tools also shortened as DevTools.
The DevTools provide web developers deep access into the internals of the browser and their web application. Use the DevTools to efficiently track down layout issues, set JavaScript breakpoints, and get insights for code optimization.. ---
Console Logs
For testing expected output text and other general debugging,
Console Logs can be used through the browser API
console.log(). There are various typeways to format and output messages. There is more on this discussed at this link: Console API.
Debugging Proxy
Debugging proxy gives us a hand on testing SDK in development. Some of the areas covered are:
- Debugging traffic
- modify cookies
- Inspecting headers
- Verifying the cache
- Editing http request/response
- SSL Proxying
- Debugging Ajax and more.
Here's some software you can try
BrowserSync
BrowserSync makes it easy to tweak and test faster by synchronizing file changes and interactions across multiple devices. It’s wicked-fast and totally free.
It really helps a lot to test the SDK across mutliple devices. Totally worth a try =)
Debugging Node.js Apps
To debug SDK scripts in Chrome Developer Tools. (Node.js v6.3.0+ required)
$ node --inspect-brk [script.js]
Tips and Tricks
Piggyback
Sometimes, including all the SDK source code is not required in some use cases. That is the case of a simple 1x1 pixel request -- For example: make a request when someone lands on thank you (last) page. In such a scenario, the developer may include an image file with a the (url) link, as explained in the following snippet.
<img height="1" width="1" alt="" style="display:none" src=" />
Page Visibility API
Sometimes, the SDK wants to detect if a user has a particular page in focus. These polyfills visibly.js and visibilityjs may help achieve just that.
Document Referrer
The
document.referrer can be used to get the url of current or previous page.
It is however advised to remember that this referrer is "Browser Referrer" not the "Human Known Referrer".
The case where a user clicks the browser back button, for example pageA -> pageB -> pageC -> (back button) pageB, current pageB's referrer is pageA, not pageC.
Console Logs Polyfill
The following is not a special polyfill. It just makes sure that calling
console.log API doesn't throw error event to client-side.
if (typeof console === "undefined") { var f = function() {}; console = { log: f, debug: f, error: f, info: f }; }
EncodeURI or EncodeURIComponent
Understand the difference between
escape()
encodeURI()
encodeURIComponent() here.
It's worth mentioning that using
encodeURI() and
encodeURIComponent() has exactly 11 characters different.
These characters are: # $ & + , / : ; = ? @ more discussion.
YOU MIGHT NOT NEED JQUERY
As the title said, you might not need jquery. It's really useful if you are looking for some utilities code - AJAX EFFECTS, ELEMENTS, EVENTS, UTILS
You Don't Need jQuery
Free yourself from the chains of jQuery by embracing and understanding the modern Web API and discovering various directed libraries to help you fill in the gaps.
Useful Tips
Load Script with Callback
It's similar to asynchrnous script loading with additional callback event
function loadScript(url, callback) { var script = document.createElement('script'); script.async = true; script.src = url; var entry = document.getElementsByTagName('script')[0]; entry.parentNode.insertBefore(script, entry); script.onload = script.onreadystatechange = function () { var rdyState = script.readyState; if (!rdyState || /complete|loaded/.test(script.readyState)) { callback(); // detach the event handler to avoid memory leaks in IE ( script.onload = null; script.onreadystatechange = null; } }; }
Once Function
Implementation of the function
once
Quite often, there are functions that are needed only to run once. Oftentimes these functions are in the form of event listeners which may be difficult to manage. Of course if they were easy to manage, it is advised to just remove the listeners. The following is the JavaScript function to make that possible!
// Copy from DWB // function once(fn, context) { var result; return function() { if(fn) { result = fn.apply(context || this, arguments); fn = null; } return result; }; } // Usage var canOnlyFireOnce = once(function() { console.log('Fired!'); }); canOnlyFireOnce(); // "Fired!" canOnlyFireOnce(); // nada. nothing.
Pixel Ratio Density
To better understand terms such as pixel, ratio, density, dimension are while developing mobile web -- the following links can provide more insights:
Get Style Value
Get inline-style value
<span id="black" style="color: black"> This is black color span </span> <script> document.getElementById('black').style.color; // => black </script>
Get Real style value
<style> #black { color: red !important; } </style> <span id="black" style="color: black"> This is black color span </span> <script> document.getElementById('black').style.color; // => black // real var black = document.getElementById('black'); window.getComputedStyle(black, null).getPropertyValue('color'); // => rgb(255, 0, 0) </script>
ref:
Check if Element in Viewport() */ ); }
Check if Element is Visible
var isVisible = function(b) { var a = window.getComputedStyle(b); return 0 === a.getPropertyValue("opacity") || "none" === a.getPropertyValue("display") || "hidden" === a.getPropertyValue("visibility") || 0 === parseInt(b.style.opacity, 10) || "none" === b.style.display || "hidden" === b.style.visibility ? false : true; } var element = document.getElementById('box'); isVisible(element); // => false or true
Get Viewport Size
var getViewportSize = function() { try { var doc = top.document.documentElement , g = (e = top.document.body) && top.document.clientWidth && top.document.clientHeight; } catch (e) { var doc = document.documentElement , g = (e = document.body) && document.clientWidth && document.clientHeight; } var vp = []; doc && doc.clientWidth && doc.clientHeight && ("CSS1Compat" === document.compatMode || !g) ? vp = [doc.clientWidth, doc.clientHeight] : g && (vp = [doc.clientWidth, doc.clientHeight]); return vp; } // return as array [viewport_width, viewport_height]
User Tracking
Assuming that an Evil Advertisement Company wants to track a user, Evil may well generate a personalized unique hash by using fingerprinting. However, Ethical Company uses cookies and offers Opt-out solution.
Opt Out
DIGITAL ADVERTISING ALLIANCE, POWERED BY YOURADCHOICES provides a tool that helps anyone to opt-out from all the participating companies.
WTF
Misspelling Of Referrer
Fun fact about why the HTTP Request Header having the field name
referer not
referrer
According to the Wikipedia
The
misspelling of referreroriginated in the original proposal by computer scientist
Phillip Hallam-Bakerto incorporate the field into the HTTP specification. The misspelling was set in stone by the time of its incorporation into the
Request for Commentsstandards document
RFC 1945; document co-author
Roy Fieldinghas remarked that neither "referrer" nor the misspelling "referer" were recognized by the standard
Unix spell checkerof the period. "Referer" has since become a widely used spelling in the industry when discussing HTTP referrers; usage of the misspelling is not universal, though, as the correct spelling "referrer" is used in some web specifications such as the
Document Object Model.
CSS Flexible Box Layout Module
Be sure to double-check the flexbox functionality in a different browser, especially the partial support in IE10/11.
Template
This guide provides templates and boilerplates to building an SDK.
- [TEMPLATE.md](./Template/README.md)
Books/Nice to Reads
(inspired by
Contributors ✨
Thanks goes to these wonderful people (emoji key):
<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section --> <!-- prettier-ignore --> Huei Tan📖 Pascal Maniraho📖 Adam🖋
<!-- ALL-CONTRIBUTORS-LIST:END -->
This project follows the all-contributors specification. Contributions of any kind welcome! | https://js.libhunt.com/javascript-sdk-design-alternatives | CC-MAIN-2022-21 | en | refinedweb |
Today I want to share this information with you, in the hope that it will help you in your future projects.
The
wantsJson() method checks the Accept HTML header for the string application/json and returns true if it is set.
The
isJson() method checks that the HTML header CONTENT_TYPE contains the string /json and returns true if it is found.
Both methods are found in vendor/laravel/framework/src/Illuminate/Http/Request.php
use Illuminate\Http\Request;
If you want to check if response needs to be JSON
If you want to check if request is of JSON type
<?php namespace App\Http\Controllers; use Illuminate\Http\Request; class UserController extends Controller { public function index(Request $request) { dd($request->wantsJson()); // output:- false } }
Not all AJAX requests expect a JSON response, so utilizing request()->ajax() is useful where you want to determine if the request was an XmlHttpRequest or not, but the response doesn't care about JSON or not.
Not all requests that contain JSON expect a JSON response. so if you don't care about whether or not the response wants JSON back, but want to determine if JSON was sent in the request, then isJson() is useful for you.
Not all requests that want JSON responses are AJAX driven, so wantsJson is useful in the case where you want to return JSON data, but you don't care how the request came to your server.
I hope you enjoyed the code.
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/morcosgad/laravel-request-wantsjson-isjson-22c8 | CC-MAIN-2022-21 | en | refinedweb |
public class LiveListenerBus extends AsynchronousListenerBus<SparkListener,SparkListenerEvent> implements SparkListenerBus
Until start() is called, all posted events are only buffered. Only after this listener bus has started will events be actually propagated to all attached listeners. This listener bus is stopped when it receives a SparkListenerShutdown event, which is posted using stop().
listenerThreadIsAlive, post, start, stop, waitUntilEmpty
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
onPostEvent
addListener, listeners, postToAll
initializeIfNecessary, initializeLogging, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning
public LiveListenerBus()
public void onDropEvent(SparkListenerEvent event)
AsynchronousListenerBus
Note:
onDropEvent can be called in any thread.
onDropEventin class
AsynchronousListenerBus<SparkListener,SparkListenerEvent> | https://spark.apache.org/docs/1.3.1/api/java/org/apache/spark/scheduler/LiveListenerBus.html | CC-MAIN-2022-21 | en | refinedweb |
Re: [SWCollect] eBay Actually INCREASES Functionality!
Chris, would you mind sharing that script with us? :-) C.E. Forman wrote: I wonder if they now let you do more than 20 saved searches. I've stopped using that feature since I rigged up a script to do about 60 searches and e-mail me the results twice per week. B-)
Re: [SWCollect] eBay Actually INCREASES Functionality!
Yup, after changing providers I can now participate in the wonder that is PHP. C.E. Forman wrote: You got PHP? - Original Message - From: Marco Thorek [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Tuesday, July 23, 2002 5:49 PM Subject: Re: [SWCollect] eBay Actually INCREASES
[SWCollect] How to preserve them?
Hello fellow collectors, although I keep my games at fairly adequate environmental conditions (smokefree, no direct sunlight, etc.) I notice that they are working. Some manuals have a sweet smell to them, even on those games that I originally purchased and that were always kept under the
Re: [SWCollect] PHP Script for eBay Searches
Thanks Chris, much appreciated! Marco C.E. Forman schrieb: Since several of you have asked, here is the eBay searches script. I've added comment notes for how to customize it for your own use. If your ISP does not have PHP, you will not be able to use this, though it shouldn't be too
Re: [SWCollect] How to preserve them?
Thank you for the long answer, Alexander. As you apparently are from Germany, too, could you tell me where around here I can find suitable mylar covers? To be honest, I never before heard about mylar. Marco Alexander Zoller schrieb: In my opinion the acid problem, while certainly a
Re: [SWCollect] Our Mission
I once had this case where a young lady from England tried to sell a couple of Infocom games via ebay.com. I stumbled across them and lo and behold, the pictures she used were taken from my Infocom site. Now, I don't really have a problem with people using my scans, provided they a) ask me
Re: [SWCollect] The Motherlode
Hugh Falk schrieb: Not too long ago I did a deal for about 1500 pieces in bulk, and it was an incredible pain to go through, sort, catalog, clean, take the treasures, relist the rest in bulk, repackage and ship. I ended up with a couple hundred good titles for free, and sold the rest for
Re: [SWCollect] computergamecollector
Alexander Zoller schrieb: Registrant: Brad Lima 1521 Kensington Blvd Bluffton SC, SC 29910 US Can't say I ever heard of him before... I'd probably know his eBay ID if I see it. Site is employing a price guide for mint sealed games, this ought to create some controversy.
Re: [SWCollect] Greetings
Jim Leonard schrieb: The cutoff was about 6 months ago; I haven't seen anything large since that time. Around here, in Germany, it was 12-18 months ago. EA was the first to announce that cardboard boxes are outdated and subsequently published new games only in DVD cases. Others followed
Re: [SWCollect] Greetings
Edward Franks schrieb: I've always wondered what John Romero would pay for one (assuming he doesn't have one already). :) He's a big time Ultima and Apple ][ fan. I think he would be a great member of this list if he isn't already. A while ago I started to receive a good number of
Re: [SWCollect] Is nothing sacred?
C.E. Forman schrieb: You should report him to eBay, this has to be against at least one of their 6 billion rules. Indeed. I remember reading something about use of ebay member email adresses for unsolicited emails is prohibited, but can't be bothered right now to spend the next hours wading
Re: [SWCollect] Greetings
Edward Franks schrieb: On Wednesday, October 16, 2002, at 06:58 AM, C.E. Forman wrote: Sure, it'd be great to have someone in the industry who's also a serious collector. I'll second that. In my (limited) dealings with John I've always had a pleasant experience with him.
Re: [SWCollect] Greetings
Edward Franks schrieb: Thank you. Both my wife and I have been touched by everyone's support. It is a big boost to us to see all the support from our fellow collectors despite any past clashes. Helping other enthusiasts is what got both of us into collecting and trading the old
Re: [SWCollect] Two questions
Chris Newman schrieb: Wow, RBBS-PC, the good old days! I don't know if it would have any serious sale value, but it probably has sentimental value (which might translate) to many folks. This might lead to a good thread. What was your favorite BBS program? Was it the venerable PC-Board,
Re: [SWCollect] Multi-format floppy drive
Lee K. Seitz schrieb: I meant to mention this in my last message. Did everyone see the Slashdot story on the Catweasel ()? It's a controller that supports multiple disk formats using a single drive. It also lets you plug
Re: [SWCollect] Finds over vacation
Lee K. Seitz schrieb: Of note for Infocom collectors is the March 1984 issue of _Discover_ that I found. I thought this might be news, but I just now see someone's already put it on the web (). The article really doesn't reveal
Re: [SWCollect] Oldskool gaming boxes: the revenge
Stephen S. Lee schrieb: Hello, we've had discussion about machines for playing old games on a PC before. Before this involved desktops. What I'm interested in is a portable oldskool gaming machine. I have minimal experience using portable computers at all, and I think that what I want
Re: [SWCollect] Multi-format floppy drive
Jim Leonard schrieb: Actually, you can now read Amiga disks on a regular PC, thanks to the insane work of an insane friend: He, pretty neat! Marco -- This message was sent to you because
Re: [SWCollect] Greetings from The Origin Museum
Heah Joe, nice to have you on board :) Perhaps you remember me, Marco from infocom-if.org. You were one of the first people that ever visited my site. Marco Origin Museum schrieb: I was told about this board some time ago, but I finally got the time to join--YAY! I know most of the
Re: [SWCollect] Time for my introduction
Stefan Lindblom schrieb: Greetings everyone, After lurking in this little list for 2 months now I felt now it's the time come out of the closet ;) Sounds like an introduction to a Gamoholics Anonymous group. group goes hi stefan! ;-) Marco
RE: [SWCollect] Floppy Disk Numbers
Cool. Thanks John! Marco John Romero schrieb: Okay guys, If you want a signed copy of Wasteland, send it to: Coresoft Inc. attn: Alan Pavlish 23232 Peralta Dr. Suite 112 Laguna Hills, CA 92653 Tell 'em I sent ya! Oh, and MAKE SURE you include a stamped addressed box to send it
[SWCollect] Another huge auction
If any of you are interested: Only thing on there that would slightly interested me is the Commodore US 8 calculator. Perhaps he should have sold some of those items seperately. There is just too many loose disks on there to warrant
Re: [SWCollect] Seastalker folio/grey box
Hi Stephen, it's the same. Marco Stephen S. Lee schrieb: Hello ... I've got a quick question. Is the sticker in Seastalker the same between the folio and the standard-box versions, or are they different? Thanks! -- Stephen
Re: [SWCollect] Paypal scam Email ?
Probably the latter sigh Marco C.E. Forman schrieb: Check the link again, they're gone. Either someone got them shut down, or they grabbed the info they had and ran. -- This message was sent to you because you are
[SWCollect] Warez version of Masterpieces?
Check this out: Doesn't it look like it? AFAIK there is not one Activision CD that contains all games. Possibly he also used the HTML, scans and PDF docs from the various Infocom sites on the net. Marco
Re: [SWCollect] It Continues
LOL, I really don't wanna know where they found that fluff to stick in there. Do any of you think in the future it might be a worthwhile occupation to be a professional expert at antique software authentication? Look M'am, I'm telling you, that Don't Panic is not authentic, it's a Hongkong
Re: [SWCollect] Warez version of Masterpieces?
C.E. Forman schrieb: as well as a graphic of what the original box looked like Not to mention it has Zork: The Undiscovered Underground, which was only available as a free download from Activision's site, never included in any commercially released collection. I notice he only got one
[SWCollect] Merry Christmas!
Yes, it's the time of the year where I get all sentimental, so Merry Christmas and a Happy New Year to all of you! Also thanks for a good and informative time here on the SWCollect list! Marco -- This message was sent to you
Re: [SWCollect] Star Saga up and MobyScale question
C.E. Forman schrieb: BTW, Lee, I'd rate the no tea in your Hitchhiker's Guide auction as NM. If it was IM, that'd mean there's NO no tea, which would mean there IS tea in the package. B-) LOL, this is getting metaphysical ;-) Infidel
Re: [SWCollect] Heads Up
C.E. Forman schrieb: Hey gang, Wanted to warn everyone here. If you've noticed the Infocom T-shirts being sold on eBay by islandseven, don't bid on them. They're not originals, and the seller has been less than forthcoming about that fact. For that matter, don't buy anything from this
Re: [SWCollect] Some people push it...
C.E. Forman schrieb: Yeah, I believe the QFG Anthology was a Europe-only release, so it's harder to come by. (At least, I've only ever seen it in a European box - multiple languages in the back description manuals.) A couple of months ago I came by a toystore and just because I like going
Re: [SWCollect] King's Quest 1
Jim Leonard schrieb: I wouldn't call that 3D -- it's interactive fiction with graphics drawn in a 3D perspective. To contrast, the Quest games let you move something in front of or behind another on-screen object, so that qualifies more as 3D than Mystery House. I remember that back in
Re: [SWCollect] King's Quest 1
Jim Leonard schrieb: At MobyGames we go over this every so often; people keep wanting to somehow *define* the words adventure game to mean Sierra games (the Quest games, etc.) Well, I can imagine. I remember having vivid discussions over at comp.sys.ibm.pc.games.adventure over this
Re: [SWCollect] Heads Up
C.E. Forman schrieb: I can't find the auction, so would this be the A dirty mind is a terrible thing to waste shirt? Yeah, that's the one. Here's the auction, while eBay still has it: Indeed, that's it. But
Re: [SWCollect] King's Quest 1
Jim Leonard schrieb: It seems to me, the farther we move into the present, the harder it is to classify a game. Some genres have blurred beyond recognition. Trust me, I can classify them. :) Genres haven't blurred; people's minds have. Go ahead -- hit me with something difficult. Hm,
Re: [SWCollect] King's Quest 1
Edward Franks schrieb: The problem is that you can easily swap in role-playing games as a basic building block in place of Adventure. The same justifications work for either. The two are so close together (more than any of the other categories) that it is hard sometimes to see
Re: [SWCollect] King's Quest 1
Jim Leonard schrieb: The party aspect is indeed a strong element of RPGs. I neglected to say this in our RPG genre description, so I'll add it now. I won't limit RPGs to party-based games, but it should indeed be noted that *most* RPGs are party-based. See, discussion does bring about
Re: [SWCollect] King's Quest 1
Stuart Feldhamer schrieb: Jim, Your system is very interesting but I don't like it. Maybe according to YOUR definition of Adventure it encompasses all fantasy-style gaming, but this is not the commonly accepted definition of the genre. As I see it, adventures are games where the focus is
Re: [SWCollect] King's Quest 1
Chris Newman schrieb: Yes, buy it! I have two copies, trade paperback and HC. This ties in to a post I made a couple of months ago about everyone's top five books about the gaming industry and/or PC history. Hackers is on my list. What would the other four be? Marco
Re: [SWCollect] New topic--Collectors UNITE!
Origin Museum schrieb: 1. C.E. and I got into a long discussion on how we store our collections. We agreed that plastic baggies were a 'short-term' solution, and tried to think up another (better) way to preserve our software for the long haul. We agreed on the idea that if we could find
[SWCollect] Think you got everything?
Fat chance, you probably don't have the Zork coffeetable: ;-) Marco -- This message was sent to you because you are currently subscribed to the swcollect mailing list. To
Re: [SWCollect] Heads Up
Hm, he's at it again: Marco C.E. Forman schrieb: Hey gang, Wanted to warn everyone here. If you've noticed the Infocom T-shirts being sold on eBay by islandseven, don't bid on them. They're not originals, and
Re: [SWCollect] Thanks for the help with LotL
Hm, 42? (Which would also mean Tom is the answer to everything) Marco [EMAIL PROTECTED] schrieb: In a message dated 01/29/2003 12:20:51 PM Central Standard Time, [EMAIL PROTECTED] writes: At least give us higher or lower :) Haha, well I don't know about that Stuart, I've kind of
Re: [SWCollect] New topic--Collectors UNITE!
C.E. Forman schrieb: Good shrinkwrap jobs have the pencil-eraser hole in the front and back. The what holes? I thought those were there because the shrinkwrapping machines sucked the air out through those. I may have actually brought this up in a column once (or was going to): If the shrink
Re: [SWCollect] New topic--Collectors UNITE!
You know what? I figured you to be over 40 and Tom to be mid-20 :-) Marco C.E. Forman schrieb: That's correct, 28. Though I vaguely remember Wacky Packs. Got a gift package with some in them at a school or church social event. The big card fad from my era was Garbage Pail Kids, though I
Re: [SWCollect] Heads Up
C.E. Forman schrieb: Yeah, I'm the Ye Olde Geek with no girlfriend he's referring to. *g* This guy is SO MAD at me right now. Wonder what'll happen when I actually post the column? Oh, I'll love to hear about that. BTW, one dude actually bid on it. I told him it is a fake, referencing
Re: [SWCollect] Heads Up
BTW, one dude actually bid on it. I told him it is a fake, referencing your site. He wrote back that he asked the bidder about the authenticity Oops, that should read asked the seller. Marco -- This message was sent to you
Re: [SWCollect] New topic--Collectors UNITE!
Heh, Pedro, you forgot me and my 30 years, which raises the mean to 29.11 and the median to 28 :-) Seriously, we all are rather young. I always thought this hobby would attract more people who were around when Zork was played on mainframes and who now approach 50. Marco Pedro Quaresma schrieb:
Re: [SWCollect] Using MobyGames Info
Hm, can you give me a sample of the cease desist? I'm not that familiar with the US legal system and having that could come in handy. Just recently I found another site that lifted some of the bios I compiled on the Infocom authors in their entirety. Marco (PS. RPGs sure look great there among
Re: [SWCollect] Using MobyGames Info
Thanks Jim. I just wanted to make sure people acknowledge that I know what I'm talking about, so that when I ask them to -please- remove the content or at least mention the source, I'm taken serious. I'm not going to start cross-atlantic court action over it (he, imagine that!). About the RPG
[SWCollect] I don't recall what came with the game
Yeah, right: Then just turn the freaking box over, it says what came with the
Re: [SWCollect] Mind Candy mention
Mind Candy was also mentioned in the previous c't, a highly popular German computer magazine. Jim, if you want it I'll scan the page for you and mail it off to you with a translation to English. For everyone in Germany: It was on p. 276 in c't 6/2003. Nice halfpage complete with pictures. Marco
Re: [SWCollect] Been Awhile, Hasn't It?
Yup, this list switches between frantic activity and solemn silence. Around here, well, I got word from Ken Love, producer at Activision, that they are planning a Zork IV. We've had occasional email contact since then and it looks like it might take off, but it has been awhile. I am also
Re: [SWCollect] FS/T Shrinked new Zak McKracken and others
They should be worth something, especially considering that there still is an active Amiga scene. How much? Well, the market dictates. But it's pretty sure you'd get them sold. Marco Jim Leonard schrieb: [EMAIL PROTECTED] wrote: Hi, I have a sealed Zak for sale/trade. It is Amiga
Re: [SWCollect] You guys see this?
C.E. Forman schrieb: Yeah, what really surprised me was a non-boxed game going that high. Even if it was boxed, wouldn't it still be a little too much? I have no idea how rare that collection is, but I consider the Infocom Masterpieces as comparable. Marco
Re: [SWCollect] Been Awhile, Hasn't It?
C.E. Forman schrieb: Really? That surprises me. The last I'd heard, Activision had dropped the Zork line of games and hadn't bothered to renew their ownership on the trademark. Let's see what happens. It has been some weeks now since I last heard about it and at that point Ken wasn't able
Re: [SWCollect] Been Awhile, Hasn't It?
Jim Leonard schrieb: 34C is about normal for Chicago this time of year :-) And it gets to -20C or worse during the winter. Yep, I hate this place ;-) -20, yup, that's stiff. Around here the winters get milder and milder, translate cold and wet. Snow is becoming a rare thing. Marco
Re: [SWCollect] Surprised
Dan Chisarick schrieb: how he has games from (some number of years ago). I said I had a few from '79. Imagine my surprise when he said his favorite game of all time is Wasteland. He was talking about wanting to write to Brian Fargo about making a real sequel to that (none of this Fallout
Re: [SWCollect] Been Awhile, Hasn't It?
Jim Leonard schrieb: Cool. Maybe someday I'll go ahead with my bootleg Sierra Videos DVD project anyway. I have access to both 1989's and 1990's product demos through friends, and the making of 7th Guest, and some other videos related to making or promoting games created from the
Re: [SWCollect] Paypal protection plan
Stefan Lindblom schrieb: This truly sucks, but what other useful options are there out there? And especially for us not living in the US? Paypal have, until now, been a great and fast way of paying for things, and I thought it was very safe. But this brings everything into new perspective.
Re: [SWCollect] Usurper Mines of Qyntarr - piracy?
Stuart Feldhamer schrieb: Look at this ebay auction: Anyone know what this orb of qyntarr is? The guy's feedback is hidden, and there must be a reason. I think this may be the same guy who sold me the Indiana Jones Revenge of
Re: [SWCollect] Paypal protection plan
C.E. Forman schrieb: I notice the point at which PayPal started to suck seems to be right around when the eBay blob absorbed them. Coincidence? Hell, no! They were not really fantastic before that, but it sure is getting even worse now. Marco
Re: [SWCollect] Paypal protection plan
I just found a very neat Register article regarding the PayPal Money Back Guarantee, which is really worth a read: It ends with this paragraph: This is just one very small story of hundreds dotted all over the Internet. And these stories will
Re: [SWCollect] Paypal protection plan
They become trickier on all accounts (literally): I remember back when I signed up, there was something about them charging $1 to your credit card and when you received your credit card bill, you had to enter a code given in the PayPal position to activate your account. Back then that dollar was
[SWCollect] Help with Zork Classics
I recently received this email, but don't have the Zork Classics CD. If any of you know how to help this guy, could you please answer him? Thanks, Marco Betreff: Inquiry Datum: Sat, 16 Aug 2003 11:27:22 +0200 Von: Josh-Pruitt Mayfield [EMAIL PROTECTED] An: [EMAIL PROTECTED] To Whom It May
Re: [SWCollect] Lost Treasures of Infocom inventory
[EMAIL PROTECTED] schrieb: No, it wasn't. The original parchment in ZZ looked like an actual scrap of paper, with uneven edges, etc, plus the printing was in color. The LTOI version was like a Slash reproduction on white paper. I can't say for sure which packages have the parchment and
Re: [SWCollect] Lost Treasures of Infocom inventory
Lee K. Seitz schrieb: This is all very interesting, but do I take it since no one's answered my question, I'm not missing anything and should be thankful to have the ZZ parchment reproduction? Sorry, Lee, it seems we got carried away a little. Yes, as Chris said, your copy should be
Re: [SWCollect] Ebay trader experiences
Hm, I remember years back I found SWM through their website and sent them an email because I wanted to buy some of the stuff they had listed. They actually never replied. When I saw them selling via ebay for prices higher than what their website offered I knew why. Eventually I bought one item
Re: [SWCollect] Ultima 11
The Japanese believe that everything has a soul. So if that is so, I guess that any game in its right mind knows where it'll have a good home ;-) Marco [EMAIL PROTECTED] schrieb: It's the game, drawing us to it. I've long speculated that we don't find the games, the games find US. B-)
Re: [SWCollect] Coin identification
Google turned this up: So the coin is indeed from a Dungeon Master game. Marco Hugh Falk schrieb: Sorry for including attachements, but I was hoping somebody could identify this coin for me. It came loose in a large box of Amiga and PC games. It
Re: [SWCollect] Ultima 11
You can't imagine how much I believed that my first car had a soul. Still up to today I feel like I somehow betrayed it when I finally had to give it away. And I too prefer to fix things instead of giving up on them. Perhaps that is a trait common to software collectors. Oh well, as long as I
Re: [SWCollect] Hello!
Hello Josh, a warm welcome from me, too! Marco [EMAIL PROTECTED] schrieb: I just subscribed (thank you Chris for making me aware of this group). Just wanted to extend a Hello to all the members and introduce the latest addition to my collection.?xml:namespace prefix = o ns =
Re: [SWCollect] Half and Bidville experiences?
Jim Leonard schrieb: Anyone used Half.com and has experiences to share? I have some esoteric (read: crappy) items to list, but I'm not sure they'd fetch more than $4. (Lest you think I would be wasting my time, I have nearly 100 to sell, so $400 or more is not chump change.) If I sell
Re: [SWCollect] Refund with paypal
Hello Stefan, I had something similar happen while ago, although it was only about $20. I don't remember PayPal's exact regulations and terms of service there, but as an international user you are pretty much licked, as you are exempt from any of their so-called guarantees. Marco Stefan
[SWCollect] The Definitive Infocom Collection
Chris is gonna have a fit over this ;-) Look here: Marco -- This message was sent to you because you are currently subscribed to the swcollect mailing
Re: [SWCollect] Elite questions
Heah Lee, Lee K. Seitz schrieb: Also, I've never seen a copy of the original Elite before. It says it's the gold edition. Does that make it unusual or do they all say that? Is it worth trying to sell on eBay? Or is anyone here interested in it? IIRC the C64 Elite I got for Christmas in
[SWCollect] Platypus stamps on ebay
Just noticed this auction on ebay, and I wonder if Brian Moriarty himself has his hands in it. AFAIK Brian still works for Skotos Tech, which apparently has some serious financial trouble. Perhaps he is selling some of
Re: [SWCollect] Vintage games w/fatal flaws
Pedro Quaresma schrieb: OK let's see if my memory doesn't betray me (again!) It was Ultima 4, but veramocor was the word used to get into the final dungeon, not the word to be used in the end of it. In the end, the word infinity had to be used (after the principles and its virtues), but
Re: [SWCollect] [Fwd: Re: 5.25 disks?]
Edward Franks schrieb: Gamasutra had an interesting article -- you may need to register on Gamasutra to read it -- on the developer's attempts to simply slowdown the cracking of Spyro: Year of the Dragon. Their goal was simply
Re: [SWCollect] Vintage games w/fatal flaws
Jim Leonard schrieb: Is this the same game? Indeed it is. Marco -- This message was sent to you because you are currently subscribed to
Re: [SWCollect] [Fwd: Re: 5.25 disks?]
Pedro Quaresma schrieb: Agreed wholeheartedly, but which companies care about that these days? How many games in the last few years have had a decent manual + props other than on a special or collectors edition? I can't recall any. Even very complicated games like Microsoft's FS9, who really
Re: [SWCollect] Happy Holidays!!
[EMAIL PROTECTED] schrieb: Turn on speaks and let it load, run until night turns to day again. When it is done don't forget to click on the poem at the bottom!! Holidays 2003 Have a good one!! Thanks Tom! And a very Merry Christmas to all of you, too! Marco
Re: [SWCollect] Mt. Drash cassette and market value
Edward Franks schrieb: So, to revisit a discussion, how do the rest of you try to estimate the market value of these types of games? What would, say, the first release of Zork -- the PDP-11 version -- be worth? This is really the hard part of being a dealer of collectibles. What
Re: [SWCollect] Mt. Drash cassette and market value
Stephane Racle schrieb: One package I had never seen on eBay until tonight was Zorkquest II. I've seen all the other Infocomics about a hundred times, but never that one. Is it that uncommon? One would think they'd be plenty of copies lying around... It indeed is that uncommon. Much more so
Re: [SWCollect] Mt. Drash cassette and market value
Edward Franks schrieb: That reminds me of the old economic chestnut: While not everything scarce is valuable, everything valuable is scarce. Had never heard that one. Very neat! :-) Marco -- This message was sent
Re: [SWCollect] Mt. Drash cassette and market value
Stephane Racle schrieb: I still remember that $2000 Starcross saucer very well! Although IMO, someone drove up the price on that one... although if I recall, the buyer was more than happy with the result. Oh yes, I remember that one, too. It was sealed, wasn't it? Still, $2000 is way more
Re: [SWCollect] Mt. Drash cassette and market value
Stephane Racle schrieb: Interesting. Perhaps very few copies were published since it was the last of the four Infocomics and the other ones had relatively little success? Yup. The Infocomics weren't exactly successful, so production was stopped after they'd rolled out comparatively small
[SWCollect] Modern classics
I'm not sure if we had this topic before, but what modern games, say, developed after 1994, would you consider collectible? There's only a very few that come to my mind: - The Dragon Edition of Ultima IX. Although it was the worst Ultima IMHO, people seem to look for this edition. -
Re: [SWCollect] Mt. Drash cassette and market value
Edward Franks schrieb: It is the Ultima VI special edition with the 10 years of Ultima cassette. What is the 10 years of Ultima cassette? Marco -- This message was sent to you because you are currently subscribed
Re: [SWCollect] Modern classics
Jim Leonard schrieb: Marco Thorek wrote: I'm not sure if we had this topic before, but what modern games, say, developed after 1994, would you consider collectible? Collectible meaning high monetary/trade value or game worth owning until end of time because it is a *good* game
Re: [SWCollect] Mt. Drash cassette and market value
Edward Franks schrieb: It is a cassette where Richard Garriott talks about the first time years of Ultima. Is it an audio tape, as Jim hints? At first I thought it may be a video and the same that came as mpg with the Ultima collection. Marco
Re: [SWCollect] Modern classics
Jim Leonard schrieb: Pedro Quaresma wrote: Please don't get me started on Planeboring: Torment. That game should never have been a RPG. Ah yes, Pedro, our resident RPG snob. ;-) If Planescape: Torment is a bad RPG by your standards, could you explain why? Is it all the dialog, or
Re: [SWCollect] Another visitor..
Welcome aboard, Per-Olaf! Marco Per-Olof Karlsson schrieb: Hi everybody! J Im a newcomer to this list, and have just spent quite some time browsing the archives. It seems Ive finally found home, at last, hehe.
Re: [SWCollect] DOSBox: Getting DOS games to run easily
Jim Leonard schrieb: I've been tinkering with this for a month now and have had such great success that I thought I'd inform everyone about it: dosbox.sourceforge.net Wow! Thanks a bunch for this link! Marco -- This
Re: [SWCollect] Paranoid seller and tax evasion
Lee K. Seitz schrieb: Tomas Buteler stated: I don't ask sellers to declare lower values, unless they offer first). [snip] But as a seller I always ask which value they want me to state, because I believe it's the polite thing to do when trading older games. Well, I *would* draw a
Re: [SWCollect] DOSBox: Getting DOS games to run easily
Edward Franks schrieb: Here's a slightly different link for those of you who would like to play some Glide-based (3Dfx) games. I haven't purchased the full version, but Redguard worked for me with the demo version. The site also has a link to
Re: [SWCollect] Modern classics
Jim Leonard schrieb: (Ironically, Wheel of Time, a game based on a Robert Jordan novel, is actually a very good game. The ancientspeak is thick and heavy but since it's an action adventure it's not as irritating.) Although commercially it failed, IIRC. It was by Legend, wasn't it? Marco
Re: [SWCollect] Paranoid seller and tax evasion
Lee K. Seitz schrieb: Marco Thorek stated: Lee K. Seitz schrieb: Well, I *would* draw a distinction between *trading* games and *buying* them. I dislike the thought of being taxed for non-cash transactions. I tried to argue the same to a customs officer. His reasoning was that I
Re: [SWCollect] Need advice regarding a Wasteland purchase
Stefan Lindblom schrieb: Hello group, My name is Stefan and I am gamoholic. :) I need some advice, preferrably from someone who knows what a NEW copy of Wasteland should look like. I have included a picture of a game I just bought. He declared it to be NEW, and said he got this one
Re: [SWCollect] Need advice regarding a Wasteland purchase
Per-Olof Karlsson schrieb: I support this view too. Shrinkwrapped items to me are interesting mainly because I know it's all in the box, and if mint also that it's all in perfect condition. Other than that, I'm not too interested. I can be found removing the shrink when I'm curious enough | https://www.mail-archive.com/search?l=swcollect@oldskool.org&q=from:%22Marco+Thorek%22 | CC-MAIN-2019-18 | en | refinedweb |
.
*It's very nice that you get a warning if you attempt to reference an unknown class, and that a quickfix exists to create that class if necessary. However, if the class is actually available in an inadvertantly not-depended-upon library or module, a quickfix should be available to add the missing dependency, as in Java.
*The reference resolver can't find import files if they are in another module, even if that module has a dependency. This isn't too bad of a problem if you put the imported files in a file list, except that it keeps Spring validation from working. An intention to add imported files to appropriate filesets would also be nice
*The "New Spring Config" action should have checkboxes for any of the standard namespaces (tx:, aop:, util:, etc.), so that I don't have to try to remember where they are. Support for automatically adding standard namespaces after file creation would also be handy
*The property type checker doesn't seem to understand generics. If the type of a property contains a type parameter, it complains, even if that type parameter is bound on the class of the bean.
*Value checking needs to understand the "${property.name}" syntax supported by the PreferencesPlaceholderConfigurer. All of our Spring files include constants externalized to property files, and such are flagged as errors if they are of any type other than String. For extra bonus points, navigation/completion/tooltips for properties file entries would be crucial.
*It should be possible for IDEA to automatically create wirings, for properties for which there is only one correctly-typed bean in scope (a very common occurence). This could either be on a per-property basis (picture an intention that says "Bind datasource property to oracleDataSource bean"), or in batch (an intention that just says "Wire up available properties"). I had this in my personal Spring plugin, and it ruled.
*If a property is annotated as @Required, but is missing, there should be an error flagged, with a quickfix if possible. Waiting till a runtime warning comes from RequiredAnnotationBeanPostProcessor is pointless.
*You need to support the p: pseudo-namespace from Spring 2.0. Here's the details:. A tool to automatically convert Spring XML files to use the p: format would win many extra bonus points
*It would be very handy if I could automatically take class declared as InitializingBean and automatically add 'init-method = "afterPropertiesSet"' to all beans of that class, and then remove the InitializingBean interface from the class. Same for DisposableBean. This supports the Spring folks suggestion that initialization/shutdown should be specifically declared in the configuration, and not implicitly declared via interface anymore.
*The gutter icon for navigating to Spring property bindings, should be a little leaf, not a "p" . (The functionality rules, BTW).
*Spring refactorings: Extract Parent Bean, Pull Properties Up, Push Properties Down, Conver Anonymous Bean to Named, Convert Named Bean to Anonymous, Split Configuration File, Move Bean to Configuration File
*The dependencies graph is cool, but the nodes are too large. Instead of showing properties and bindings in the node, simply label the edges corresponding to the property with the binding for the node
Overall, great work, and it just needs a bit of polishing to be as good as the rest of your product.
--Dave Griffith.
Hello Dave,
Very nice to know that you are now looking into this as well!
Dmitry/Sergey/Peter can now expect (even more) good bug reports/feature requests
flooding them :)
Good idea. I'll file another issue to add missing xsi:schemaLocation mapping.
Spring requires these at runtime (except for "beans" and "p" namespaces).
Both of these could also work for custom namespaces, since info can be retrieved
from "META-INF/spring.schemas".
In addition, there has been a "register namespace" quickfix for unresolved
qualified elements since Demetra.
Everything you name works as described, but for PropertiesPlaceholderConfigurer.
I assume you're writing some desktop application that uses Preferences API?
Anyway, please file a request, since I'm not familiar with PreferencesPlaceholderConfigurer
myself.
In addition I'll submit requests for:
-ServletContextPropertyPlaceholderConfigurer
-The new <context:property-placeholder/> element from spring 2.1?
Please describe in great detail :)
IDEADEV-14383
However, it's possible to configure RequiredAnnotationBeanPostProcessor to
recognize custom annotations - I'll submit a new one for that, since it's
a bit more obscure than basic @Required support.
IDEADEV-14263
Some thoughts:
-The "p:foo-ref" syntax is totally ugly (why didn't spring people choose
separate namespace for references?)
-The "p" namespace name is silly, should have been "property". There's no
coming world shortage in characters yet :)
-Convert/Migrate is a nice idea, my style preference would be to limit this
to simple numeric/boolean/String properties.
-Perhaps simple back-and-forth intention would be a good first step?
Good one.
Why? And why a little leaf.
Perhaps <lookup-method> could get similar support?
Btw, "Create Patch" should also be something else, not a "p".
See also IDEADEV-17228 for a request for bean gutter mark improvement..
Some of these are in JIRA already:
IDEADEV-13688
IDEADEV-13690
IDEADEV-13689
However, some are valuable while others may be frivolous.
Which ones are important for you, and why?
I don't think the property is that important. The dependency is important,
and apparent from the connection.
Anyway, imho graph is a nice extra but not important. However, some small
changes could both simplify and improve it. More details later.
I look forward to more discussion on the subject :)
Kind regards,
Taras
The biggest issue and wish for me would be to step out of the closet and support custom namespaces. That is one of the most important new features in Spring 2, and with the current level of support in the plugin it is just useless. E.g. check
I hope the above mentioned support for resolving via spring.schemas will help push it in the right direction. Just a tip: spring.schemas can reference URIs as well as classpath resources, don't make us file another bug :))
+
Everything you name works as described, but for PropertiesPlaceholderConfigurer.
I assume you're writing some desktop application that uses Preferences API?+
Nope, I'm using some third-hand cut-and-paste config files, and I honestly couldn't say why it was using PreferencesPlaceholderConfigurer. Changing it to PropertiesPlaceholderConfigurer indeed results in everything working. I will file a report for the preferences version.
+?+
You create a bean, setting it's class (or factory, etc.). An intention is then available on the bean which says "Autowire properties". Any setters which can be unambiguously bound in the current context have their properties automatically added to the bean configuration. Constructor args are more difficult, since all of the args for a constructor have to be unambiguously bindable, but it's still doable. It's totally sweet to see a half-dozen property bindings added at a stroke, and the requirements for unambiguousness are very common. Basically, you get all of the ease-of-use of runtime auto-wiring with none of the scariness.
Agreement on all of the points wrt the p: namespace
And why a little leaf.
Leaf means Spring, and when I want to do the search what I'm thinking is "find this in Spring". Little circles mean either declarations, or pointers to declarations (if followed by an arrow). "P" means "property declaration", which isn't what I want at all.
On refactorings
+However, some are valuable while others may be frivolous.
Which ones are important for you, and why?+
I actually had all of them in my personal Spring plugin, and found them very valuable. "Extract Parent Bean" and "Pull Properties Up"/"Push Properties Down" probably had the biggest bang-for-the-buck, but that could have been due to my project structure (a bezillion DAO and Serializer classes, all extending from shared base classes). I'll submit JIRA for those.
--Dave Griffith
Hello Andrew,
Do you have some examples?
-tt
Sure:
The is the namespace used in a Spring config file, which is mapped to a classpath resource META-INF/mule.xsd. There's no file deployed at the URI yet, so it's using classpath 100%.
HTH,
Andrew
Hello Andrew,
Andrew, do you have a link to an example instance document?
As far as I can see, it looks no different than the "standard" way to link
up schemas for namespace handlers in spring:
1) declare namespace in instance document (actual xml document)
2) declare xsi:schemaLocation URL for that namespace (also in instance document)
3) mapping inside "spring.schemas" that links schemaLocation URL to actual
classpath path for resource
(see also)
As far as I understand, mule works exactly this way. Correct?
In addition, do you have good knowledge of mule namespace handlers?
Regards,
-tt
Taras,
The schema is right next to this file at
If you mean an XML using the example, then you can use e.g. this simpler one:
Your assumptions for a schema resolution process are correct.
>> In addition, do you have good knowledge of mule namespace handlers?
What exactly do you mean? :) Mule's schema uses some more complex types, with validations, support for number substitution, etc. Not sure it caused a problem in IDEA so far, but that would be a good test, as Mule pushes Spring's namespace handlers to the limit (Ross submitted enhancements before, most of them incorporated in Spring).
Cheers,
Andrew
Hello Andrew,
1) Do the mule schemas use spring tooling annotations (for attributes)?
2) What do you think would be most valuable in terms of support IDEA could
provide?
For example, should IDEA recognize bean definitions coming from mule
handlers?
-tt
Taras,
Not sure I follow, I've pinged our schema jedi, he may provide more input :)
Well, I didn't dare to ask for it :) But it was on my list of TODO-things-one-always-wants-to-pursue-but-never-does-for-many-reasons. I already took a look at the plugin code (good its public), and don't see why it shouldn't be possible. If IDEA could provide templates for Mule constructs, that would be a killer application of all 3 technologies ;) I would even claim I'd be happier to have this, rather than some fancy drag-n-drop IDE which goes no further than being a nice toy.
Of course, the schemas are still live, and not all of them are available yet, but we are approaching a beta release for Mule 2.x next month.
Andrew
Hi,
Andrew P asked me to comment here because I've been involved in the development of Mule's use of schema. Unfortunately I wasn't there at the start (I've been mainly completing and tidying things).
We don't use Spring tooling annotations, as far as I know. In fact, I hadn't heard of them before and am having trouble googling much about them. Can you give me a pointer?
Also, I don't understand what you have in mind when you ask "should IDEA recognize bean definitions coming from mule handlers?". I guess you're saying that you can tie the schema to the Java classes we use, but I don't understand what you want to do with that. In general, I don't think that relationship is so important for the end user, but it would be nice for us (developers) if one could easily jump back and forth between Java and Schema.
I use IntelliJ Idea (although I'm no expert - I only switched recently and for years used emacs...) and the issues I've noticed while using it to edit XML schema like are:
- correct parsing/verification when xsi:type is used (I hope that file is OK - it passes whatever parser we are using, although IDEA flags delegate-security-provider as incorrect).
- some way of simplifying the import of namespaces. I have no idea how this would work, but we have a whole slew of schema - until the correct schema is included in the xml "header" the appropriate options are not available.
- better tooltip documentation. We are including annotation elements in the schema, but they don't appear in the GUI (at least, I haven't noticed anything).
- some way (again, no idea how) or prompting when xsi:type might be used (ie when subtypes extend the current type). The use of xsi:type is not very "user-friendly" and anything that would make it more intuitive would be useful (I'm assuming you understand all this - I am happy to explain why we are using xsi:type and what it does, if necessary).
Don't know if any of that helps. Probably completely irrelevant: if there was one thing I'd like IntelliJ to improve, it's the amount of disk access IDEA does.
Cheers,
Andrew (email acooke at mulesource dotcom)
Hello Andrew,
>> 1) Do the mule schemas use spring tooling annotations (for
>> attributes)?
>>
Let's look at an example: the "tx" namespace from spring declares an attribute
"transaction-manager" on element <tx:annotation-driven/>.
The schema declaration for that attribute includes something similar to:
---
<xsd:annotation>
<xsd:appinfo>
<tool:annotation
<tool:expected-type
</tool:annotation>
</xsd:appinfo>
</xsd:annotation>
---
Purpose should be clear: it's saying "hey, I expect a bean name reference
to a bean of type PlatformTransactionManager".
There's a similar annotation that applies to property values (aka "I expect
a string containing a FQN").
I'm not familiar with mule spring handlers, so I don't know if contains many
attributes where such annotations are present (or could be added).
By the way, on a purely XSD level, XML editor in recent IDEA builds works
properly with the schemas used by spring (afaik).
If mule schemas use more complicated constructs, you might want to test around
a bit. If there are problems (for example missing or wrong element suggestions),
filing them in Jetbrains JIRA sooner rather than later will increase the
chance for a fix.
>> 2) What do you think would be most valuable in terms
>> of support IDEA could
>> provide?
>> For example, should IDEA recognize bean
>> definitions coming from mule
>> andlers?
I think Mule-specific templates should be in Mule-specific plugin :)
If you want to add this but can't, I suggest to file a request for an extension
point.
I meant to ask: "what generic support do you think IDEA could offer for namespace
handlers (that would also benefit mule)?".
For example, is it common for regular beans to refer to beans that are defined
by mule namespace handlers?
-tt
Ah, thanks for the explanation.
That's cute, and we should add them where they will help. However I doubt we will use them that much because the approach we've taken is a bit different - most of our configuration is better thought of as a little language that configures the system. The beans themselves tend to be implicit - typically the Java code generates a bean according to the element and injects it directly into the bean builder for the parent element in the DOM. So it's more of a "DSL approach" than a direct "wiring together of beans".
Having said that, I am going to raise an issue to remind us to revise the schema and add these tips where necessary.
Cheers,
Andrew | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206880865-Spring-support-issues-and-wishes | CC-MAIN-2019-18 | en | refinedweb |
I would like to modify the Pythonid plugin to create a new module type, "Python Module" analogous to "Java Module." I want the user to be able do the following:
File | New Module -> The "Add Module" dialog appears. I want "Python Module" to appear as a choice. How do I get my module type to show up in this list?
The user will select "Python Module" and then choose "next" to get to the module name/content root step. The user then enters a module name and a chooses an existing directory for the content root. The user presses "Next." IntelliJ will look for source files, but instead of Java source files, it should look for Python source files (*.py) How do I tell IntelliJ what types of files to search for?
Thanks,
Brian
I would like to modify the Pythonid plugin to create a new module type, "Python Module" analogous to "Java Module." I want the user to be able do the following:
Take a look at the J2ME plugin sources that come with the plugin-dev package.
Brian Smith wrote:
Ok, looking at the J2ME example is what I have done, and it is a bit
tedious... ;o) Since I have just done that, I thought I give you some
pointers (if you still need them).
You need to create a subclass of a ModuleBuilder (the J2ME is using the
JavaModuleBuilder, but the sources of JavaModuleBuilder is available, so
you make an equiv for Python). You then create a
public class PuthonModuleType extends ModuleType]]>
with a public default constructor.
You need to override the
public ModuleWizardStep[] createWizardSteps( WizardContext wizardContext,
OsgiModuleBuilder moduleBuilder, ModulesProvider modulesProvider )
And each ModuleWizardStep is an implementation that you provide, which will
provide the JComponent via
public JComponent getComponent()
I hope this helps.
Cheers
Niclas
Hello Brian,
BS> I would like to modify the Pythonid plugin to create a new module
BS> type, "Python Module" analogous to "Java Module." I want the user to
BS> be able do the following:
Don't forget to apply for project membership at
and check in your changes once you get something working. :)
--
Dmitry Jemerov
Software Developer
"Develop with Pleasure!"
Thank you for your answers.
Dmitry, I will send in my changes to Pythonid when they are done. But, I am mostly adding this feature to Pythonid because I want a similar feature for a different plugin that I am designing. My knowledge of Python is lacking, so my contributions to Pythonid will probably not end up being very useful.
Okay, I got to the point where I can successfully create new new Module. Besides the excellent points mentioned above, here are two hurdles I ran up against. It was relatively painless.
I wanted to use ProjectWizardStepFactory#createNameAndLocationStep, but I couldn't because it requires a JavaModuleBuilder. As a result, I ended up creating my own work-alike. I also want to use the standard "Paths" tab in my module settings dialog, but I cannot find a way to do this unless I subclass JavaModuleType. Is it possible otherwise? Am I going to run into a lot of difficulties if my ModuleType is not a subclass of JavaModuleType?
With Pythonid, navigation using CtrlClick works but CtrlN and CtrlShiftAlt+N do not. Are these limitations of the Open API or are they just not implemented (yet) for Pythonid?
Thanks,
Brian
Hello Brian,
BS> With Pythonid, navigation using CtrlClick works but CtrlN and
BS> CtrlShiftAlt+N do not. Are these limitations of the Open API or
BS> are they just not implemented (yet) for Pythonid?
The latter. OpenAPI support is available, but the current version of Pythonid
doesn't build any global index for class or symbol navigation.
--
Dmitry Jemerov
Software Developer
"Develop with Pleasure!"
Please, take a look for JSSymbolContributor in JavaScript module sources
Brian Smith wrote:
--
Best regards,
Maxim Mossienko
IntelliJ Labs / JetBrains Inc.
"Develop with pleasure!" | https://intellij-support.jetbrains.com/hc/en-us/community/posts/207042445-Create-a-non-Java-project | CC-MAIN-2019-18 | en | refinedweb |
Once I add XamlCompilation to the namespace in the PCL project:
[assembly: XamlCompilation(XamlCompilationOptions.Compile)]
namespace MyNamespace
{
}
I get the below runtime error message on UWP:
"Method not found: 'Void Xamarin.Forms.Xaml.Internals.SimpleValueTargetProvider..ctor(System.Object[])'."
I'm using Xamarin.Forms 2.3.3.163-pre3. Does anyone know why I get this error and how to fix it?
Getting same error. Any update on this? Is it solved?
According to this the SimpleValueTargetProvider class has 2 versions:
1.5.0.0 - constructor with one parameter System.Object[]
2.0.0.0 - constructor with 2 parameters System.Object[] and System.Object
I'm using 2.0.0.0 in my project but apparently "something" is still calling SimpleValueTargetProvider 1.5.0.0, but I have no idea what it is.
Please have a look:
Thanks Blazey. I actually solved it already. I was missing Entry placeholder in my xaml, which was causing this error.
I had the same issue. Upgrading XForms nuget to 2.3.3 in all the projects in the solution solved it. Not sure whether it is a fix in 2.3.3 (as mentioned in the SO link @Blazey mentioned above) or due to different versions of nuget in different projects that I had earlier.
Can anyone help on this? Popped up all of a sudden for no apparent reason. Have tried all recommended fixes, nothing works.
If it's a 3.5.0 issue, raise it on github and if you can roll back to 3.4.0 SR2 (not 3.5.0 feature reliant) then switch back, no point raising it here, raise it on github.
@NMackay what makes it seem like a 3.5.0 issue? | https://forums.xamarin.com/discussion/comment/364708/ | CC-MAIN-2019-18 | en | refinedweb |
There are many situations where you'll want an event in your code to continue for an amount of time. Often, this is accomplished using
time.sleep() as in the following code:)
Here, the first NeoPixel turns on for 0.5 seconds, and then turns off for 0.5 seconds before repeating indefinitely. The usage of
time.sleep(0.5) in this code basically says: turn the LED on and wait in that state for half a second, then turn it off and wait in that state for half a second. In many situations, this usage of
time works great. However, during
time.sleep(), the code is essentially paused. Therefore, the board cannot accept any other inputs or perform any other functions for that period of time. This type of code is referred to as being blocking. In the case of the code above, this is sufficient as the code is not attempting to do anything else during that time.
Waiting Without Blocking
However, for this project, we want to continue processing inputs, so instead of
sleeping for 0.5 seconds, we'll process other inputs for 0.5 seconds and change the led when that time expires. To accomplish this, we're going to use
time.monotonic(). Where
time.sleep() expects an amount of time be provided,
time.monotonic() tells us what time it is now, so we can see whether our
0.5 seconds has passed yet. So, we no longer supply an amount of time. Instead, we assign
time.monotonic() to two different variables at two different points in the code, and then compare the results.
At any given point in time,
time.monotonic() is equal to the number seconds since your board was last power-cycled. (The soft-reboot that occurs with the auto-reload when you save changes to your CircuitPython code, or enter and exit the REPL, does not start it over.) When it is called, it returns a number with a decimal, which is called a
float. If, for example, you assign
time.monotonic() to a variable, and then call it again to assign into a different variable, each variable is equal to the number of seconds that
time.monotonic() was equal to at the time the variables were assigned. You can then subtract the first variable from the second to obtain the amount of time that passed.
time.monotonic() example
Let's take a look at an example. You can type the following into the REPL to follow along.
First we
import the time module, then we
time.monotonic(). This is to give you an idea of what is going on in the background. The next two lines assign
x = time.monotonic() and
y = time.monotonic() so we have two variables, and points in time, to compare. Then we
print(y - x). This gives us the amount of time, in seconds, that passed between assigning
time.monotonic() to
x and
y. We
print time.monotonic() again to give you a general idea of the difference. Remember, the two numbers resulting from printing the current time are not exactly the same difference from each other as the two variables due to the amount of time it took to assign the variables and print the results.
Non-Blocking Blink
But, how does this allow us to blink our NeoPixel? The result of the comparison is a period of time. So, if we use that period of time to determine when the state of the LED should change, we can successfully blink the LED in the same way we did in the first program. Let's find out what that looks)
This does exactly the same thing as before! It's exactly what we wanted. Now, let's break it down.
Before the loop begins, we create a
blink_speed variable and set it to
0.5. This allows for easier configuration of the blink speed later if you wanted to alter it. Next, we set the initial state of the LED to be
(0, 0, 0), or off. Then, we call
time.monotonic() for the first time by setting
initial_time = time.monotonic(). This applies once when the program begins, before it enters the loop.
Once the code enters the loop, we set
current_time = time.monotonic(). We call it a second time to compare to the first, to see if enough time has passed. Then we say if
current_time minus
initial_time is greater than
blink_speed, do two things: set
initial_time to now be equal to
current_time and cycle the NeoPixel to the next state. Setting
initial_time = current_time means it starts the time period over again. Essentially, every time the difference reaches
0.5 seconds, it cycles the state and starts again, repeating indefinitely.
Why would we do it this way? It seems way more complicated! We do it this way because this allows us to do other things while the NeoPixel is blinking. Instead of pausing the code to leave the LED in a red or off state, the code continues to run. The code for the Spoka lamp allows you to change speed and brightness without halting the rainbow animation, and this is how we accomplish that! | https://learn.adafruit.com/hacking-ikea-lamps-with-circuit-playground-express/passing-time | CC-MAIN-2019-18 | en | refinedweb |
A QR code generation library for Dart and Flutter.
To start, import the dependency in your code:
import 'package:qr/qr.dart';
To build your QR code data you should do so as such:
final qrCode = new QrCode(4, QrErrorCorrectLevel.L); qrCode.addData("Hello, world in QR form!"); qrCode.make();
Now you can use your
_qrCode instance to render a graphical representation of the QR code. A basic implementation would be as such:
for (int x = 0; x < qrCode.moduleCount; x++) { for (int y = 0; y < qrCode.moduleCount; y++) { if (qrCode.isDark(y, x)) { // render a dark square on the canvas } } }
See the
example directory for further details.
The following libraries use qr.dart to generate QR codes for you out of the box:
QR - Flutter - A Flutter Widget to render QR codes
A working demo can be found here:
2.0.0-dev.17.
Add this to your package's pubspec.yaml file:
dependencies: qr: :qr/qr.dart';
We analyzed this package on Apr 18, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed using:
Detected platforms: Flutter, web, other
No platform restriction found in primary library
package:qr/qr.dart.
Maintain an example.
None of the files in the package's
example/ directory matches known example patterns.
Common filename patterns include
main.dart,
example.dart, and
qr.dart. Packages with multiple examples should provide
example/README.md.
For more information see the pub package layout conventions. | https://pub.dartlang.org/packages/qr | CC-MAIN-2019-18 | en | refinedweb |
Hi, Michael Niedermayer wrote: > On Sun, Nov 16, 2008 at 10:19:26AM +0100, Jindrich Makovicka wrote: > >> On Sat, Nov 15, 2008 at 23:11, M?ns Rullg?rd <mans at mansr.com> wrote: >> >>> "Kenan Gillet" <kenan.gillet at gmail.com> writes: >>> >>> >>>> Hi, >>>> >>>> On Fri, Nov 14, 2008 at 6:38 PM, Ronald S. Bultje <rsbultje at gmail.com> wrote: >>>> >>>>> Hi, >>>>> >>>>> On Fri, Nov 14, 2008 at 2:42 PM, elupus <elupus at ecce.se> wrote: >>>>> >>>>>> On Thu, 13 Nov 2008 11:05:38 +0100, Jindrich Makovicka wrote: >>>>>> >>>>>> + len = recv(s->udp_fd, buf, size, MSG_DONTWAIT); >>>>>> >>>>>> Doubt this will work in mingw, don't think msg_dontwait is defined. Should >>>>>> probably also include a #ifndef msg_dontwait #define msg_dontwait 0 #endif >>>>>> in the win32 section in os_support.h >>>>>> >>>>> I also get a missing define on OSX 10.4... Not sure what header >>>>> normally contains this. >>>>> >>>>> >>>> I also have a missing define on OS 10.5. >>>> >>>> the define is in /usr/include/sys/socket.h but under #if: >>>> >>>> #if !defined(_POSIX_C_SOURCE) || defined(_DARWIN_C_SOURCE) >>>> #define MSG_DONTWAIT 0x80 /* this message should be nonblocking */ >>>> >>> MSG_DONTWAIT is not a standard flag, so we should not be using it. >>> The standard method is to set O_NONBLOCK on the socket using fcntl(). >>> >> Ok, this one should be portable. >> > > probably ok (without the tabs) > > clean version applied | http://ffmpeg.org/pipermail/ffmpeg-devel/2008-November/054962.html | CC-MAIN-2019-18 | en | refinedweb |
JNDI lookup in JBoss 7Hiep Le Canh Nov 21, 2011 3:18 AM
I am upgrading my application from JBoss 5 to jboss-as-7.0.2.Final. It seems that I have to change all JNDI name in my application. I do know how to change them, but I have a performance issue now:
Eg. I have about 10 ejb modules in my ear file. Some beans in this ejb module and some beans in the other ejb module. In JBoss 5, I use only:
ic.lookup("myBeanName/local"); // Context ic = getContext();
but in JBoss 7 I must use:
ic.lookup("myEJBModuleName/myBeanName"); // Context ic = getContext();?
Thanks
Hiep
1. Re: JNDI lookup in JBoss 7jaikiran pai Nov 21, 2011 3:22 AM (in response to Hiep Le Canh)
Hiep Le Canh wrote:?
I'm not sure why you are doing that. Why would you need a loop (and loop over what?) when you know which bean to lookup? Can you show us some code which has this loop?
2. Re: JNDI lookup in JBoss 7Hiep Le Canh Nov 21, 2011 3:51 AM (in response to jaikiran pai)
Hi jaijiran,
Below is my src code which was run in JBoss 5:
public <T> T getBean(Class<T> clazz) {
final String jndiName = clazz.getSimpleName() + "Bean";
try {
Context ic = getContext();
Object obj;
try {
obj = ic.lookup("myearname/" + jndiName + "/local");
} catch (NamingException x) {
obj = ic.lookup("myearname/" + jndiName + "/remote");
}
return (T) obj;
} catch (NamingException e) {
.......
The param clazz is the interface which placed in ejbModule1, but the instance of it is in ejbModule2. I can not find the name of ejbModule2, so I need to loop all of the ejbModule deplyed to find the instance of it.
3. Re: JNDI lookup in JBoss 7Riccardo Pasquini Nov 21, 2011 4:40 AM (in response to Hiep Le Canh)
Is the use case a several bean implementation for the same interface deployed contextually in your application and fetch the first available?
If not (as i expect) and if you are in a managed context, you should consider injection for the following reasons:
- as7.0 doesn't support remoting, so the catch block is not safe;
- no need to perform a lookup.
Otherwise, you should reconsider the first available policy becuase it is too... dangerous...
If you need a god-method to fetch any EJB then you can create a flyweight map using class/string pairs to be used as index for the getBean method, the map used as index can validate input (contains key) and the jndi name is the value the application expects.
Moreover if the map is loaded at start time from a configuration file, it becomes more maintenable.
i hope this can help.
PS the syntax used in the code is not portable (global AS jndi access for JEE < 6 ), so code fixes aren't something strange for this situation...
4. Re: JNDI lookup in JBoss 7Hiep Le Canh Nov 21, 2011 4:55 AM (in response to Riccardo Pasquini)
Thanks Riccardo, that is what I am thinking, I will create a flyweight map for this case.
5. Re: JNDI lookup in JBoss 7Wolf-Dieter Fink Nov 21, 2011 5:08 AM (in response to Hiep Le Canh)
For me it is not apparent why you use such complex lookup. As you mentioned all ejb-jars are included in your ear and you should know how it is.
Also DI (injection) is free of code for this.
It sounds a bit random if you deploy the same class with different implementations in different ejb components. | https://developer.jboss.org/thread/175119 | CC-MAIN-2019-18 | en | refinedweb |
I am still a newbie to Seam ...
I am using a POJO based seam-gen created application.
I have declared my User entity class like so
@Entity @Table(name = "APPUSER", uniqueConstraints = @UniqueConstraint(columnNames = "USER_NAME")) @Name("user") @Role(name="currentUser", scope=ScopeType.SESSION) public class User implements java.io.Serializable { ... }
and in my authenticate method
@Out(required = false, scope = SESSION) private User user;
The validated user instance is successfully stored in session scope as currentUser after authentication.
What I am trying to do is update audit columns for another entity called Burst managed by a EntityHome.
In Burst entity I have this
@PrePersist @PreUpdate public void beforeSave() { System.out.println("Entering beforeSave method"); User user = (User) Contexts.getSessionContext().get("currentUser"); if ( getBurstId() == null ) { System.out.println("setting updates for new entity instance"); //new instance this.setAddedByUser(user); this.setModifiedByUser(user); this.dateCreated = new java.sql.Date(System.currentTimeMillis()); this.dateModified = new java.sql.Date(System.currentTimeMillis()); } else { System.out.println("setting updates for entity instance "+ getBurstName()); this.setModifiedByUser(user); this.dateModified = new java.sql.Date(System.currentTimeMillis()); } }
This works but my questions are
Is this the right approach to update these values. Should
I have declared a EntityListener class instead to do this?
Secondly, if I were to inject
@(
currentUser) into my Burst entity class, it gives me a null pointer - is that because Seam bijection does not work in a entity class ?
Can anyone share the right way to update these columns after a persist/update operation.
Thanks
Franco | https://developer.jboss.org/thread/182742 | CC-MAIN-2019-18 | en | refinedweb |
view raw
I am attempting to calculate the MTF from a test target. I calculate the spread function easily enough, but the FFT results do not quite make sense to me. To summarize,the values seem to alternate giving me a reflection of what I would expect. To test, I used a simple square wave and numpy:
from numpy import fft
data = []
for x in range (0, 20):
data.append(0)
data[9] = 10
data[10] = 10
data[11] = 10
dataFFT = fft.fft(data)
Your pulse is symmetric and positioned in the center of your FFT window (around N/2). Symmetric real data corresponds to only the cosine or "real" components of an FFT result. Note that the cosine function alternates between being -1 and 1 at the center of the FFT window, depending on the frequency bin index (representing cosine periods per FFT width). So the correlation of these FFT basis functions with a positive going pulse will also alternate as long as the pulse in narrower than half the cosine period.
If you want the largest FFT coefficients to be mostly positive, try centering your narrow rectangular pulse around time 0 (or circularly, time N), where the cosine function is always 1 for any frequency. | https://codedump.io/share/c0V9WxVWTkXD/1/sign-on-results-of-fft | CC-MAIN-2017-22 | en | refinedweb |
#include <CarSelectScene.h>
Inheritance diagram for UI::CarSelectScene:
Each InputDevice avaliable is either assigned a car model or left unused. When at least one InputDevice is not unused, the game can be started with a RT_MENU_SELECT event from a used InputDevice. RT_MENU_BACK from any InputDevice goes back without starting a game.
Definition at line 33 of file CarSelectScene.h.
Definition at line 57 of file CarSelectScene.cpp.
Update the audio.
Implements Engine::Scene.
Definition at line 306 of file CarSelectScene.cpp.
Draw the scene using OpenGL commands.
Must go from any state to ready for a buffer swap.
Implements Engine::Scene.
Definition at line 160 of file CarSelectScene.cpp.
Draw the selection arrows for a given car.
Definition at line 262 of file CarSelectScene.cpp.
Definition at line 99 of file CarSelectScene.cpp.
Definition at line 105 of file CarSelectScene.cpp.
Removes a device from all cars and the unused slot in m_selection.
Definition at line 296 of file CarSelectScene.cpp.
Take input from an input device.
Implements Engine::Scene.
Definition at line 121 of file CarSelectScene.cpp.
Process any status changes that occur due to elapsed time.
Implements Engine::Scene.
Definition at line 156 of file CarSelectScene.cpp.
True if the scene was canceled.
Definition at line 46 of file CarSelectScene.h.
Texture for border for a car.
Definition at line 50 of file CarSelectScene.h.
The textures for the car previews.
Definition at line 54 of file CarSelectScene.h.
Texture for arrow that indicates selected car.
Definition at line 48 of file CarSelectScene.h.
colours for each device
Definition at line 62 of file CarSelectScene.h.
Which InputDevices each car has.
m_selection[0] is the unused option, m_selection[1] is the first car, m_selection[2] is the second car.
Definition at line 60 of file CarSelectScene.h.
Texture for the background of the unused devices section.
Definition at line 52 of file CarSelectScene.h.
Generated at Mon Sep 6 00:41:19 2010 by Doxygen version 1.4.7 for Racer version svn335. | http://racer.sourceforge.net/classUI_1_1CarSelectScene.html | CC-MAIN-2017-22 | en | refinedweb |
last minute fix is in place and the files uploaded, the new version
can be downloaded from both sourceforge's site and SEUL's ftp:
In the ftp, I moved the previous 0.7.1 version to the oldversions
directory to prevent confusion. Drop me an email if you have any furthr
problems compiling it.)
eboard 0.7.1 has just been released.
I just noticed one thing after releasing it, if you get compilation errors
on board.cc (about cerr, ios and endl being undeclared), edit stl.h
and add the line
using namespace std;
just above the using std::list; line. This problem was spotted on gcc
3.0.4. I apologize for the inconvenience, all machines I usually use for
testing compilations are servers and are with a quite high load right now,
so I skipped the tests this time. I'll upload a 0.7.1pl1 version with the
fix in half an hour or so.
The only change between 0.7.1 and 0.7.1pl1 will be that line.
Here goes the changelog since 0.7.0, have fun.
0.7.1
<warning: the translation files have not been updated since 0.7.0,
so new messages and new features will be displayed in english only.
I hope to update the translations soon>
* .
* [gcc3] Fixed iostream inclusions to compile without warnings
on gcc 3.2
* [all] Fixed a bug in the PGN parser. (PGN files that had
no newline char at the end of the last line would
make eboard crash). Thanks to Hicks@... <source,dest>.
.........................................................................
Felipe Paulo Guazzi Bergo - Computer Science MSc Student at Unicamp
bergo@... - Campinas - SP - Brazil - Earth
GPG/PGP mail welcome - GPG/PGP Key: EF8EE808 (keyserver pgp.mit.edu)
* Good thing the FCC makes them put those "Intel Inside" warning labels.
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/eboard/mailman/eboard-users/?viewmonth=200301&viewday=20 | CC-MAIN-2017-22 | en | refinedweb |
flask_admin.contrib.sqla¶
SQLAlchemy model backend implementation.
- class
ModelView(model, session, name=None, category=None, endpoint=None, url=None, static_folder=None, menu_class_name=None, menu_icon_type=None, menu_icon_value=None)[source]¶
SQLAlchemy model view
Usage sample:
admin = Admin() admin.add_view(ModelView(User, db.session))
Class inherits configuration options from
BaseModelViewand they’re not displayed here.
Enable automatic detection of displayed foreign keys in this view and perform automatic joined loading for related models to improve query performance.
Please note that detection is not recursive: if __unicode__ method of related model uses another model to generate string representation, it will still make separate database call.
List of parameters for SQLAlchemy subqueryload. Overrides column_auto_select_related property.
For example:
class PostAdmin(ModelView): column_select_related_list = ('user', 'city')
You can also use properties:
class PostAdmin(ModelView): column_select_related_list = (Post.user, Post.city)
Please refer to the subqueryload on list of possible values.
column_searchable_list¶
Collection of the searchable columns.
Example:
class MyModelView(ModelView): column_searchable_list = ('name', 'email')
You can also pass columns:
class MyModelView(ModelView): column_searchable_list = (User.name, User.email)
The following search rules apply:
- If you enter
ZZZin the UI search field, it will generate
ILIKE '%ZZZ%'statement against searchable columns.
- If you enter multiple words, each word will be searched separately, but only rows that contain all words will be displayed. For example, searching for
abc defwill find all rows that contain
abcand
defin one or more columns.
- If you prefix your search term with
^, it will find all rows that start with
^. So, if you entered
^ZZZthen
ILIKE 'ZZZ%'will be used.
- If you prefix your search term with
=, it will perform an exact match. For example, if you entered
=ZZZ, the statement
ILIKE 'ZZZ'will be used.
column_filters= None¶
Collection of the column filters.
Can contain either field names or instances of
flask_admin.contrib.sqla.filters.BaseSQLAFilterclasses.
Filters will be grouped by name when displayed in the drop-down.
For example:
class MyModelView(BaseModelView): column_filters = ('user', 'email')
or:
from flask_admin.contrib.sqla.filters import BooleanEqualFilter class MyModelView(BaseModelView): column_filters = (BooleanEqualFilter(column=User.name, name='Name'),)
or:
from flask_admin.contrib.sqla.filters import BaseSQLAFilter class FilterLastNameBrown(BaseSQLAFilter): def apply(self, query, value, alias=None): if value == '1': return query.filter(self.column == "Brown") else: return query.filter(self.column != "Brown") def operation(self): return 'is Brown' class MyModelView(BaseModelView): column_filters = [ FilterLastNameBrown( User.last_name, 'Last Name', options=(('1', 'Yes'), ('0', 'No')) ) ]
filter_converter= <flask_admin.contrib.sqla.filters.FilterConverter object>¶
Field to filter converter.
Override this attribute to use non-default converter.
model_form_converter= <class 'flask_admin.contrib.sqla.form.Admin.sqla.form.InlineModelConverter'>¶
Inline model conversion class. If you need some kind of post-processing for inline forms, you can customize behavior by doing something like this:
class MyInlineModelConverter(InlineModelConverter): def post_process(self, form_class, info): form_class.value = wtf.StringField('value') return form_class class MyAdminView(ModelView): inline_model_form_converter = MyInlineModelConverter
fast_mass_delete= False¶
If set to False and user deletes more than one model using built in action, all models will be read from the database and then deleted one by one giving SQLAlchemy a chance to manually cleanup any dependencies (many-to-many relationships, etc).
If set to True, will run a
DELETEstatement which is somewhat faster, but may leave corrupted data if you forget to configure
DELETE CASCADEfor your model.
inline_models= None¶
Inline related-model editing for models with parent-child relations.
Accepts enumerable with one of the following possible the generated field name by:
Using the form_name property as a key to the options dictionary:
class MyModelView(ModelView): inline_models = ((Post, dict(form_label='Hello')))
Using forward relation name and column_labels property:
class Model1(Base): pass class Model2(Base): # ... model1 = relation(Model1, backref='models') class MyModel1View(Base): inline_models = (Model2,) column_labels = {'models': 'Hello'}
form_choices= None¶
Map choices to form fields
Example:
class MyModelView(BaseModelView): form_choices = {'my_form_field': [ ('db_value', 'display_value'), ]}
form_optional_types= (<class 'sqlalchemy.sql.sqltypes.Boolean'>,)¶
List of field types that should be optional if column is not nullable.
Example:
class MyModelView(BaseModelView): form_optional_types = (Boolean, Unicode).
column_display_all_relations¶
Controls if list view should display all relations, not only many-to-one.)[source]¶
Returns a list of tuples with the model field name and formatted field name.
Overridden to handle special columns like InstrumentedAttribute.
get_count_query()[source]¶
Return a the count query for the model type
A
query(self.model).count()approach produces an excessive subquery, so
query(func.count('*'))should be used instead.
See commit
#45a2723for details._pk_value(model)[source]¶
Return the primary key value from a model object. If there are multiple primary keys, they’re encoded into string representation.
get_query()[source]¶
Return a query for the model type.
If you override this method, don’t forget to override get_count_query as well.
This method can be used to set a “persistent filter” on an index_view.
Example:
class MyView(ModelView): def get_query(self): return super(MyView, self).get_query().filter(User.username == current_user.username).
Ignore field that starts with “_”
Example:
class MyModelView(BaseModelView): ignore_hidden = False
inaccessible_callback(name, **kwargs)¶
Handle the response to inaccessible views.
By default, it throw HTTP 403 error. Override this method to customize the behaviour.
init_search()[source]¶
Initialize search. Returns True if search is supported for this view.
For SQLAlchemy, this will initialize internal fields: list of column objects used for filtering, etc.
is_accessible()¶
Override this method to add permission checks.
Flask-Admin does not make any assumptions about the authentication system used in your application, so it is up to you to implement it.
By default, it will allow access for everyone.
is_valid_filter(filter)¶
Verify that the provided filter object is valid.
Override in model backend implementation to verify if the provided filter type is allowed._auto_joins()[source]¶
Return a list of joined tables by going through the displayed columns.
scaffold_list_form(widget=None, validators=None)[source]¶
Create form for the index_view using only the columns from self.column_editable_list.
scaffold_pk()[source]¶
Return the primary key name(s) from a model If model has single primary key, will return a string and tuple otherwise
scaffold_sortable_columns()[source]¶
Return a dictionary of sortable columns. Key is column name, value is sort column/field. | http://flask-admin.readthedocs.io/en/latest/api/mod_contrib_sqla/ | CC-MAIN-2017-22 | en | refinedweb |
Do .
Creating N instance of class
What is the best way to create ‘N’ number of instance of a class. Please check the code below to find the answer.
public class NumberedInstance { private static int instanceNumber; private NumberedInstance() { } public static NumberedInstanceGetInstance() { if(instanceNumber < 5) { instanceNumber++; return new NumberedInstance(); } else { throw new ArgumentOutOfRangeException("Only five instance of the class are allowed"); } } }
As you can see in the above C# code I have created a simple class with private constructor. I can access the public static method named GetInstace(), to get the instance. If the number of instances are more then 5 we will get an exception.
In the above class I have created a class with private constructor. And a static field to keep the number of counts of the instances created.If the count exceeds any specified number an exception is thrown.
Create instance of class for each assembly
If I want to extend this example with an instance for each assembly. I have to write the code as shown below
public class NumberedInstance { private static IDictionary<string, NumberedInstance> assemblyInstance= new Dictionary<string, NumberedInstance>(); private NumberedInstance() { } public static NumberedInstanceGetInstance(string assemblyName) { if(!assemblyInstance.Keys.Contains(assemblyName)) { NumberedInstanceinstance = new NumberedInstance(); assemblyInstance.Add(new KeyValuePair<string, NumberedInstance>(assemblyName, instance)); return instance; } else { return assemblyInstance[assemblyName]; } } }
In the above code I have used a dictionary to keep records of all the instances created for the assemblies. If the dictionary already contains the instance. The same instance is returned otherwise a new instance is returned and stored in the dictionary.
In this article I have discussed the ways to create the n instance of a class in C#. And their usage examples. | http://126kr.com/article/6kruur4o5ma | CC-MAIN-2017-22 | en | refinedweb |
/***********************************************************************/ /** . */ /***********************************************************************/ #ifndef _HPIDSPCD_H_ #define _HPIDSPCD_H_ #include "hpi_internal.h" #ifndef DISABLE_PRAGMA_PACK1 #pragma pack(push, 1) #endif /** Descriptor for dspcode from firmware loader */ 00048 struct dsp_code { /** Firmware descriptor */ 00050 const struct firmware *ps_firmware; struct pci_dev *ps_dev; /** Expected number of words in the whole dsp code,INCL header */ 00053 long int block_length; /** Number of words read so far */ 00055 long int word_count; /** Version read from dsp code file */ 00057 u32 version; /** CRC read from dsp code file */ 00059 u32 crc; }; #ifndef DISABLE_PRAGMA_PACK1 #pragma pack(pop) #endif /** Prepare *psDspCode to refer to the requuested adapter. Searches the file, or selects the appropriate linked array \return 0 for success, or error code if requested code is not available */ short hpi_dsp_code_open( /** Code identifier, usually adapter family */ u32 adapter, /** Pointer to DSP code control structure */ struct dsp_code *ps_dsp_code, /** Pointer to dword to receive OS specific error code */ u32 *pos_error_code); /** Close the DSP code file */ void hpi_dsp_code_close(struct dsp_code *ps_dsp_code); /** Rewind to the beginning of the DSP code file (for verify) */ void hpi_dsp_code_rewind(struct dsp_code *ps_dsp_code); /** Read one word from the dsp code file \return 0 for success, or error code if eof, or block length exceeded */ short hpi_dsp_code_read_word(struct dsp_code *ps_dsp_code, /**< DSP code descriptor */ u32 *pword /**< where to store the read word */ ); /** Get a block of dsp code into an internal buffer, and provide a pointer to that buffer. (If dsp code is already an array in memory, it is referenced, not copied.) \return Error if requested number of words are not available */ short hpi_dsp_code_read_block(size_t words_requested, struct dsp_code *ps_dsp_code, /* Pointer to store (Pointer to code buffer) */ u32 **ppblock); #endif | http://alsa-driver.sourcearchive.com/documentation/1.0.23plus-pdfsg-2ubuntu1/hpidspcd_8h-source.html | CC-MAIN-2017-22 | en | refinedweb |
Answered by:
What is the difference between Public variable and static variable in c#
What is the difference between public and static variable declaration in c#.
public partial class FrmDisplayTest : Form { public static string Webpath = ""; //and public string Webpath = ""; }
Thanks in advance.
Experience the innovation with perfection.
Question
Answers.
All replies
public string Webpath;
will require that the class be instantiated, something like:
FrmDisplayTest l_form = new FrmDisplayTest(); l_form.WebPath = "something";
and applies to a single instance of the class, whereas
public static string Webpath;
does not
FrmDisplayTest.WebPath = "something";and applies to all instances of a class.
It would be greatly appreciated if you would mark any helpful entries as helpful and if the entry answers your question, please mark it with the Answer link.
- Edited by TSoftware-Old Thursday, January 17, 2013 12:01 PM Editor error
- Proposed as answer by William HARY Thursday, January 17, 2013 4:19 PM
Hello,
Changes done to static (or shared) members are scoped to the to class whereas in instance (or nonstatic) members changes are unique to the particular instance of the class.
Regards,
Eyal Shilony
Static variables can be accessed across all objects. While Public variable can be accessed only in that object. For example
//Initializing static variable FrmDisplayTest.Webpath = ""; FrmDisplayTest object1 = new FrmDisplayTest(); //Init object1's public variable object1.Webpath = "Obj1"; //Now here if you try to access the static variable //FrmDisplayTest.Webpath //This will give you FrmDisplayTest object2 = new FrmDisplayTest(); //Init object2's public variable object2.Webpath = "Obj2"; //Now here if you try to access the static variable //FrmDisplayTest.Webpath //This will give you
Hi,
Public is a access specifier that indicates the member will be accessable outside the class code also.
There are other access specifier like private , protected , internal , protected internal. All have different meaning and special usages.
static keyword in C#(Shared in VB.NET) allowes to access that member even without creating object of that class.
In your code first Webpath is static so that member can be accessed just by using the class name like :
FrmDisplayTest.Webpath
On the other hand your second Webpath variable is not static, so that member will come into existance when object of the class is creaed and using that object reference you can access that second Webpath member.
public and static are not in any way related to each other.
For more information please visit these links :
Access Modifiers (C# Reference)
Let me know if you have any more confusion.
One good question is equivalent to ten best answers.
To add to discussion:
Anything marked PUBLIC means that any other class using this class can see and use these variables. It applies to Static and non-Static variables. Look at Public variables and methods as how others can use this class. PUBLIC variables can have PRIVATE (backing store) variables that hold the values the program wants the public to see. This is called data hiding. Data Hiding is the safe way to expose PULIC variables but to only allow them to see what the class wants them to see. Data Hiding is an important concept in OOP and OOD discussions.
STATIC is a keyword in C# which has very unique effect. At runtime, the CLR (Common Language Runtime) does things in a very specific order. One of the first things done, is to initialize STATIC varialbes. This then, allows two very distinct different aspects of a class: The STATIC side of things and the NON-STATIC side of things. The designers of .NET knew these two sides very well, as we can find, for example; all PROPERTIES of a class without instanciating the class. This is a very subtle but important concept that has very profound effect. As you learn more about these two sides, you, like me will always look at classes as having two very profoundly different aspects with two very different purposes in mind. What are those purposes?
In OOP and OOD there's a lot of talk about the Factory pattern. When a method is Static it means you can always call it without an instance of the class! This gives rise to the very nice Factory pattern which looks like this:
var t = Task.Factory.StartNew(()...);
We also see this same Factory like concept being applied to the newer Extension method support. Extension methods are static classes combined with static methods!
As already mentioned we can get all properties of a non-instanciated class:
//note the return type is a WidthProperty not the Width var widthProperty = Button.WidthProperty
Static varialbes can act as a pre-determined constant of the non-static variables to work with once the class is instanciated:
private static int BaseWidth = 10; public Rectangle GetRectangle(){ //the value of BaseWidth is always created before the class is so this value will always be there before using it. return new Rectangle(BaseWidth, 2 * BaseWidth); }
Static variables are often used for Delegates, Actions and Func (Functions).. Why? Because we may want to have an entry into code before we know which instance is being used!
public class MyClass{ public static Action<String> MyAction; public static Event MyAction ItsEvent; //in the CTOR logic of this class hook up the Action public MyClass(){ } public void DoWork(){ //do something to get proper string here //Are there any listeners out there? if(ItsEvent != null){ //There is one or more listeners send the message ItsEvent("Somestring"); } } } //the code above allows us to do this in one or more classes MyClass.ItsEvent += MyEventHandler; //this is very subtlely different than this: MyClassInstance.ItsEvent += MyEventHandler
JP Cowboy Coders Unite!
- Edited by Mr. Javaman II Thursday, January 17, 2013 4:53 PM
Hi Hashim,
I am just wondering a little bit - maybe you meant the correct but I would never say it that words.
A static variable cannot be accessed through objects of the class!
public class Test { public static string TestString = "Test!"; public void TestMethod() { TestString = "otto"; // Compiles Test aTest = new Test(); aTest.TestString = "Test"; // Does not compile! } }
If you try the above you will see that creating an instance of an class and then access a static variable through the object simply gives you the error: Member '<NamespaceName>.<Classname>.<Membername>' cannot be accessed with an instance reference; qualify it with a type name instead
With kind regards,
Konrad
Very good point Konrad and even more ammunition to think in terms of any class as having two very distinct attributions, the Static side of life and the non-static side. It's almost like two totally different worlds...
This would compile however:
public class Test { public static string TestString = "Test!"; public void TestMethod() { TestString = "otto"; // Compiles Test aTest = new Test(); Test.TestString = "SomeTest"; // Does compile! } }
JP Cowboy Coders Unite!
- Edited by Mr. Javaman II Friday, January 18, 2013 8:31 PM
Hi,
you got a lot of nice replies already. I always like to think where the memory of the variable could be found.
- A non static variable is placed inside the instance of a class.
- A static variable is places inside the class / type.
Or to give a real world comparison:
The class is a plan on how to build something. A static variable would be something that is written on the plan.
A non static variable is something that is build with the plan.
So the typical car example: A class could be a plan how to build a specific car. On the plan we make a small sign for each car that we create. (= static variable)
The cars you build have a trunk in which you could put something (= non static variable). So if you have 2 cars of the same plan: If you put something inside the trunk of car1 then the trunk of car2 stays unchanged.
Konrad
Hmm ... I do not fully understand that. I always like the book about smalltalk because in smalltalk you just have objects. In .Net it is more or less like that but there are a vew differences.
So the class is just some kind of source which is compiled to an assembly. When an assembly is loaded, objects are created. So from your class you get a Type object. The static variable is more or less simply a variable inside that type now.
The non static variables are part of the instances so you need an instance.
That is my personal view on it in which I only think of objects. But the types are also just some plain objects for me.
With kind regards,
Konrad
For accessing the static variable, we do not need to instantiate the class. The static variable scope is on class level. But public variable will be accessible via an instance of a class. This is same for method as well. There in some time we declare private method as static. The reason is static method call emit nonvirtual call instruction in msil, that will gain the performance.
- Edited by Suresh Madaswamy Thursday, January 17, 2013 2:36 PM typo
Data Hiding is the safe way to expose PULIC variables but to only allow them to see what the class wants them to see. Data Hiding is an important concept in OOP and OOD discussions.
Data or Information Hiding has a much broader meaning.
The principle advices you to hide anything out of your clients that they shouldn't know.
In OOD - SOLID You can find the Interface Segregation principle that is a subset of the Information Hiding principle which advices you to abstract away or group together functionality into various interfaces so each client will see only the things it expects.
In OOP, this principle is applied through Encapsulation that is one of the three principles that defines an OOP language that suggests it will have the tools to bring this principle into practice in the language itself.
Infromation Hiding is important in OOP languages because of the above point but it doesn't tell you anything about the implementation per se because it has nothing to do with it.
Regards,
Eyal Shilony
- Edited by Eyal SolnikModerator Thursday, January 17, 2013 4:03 PM
- Basically static variables are stored in Stack memory in RAM & other types of variables are stored in initialized and uninitialized and heap memory in RAM.So static variable share same memory location for all the objects of the same class. but in ordinary variable declaration, memory will be occupied by the variable will be different locations(means individual memory space)..
Static variables on the stack? Only local variables go on the stack. And local variables cannot be static in C#. And I don't expect languages which allow that to put them on the stack either. | https://social.msdn.microsoft.com/Forums/vstudio/en-US/5cf98f09-d75e-4891-9850-14d14889a7e4/what-is-the-difference-between-public-variable-and-static-variable-in-c?forum=csharpgeneral | CC-MAIN-2017-22 | en | refinedweb |
Aspect oriented programming is a new buzzword that has been popularized more recently through its integration with the Spring framework. The Spring guys have done a great job in bringing a difficult technology to the masses through its usual style of declarative programming. Spring AOP takes a lot of pain out of you by offering a greatly simplified programming model to have method interceptions baked in your codebase.
But making a technology look simpler has its obvious consequences of being misused. There appears to be lots of cases with programmers where they are using aspects, when good old simple Java design patterns, make a more appropriate cut. In the last couple of months, I found Spring AOP's method interception being used in many instances (including this one in an InfoQ article) when good old decorators could have solved the problem. The basic problem which the developer was trying to solve was to wrap some command with pre- and post- advices. The AOP based solution would make sense only if the same strategy needs to be repeated in multiple places and invocations of the command, which would otherwise have resulted in lots of boilerplates littering the codebase. Otherwise, a command and a bunch of decorators can provide a scalable solution to this ..
// the generic command interface
public interface ICommand {
void execute(final Object object);
}
// and a decorator interface for decorating the command
public abstract class Decorator implements ICommand {
// the command to decorate
private ICommand decorated;
public Decorator(final ICommand decorated) {
this.decorated = decorated;
}
public final void execute(Object object) {
pre();
decorated.execute(object);
}
protected final ICommand getDecorated() {
return decorated;
}
// the pre-hook
protected abstract void pre();
// the post-hook
protected abstract void post();
}
// my custom command class
public class FileCommand implements ICommand {
//.. custom command
}
// my first custom decorator
public class MyDecorator_1 extends Decorator {
public MyDecorator_1(final ICommand command) {
super(command);
}
@Override
protected void post() {
//.. custom post hook
}
@Override
protected void pre() {
//.. custom pre hook
}
}
// another custom decorator
public class MyDecorator_2 extends Decorator {
public MyDecorator_2(final ICommand command) {
super(command);
}
@Override
protected void post() {
//.. custom post hook
}
@Override
protected void pre() {
//.. custom pre hook
}
}
// stack up the decorators
new MyDecorator_2(
new MyDecorator_1(
new FileCommand(...))).execute(..);
Use Aspects to address crosscutting concerns only
I use aspects as a last resort, when all options fail to address the separation of concerns that I am looking for in my code organization. Aspects help avoid the code tangle by identifying joinpoints through pointcuts and helping define advices that will be applied to the joinpoints. But I use them only when all traditional Java tools and techniques fail to localize my solution. Aspects bring in a separate machinery, the heavy lifting of bytecode instrumentation. Spring AOP is, however, pure Java, but based on dynamic proxies, which have their own limitations in method interceptions and performance penalties (however small) of creating proxies on every call. Spring AOP has less magic than pure AOP - but use the technology only if it is the right choice for the problem at hand. At least, recently, I find many instances of this wonderful technology being misused as a result of sheer over-engineering. Often, when we see nails, everything starts to look like a hammer. Keep your solution simple, ensure that it solves the immediate problem at hand and gives you adequate options for extensibility.
5 comments:
What do you mean by "when all options fail to address the separation of concerns that I am looking for in my code organization".
As I mentioned in my blog, there are many ways to address separation of concerns in a piece of software. Traditional design patterns like Decorator, Strategy etc. serve to address these issues in most of the times. Though I have personally experienced people resorting to aspects for cases which could have been solved using patterns. My theory is to go for the minimum weighted technique that solves the problem. And aspects, lie at the other end of the power spectrum. I use aspects *only* to address cross cutting concerns and only when using traditional means will lead to unnecessary code clutter.
I'm 100% with you with this one, debasih.
The Decorator pattern solves a big part of the cases when you need to apply some pre and post code.
Also, if you are working with interfaces, as it is in the case of Services, and you don't want your Service extends any special interface, you can use Java's reflection to create dynamic proxy based decorators. It sounds complex but you can do it in a 50 line "plain old Java class". You don't need to use Spring AOP or AspectJ at all and still can handle 90% of the cases.
Regards.
@Marcos:
Can't agree more. I think the most powerful part of aspects is regular expression based joinpoint matching in pointcuts. And this is where the *crosscutting* nature of aspects revel. This is the feature that the Java guys boast against Ruby metaprogramming. But often I find people misusing the power and use aspects where a simple Decorator would do. | http://debasishg.blogspot.com/2007/07/how-do-you-decide-when-to-use-aspects.html | CC-MAIN-2017-22 | en | refinedweb |
Hi hppa people! I'm hoping you can help me fix a FTBFS that we're getting with Guile on hppa. The build log is here:;ver=1.8.5%2B1-2;arch=hppa;stamp=1217809852 The specific problem is a segmentation fault, at a point in a build that probably won't mean anything to non-Guile folks - but the key point is that we were recently seeing exactly the same segmentation fault (i.e. at the same place) on several other architectures (mips, mipsel, powerpc), and that was caused by the code in configure.in not detecting the stack direction properly. This patch -;a=commit;h=9143131b2766d1e29e05d61b5021395b4c93a6bc - fixed the problem for mips, mipsel and powerpc, but it looks as though we are still getting the stack direction wrong on hppa. (My understanding is that on hppa the stack actually grows upwards, whereas on most platforms it's downwards.) I've appended the relevant bit of configure.in below. Can anyone help with why this might not be working on hppa? Thanks, Neil #-------------------------------------------------------------------- # # Which way does the stack grow? # # Following code comes from Autoconf 2.61's internal _AC_LIBOBJ_ALLOCA # macro (/usr/share/autoconf/autoconf/functions.m4). Gnulib has # very similar code, so in future we could look at using that. # # An important detail is that the code involves find_stack_direction # calling _itself_ - which means that find_stack_direction (or at # least the second find_stack_direction() call) cannot be inlined. # If the code could be inlined, that might cause the test to give # an incorrect answer. #-------------------------------------------------------------------- SCM_I_GSC_STACK_GROWS_UP=0 AC_CACHE_CHECK([stack direction], [SCM_I_GSC_STACK_GROWS_UP], [AC_RUN_IFELSE([AC_LANG_SOURCE( [AC_INCLUDES_DEFAULT int find_stack_direction () { static char *addr = 0; auto char dummy; if (addr == 0) { addr = &dummy; return find_stack_direction (); } else return (&dummy > addr) ? 1 : -1; } int main () { return find_stack_direction () < 0; }])], [SCM_I_GSC_STACK_GROWS_UP=1], [], [AC_MSG_WARN(Guessing that stack grows down -- see scmconfig.h)])]) | https://lists.debian.org/debian-hppa/2008/08/msg00003.html | CC-MAIN-2017-22 | en | refinedweb |
On Mon, 7 Jan 2002 06:40, Erik Hatcher wrote:
> ----- Original Message -----
> From: "Peter Donald" <peter@apache.org>
>
> > Most likely I would -1 it but it would depend upon how it was implemented
> > essentially. I think it is really poor design of tasks that would require
> > this and would almost certainly reject any task that used it ;)
>
> I would implement it using the interface that I proposed, and that Jose
> refined. Simple as that, and probably would only involve a few lines of
> code (at least it should :).
ok will actually have a proepr look at it tonight ;)
> Would you -1 that implementation? I just want to know before I code it and
> get shot down! :)
Theres plenty of things - will have a look and tell you if I don't like.
However the main reason I would -1 is because it is only a workaround for a
limitation of ant and I don't want to be supporting more ugly hack
workarounds into the future ;)
> How does implementing this open the flood gates to bad things?
Increasing the "surface area" of a project always comes at a cost. If the
cost can not be justified by added features etc or the cost is not offset
somehow then it is probably a bad idea to add specific feature.
We already have oodles more "join points" (ie places where we are flexible)
than we actually need if it had designed it properly from the start. Where we
have numerous patterns for things like
addX
createX
addConfiguredX
setX
we could have probably gotten away with just
addX
setX
or even just
setX
if we didn't want to make to much of a distinction between
elements/attributes.
Think of it this way. Give ant 50 points for every minor feature and 100
points for every major feature. Then subtract 5 points for every "access
point" (ie public/protected methods/attributes) and subtract 100 points for
every structure/pattern required and 10 points for every public class.
The higher the result the better ant is from both a user and developers point
of view. However I think ant would actually score rather low as it has a
whole bunch of tasks written in an "interesting" manner. Some have public
attributes (!!!), many include numerous protected/public methods that don't
need to be public/protected, we have many replicated/redundent
patterns/classes that come from different stages of ants evolution etc.
> > XDoclet use-case is the only use-case I have in mind now.
That makes me less inclined to support it if anything ;)
> Keep in mind that I'm of the opinion that Ant probably should be using
> XDoclet in the future to allow a lot of a tasks configuration to be
> specified in meta-data allowing documentation to be generated as well as
> any other artifacts needed (configurator Java classes perhaps?).
Sounds kool Could you give us an example of what something like that would
look like? and have you played with any of it yet?
--
Cheers,
Pete
*------------------------------------------------------*
| "Nearly all men can stand adversity, but if you want |
| to test a man's character, give him power." |
| -Abraham Lincoln |
*------------------------------------------------------*
--
To unsubscribe, e-mail: <mailto:ant-dev-unsubscribe@jakarta.apache.org>
For additional commands, e-mail: <mailto:ant-dev-help@jakarta.apache.org> | http://mail-archives.apache.org/mod_mbox/ant-dev/200201.mbox/%3C200201062025.g06KPQr00461@mail004.syd.optusnet.com.au%3E | CC-MAIN-2017-22 | en | refinedweb |
view raw
I've successfully(?) installed the QJson library following the instructions in the archive. But the compiler gives me this error:
Undefined reference to QJSon::Parser::Parser().
/usr/local/include/json
*.h
#include <QtGui/QApplication>
#include <qjson/parser.h>
int main(int argc, char *argv[])
{
QApplication a(argc, argv);
QJson::Parser parser;
return a.exec();
}
*.cpp
At first you have to find a library file rather than a
*.cpp file. Maybe it
has a name like
"libqjson.a" or
"libqjson.so" and compile this library with your code or pass this keys to
g++ -L(lib path) -lqjson
As it turned out (see comments below), your library path is
/usr/local/lib, so this line becomes:
g++ -L/usr/local/lib -lqjson
Using Qt (qmake), just add this line to your
.pro file in order to pass these two flags to g++:
LIBS += -L/usr/local/lib -lqjson | https://codedump.io/share/XF5SSlFN1zsa/1/undefined-reference-to-qjsonparserparser | CC-MAIN-2017-22 | en | refinedweb |
The topic you requested is included in another documentation set. For convenience, it's displayed below. Choose Switch to see the topic in its original location.
System.Xml.Schema Namespace
.NET Framework 2.0
The System.Xml.Schema namespace contains the XML classes that provide standards-based support for XML schema definition language (XSD) schemas.
Community Additions
Show: | https://msdn.microsoft.com/library/windows/apps/astfyhd4(v=vs.80).aspx | CC-MAIN-2017-22 | en | refinedweb |
06 December 2011 18:14 [Source: ICIS news]
HOUSTON (ICIS)--FMC is evaluating a “significant expansion” of its lithium capacity at ?xml:namespace>
The expansion would come in addition to a 30% expansion due on line in 2012, the company said.
FMC cites its lithium capacity in terms of total lithium carbonate equivalent. On that basis, FMC’s capacity at
FMC also is evaluating an expansion of its lithium hydroxide and lithium metal production at
The company did not disclose the potential cost of the expansions. According to analysts at
FMC, for its part, said it expects “on the order of 10% plus” annual growth in global lithium demand over the coming decade.
“Given this strong growth, the challenges and costs to rapidly bring on new capacity, and the escalation in key raw materials, we expect market prices to rise,” it said.
The main driver of growth in the lithium market is the increasing use and adoption of lithium-based energy storage technology for consumer electronic products, electric vehicle transportation and electricity grid storage, the company said.
FMC said it is the world’s second-largest lithium producer, on a revenue basis, and the only producer fully integrated from lithium mine to lithium metal, and further to organolithiums.
In addition to its operations at
($1 = €0.75) | http://www.icis.com/Articles/2011/12/06/9514498/us-fmc-eyes-doubling-of-lithium-capacity-at-salta-facility.html | CC-MAIN-2015-06 | en | refinedweb |
Tk::FmtEntry - A Formatted Entry widget
my $fe = $mw->FmtEntry(-fcmd => sub {...})->pack; # The this simple example, the insertion curor point does not change, so we just return it un-altered, along with the lower-cased string:
my $fe = $mw->FmtEntry(-fcmd => \&fmt_lc)->pack; . . . sub fmt_lc { my ($old, $i) = @_; return (lc $old, $i); }
The only -validate options supported are 'none' and 'key'. Attempting to set any other -validate option will cause this widget to carp an error.
The -validatecommand (-vcmd) you specify is called out of Tk::FmtEntry's internal validator function, but it works just like it was called from Tk::Entry.
The hardest thing about using this widget is that your custom formatting function needs to return a revised insert cursor position, in addition to the [re]formatted string. The 'insert cursor position' is where new characters are added when you type. If you don't correctly determine and return the revised position, things will go all screwy when the user types; their characters may get scrambled.
For example, the user wants to type 57 into a field that you're formatting as currency (money). The user first types "5", which leaves the cursor at position i=1 (just after the "5" character). Your formatting function changes this string to "$5.00". Because of the dollar sign prefix, the insertion cursor needs to change from position i=1 to position i=2. If this was not done, the cursor would remain at i=1 and typing another digit, say "7", would result in "$75.00" instead of the desired "$57.00".
If the normal Entry widget allowed correction of the value during a -validatecommand callback, then this derived widget wouldn't be necessary. However, doing so can cause strange interactions with the -textvariable, especially if the -textvariable is changed outside the callback function. Hence, we have this FmtEntry widget.
This widget avoids the pitfalls of changing the Entry's value by deferring the change to some time after the Entry finishes its processing. It does this by using the -validatecommand callback on 'key' events, to set an afterIdle() "fixup" callback. This fixup callback is called moments after (unless -delay is used), and the formatting/correction/whatever is done there. If your Tk application is very busy, you may notice a delay before the (re)formatting occurs. I'm not thrilled with this approach, but it works pretty well.
None.
None.
All letters typed in the FmtEntry field are uppercased.
use Tk; use Tk::FmtEntry; my $mw = new MainWindow; $mw->FmtEntry(-fcmd => \&fmt_uc)->pack; MainLoop; sub fmt_uc { my ($old, $i) = @_; return (uc $old, $i); }
In this example, the entry is simply uppercased via the formatting function fmt_uc(). The cursor index $i does not change position, so it's just returned as-is.
This example formats the entry as a 16-digit credit card number, like XXXX-XXXX-XXXX-XXXX . Only digits are accepted.
!/usr/bin/perl -w use strict; use Tk; use Tk::FmtEntry; my $mw = new MainWindow; $mw->Label(-text => 'Enter a Credit Card number')->pack; $mw->FmtEntry(-fcmd => \&fmt_cc)->pack; MainLoop; sub fmt_cc { my ($old, $i) = @_; # To figure the new insert cursor position, # format just the left half and see where it lands. my $lf = substr($old, 0, $i); $lf =~ s/[^\d]//g; # remove all but digits $lf = substr($lf, 0, 16); # max 16 digits while ($lf =~ s/(\d{4})(\d)/$1-$2/) { }; # group to fours my $j = length($lf); # get new position # Now format again the whole thing my $new = $old; $new =~ s/[^\d]//g; # nuke all but digits $new = substr($new, 0, 16); # max 16 digits while ($new =~ s/(\d{4})(\d)/$1-$2/) { }; # group to fours return ($new, $j); }
Note how the revised insert cursor position is determined. Although there's likely more efficient methods, a simple approach is to split the old string at the old cursor postion (call this the 'left' part), then format this left part and see how big it is. The length is the new cursor position. Then repeat, formatting the whole old string and return this as the new.
Example formatting function for currency. This formatter uses the trick of placing a marker character (an asterix) into a second "marked" string, then after formatting it correlates the marked string to the new string, to determine the cursor position. It's not perfect but it works reasonably well (it has a problem with leading zeros - see if you can fix it!)
#!/usr/bin/perl -w # Example Tk::FmtEntry with cash (money) style formatting use strict; use Tk; use Tk::FmtEntry; my $mw = new MainWindow; $mw->Label(-text => 'Enter Money Amount')->pack; $mw->FmtEntry(-fcmd => \&fmt_cash)->pack; MainLoop; sub fmt_cash { my ($old, $i) = @_; # Make the new string my $new = $old; $new =~ s/[^\d\.]//g; # remove all but digits and decimal if ($new eq q{.}) {$old = $new = "0."; ++$i;} # special for dp-only $new =~ s/(\.\d{0,2}).*$/$1/; # max two past dp $new = sprintf '$%4.2f', $new if $new ne q{}; # if blank, leave blank # Add commas $new = reverse $new; $new =~ s/(\d{3})(?=\d)(?!\d*\.)/$1,/g; $new = reverse $new; # Make a marked string my $mrk = substr($old, 0, $i) . q{*} . substr($old, $i, length($old) - $i); $mrk =~ s/[^\d\.\*]//g; # remove all but digits, decimal, and marker # Find new insert point my $j = 0; my $k = 0; foreach my $c (split //, $new) { if ($c eq q{$} || $c eq q{,}) { $j++; next; } last if substr($mrk, $k++, 1) eq q{*}; # found the marker $j++; } return ($new, $j); }
Steve Roscio
<roscio.
FmtEntry | http://search.cpan.org/~roscio/Tk-FmtEntry-0.1/FmtEntry.pm | CC-MAIN-2015-06 | en | refinedweb |
Markup extensions are placeholders in XAML that are used to resolve a property at runtime. A markup extension enables you to extend XAML and set any property that can be set in XAML using attribute syntax. xNull, x:Array, :StaticResource, and DynamicResource are some common markup extensions. The System.Windows.Markup namespace defines many of the markup extension classes. These class names end with the suffix Extension; however, you can omit the Extension suffix. For example, in XAML, you represent the NullExtension markup class as x:Null. A custom markup extension is a class created by extending the MarkupExtension class or the IMarkupExtension interface. It is useful in scenarios where you need to provide functionality or behavior that is beyond the scope of existing built-in markup extensions.Consider that you want to bind a ListBox that will bind to XML data as shown below but for some reason you don't want to use an ObjectDataProvider.<ListBox ItemsSource="<some way to bind the data> Source=Books.xml, Path=/Book/Title}"If you want to declaratively bind as shown above, you will need to use a custom markup extension.The steps to create and use such an extension are:
string[] text = path.Substring(1).Split('/'); string desc = text[0].ToString(); string elementname = text[1].ToString(); List<string> data = new List<string>();
IEnumerable<XElement> elems = xdoc.Descendants(desc); IEnumerable<XElement> elem_list = from elem in elems select elem; foreach (XElement element in elem_list) { String str0 = element.Attribute(elementname).Value.ToString(); data.Add(str0); } return data; } /// <summary> /// Overridden method, returns the source and path to bind to /// </summary> /// <param name="serviceProvider"></param> /// <returns></returns> public override object ProvideValue(IServiceProvider serviceProvider) { if ((Source != null) && (Path != null)) return Parse(Source, Path); else throw new InvalidOperationException("Inputs cannot be blank"); } }}The CustomXMLExtension class is derived from the MarkupExtension class which is defined in the System.Windows.Markup assembly.The MarkupExtension class provides a base class for custom XAML markup extension implementations. This class defines a method named ProvideValue() whose syntax is as follows:Syntax:public abstract Object ProvideValue(IServiceProvider serviceProvider)The ProvideValue() method is overridden (or implemented, if inheriting from the interface) in the derived class) and returns an object. This object will become the value of the target property for the markup extension. In the current example, the ProvideValue() method will return an object used as the source for the XML binding.. | http://www.c-sharpcorner.com/uploadfile/mamta_m/creating-a-custom-markup-extension-in-wpf/ | CC-MAIN-2015-06 | en | refinedweb |
05 September 2012 07:59 [Source: ICIS news]
SINGAPORE (ICIS)--Asian September caprolactam (capro) contract talks are at a stalemate, as buyers and sellers remain divided over price targets, said buyers and sellers on Wednesday.
The majority of price ideas have narrowed to $50-100/tonne (€40-80/tonne) month on month to $2,350-2,400/tonne on a CFR (cost & freight) northeast (NE) Asia basis – with buyers now targeting a hike of $50/tonne and producers a $100/tonne increase at $2,400/tonne CFR NE Asia.
A major Japanese producer said he managed to settle a contract at slightly below $2,400/tonne CFR NE Asia with a major end-user in southern ?xml:namespace>
Other major players in
Producers said they need to recoup margins resulting from the increase in upstream benzene prices.
August contracts were settled at $2,300/tonne CFR NE Asia, $50-70/tonne higher than in July, according to ICIS data.
Contract negotiations are expected to close by the end of this week, said buyers and sellers.
In the downstream nylon (polyamide) chips segment, northeast Asian producers are eyeing to sell their cargoes at $2,700/tonne CFR China, an increase of $80/tonne compared with last week’s offers, on expectations of feedstock capro contracts closing at a potential $100/tonne higher month on month.
Major nylon chips sellers said the weak downstream conditions have made it difficult to quote higher prices to their customers and that their selling ideas at $2,700/tonne CFR China are unfeasible in the current market.
Nylon chips buyers are in no hurry to make purchases, as their requirements are comfortably covered until mid-September, having previously procured cargoes in early August.
Capro is an intermediate product that is primarily used in the production of nylon 6 fibres, plastics and other polymeric materials.
Additional reporting by Angeline Zhang
( | http://www.icis.com/Articles/2012/09/05/9592721/asia-sept-capro-contract-talks-at-stalemate-on-wide-buy-sell.html | CC-MAIN-2015-06 | en | refinedweb |
This tutorial describes how we can create a Hadoop MapReduce Job with Spring Data Apache Hadoop. As an example we will analyze the data of a novel called The Adventures of Sherlock Holmes and find out how many times the last name of Sherlock’s loyal sidekick Dr. Watson is mentioned in the novel.
Note: This blog entry assumes that we have already installed and configured the used Apache Hadoop instance.
We can create a Hadoop MapReduce Job with Spring Data Apache Hadoop by following these steps:
- Get the required dependencies by using Maven.
- Create the mapper component.
- Create the reducer component.
- Configure the application context.
- Load the application context when the application starts.
These steps are explained with more details in the following Sections. We will also learn how we can run the created Hadoop job.
Getting the Required Dependencies with Maven
We can download the required dependencies with Maven by adding the dependency declations of Spring Data Apache Hadoop and Apache Hadoop Core to our POM file. We can declare these dependencies by adding the following lines to our>
Creating the Mapper Component
A mapper is a component that divides the original problem into smaller problems that are easier to solve. We can create a custom mapper component by extending the Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT> class and overriding its map() method. The type parameters of the Mapper class are described in following:
- KEYIN describes the type of the key that is provided as an input to the mapper component.
- VALUEIN describes the type of the value that is provided as an input to the mapper component.
- KEYOUT describes the type of the mapper component’s output key.
- VALUEOUT describes the type of the mapper component’s output value.
Each type parameter must implement the Writable interface. Apache Hadoop provides several implementations to this interface. A list of existing implementations is available at the API documentation of Apache Hadoop.
Our mapper processes the contents of the input file one line at the time and produces key-value pairs where the key is a single word of the processed line and the value is always one. Our implementation of the map() method has following steps:
- Split the given line into words.
- Iterate through each word and remove all Unicode characters that are not either letters or numbers.
- Create an output key-value pair by calling the write() method of the Mapper.Context class and providing the required parameters.
The source code of the WordMapper class looks following: Text word = new Text(); @Override protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException { String line = value.toString(); StringTokenizer lineTokenizer = new StringTokenizer(line); while (lineTokenizer.hasMoreTokens()) { String cleaned = removeNonLettersOrNumbers(lineTokenizer.nextToken()); word.set(cleaned); context.write(word, new IntWritable(1)); } } /** * Replaces all Unicode characters that are not either letters or numbers with * an empty string. * @param original The original string. * @return A string that contains only letters and numbers. */ private String removeNonLettersOrNumbers(String original) { return original.replaceAll("[^\\p{L}\\p{N}]", ""); } }
Creating the Reducer Component
A reducer is a component that removes the unwanted intermediate values and passes forward only the relevant key-value pairs. We can implement our reducer by extending the Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT> class and overriding its reduce() method. The type parameters of the Reducer class are described in following:
- KEYIN describes the type of the key that is provided as an input to the reducer. The value of this type parameter must match with the KEYOUT type parameter of the used mapper.
- VALUEIN describes the type of the value that is provided as an input to the reducer component. The value of this type parameter must match with the VALUEOUT type parameter of the used mapper.
- KEYOUT describes type of the output key of the reducer component.
- VALUEOUT describes the type of the output key of the reducer component.
Our reducer processes each key-value pair produced by our mapper and creates a key-value pair that contains the answer of our question. We can implement the reduce() method by following these steps:
- Verify that the input key contains the wanted word.
- If the key contains the wanted word, count how many times the word was found.
- Create a new output key-value pair by calling the write() method of the Reducer.Context class and providing the required parameters.
The source code of the WordReducer class is given in following:
import org.apache.hadoop.io.IntWritable; import org.apache.hadoop.io.Text; import org.apache.hadoop.mapreduce.Reducer; public class WordReducer extends Reducer<Text, IntWritable, Text, IntWritable> { protected static final String TARGET_WORD = "Watson"; @Override protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException { if (containsTargetWord(key)) { int wordCount = 0; for (IntWritable value: values) { wordCount += value.get(); } context.write(key, new IntWritable(wordCount)); } } private boolean containsTargetWord(Text key) { return key.toString().equals(TARGET_WORD); } }
Configuring the Application Context
Because Spring Data Apache Hadoop 1.0.0.M2 does not support Java configuration, we have to configure the application context of our application by using XML. We can configure the application context of our application by following these steps:
- Create a properties file that contains the values of configuration properties.
- Configure a property placeholder that fetches the values of configuration properties from the created property file.
- Configure Apache Hadoop.
- Configure the executed Hadoop job.
- Configure the job runner that runs the created Hadoop job.
Creating the Properties File
Our properties file contains the values of our configuration parameters. We can create this file by following these steps:
- Specify the value of the fs.default.name property. The value of this property must match with the configuration of our Apache Hadoop instance.
- Specify the value of the mapred.job.tracker property. The value of this property must match with the configuration of our Apache Hadoop instance.
- Specify the value of the input.path property.
- Add the value of the output.path property to the properties file.
The contents of the application.properties file looks following:
fs.default.name=hdfs://localhost:9000 mapred.job.tracker=localhost:9001 input.path=/input/ output.path=/output/
Configuring the Property Placeholder
We can configure the needed property placeholder by adding the following element to the applicationContext.xml file:
<context:property-placeholder
Configuring Apache Hadoop
We can use the configuration namespace element for providing configuration parameters to Apache Hadoop. In order to execute our job by using our Apache Hadoop instance, we have to configure the default file system and the JobTracker. We can configure the default file system and the JobTracker by adding the following element to the applicationContext.xml file:
<hdp:configuration> fs.default.name=${fs.default.name} mapred.job.tracker=${mapred.job.tracker} </hdp:configuration>
Configuring the Hadoop Job
We can configure our Hadoop job by following these steps:
- Configure the input path that contains the input files of the job.
- Configure the output path of the job.
- Configure the name of the main class.
- Configure the name of the mapper class.
- Configure the name of the reducer class.
Note: If the configured output path exists, the execution of the Hadoop job fails. This is a safety mechanism that ensures that the results of a MapReduce job cannot be overwritten accidentally.
We have to add the following job declaration to our application context configuration file:
<hdp:job
Configuring the Job Runner
The job runner is responsible of executing the jobs after the application context has been loaded. We can configure our job runner by following these steps:
- Configure the job runner.
- Configure the executed jobs.
- Configure the job runner to run the configured jobs when it is started.
The declaration of our job runner bean is given in following:
<hdp:job-runner
We can execute the created Hadoop job by loading the application context when our application is started. We can do this by creating a new ClasspathXmlApplicationContext object and providing the name of our application context configuration file as a constructor parameter. The source code of our create a Hadoop MapReduce job with Spring Data Apache Hadoop. Our next step is to execute the created job. The first thing we have to do is to download The Adventures of Sherlock Holmes. We must download the plain text version of this novel manually since the website of Project Gutenberg is blocking download utilities such as wget.
After we have downloaded the input file, we are ready to run our MapReduce job. We can run the created job by starting our Apache Hadoop instance in a pseudo-distributed mode and following these steps:
- Upload our input file to HDFS.
- Run our MapReduce job.
Uploading the Input File to HDFS
Our next step is to upload our input file to HDFS. We can do this by running the following command at command prompt:
hadoop dfs -put pg1661.txt /input/pg1661.txt
We can check that everything went fine Our MapReduce Job
We have two alternative methods for running our MapReduce job:
- We can execute the main() method of the Main class from our IDE.
- We can build a binary distribution of our example project by running MapReduce everything went fine, we should see a similar directory listing:
Found 2 items -rw-r--r-- 3 xxxx supergroup 0 2012-08-05 12:31 /output/_SUCCESS -rw-r--r-- 3 xxxx supergroup 10 2012-08-05 12:31 /output/part-r-00000
Now we will finally find out the answer to our question. We can get the answer by running the following command at command prompt:
hadoop dfs -cat /output/part-r-00000
If everything went fine, we should see following output:
Watson 81
We now know that the last name of doctor Watson was mentioned 81 times in the novel The Adventures of Sherlock Holmes.
What is Next?
My next blog entry about Apache Hadoop describes how we can create a streaming MapReduce job by using Hadoop Streaming and Spring Data Apache Hadoop.
PS. A fully functional example application that was described in this blog entry is available at Github.
74 comments… add one
My background: I know Maven and did some Projects at the university, but I’m new to Spring and Hadoop, so I needed to take a closer look at the spring configuration part.
Here are a few questions and hints which might improve your great article:
– “…Holmes and find out how many the last name of Sherlock’s…” -> “…Holmes and find out how many ”’times”’ the last name of Sherlock’s…”
– maybe create the maven project with the artifact:generate command, and add the dependencies afterwards
mvn archetype:generate \
-DarchetypeArtifactId=maven-archetype-quickstart \
-DgroupId=com.company.division \
-DartifactId=appName \
-Dversion=1.0-SNAPSHOT \
-Dpackage=com.company.division.appName
– Creating the Mapper Component: You could add a note that we will configure the location of the classes in the application.xml afterwards + a proposal to create a package for those classes
– Path for the files applicationContext.xml and application.properties: You could add a note that the path to the applicationContext.xml will be configured in the Main class afterwards and tat the convention is that the file goes into /src/main/ressources/META-INF/applicationContext.xml
Links to basic explanations for the bare minimum of a applicationContext skeleton might be usefull, including the needed xmlns definitions.
– funny typo: the “napper class”
– for the “hadoop dfs –ls /input” command you have used a “–” instead of “-” (I don’t know the english terms) which will give copypasting readers some headache. (same for “hadoop dfs –cat”)
– A Link to Instructions for the assembly target would be usefull.
– Add the instruction to execute the command with Maven:
mvn exec:java -Dexec.mainClass=”com.company.division.appName.Main”
Thank you for the great Tutorial
Hi Konfusius,
Thank you for your comment. I appreciate that you took the time to write me a note about these errors and improvement ideas. I fixed the typos and I will also make changes to the article (and code) later.
Hi Petri,
Thanks for this very helpful tutorial. I built a sample using Spring-Hadoop based on the steps you suggested. For running the job, I am using the binary distribution mechanism and running the job using the startup script. I thing that I noted was that when running the job in this way, I don’t see the Job appearing on the JobTracker user interface. Any ideas why?
Also, the logs show, “Unable to load native-hadoop library for your platform… using builtin-java classes where applicable”. Will you be able to provide any elaboration on this.
Thanks again.
Hi Saurabh,
It is nice to hear that you find tutorial helpful.
I have a few questions concerning your problem with the JobTracker UI:
The log line you mentioned is written to the log when the native Hadoop libraries are not found. Check the Native Hadoop Libraries section of the Hadoop documentation for more information.
It seems that if a JobTracker is not configured, the job will not be visible in the JobTracker user interface.
Dear Sir
It gives me more confidence to work on hadoop pls. send me more information regarding
hadoop.
regards
Unmesh
Hi Unmesh,
It is nice to know that this blog entry was useful to you. Also, thanks for pointing out that you want to read more about Apache Hadoop.
Hey Petri,
Thanks for a great article.
I have executed your project (got it from github). it executed successfully. I used gradle to build the project.
But surprised to see no output directory created. I have input directory and data in HDFS. Can you please help me out. Tried many things like changed Mapper and reducer. Hadoop parameters are correct as per my cluster.
Hoping for your quick response,
Amar
Hi Amar,
I have got a few questions for you:
Unfortunately I don’t have access to a Hadoop cluster but I have got a local installation of Hadoop 1.0.3 which runs in the pseudo-distributed mode. If you could answer to my email and send me your Gradle build script, I can test it in my local environment.
Hi Petri,
Really appreciate quick response. I am using Hadoop 1.0.3 version only. Yes I am using Spring data hadoop 1.0.0. Also I did try it using maven but no luck :(. Surprising thing is it get executed successfully without any error. If I remove hadoop bean and a driver class then it works properly. Am I missing any configuration stuff. My application context is:
fs.default.name=hdfs://Ubuntu05:54310
simple job runner
Configures the reference to the actual Hadoop job.
I have hard coded some properties. Also tried with a property file. But no luck at all.
Thanks,
Amar
Hi Amar,
It seems that the configuration of the job runner has changed between 1.0.0.M2 (The Spring Data Apache Hadoop version used in this blog entry) and 1.0.0.RC1. Have you checked out the instructions provided in the Running a Hadoop Job section of the Spring Data Apache Hadoop reference documentation?
Hi Petri,
Changing version to M2 worked. I hearty appreciate your help.
Thanks,
Amar
Amar,
It is good to hear that you were able to solve your problem. I will update this blog entry and the example application when I have got time to do it.
Hi,
I’m trying out spring-data-hadoop and running it as a webapplication on tomcat. I configured everything and when running the Job from my servlet I get the following exception
SEVERE: PriviledgedActionException as:tomcat7 cause:java.io.IOException: Failed to set permissions of path: /hadoop_ws/mapred/staging/tomcat71391258236/.staging to 0700
here hadoop.tmp.dir=/hadoop_ws
I know tomcat7 user doesn’t have access to the above directory, but I’m not sure how to pass this exception.
I tried following, but now luck:
1. started tomcat as the user that have permission to /hadoop_ws directory. I changed TOMCAT7_USER and TOMCAT7_GROUP
2. hadoop dfs -chmod -R /hadoop_ws
3. changed mapreduce.jobtracker.staging.root.dir to different folder and set 777 permission.
none of the above approach worked. All the examples I find in internet is either configuring the mapreduce jobs in xml as in this post, or the application is a standalone application which runs as logged-in user.
Any help highly appreciated.
Thanks.
run: hadoop fs -chmod -R 777 /
Regards,
JP
Hello Petri,
hello
I followed your nice tutorial but with a change.
I used spring hadoop 1.0.0.RC2.
I have needed a change in the JobRuner definition in the spring config file. I added
otherwise no output directory was created.
I’ll go to your next blog entry.
Rafa
I
Hi Rafa,
Thank you for your comment.
As you found out, the configuration of the job runner bean has changed between the 1.0.0.M2 and 1.0.0.RC2 versions of Spring Data Hadoop.
I have been supposed to update these blog entries but I have been busy with other activities. Thank you for pointing out that I should get it done as soon as possible.
I updated the Spring Data Apache Hadoop version to 1.0.0.RC2.
Hi Petri,
I am doing loading data in bulk to hbase with Hbase MapReduce. Here I can configure HFileOuputFormat. Is there any way to configure same with spring application context?
Hoping for your quick response,
Amar
I have not personally used HBase and that is why I cannot give a definitive answer to your question. However, the following resources might be useful to you:
Hi Petri,
Great tutorial!!!
I followed the steps and works locally perfectly with startup.sh.
When deploy in a master-slave cluster and run $>hadoop jar mapreduce.jar
the job start, the tasks start in both nodes but in map fase I got:)
Any idea?
Thanks
Hi Bill,
thank you for your comment.
The root cause of your problem is that the mapper class is not found from the classpath. Did you use my example application or did you create your own application?
I would start by checking out that the configuration of your job is correct (check that the mapper and reducer classes are found). Also, if you created your own application, you could try to test my example application and see if it is working. I have tested it by running Hadoop in a pseudo distributed mode and it would be interesting to hear if it works correctly in your environment.
Hi again Petri,
Thanks for your reply.
I am using your example. I have a linux host with two virtual machines(Vmware)
one as the master node and once as a slave node, cluster tested successful.
My steps below:
#HOST
1) Configure application.properties
fs.default.name=hdfs://master:54310
input.path=/user/billbravo/gutenberg
output.path=/user/billbravo/gutenberg/output
2) Make assembly(Yours example of course)
$host>mvn assembly:assembly
3) Run example locally(Successful)
$host>unzip target/mapreduce-bin.zip
$host>cd mapreduce
$host>sh startup.sh
4) Copy to master node
$host>scp mapreduce-bin.zip billbravo@master:
4) Run in the cluster
$host>ssh billbravo@master
#MASTER NODE
$master>unzip mapreduce-bin.zip
$master>cd mapreduce
$master>hadoop jar mapreduce.jar
In this step occurs the previously comment error:)
My goal is understand how to make a hadoop aplication(with spring of corse), run local for debug and then deploy in a remote cluster like Amazon EMR.
Thanks again for your attention
:)
2) Copy assembly to master node(The cluster has been tested previously successful)
$scp target/mapreduce-bin.zip billbravo@master:
Hi Bill,
you are welcome (about my attention). These kind of puzzles are always nice because usually I learn a lot of thing things by solving them. :)
I found the problem. I updated this blog entry and made two changes to my example:
I also updated Spring Data Apache Hadoop to version 1.0.0.RELEASE.
This should solve your problem.
Hi Petri,
It works perfectly!
I found a work around copying mapreduce.jar to the Distributed Cache
and using the option hdp:cache in applicationContex.xml:
But your solution is cleaner.
Greetings
Hi Bill,
It is good to hear that this solved your problem. :)
I have the same problems. but I don not know how to fix it . Could you get me more code to descript it. Thank you.
Hi,
The solution to his was problem is described in this comment. You can find the relevant files from Github:
Hi again Petri,
Thanks for your reply.
I saw the comment. and added JobTracker configuration to both application.properties and applicationContext.xml
application.properties:
fs.default.name=hdfs://localhost:9100
mapred.job.tracker=localhost:9101
applicationContext.xml:
fs.default.name=${fs.default.name}
mapred.job.tracker=${mapred.job.tracker}
In addition, I updated Spring Data Apache Hadoop to version 1.0.1.RELEASE
# hadoop dfs -mkdir /input
# hadoop dfs -ls /
Found 3 items
drwxr-xr-x – root supergroup 0 2013-10-12 11:40 /hbase
drwxr-xr-x – root supergroup 0 2013-10-12 12:05 /input
drwxr-xr-x – root supergroup 0 2013-10-12 11:40 /tmp
# hadoop dfs -put sample.txt /input/sample.txt
# hadoop dfs -ls /input
Found 1 items
-rw-r–r– 3 root supergroup 51384 2013-10-12 12:09 /input/sample.txt
Unfortunately, when I run the progrem
java.lang.RuntimeException: java.lang.ClassNotFoundException: net.petrikainulainen.spring.data.apachehadoop.Word):1190)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.ClassNotFoundException: net.petrikainulainen.spring.data.apachehadoop.WordMapper:249)
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:810)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:855)
… 8 more
OMG!
I am sorry. I am running this example in eclipse ide.when I am running this for local files on my disk it is OK.
By the way, do you has any example about hbasetemplate?
Good to hear that you were able to solve your problem (eventually). Unfortunately I haven’t written anything about the hdbasetemplate. I should probably pay more attention to this stuff in the future though.
great tutorial. Many thanks.
Could you please provide some information around configuring org.apache.avro.mapred.AvroJob.
My mapred job uses AvroInputformat.
Any information aorund configuring AvroJob using Spring Data would be very helpful.
Thanks
Thank you for your comment. It is nice to hear that this blog post was useful to you.
Also, thank you for providing me an idea for a new blog post. I have added your idea to my to-do list and I will probably write something about it after I have finished my Spring Data Solr tutorial (four blog posts left).
Hi Petri,
I was following your tutorial step by step.i have hadoop version 0.20.0 installed.but running the program it is throwing class not found exception for the mapper class.
can you please help me.i am new to this technology.
The requirements of Spring Data Apache Hadoop are described in its reference manual. It contains the following paragraph:
Spring.
I am not sure what is the difference between 0.20.2 (minimum supported version) and 0.20.0 (your version) but it might not be possible to use Spring Data Apache Hadoop with Hadoop 0.20.0. Did you try to run the example application of this blog post or did you create your own application?
Hi Petri,
i have hadoop 0.20.2(sorry for wrong information).i am running this example in eclipse ide.when i am running this for local files on my disk it is working fine.running same map reduce for hdfs it is throwing mapper class not found.
check bellow link for log.
This happened to me as well and I had to make some changes to my example application. These changes are described in this comment. Compare these files with the configuration files of your project:
I hope that this solves your problem.
i have the same files in my project and using spring-data-hadoop 1.0.0 release version.still facing the problem :( .Is there any other files i need to change..??
This commit fixed the problem in my example application. As you can see, I made changes only to the files mentioned in my previous comment (and updated the version number of Spring Data Apache Hadoop to the pom.xml).
Is there any chance that I can see your code? I cannot figure out what is wrong and seeing the code would definitely help.
Thanks for your inputs.
i have commented mapred.job.tracker=${mapred.job.tracker} in applicationContext.xml and now its working fine.but i dont know y this is happening.do u know the reason.
The reason for this might be that when the job tracker is not set, Hadoop will use the default job tracker which could be the local one. In other words, your map reduce job might not be executed in the Hadoop cluster. Check out this forum thread for more details about this.
Hello, Petri.
My current configuration includes one linux machine with the NameNode, TaskTracker and JobTracker daemons, another linux with a DataNode and a third Windows 7 machine with Eclipse/Netbeans for development.
At least with this configuration if you want to run the mapreduce job from the development machine it is mandatory to include the jar attribute in the applicationContext.xml file:
Other way the classes will not be available in the execution cluster. Hope it helps.
jv
Hi Javier,
Thank you for you comment.
Unfortunately Wordpress decided to remove the XML which you added to your comment (I should probably figure out if I can disable that feature since it is quite annoying).
Anyway, the configuration of my example application uses the jar-by-class attribute which did the trick when I last run into this problem.
However, your comment made me check if you can explicitly specify the name of the jar file. I erad the schema of Spring for Apache Hadoop configuration and find out that you can do this by using the jar attribute.
This information is indeed useful because if the jar file of the map reduce job cannot be resolved when the jar-by-class attribute is used, you can always use the jar attribute for this purpose.
Again, thanks for pointing this out.
Hello Interesting read, am a undergraduate student and wanting to leverage these techniques for an application as final year project Bsc IT. I am expecting a lot of data etc so I opted for mongodb. I’ve been using spring-data-mongo and been wondering if one could use spring-data-mongo and spring-data-hadoop together. mongodb has a way to integrate with hadoop. so how does everything play nice together? thank you
It is possible to use multiple Spring Data libraries in the same project. I have not personally used spring-data-hadoop and spring-data-mongo in the same project but it should be doable.
On the other hand, it is kind of hard to say if this makes any sense because I have no idea what kind of an application you are going to create. Could you shed some light on this?
If you can describe the use cases where you would like to use spring-data-hadoop and spring-data-mongo, I can probably give you a better answer.
Hi Petri,
First of all thank you for the tutorial. I downloaded the example maven project from github. I am able to run the application. But, It gives ClassNotFoundException for WordMapper class. I am using apache hadoop 1.2.1 version. Spring 3.1.0.
I am connecting to hdfs from remote vm. can you give me any suggesstion to solve this problem.
I have to confess that I haven’t really used Hadoop or Spring Data for Apache Hadoop after I wrote a few blog posts about them.
Did you try the things mentioned in these comments:
Hi sir, i am trying to run your example above mentioned . i am getting the below error ,unable to solve this .. plz help me. i tried in some other forums and blogs unable to solve this….
INFO – sPathXmlApplicationContext – Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@46ae506e: startup date [Tue Oct 29 11:02:44]
WARN – JobClient – No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).10291053_0002
INFO – JobClient – map 0% reduce 0%
INFO – JobClient – Task Id : attempt_201310291053_0002_m_000000_0, Status : FAILED
java.lang.RuntimeException: java.lang.ClassNotFoundException: test.WordMapper
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:857)
Update: I removed the unnecessary lines so that the comment is a bit cleaner. – Petri
Hi,
Did you already try the advice given in this comment?
hi petri .
the same code which i tried above working fine today. but the output of my job was nothing . there are some word matchings in the input file ..
a)input file content:
hadoop shiva rama hql
java hadoop
b) my properties file
fs.default.name=hdfs://localhost:54310
mapred.job.tracker=localhost:54311
input.path=/input3/
output.path=/output7/
c) application context was same as yours..
d) i am using apache hadoop 1.2.1 in pseudo distributed in my system. and trying ur example.. using STS(spring tools suite ) , by running Main.java class a java application im getting below output in console
INFO – sPathXmlApplicationContext – Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@46ae506e: startup date [Wed Oct 30 12:53:2110301123_0008
INFO – JobClient – map 0% reduce 0%
INFO – JobClient – map 100% reduce 0%
INFO – JobClient – map 100% reduce 33%
INFO – JobClient – map 100% reduce 100%
INFO – JobClient – Job complete: job_201310301123_0008
INFO – JobClient – Counters: 28
INFO – JobClient – Job Counters
INFO – JobClient – Launched reduce tasks=1
INFO – JobClient – SLOTS_MILLIS_MAPS=6477
INFO – JobClient – Total time spent by all reduces waiting after reserving slots (ms)=0
INFO – JobClient – Total time spent by all maps waiting after reserving slots (ms)=0
INFO – JobClient – Launched map tasks=1
INFO – JobClient – Data-local map tasks=1
INFO – JobClient – SLOTS_MILLIS_REDUCES=9332
INFO – JobClient – File Output Format Counters
INFO – JobClient – Bytes Written=0
INFO – JobClient – FileSystemCounters
INFO – JobClient – FILE_BYTES_READ=76
INFO – JobClient – HDFS_BYTES_READ=134
INFO – JobClient – FILE_BYTES_WRITTEN=109536
INFO – JobClient – File Input Format Counters
INFO – JobClient – Bytes Read=34
INFO – JobClient – Map-Reduce Framework
INFO – JobClient – Map output materialized bytes=76
INFO – JobClient – Map input records=2
INFO – JobClient – Reduce shuffle bytes=76
INFO – JobClient – Spilled Records=12
INFO – JobClient – Map output bytes=58
INFO – JobClient – CPU time spent (ms)=1010
INFO – JobClient – Total committed heap usage (bytes)=163581952
INFO – JobClient – Combine input records=0
INFO – JobClient – SPLIT_RAW_BYTES=100
INFO – JobClient – Reduce input records=6
INFO – JobClient – Reduce input groups=5
INFO – JobClient – Combine output records=0
INFO – JobClient – Physical memory (bytes) snapshot=237084672
INFO – JobClient – Reduce output records=0
INFO – JobClient – Virtual memory (bytes) snapshot=1072250880
INFO – JobClient – Map output records=6
INFO – JobRunner – Completed job [wordCountJob]
In my hdfs ,im getting the output7 directory with _success,logs and part-00000 files , the part-00000 is empty file ,it should display hadoop 2 as per my input file .
plz help me out .,
2)plz suggest me me how to use that “jar attribute” in my applicationcontext.xml .
hi petri .. got the answer for my problem . thanks …
i am making a jar of my project in the local drive, giving that path to classpath of my projects run configuration… though im not using any “jar ” attribute in the applicationcontext.xml file.
Hi,
It is good to hear that you could solve your problem!
Hi Petrik,
I am facing the similar issue class not found when running from eclipse and submitting to jobs to pseudo mode hadoop machine.
Tried the options below you have provided not works for the sample project you have prvided.
Add jar-by-class attribute to the job configuration
Use the jar attribute if jar-by-class does not work
But adding attribute libs and providing the path of jar but that approach is hard of development to bind libs attribute in every job bean.
Why jar-by-class attribute is not working?
How the big workflows are developed using Spring Hadoop?
Kindly share your thoughts!
I have to admit that I haven’t been using Spring Data for Apache Hadoop after I wrote this tutorial but I remember that the version of Spring Data for Apache Hadoop which I used in this tutorial didn’t support all Hadoop versions. Which Hadoop version are you using?
Hi,
I use spring-data-hadoop version 1.0.2.RELEASE
and use hadoop-core 1.2.1 …
The problem is empty output file named part-r-00000.
The program finished successfully but when I use command:
hadoop dfs -cat /output/part-r-00000
nothing to list…
Please help
I have to confess that I have no idea what your problem is. I will update the example application of this blog post to use the latest stable version of Spring for Apache Hadoop and see if I run into this issue as well.
Hi Petri,
i found one easiest place of you to integrate spring and hadoop. I followed your instructions step by step and every thing went without errors.
But, i got INFO: Starting job [wordCountJob] and not ended which means execution was not completed. Why this is so, could you help me?
Can you see the job in the job tracker user interface?
Hi,
I have problem about executing project within eclipse IDE. I got noClassFoundError…
Do I need to add eclipse hadoop plugin or anything else?
Do I need any extra configuration about eclipse?
Note: I can execute project after making jar and execute with “hadoop jar …jar”
Hi,
I have solved the problem by adding below code in hdp:job tag:
libs=”${LIB_DIR}/mapreduce.jar”
But still I don’t know why couldnt execute without above code.
Can you help?
Thanks…
Did you import the project into Eclipse by using m2e (how to import Maven project into Eclipse)? I haven’t used Eclipse in a seven years so I am not sure how the Maven integration is working at the moment.
However, it seems that for some reason Eclipse didn’t add the dependencies of the project into classpath. I assume that if the project is imported as a Maven project, the Maven integraation of Eclipse should take care of this.
Yes. I imported project into eclipse by using m2e. But the same problem occured in IntelliJ IDE.
You are right that libraries can not be added to classpath.
By the way I want to debug project on eclipse. Any help?
Thanks so much
i am successfully connected multi node and also uploaded files in hdfs(it’s only master files).
how to direct to upload the files in slave to master hdfs on ubuntu?
Hi Petri,
Iam trying to run word cout program using spring-hadoop.
Iam using hadoop-2.0.0-cdh4.2.0 MR1 in stand alone mode.
While executing ,its resulting in following exception.
Server IPC version 7 cannot communicate with client version 4.
Iam unable to connect to hdfs.Please help me out
Hi,
The example application of this blog post uses Spring for Apache Hadoop 1.0.0. This version doesn’t support Apache Hadoop 2.0.0 (See this comment for more details about this).
You might want to update Spring for Apache Hadoop to version 1.0.2. It should support Apache Hadoop 2.0.0 (I haven’t tried it myself).
Hi I followed above steps.but I got this type of error.
WARNING: Failed to scan JAR [file:/B:/Software/STS-SpringToolSuite/springsource/vfabric-tc-server-developer-2.9.5.SR1/Spring-VM/wtpwebapps/Hadoopfitness/WEB-INF/lib/core-3.1.1.jar] from WEB-INF/lib
java.util.zip.ZipException: invalid CEN header (bad signature)
The jar file called
core-3.1.1.jaris probably corrupted in some way. This StackOverflow answer provides some clues which might help you to solve this issue.
The location of the jar file suggests that you have downloaded it manually. Is there some reason why you didn’t get it by using Maven?
Hi,
I’m struggling with getting this thing working, already put it on stackoverflow – can u check and help me figure out what’s wrong :
Hi,
Have you tried following the advise given in this comment?
Hi Petri,
Yes, I have already done what has been mentioned in the comment – I think the issue is with the Mapper and Reducer classes not becoming available to the Hadoop framework but I’m not able to figure out !
Can you have a look at the stackoverflow thread I have mentioned(even a bounty hasn’t helped me out :( )
Hi Petri,
Just to clarify – I tried adding jar-by-class=”com.hadoop.basics.Main” and creating independent mapper and reducer classes(not updated in the stackoverflow thread where I have put the code with inner classes) but still the same error.
Regards,
KA
Did you add the job tracker configuration to your application context configuration file?
When I was investigating the problem of the person who originally asked the same question, I noticed that adding the
jar-by-classattribute didn’t solve the problem. I had to configure the job tracker as well.
You can configure the job tracker by adding the following snippet to your application context configuration file (you can replace the property placeholders with actual values as well):
Hi Petri,
Yeah u r right about the ‘jar-by-class’ not solving the problem but I have already configured the job tracker in the applicationContext.xml as follows:
Yet, I’m getting the ClassNotFoundException for mapper and reducer classes
Can u spare some time and have a look at the issue detailed at :
It seems that there are a few differences between your code and my code: | http://www.petrikainulainen.net/programming/apache-hadoop/creating-hadoop-mapreduce-job-with-spring-data-apache-hadoop/ | CC-MAIN-2015-06 | en | refinedweb |
04 July 2008 04:42 [Source: ICIS news]
By Anu Agarwal
SINGAPORE (ICIS news)--A near three-month delay in the restart of Sumitomo Chemical’s 80,000 tonne/year No 2 methyl methacrylate (MMA) production line has resulted in tightened supply in the Asian market, buyers and sellers said on Friday.
The line in ?xml:namespace>
The restart date was currently 11 July but it was not certain that the plant would start due to the spate of technical problems, added the source.
Sumitomo is also preparing to shut its No 1 53,000 tonnes/year MMA line for a two-week routine maintenance starting 24 July, he added.
The No 3 and newest production line of 90,000 tonnes/year also needed a small shutdown, which had been planned for July but would now have to be delayed due to problems in the No 1 line, he said.
Sumitomo’s problems have caused tightened supply in Asia and upcoming planned shutdowns in Europe have also meant that less deep-sea cargoes would be available in
Therefore, prices have been rising with sellers now targeting prices above $2,100/tonne CFR (cost and freight) SE (southeast) Asia for July business, some 7-9% higher than June, said another trader based in
MMA producers have also been reeling from surging feedstock costs of acetone, MTBE and methanol and said price hikes were necessary to restore margins.
Major MMA producers in Asia include Sumitomo Chemical, Mitsubishi Rayon | http://www.icis.com/Articles/2008/07/04/9137750/sumitomo-mma-restart-delay-tightens-supply.html | CC-MAIN-2015-06 | en | refinedweb |
Introduction
A mediation module is created with IBM Integration Designer V7.5 (hereafter called Integration Designer) and deployed on WebSphere Enterprise Service Bus (ESB) or Business Process Manager (BPM) V7.5 Advanced Edition. WebSphere ESB supports the transformation and routing capabilities required from enterprise services. ESB solutions often perform message and protocol transformations to adhere to the interface definitions defined by the service providers and clients.
Part 1 of this article defined the available mediation primitives (hereafter referred to as primitives and named with uppercase, such as “Trace primitive”) and runtime features, and which are best suited to logging or tracing. In some cases, there is a grey area because you can use a primitive for either purpose.
This article describes different ways to handle errors in mediation flows. With the exception of the new Error flow capability in V7.5, this article applies equally to version 7 of WebSphere Integration Developer, WebSphere ESB, and WebSphere Process Server. Mediation flow components inside modules and mediation modules can also be deployed in BPM Advanced Edition.
When a mediation flow is implementing logic or calling existing providers, there are choices about how to handle failures. These failures can be “modeled”, for example, declared faults on the WSDL interfaces, or “unmodeled”, for example, system failures. This article explores various ways to handle errors in the mediation flow, enabling the right design and configuration.
When a technique is considered a leading or common practice, it is shown in a sidebar. These recommendations do not fit every situation and most of the article describes what is possible so that you can choose appropriately for your needs.
Requestors, mediations, and providers
Error handling by mediation components can typically involve transforming the provider (commonly referred to as “backend” or “service provider”) error messages into well-defined messages defined in the context of the business domain. The mediation exposes a service to the requestor as depicted in Figure 1. These components can also handle provider system errors (for example, network unavailable) and provide more simplified error information to the requestor (or “client”). For terminology and other ESB discussion, see this article, The Enterprise Service Bus, re-examined.
Figure 1. Mediations typically sit between a client and an existing service provider
Each mediation module can contain a number of mediation flow components. Each mediation flow component can implement a number of operations defined on the WSDL interfaces. Each implementation of an operation is called a mediation flow. During execution of a mediation flow (a “mediation flow instance” or “flow” for short), different types of failures can occur.
Synchronous web services scenario
The focus of this article is synchronous scenarios, such as web services integrations (see Common patterns of usage for error handling). This is where both the Export and Import in Figure 2 use a web service binding.
Figure 2. Mediation flow component connected to the outside world with Imports and Exports
For this article, we will use the term provider when talking about what is being invoked through an Import (Figure 2). We will use the term mediation to describe the application that is created by implementing a mediation flow inside a mediation flow component. The Web Service Export is exposing a physical web service endpoint for the requestors.
The examples of messages and behavior are specific for the web service binding, for example the use of SOAP as the payload format. That said, most of the patterns for designing mediation flows described here apply to the choice of bindings (for example, SCA) or synchronicity. This includes error handling sub-flows, retry of invocations, primitives, and modeling faults.
The Resources section contains a number of references to material discussing asynchronous scenarios, which include messaging integrations. Although the design of error handling inside mediation flows is usually the same, the difference is often what happens when a flow fails. For an asynchronous invocation of a mediation, a Failed Event is generated (depending on configuration), instead of a failure response being sent to the requestor. Transactionality is beyond the scope of this article, apart from the section on forcing a transaction rollback.
The runtime has additional features that are not covered here, such as Store and Forward and Failed Event Manager. You can use these to halt (and then later resume) processing when a provider is unavailable and to modify and replay messages. These only apply to mediations that are invoked asynchronously, such as by consuming a message from a JMS queue.
Reasons for error handling
This section describes different reasons to perform error handling.
Implementing error processing policy
Errors experienced by mediations such as network unavailability of providers or other runtime exceptions can be transformed into simple error messages. If native components do not log these errors, then you can design mediation layers to log the root cause while returning a suitable response message to the requestor, informing about the temporary service unavailability.
Handling provider implementation specific error codes
Service providers can return errors using different approaches. This can involve anything from SOAP faults to proprietary structures inside message headers or bodies. Appropriate transformation rules can be applied to return errors in a consistent manner that is independent of the target service provider.
Handling sensitive information
When errors occur, messages often contain sensitive information, such as protocols used, server IP addresses, and so on. Appropriate rules need to be applied to filter any sensitive information before tracing or creating a response message.
Error handling features in mediation flows
This section describes modeling and handling failures inside mediations.
Overview of faults
Review the Information Center topic, Error handling in the mediation flow component, for a description of the capabilities. First, we will briefly recap on the concepts.
When working with providers or defining mediations, we use WSDL interfaces. These can declare inputs, outputs, and faults. The faults declared on WSDL interfaces correspond to known error cases that are defined as part of the service interface.
Modeled and unmodeled faults
There are two types of faults, modeled and unmodeled (see Figure 3). Modeled faults correspond to errors defined on the WSDL interface. Unmodeled faults are errors that are not declared on the WSDL interface.
Figure 3. WSDL interface with two declared faults
Conversely, for most interface definitions it is not necessary, advised, or possible to model the set of system issues related to its implementation (for example, network unavailable). The requestor or the system will deal with these issues. However, when working at the ESB technical integration layer, there are cases where modeling a provider system issue as a declared fault on the interface will be the intention. An example is an aggregator that shields the caller from provider failures.
When implementing mediations, it is important to handle faults caused when invoking providers. This is done by wiring nodes and primitives, which will be described in the following sections (a complete list is provided in the Summary of faults section). It might be necessary to create faults as part of the service interface being exposed. In contrast, this is done by wiring to certain nodes and primitives.
Working with modeled faults
We will look at designing modeled faults and then how to use them.
Designing modeled faults
There might be a number of different “non-happy path” situations where a fault message needs to be returned to the requestor. How these are grouped might be pre-determined by a contract for the service being implemented. If not and where possible, group the failures into a small number of categories that make sense for the requestors to handle. The reason is that as the number of fault message types increase, so does the amount of logic required in the service and in the requestors.
There might be information common to all fault messages and this can be put in a common business object definition (XSD), as shown in Figure 4.
The fault types defined in the interface from Figure 3 both extend a common type since they share common failure related data.
Figure 4. Defining Business Object types that are used in fault messages
The runtime uses the type information to distinguish between faults when working with web services.
Also note that the interface in Figure 3
consolidates all input and output parameters into two business object
types. The
UpdateCustomerRequest type contains
two fields for customer
id and
detail. Like the fault types, these input and
output types can also inherit from common request or response type
definitions.
Creating and sending modeled faults to the requestor
To send a modeled fault response, the flow must reach an Input Fault node, either on the request or response. For the interface in Figure 3, there are two terminals available with message types corresponding to the declared faults. The yellow pop-up in Figure 5 is displayed by clicking the small “i” symbol (circled in blue).
Figure 5. Terminals on Input Fault node to create modeled faults
The message body (
/body of the SMO) must have
the appropriate type for that fault message. This can be done with an XSL
primitive and is demonstrated in Figure 6 with the
invalidCustomerId fault. In the map, the three
output fields are populated and two of these are inherited from the
FaultCommon XSD type. The
id field takes the value directly from a
customerId input field from the request
message. The
reason field’s value is built
using some text and the same
customerId field
using a Custom XPath expression of
concat('Invalid customer Id : ',$customerId).
The
code field receives a constant value.
Figure 6. Wiring and map to create a modeled fault
Testing this service with the Generic Service Client (available on the Web Services context menu when selecting the Web Service Port for the Export), or using the TCP/IP Monitor view can reveal the SOAP message for the modeled fault.
The HTTP status code is 500 for any modeled fault. Returning a modeled fault does not force the transaction (if any) to a rollback.
Listing 1. Example SOAP Fault message for invalidCustomerId fault (with most of the type declarations omitted)
<soapenv:Envelope> <soapenv:Body> <soapenv:Fault> <faultcode>m:Server</faultcode> <faultstring>Invalid customer Id : 12345</faultstring> <detail> <io7:operation1Fault1_invalidCustomerId xmlns: <reason>Invalid customer Id : 12345</reason> <code>INVALID_ID</code> <id>12345</id> </io7:operation1Fault1_invalidCustomerId> ...
Note that the fault type in the message body must be populated with a
non-nill. There is something called an
xsi:nill
value for the correct fault message to be generated by the Web Service
binding. This means assigning at least a single item of data in a complex
type (like above in Listing 1), or populating a value into a simple type
(for example, if the fault type is
xsd:string).
Handling modeled faults from multiple providers
The faults defined by service providers might vary in style or naming and also can be different from those that need to be returned from the service. Figure 7 shows a flow that might receive a fault from two different providers, depending on which one is chosen to call.
Figure 7. Mapping faults from different provider interfaces
Also, the fault messages must be translated to the format required by the
service being exposed. The map shown in Figure 8 maps two equivalent
fields and assigns a constant value to the
code
field.
Figure 8. Mapping fields from provider fault to service response fault message
The map above has its root set as
/body because
for modeled faults, the information is contained in the SMO body. The SMO
trace for the Callout Fault flow resulting from receiving the SOAP message
in Listing 2 is listed below to demonstrate how the body is populated from
the SOAP fault detail. The faultcode in
/headers/SMOHeaders/SOAPFaultInfo is also
populated and this matches the body message type.
Listing 2. SMO trace (simplified) for invalidCustomerId fault received
<smo> <context/> <headers> ... <SMOHeader> <MessageType>Exception</MessageType> </SMOHeader> <SOAPFaultInfo> <faultcode xmlns: ns1:updateCustomer_invalidCustomerIdMsg</faultcode> </SOAPFaultInfo> </headers> <body xmlns: </smo>
Observe that in the above mediation flow in Listing 2, two of the Callout Fault terminals are unwired. This behaves the same as wiring to a Stop primitive.
However, a sticky note documenting the flow probably has the same effect. Also, the fail terminals of the three primitives are unwired. There is further discussion below on the Stop primitive and using Error flows to handle unwired fail terminals.
Errors inside mediation flows
Mediation primitives that process messages have a fail terminal (the
rectangular shaped terminal at the bottom right of the primitive), which
propagates exception information along with the input message when there
is a failure in that primitive. The exception information is stored in the
failInfo element in the message context. You
must wire the fail terminal of a mediation primitive to another primitive
to access
failInfo, as shown in Figure 9. The
fail terminal of a primitive provides failure information about the
current primitive, most notably the
failureString field providing a message.
Figure 9. The failInfo structure
The
origin field gives the primitive name, and
invocationPath gives the list of previous
primitives flowed through leading up to the failure.
Simple error handling – Stop primitive
You can implement simple error handling by using some of the provided tracing and logging primitives. For example, if a primitive fails during the input process due to incorrect input data or some other processing failure of the primitive, simply logging the input data and stopping the flow might be sufficient.
Figure 10. Using a Stop primitive when a mapping failure occurs
In the flow shown in Figure 10, when a mapping failure occurs, the input message is logged in the database and then a Stop primitive is used. For a web services interaction, this has the effect of sending back a SOAP response with an empty body (HTTP status 200), assuming no other path continues to send a response. Using a Stop primitive does not cause a transaction rollback.
Note that wiring an output terminal (such as from the Message Logger) to a Stop primitive has the same effect as not wiring it at all, but it is good practice to be explicit with the wiring.
Consolidating an error logic with a sub flow
The “any message type” terminal type means that this sub flow is used for any WSDL message (see Figure 11).
The sub-flow appears just like a regular primitive when instantiated in a flow.
Figure 11. Sub flow definition that can work with any message type
You can promote the properties of each primitive in the sub-flow. In the
above example, you can promote the
enabled
property of the Trace and Message Logger primitives to allow the
administrator to choose file or database logging for each sub flow
instance.
By default, the level of granularity is very fine, but the flow designer
can choose to reduce this by grouping related properties. This is used to
reduce the complexity of administration. Promoted properties across all
sub flow instances (such as the
enabled
property of the Trace primitive) can be given the same group and alias by
the designer of the flow. This means that an administrator can disable all
instances of the embedded Trace primitives by configuring a single
promoted property value on the module in the Integrated Solutions
Console.
Error flows (new in V7.5)
You need to wire fail terminals unless it is desired to have the default technical error message related to the failed primitive returned to the caller. This is equivalent to using a Fail primitive (discussed later). You can use the Error flow (new in V7.5) to automatically “wire” each unwired fail terminal.
The terminal is set to “any type” because the message can have any WSDL operation type. A generic logging flow (or sub-flow) is used to handle all failures.
If the desired behavior depends upon which part of the flow failed, then use a Type filter primitive to choose an appropriate action as shown in Figure 12. This might be practical if there are a small number of message types that need special handling.
Figure 12. Handling multiple failures in an Error flow
Alternatively, the data inside the SMO is inspected to determine the action. For example, consider if the results from the invocations are stored in one of the contexts by the request flow and behavior needs are chosen based on this data. A Message Filter primitive in conjunction with XPath conditions is used to select an appropriate flow to handle that failure. See the next section and Table 2 in the Summary of faults section for further information about the differences between Stop and Fail primitives.
Working with unmodeled faults
This section will describe how to create and process unmodeled faults.
When invoking providers, if there is a failure that is not modeled (declared on an interface) - such as an unhandled runtime error - then this typically results in receiving an unmodeled fault. However, this depends on how the provider is implemented.
An unmodeled fault represents an error condition and left unhandled, it propagates to the caller (for a Web Service Export) and causes a transaction rollback.
The unmodeled fault is considered a runtime error and produces a stack trace in the logs. If used for normal or expected operation, then this can have a performance impact.
Creating using the Fail primitive
A common way for unmodeled faults to get returned by mediations is by not wiring fail terminals on primitives.
You can use the Fail primitive to return an unmodeled fault and provide a user-supplied error message (Figure 13). See Table 2 in the Summary of faults section for a comparison with the Stop primitive.
Figure 13. Fail primitive used to return an unmodeled fault with a customer error message
A custom error message is defined and the placeholder {4} is populated with
data from the SMO at the XPATH location
/body/updateCustomer/id. The default Root path
is
/context/failInfo, which contains the
failure information from the previous primitive. Other placeholders are
available and documented in the Information Center topic, Fail mediation primitive.
Listing 3. Example SOAP fault message representing unmodeled fault
<soapenv:Envelope> <soapenv:Body> <soapenv:Fault> <faultcode>m:Server</faultcode> <faultstring> javax.xml.ws.WebServiceException: com.ibm.websphere.sca.ServiceRuntimeException: CWSXM0201E: Exception returned by mediation flow for component Unmodeled in module Diagrams: CWSXM3300E: Fail primitive 'Map_Fail', component 'Unmodeled', module 'Diagrams', interface '{}ProviderInterface', operation 'updateCustomer', raised a FailFlowException. The user-supplied error message is 'Map failed for customer Id 12345'...
The above SOAP in Listing 3 illustrates the user-supplied error message being populated. The HTTP status code is 500.
It is not possible to precisely control the format (such as
faultstring) of the SOAP message using the Fail
primitive or using modeled faults. If this is required, then use a JAX-WS
handler to modify the SOAP message at the boundary. Alternatively, use a
service gateway (discussed later) to manually construct the SOAP
envelope.
Handling unmodeled faults and sending modeled faults for the requestor
The flow in Figure 14 shows an unmodeled fault being handled by the wiring from the Service Invoke primitive’s fail terminal. This originated because of the following reasons:
- Failure to send the request or receive the response (for example, HTTP status 404 – URL not found).
- Response that is not SOAP (for example, plain text and HTTP status 500).
- SOAP response that does not match the expected response format.
- Valid SOAP containing a fault that is not modeled (for example, unmodeled, HTTP 500).
Figure 14. Converting an unmodeled fault into a modeled fault
The “Provider error” map creates a message suitable for one of the two Input Fault terminals, each representing a modeled fault. The particular modeled fault represents a provider failure.
The failure information for unmodeled faults and fail terminals is
contained in the XPATH location
/context/failInfo in the SMO. In the example
shown in Figure 15, the failureString is copied into a field in the output
map and sent back as part of a modeled fault using the Input Fault
terminal.
Figure 15. Mapping from the failInfo structure
The SOAP fault message received is logged in
/context/failInfo/failureString as a string. An
alternative to returning the complete SOAP fault message to the caller
might be to log it in a database and return a message relevant for the
requestor.
The behavior of the Callout Response node is the same as the Service Invoke primitive.
By handling the unmodeled fault, the failure by the provider does not cause a transaction rollback. The provider failure message is mediated by design. If there is a requirement to preserve the SOAP format, then it is necessary to use a service gateway (discussed later).
Summary of faults and stop and fail primitives
This section gives an overview of the behavior of faults, and the stop and fail primitives.
Table 1. Differences between modeled and unmodeled faults
Table 2. Differences between Stop and Fail primitives
Advanced topics
This section describes topics that are helpful in more challenging integration scenarios.
Retrying a web service invocation
WebSphere ESB has built-in capability to retry failed service invocations. The Service Invoke primitive and Callout nodes have options to retry on modeled faults or unmodeled faults up to a specified number of times (see Figure 16).
Figure 16. Configuring retry on Service Invoke primitive
The properties here are all promotable. That means that they can be exposed for an administrator to modify dynamically while the application is running.
In normal operation, the total time spanned by the request and response of the mediation flow should not exceed the transaction timeout defined on the server (default 120s). This includes invocations, retry delays, and other processing in each mediation flow instance.
Using alternative endpoints
If you like alternative endpoints (URL in the case of web services) to be used upon failure, then the SMO must be populated with them before the invocation (Figure 17).
Figure 17. Setting alternative endpoints in the SMO before invocation
The runtime cycles through the endpoints, starting with the default
/headers/SMOHeader/Target/address, and then the
“alternate endpoints” until success or the retry count is reached.
A standard approach is to use Endpoint Lookup primitive in conjunction with WebSphere Service Registry and Repository to retrieve endpoints.
Retry an invocation after a specific modeled fault
Sometimes the built-in capability is not enough for your requirements. An example is when you want to retry an invocation after receiving a particular modeled fault (for example, one that describes a temporary or unknown provider failure), but not other modeled faults.
You can use the Fan Out primitive to loop until a condition is reached, or up to a specified number of times. In this example, it is used to retry invocations.
In this example, let us say that we want to retry any unmodeled faults and
a specific modeled fault
UnknownSystemException
declared in the WSDL interface for the provider that represents a possibly
transient provider error. We will have up to three invocations, that is,
two retries.
In Figure 18, a retry occurs for the two highlighted paths. The top path is
for the
UnknownSystemException modeled fault,
and the lower of the top two is for unmodeled faults received.
Figure 18. Using the Fan Out primitive in iterate mode to implement a retry
The retry paths are wired to the “in” input terminal of the Fan In primitive. This is the terminal that causes the Fan Out to fire again and to loop round. If three iterations have been reached and the flow reaches the “in” terminal of the Fan In primitive, then the “out” terminal is fired. In this scenario, this means that there have been three unsuccessful invocations.
The other paths (OK response and the remaining modeled “Business” faults) are wired to the “stop” terminal of the Fan In primitive. This terminal causes the Fan Out iteration to cease (before or at the total number of iterations), and the “incomplete” output terminal is fired. In this scenario, that corresponds to a successful invocation or retry attempt.
There is a bit of “bootstrapping” needed for the above example to work. This is because the Fan Out primitive’s iteration capability is designed to only loop over an input array (for an aggregation scenario). To use it for the purpose of looping (such as up to three times), it is necessary to construct an array of length three for input to the Fan Out primitive. The steps to make the example work are:
- Declare a BO with a field
loopthat is a list or array of string (or any type).
- Set this BO as the type of the transient context.
- In Properties, set the Fan Out to fire the output terminal for each element in the XPath expression
/context/transient/loop. Set the Fan In to fire the output terminal when the associated Fan Out primitive has iterated through all messages.
- Use a Message Element Setter to populate the
loopfield with as many array elements as the maximum number of invocations needed (Figure 19). This needs to be before the Fan Out primitive.
Figure 19. Populating an input array that drives Fan Out iteration
The behavior of the Fan Out loop can be made more dynamic with promoted
properties (for example, total retries). The
loop array just needs to be populated
(Java™ code also works) with the absolute maximum number of
invocations for any configuration.
Another option is to wire the retries directly so that there are multiple Service Invoke primitives on the flow (number of retries + 1). This gets messy after one retry.
Error handling gateways
In some cases, the particular format of messages from providers is not relevant to the ESB. It is possible that all failure messages (HTTP code 500) should be handled similarly and the Service Gateway pattern (open the Patterns Explorer view) allows this without importing and wiring to all of the WSDLs and schemas for the various service providers (see Figure 20).
Figure 20. Dynamic Service Gateway pattern
The gateway sits between requestors and providers and typically makes routing decisions by using a well-known property of the incoming message, such as a SOAP Action HTTP header. It can act as a gateway to different service applications without dependencies on the interfaces of the services.
Propagating all messages without alteration
Since the Service Gateway pattern deals with the SOAP message as pure text, it is well-suited to passing faults messages from the provider straight back to the requestor without modification. The wiring is simple and does not require mapping. Figure 21 shows the response flow of such a gateway whose purpose is to log failed calls to a number of providers. The key benefit here is that the providers can have different interfaces and the gateway does not need to import the WSDLs.
Figure 21. Logging all fault messages using a Dynamic Service Gateway
A Dynamic Service Gateway does not make use of WSDL interface definitions
for services it invokes. This means there is no concept of a modeled fault
and all failures described by valid SOAP messages (HTTP 500 code) appear
on the
gatewayMessage terminal of the
Callout Fault node.
If there is a system related failure (for example, HTTP 404) or no valid SOAP message is returned, then the Fail terminal of the Callout Response node fires. In Figure 21 above, the Fail terminal eventually connects to a Fail primitive that is used to send an unmodeled fault to the requestor.
Catching all failure messages
Another use of a Service Gateway pattern is to conditionally act as a pass through. The pattern described here acts as a pass through for successful service provider response calls. However, it will mediate invocation failures and SOAP fault messages by transformation. This is useful to prevent provider failure information, such as network details or exception traces percolating back to clients. It might be the case in that there are requirements to handle all failures in a consistent message format.
Options for gateway responses for the failed provider invocations include:
- SOAP response with empty body (HTTP 200)
- Normal SOAP response with a defined message type (HTTP 200)
- SOAP fault response
The first option above is easily achieved using a Stop primitive.
The second and third cases are more difficult to achieve in a gateway scenario because the ServiceGateway interface that is available in the flow does not have strongly defined types for the payload. This is usually the benefit of the Service Gateway. The SMO will contain the raw data of the SOAP envelope in a string/XML representation.
There are two mechanisms to handle each of the response types:
- Manual construction
- Automatic parsing of the Service Gateway message into a concrete message
The following steps show how to manually construct responses for the more adventurous developers.
Manual construction: Creating a happy SOAP response from a Callout Fault
To create a SOAP response that matches a defined WSDL interface, first a message body matching the operation type must be populated and then it must be serialized to a string form suitable for the Input Response node.
The example in Figure 22 creates a normal response from a SOAP fault. The
output type of the XSL map matches a response operation
(
operation1ResponseMsg) on an interface
MyGateway.
Figure 22. Creating and populating a modeled SMO body type
The Data Handler then serializes the SMO body to a XML string that forms the basis of the gateway’s response, as shown in Figure 23.
Figure 23. SMO for the ServiceGateway with a TextBody message field
The steps to create this flow are:
- Create an XSL primitive and set the Output terminal type to the intended response message type from a WSDL interface available in the project or its dependencies.
- Create a map with
root=/body(or
root=/if you want to access SOAP Fault codes) to populate the target.
Optionally, copy the SOAP Fault information from
/headers,such as
/headers/SOAPFaultInfo/faultstringto the target.
- Create a Data Handler primitive:
- Set the Data Handler Configuration as
UTF8XMLDataHandler.
- Refine the body
/body/messageto the actual field type of
{}TextBody.This is the type that contains the string field named
value.
- Choose the Action to “Convert from a Business Object to native data format”
- The Source and Target XPaths are
/bodyand
/body/message/value, respectively.
Automatic parsing of the Service Gateway message: Creating a happy SOAP response from a Callout Fault
Within a Service Gateway, you may want to automatically inflate the inbound message from its generic TextBody structure into the concrete business object (without a DataHandler primitive). This is possible when a TextBody structure with XML data is encoded within the gateway message (such as a message from a web service binding) and the Integration Developer selects the Automatically convert the ServiceGateway message checkbox on any of the input nodes within the mediation flow editor (Figure 24).
Figure 24. Automatically de-serializing and serializing the gateway message
A requirement for this processing is the availability of the schema information so that the message is de-serialized into a concrete business object.
This mechanism is especially useful when handling fault messages as the structure of a SOAP Fault can be complex to manually handle.
For instance, a SOAP 1.2 fault message includes the following information shown in Listing 4.
Listing 4. Example SOAP fault message representing unmodeled fault
<?xml version="1.0" encoding="UTF-8" standalone="no"?> <soapenv:Envelope xmlns: <soapenv:Body> <soapenv:Fault xmlns: <soapenv:Code> <soapenv:Value>soapenv:Receiver</soapenv:Value> <soapenv:Subcode> <soapenv:Value xmlns:m:CustomerIdFault</soapenv:Value> </soapenv:Subcode> </soapenv:Code> <soapenv:Reason> <soapenv:Text xml: Invalid customer Id : 12345</soapenv:Text> </soapenv:Reason> <soapenv:Detail> <io7:operation1Fault1_invalidCustomerId xmlns: <reason>Invalid customer Id : 12345</reason> <code>INVALID_ID</code> <id>12345</id> </io7:operation1Fault1_invalidCustomerId> </soapenv:Detail> </soapenv:Fault> </soapenv:Body> </soapenv:Envelope>
When this arrives into the Gateway, you want the details section to be automatically de-serialized into the body of the Service Message Object, and the remaining information to be populated into the SOAPFaultInfo section. This is exactly what occurs based on the above inbound message. The following Service Message Object in Listing 5 is generated.
Listing 5. Example SOAP fault message representing unmodeled fault
<p:smo xmlns: <context/> <headers> <SMOHeader> <MessageUUID>3E3BB9A4-0131-4000-E000-1F1809B4A751</MessageUUID> <Version> <Version>7</Version> <Release>0</Release> <Modification>0</Modification> </Version> <MessageType>Exception</MessageType> <Operation>requestResponse</Operation> <Action></Action> <SourceNode>ServiceGatewayImport1</SourceNode> <SourceBindingType>WebService</SourceBindingType> <Interface>wsdl:</Interface> </SMOHeader> <SOAPFaultInfo> <faultcode xmlns:ns0:Receiver </faultcode> <faultstring>Invalid customer Id : 12345</faultstring> <extendedFaultInfo> <Code> <ns0:Value xmlns:ns0:Receiver </ns0:Value> <ns0:Subcode xmlns: <ns1:Value xmlns: ns0:CustomerIdFault</ns1:Value> </ns0:Subcode> </Code> <Reason> <ns2:Text xmlns: Invalid customer Id : 12345</ns2:Text> </Reason> </extendedFaultInfo> </SOAPFaultInfo> <HTTPHeader> <control> <ns3:URL xmlns:</ns3:URL> </control> </HTTPHeader> </headers> <body xmlns: </p:smo>
This allows the Integration Developer to use standard transformation primitives to change the content of the message and generate a valid response message as shown in Figure 25.
Figure 25. Creating a normal response message
You notice the final SetMessageType primitive is required to reset (from a tooling point of view) the message to a Gateway message structure, instead of the concrete type generated. The runtime only completes this logic within the InputResponse node, instead of at the SetMessageType primitive.
Manual construction: Creating a SOAP fault response from a failure
To return a fault response, it is necessary to serialize a BO of type
Fault from the
schema (assuming you are using SOAP 1.1) into the value field of the
TextBody. You can use the example shown in
Figure 26 to return a modeled fault when the provider invocation fails
with an HTTP 404 code.
The flow shown in Figure 26 first populates a SMO body using a map of the
desired fault type
(
operation1_serviceProviderFaultMsg). It then
copies this body into the
detail element of a
SOAP Fault structure defined in the transient context. Finally, it
serializes the SOAP Fault as text using a Data Handler.
Figure 26. Creating a SOAP envelope with a Fault
The steps to create this flow are:
- Import the into your project or a referenced library. This (
soap-1.1.xsd) was found in the
.metadata\.plugins\com.ibm.ccl.soa.test.common.coredirectory in my workspace. Not finding it, nor having access to my workspace, it can be also be imported via HTTP using the Import WSDL feature of Integration Designer.
- Create a new BO type with a field named “fault” of type Fault from the imported schema. Declare this as the transient context variable.
- Create an XSL primitive and set the Output terminal type to the intended fault message type from a WSDL interface. Populate the map as required.
- Create a Message Element Setter and use to it to copy
/bodyto
/context/transient/fault/detail.
- Create a Data Handler primitive to serialize the fault structure into the gateway message body.
- Click the Browse button to create a new Data Handler Configuration. Select XML and specify Document root name as Fault and Document root namespace as.
- Use this new configuration for the Data Handler because this is necessary to serialize the SOAP Fault correctly.
- Refine the body
/body/messageto the actual field type of
{}TextBody.
- Choose the Action to Convert from a Business Object to native data format.
- Source and Target XPaths are
/context/transient/faultand
/body/message/value, respectively.
The value field in the TextBody is populated with a string that is a
serialized SOAP Fault and starts with
<en:Fault and contains the modeled fault
data in an inner
<detail element.
The above example has constructed a modeled fault response, but you can use this technique to create an arbitrary SOAP response.
Automatic parsing of the Service Gateway message: Creating a SOAP fault response from a failure
Similar to the section on Creating a happy SOAP response from a Callout Fault, it is possible to change the content of the fault section in a similar manner and allow WebSphere ESB to handle the de-serialization and serialization automatically. If you want to generate a SOAP Fault, then simply wire to the corresponding InputFault as shown in Figure 27.
Figure 27. Creating a fault response
If you change the content of the SOAPFaultInfo structure within the Service Message Object, these changes are automatically re-serialized into the SOAP Fault message. This approach greatly simplifies the setup and configuration for handling faults.
Responding gracefully but forcing a transaction rollback
Earlier we mentioned that an unhandled primitive failure or using a Fail primitive causes an unmodeled fault, and this causes a transaction rollback. Using a Stop primitive or returning a modeled fault does not cause a transaction rollback.
Sometimes, a service might want to return a “happy” HTTP 200 response (or a modeled fault) to the requestor, but also rollback the transaction (local or global, whichever applies). You can force a transaction rollback by using the following Java code:
javax.transaction.TransactionManager tm = com.ibm.ws.Transaction.TransactionManagerFactory .getTransactionManager(); tm.setRollbackOnly();
You can place this in a custom mediation primitive anywhere in a mediation flow before the response node.
Service Invoke primitive versus Callout nodes
The Service Invoke primitive has similar functionality to the Callout Fault nodes on response flows. It has a fault terminal for each fault declared on the partner interface.
For the sake of error handling, it does not usually matter whether a Service Invoke primitive or Callout node is used. Typically, the choice is made for other reasons. Service Invoke is used in more complex integrations such as aggregations, and callouts are used for more typical requestor-provider scenarios. One relevant benefit of the primitive is that a fail terminal is available for invocations of services with one-way (“fire and forget” or “request only”) interfaces (Figure 28). This allows handling of failures invoking a one-way service.
Figure 28. Using Service Invoke primitive to handle a one-way invocation failure
A slight drawback of the primitive is that is necessary to re-implement the primitive on the canvas if the faults are modified during development.
One other difference is that, by default, when a Callout Response node’s Fail terminal fires, it does not contain the message from the request flow. However, there is a checkbox property on the Callout Response node to preserve the request message if it is desired at some performance cost. The Service Invoke primitive always maintains the message from the In terminal when firing the Fail terminal.
Conclusion
During development when a failure occurs, the user can debug and investigate the issue and then retry. When a solution moves towards test or production, it is important for the right information to get logged and to handle failure in the manner expected by the specification. This can help the support team make a quick resolution for technical issues or identify what data caused the problem.
For a complete error handling strategy, the whole system needs to be considered. As an example, in a scenario where messages are delivered from a queue to a mediation, there are additional considerations when the flow instance fails. Options include immediately retry of delivery of the message, manual retry by an administrator (using Failed Event Manager), or moving the message to a failure queue. See the Resources section for links describing asynchronous scenarios and overall strategy.
This article has described some of the building blocks available in Integration Designer and WebSphere ESB and given some examples of how you might put them together in synchronous web services interactions.
Acknowledgements
The authors would like to thank Andy Garratt, Sergiy Fastovets, Gabriel Telerman, and Kim Clark for reviewing the article.
Resources
- WebSphere Enterprise Service Bus V7.5 Information Center
- WebSphere Enterprise Service Bus V7.5 Information Center: Error handling in the mediation flow component
- WebSphere Enterprise Service Bus V7.5 Information Center: Common patterns of usage for error handling
- Implementing tracing, logging, and error handling in mediation modules using WebSphere Integration Developer and WebSphere ESB V7, Part 1
- Error handling in WebSphere Process Server, Part 1: Developing an error handling strategy
- WebSphere Process Server invocation styles
- Asynchronous processing in WebSphere Process Server
- Exception handling and recovery using synchronous or asynchronous service invocation
- IBM Integration Designer product page
- WebSphere ESB product page
- developerWorks WebSphere ESB zone
- developerWorks Business Process. | http://www.ibm.com/developerworks/websphere/library/techarticles/1108_toth/1108_toth.html | CC-MAIN-2015-06 | en | refinedweb |
23 August 2011 04:04 [Source: ICIS news]
SINGAPORE (ICIS)--Saudi Arabia's Petro Rabigh restarted its propylene oxide (PO) plant over the weekend following a turnaround that lasted almost four months, market sources said on Tuesday.
The 200,000 tonne/year PO facility, located at Rabigh in ?xml:namespace>
The company had targeted to restart the
Petro Rabigh officials declined to comment on plant operations.
Tight availability of PO imports sent spot prices in Asia surging by about $200/tonne (€140/tonne) over the past four weeks to $2,030-2,090/tonne CFR (cost and freight)
"Any PO cargoes that Petro Rabigh can dispatch to
Prices may soften in the coming weeks when regional supplies normalise, he said.
The Rabigh petrochemical complex includes a 400,000 bbl/day refinery and high olefins fluid catalytic cracker (HOFCC) that produce 1.3m tonnes/year of ethylene and 900,000 tonnes/year of propylene.
The HOFCC resumed production on 31 July.
Petro Rabigh is a 50:50 joint venture between Saudi Aramco and
($1 = €0.70)
Additional reporting by Vikki Shen | http://www.icis.com/Articles/2011/08/23/9487115/saudis-petro-rabigh-resumes-production-at-po-facility.html | CC-MAIN-2015-06 | en | refinedweb |
Up to [DragonFly] / src / sys / bus / cam / scsi
Request diff between arbitrary revisions
Keyword substitution: kv
Default branch: MAIN
Unbreak LINT build
Warns cleanup.
cdevsw -> dev_ops.
Rename printf -> kprintf in sys/ and add some defines where necessary (files which are used in userland, too)..
Remove spl*() calls from the bus/ infrastructure, replacing them with critical sections. Remove splusb() from everywhere, replacing it with critical sections.
Bring in some CAM bug fixes from FreeBSD. Submitted-by: Xin LI <delphij@frontfree.net> Obtained-from: Free.
Add the DragonFly cvs id and perform general cleanups on cvs/rcs/sccs ids. Most ids have been removed from !lint sections and moved into comment sections.
import from FreeBSD RELENG_4 1.22.2.7 | http://www.dragonflybsd.org/cvsweb/src/sys/bus/cam/scsi/scsi_target.c?f=h | CC-MAIN-2015-06 | en | refinedweb |
insertionSort - Java Beginners
));
}
}
For more information on Java Array visit to :
Thanks
Good - Java Beginners
java ...can you give me a sample program of insertion sorting...
with a comment,,on what is algorithm..
Hi Friend,
Please visit the following
Very simple `Hello world' java program that prints HelloWorld
Hello World Java
Simple Java Program for beginners (The HelloWorld.java)
Java is powerful programming language and it is used... step. This short
example shows how to write first java application and compile
merge sorting in arrays - Java Beginners
,
Please visit the following link:
Thanks
Beginners in Java
Beginners in Java Hi, I am beginners in Java, can someone help me... tutorials for beginners in Java with example?
Thanks.
Hi, want to be command over Java, you should go on the link and follow the various beginners
java sorting codes - Java Beginners
java sorting codes I want javasorting codes. please be kind enogh and send me the codes emmediately/// Hi Friend,
Please visit the following link:".
1.
programs - Java Beginners
information. Array Programs How to create an array program in Java? Hi public class OneDArray { public static void main (String[]args){ int
.
http...:
http
Java Tutorial for Beginners in Java Programming
Java Tutorial for Beginners - Getting started tutorial for beginners in Java
programming language
These tutorials for beginners in Java will help you... discussed here is for very beginner in Java and teaches you Java
from scratch - Java Beginners
://
Here you
java - Java Beginners
:
Thanks
array manipulation - Java Beginners
example at:
Please help me... its very urgent
Please help me... its very urgent Please send me a java code to check whether INNODB is installed in mysql...
If it is there, then we need to calculate the number of disks used by mysql
java - Java Beginners
link:... in JAVA explain all with example and how does that example work.
thanks
... Search:
java - Java Beginners
information.
Reply Me - Java Beginners
Reply Me Hi deepak,
your sending web application is good (alphabetical searching dropdown menu)
Steps:-
1:-searching is good... will be not displayed.
please help me how it is open selection of the user its very urgent
Algorithm_2 - Java Beginners
Sort,please visit the following link:
Thanks
Algorithm_3 - Java Beginners
the following links:
java
the following link:
array in java - Java Interview Questions
Friend,
Please visit the following link:
Thanks...array in java array is a object in java. is it true, if true
Insertion Sort - Java Beginners
Insertion Sort Hello rose india java experts.If you don't mind.Can you help me.What is the code for Insertion Sort and Selection Sort that displays...:
public class InsertionSort {
public static void sort(String[] array) {
int
core java - Java Beginners
core java Hi guys,
String class implements which interface plzzzzzzzzzzzzz can any body tell me its very very urgentttttttttttttttt
Thanks String implements the Serializable, CharSequence | http://www.roseindia.net/tutorialhelp/comment/92921 | CC-MAIN-2015-06 | en | refinedweb |
13 July 2007 07:00 [Source: ICIS news]
LONDON (ICIS news)--These were the top stories at 06:00 GMT in the following European newspapers’ online versions on Friday. To go to the individual websites, click the links below:
?xml:namespace>
Front page
?xml:namespace>
Rio Tinto, the UK-listed mining group, said it was planning to make billions of dollars worth of disposals to help pay for Alcan, the Canadian aluminium group, which it agreed to buy in a friendly $44bn (€31.6bn) deal.
France turns up the pressure on EADS
Companies and markets
TPG asked back for Alitalia bid
The Italian government has re-opened contacts with TPG, the
Cherney appeals in Rusal case
Mikhail Cherney, a controversial founding father of the post-Soviet aluminium industry, is making an appeal in the London High Court claiming a 20% stake in Rusal, part of the Russian aluminium group United Company Rusal now gearing up for a full listing on the London Stock Exchange.
International Herald Tribune
Front page
A firm Bush tells Congress not to dictate war policy
President George W Bush struck an aggressive new tone on Thursday in his clash with Congress over
A Chinese reformer betrays his cause, and pays
Zheng Xiaoyu was once ranked as one of the most powerful regulators in
Marketplace
Ventana spurns $3bn offer from Roche
Ventana Medical Systems, the
Mergers and buyouts curb Indian air fare war
Mergers and buyouts are ending the cut-throat airline competition in
TheThe
Front page
Gazprom taps total for Shtokman
Gazprom on Thursday invited French energy major Total to help develop the Shtokman project, ending years of wrangling over whether foreign companies would take part in developing one of the world's largest and most difficult gas fields.
Furry thespians in danger of loosing stage
Yury Kuklachyov and his cats have played
Business
Lawysayd he will defend Berezovsky
A
Rosneft will seek to fill pipe toAsia
Rosneft will almost completely fill
Front page
Lawless Danish settlement approaches date with fate
Thirty-six years after it was founded, the "Free State of Christiania" is now being forced to comply with
Following the tour de France in the shadow of doping
The Tour de France has begun, but does anybody really care? Cycling is suffering as fans and sponsors turn away in droves, disgusted by the sport's inability to solve its persistent doping problem.
Turkish DailyTurkish Daily
Front page
Turkish tobacco exports could go up in smugglers’ smoke
Chinese are getting results on their ‘grand tobacco scheme’, which involves producing tobacco in
Kurdish leader pledges common sense over emotions
Pro-Kurdish groups wanted to attend the funerals of soldiers killed in the fight against the outlawed Kurdistan Workers' Party (PKK) but had not done so out of fear of anger and prejudices directed at them, said Aysel Tuğluk, former deputy leader of the pro-Kurdish Democratic Society Party (DTP).
Business and finance
Prices jump 15% in Russian Olympic host city
Real estate prices in
Yay-Sat media distribution started selling the English-language versions of Beijing Review, China Today and China Pictorial in
Front page
Poles are among the happiest lovers in the world
Poles rank fourth in the world in terms of satisfaction with their sex lives, according to the latest Durex's Sexual Wellbeing Survey, which interviewed 26,000 respondents from 26 countries.
"CBA provocation" backfires against the PM
After it was revealed by the media that the corruption affair which led to the dismissal of Agriculture Minister Andrzej Lepper was from start to finish run by the officers of the Central Anti Corruption Bureau (CBA), opposition parties have demanded the resignation of Prime Minister Jarosław Kaczyński and the head of the CBA Mariusz Kamiński.
Business
Business groups back SLD proposal to cut VAT
The Democratic Left Alliance (SLD) yesterday submitted a bill to reduce the basic VAT rate from 22% to 19%, drawing a favorable response from business circles.
FSO breaks into profit thanks to soaring sales
FSO turned a profit for the first time in six years in the 2006 fiscal year amid surging output and booming export sales, | http://www.icis.com/Articles/2007/07/13/9044751/in-fridays-europe-papers.html | CC-MAIN-2015-06 | en | refinedweb |
17 September 2009 12:00 [Source: ICIS news]
?xml:namespace>
CRUDE: October WTI: $72.58/bbl, up $0.07/bbl. November BRENT: $71.65/bbl, down $0.02/bbl
Crude prices were range bound either side of Wednesday’s close, which saw prices rise by well over a dollar after the weekly US Stock figures revealed a large, unexpected draw on crude stocks.
NAPHTHA: Open spec spot cargoes were assessed in a $623-633/tonne CIF (cost, insurance and freight) NWE (northwest Europe) range, up by $7/tonne CIF NWE on the buy side of the range set at the end of trading on Wednesday. October swaps were pegged at $623-624/tonne CIF NWE.
BENZENE: On Wednesday, two September benzene deals were done at $807-810/tonne CIF ARA (
STYRENE: A September styrene trade was done on Wednesday at $1,105/tonne FOB (free on board)
TOLUENE: The market remained quiet, with no firm bids or offers heard. The range was notionally pegged at $760-800/tonne FOB
MTBE: Bids and offers were heard at a factor of 1.13-1.15 against gasoline on Thursday morning, unchanged from Wednesday’s levels. Gasoline traded at $659-663/tonne
XYLENES: The paraxylene market remained quiet, with no firm buying interest heard. The range remained stable at $840-910 | http://www.icis.com/Articles/2009/09/17/9248067/noon-snapshot-europe-markets-summary.html | CC-MAIN-2015-06 | en | refinedweb |
12 August 2011 19:35 [Source: ICIS news]
LONDON (ICIS)--H&R – formerly H&R WASAG – reported a 9% year-on-year increase in first-half earnings before interest, tax, depreciation and amortisation, to €56m ($80m), as sales rose almost 11%, the Germany-based producer of specialties, waxes, plasticisers and white oils said on Friday.
H&R's sales for the six months ended 30 June were €595.3m, up from €537.9m in the same period a year ago.
H&R credited a strong performance in the first three months to 31 March for the improvement.
Beginning in April, however, H&R’s results suffered as crude oil prices soared, it said.
In addition, scheduled maintenance work at H&R’s main production site in ?xml:namespace>
For the full year ending 31 December, H&R expects EBITDA of €90m-100m, CEO Gert Wendroth said.
However, Wendroth warned that the turmoil in financial markets and the potential impact on the real economy, as well as uncertainty over crude oil prices, make it difficult to predict business developments over the next few months.
H&R also said it expects to start up a mechanically completed propane de-asphalting plant at
The unit will convert residue from H&R’s
Effective on 1 August, H&R changed its name to H&R AG, from H&R WASAG AG. The WASAG acronym referred to an explosives business the company stopped operating in 2007.
( | http://www.icis.com/Articles/2011/08/12/9484965/germanys-h.html | CC-MAIN-2015-06 | en | refinedweb |
ExtJs HMVC
ExtJs HMVC).
- Join Date
- Mar 2007
- Location
- Gainesville, FL
- 37,743
- Vote Rating
- 926
I would put this up on GitHub. People like me like to read code without downloading.
mandro, do you think that your good solution, would be possible to integrate with "ext.ux.desktop", using "desktop" like the main aplication ?¿
Thanks
this solution is under any license ? (sorry about my english)
Manel
@mandro
Thank you for your sample application. It is good inspiration for improving the framework.
A couple observations/questions:
* It does not appear that your app/module controllers support nested controllers, which means only 2 tiers are supported - the main & the module. For example, panelA is supervised by controllerA. PanelA contains a button to launch a different view (e.g. a config window). The sub-view should be supervised by a sub-controller. The pattern could continue to an Nth level. How would you refactor your example to support a nested design in order to avoid bloat in the main controller?
* When your app loads, new globals get added that correspond with each bundle. Seems like that approach could lead to namespace bloat & potential conflicts. Would it be better to keep all bundles isolated to the product namespace? For example, AM.Reverse, AM.Dashboard, and AM.Viewshed would be used instead.
ExtJS4 HMVC
ExtJS4 HMVC
Previously I developed a sample application to use other pattern in my solutions.
This solution only has 2 levels in depth, using MVC at level.
Now, I recently work in new simplified and improved solution to use HMVC, using components with MVC pattern at any level.
When I instanciate components, these create global namespaces, (I use the class Ext.ClassManager).But these meaning that you do not have 2 components with same name in a same level.
New version.
Sorry I could'nt submit these files at the github. | http://www.sencha.com/forum/showthread.php?180656-ExtJs-HMVC | CC-MAIN-2015-06 | en | refinedweb |
ctypes.pythonapi not usable?
@omz Following code raises an
AttributeErrorin Pythonista:
import ctypes print ctypes.pythonapi.PyThreadState_SetAsyncExc
In Pythonista, the error is
AttributeError: dlsym(RTLD_DEFAULT, PyThreadState_SetAsyncExc): symbol not found.
But it works with PC version Python and the correct output should be something
<_FuncPtr object at 0x10223be20>
I looked into
__init__.pyof the ctypes module and found following code
if _os.name in ("nt", "ce"): pythonapi = PyDLL("python dll", None, _sys.dllhandle) elif _sys.platform == "cygwin": pythonapi = PyDLL("libpython%d.%d.dll" % _sys.version_info[:2]) else: pythonapi = PyDLL(None)
Apparently, the
elseclause should take effect. Since the Pythonista environment is quite special, are there some settings missing in the DLL initialisation? I tried
PyDLL('libpythonista.a')but got error
image not found.
Any tips?
Strange, this prints something like
<_FuncPtr object at 0x106ab31c8>for me:
import ctypes print ctypes.pythonapi.PyThreadState_SetAsyncExc
@omz My device is an iPad Air 2 WiFi 64GB. I do also have an iPhone5 which I believe is a 32-bit device. But you are on 64-bit as well, right?
I am also on the latest Beta (160023).
Here is the full Trackback:
Traceback (most recent call last): File "<string>", line 1, in <module> File "/private/var/mobile/Containers/Bundle/Application/650B1932-78AB-40BF-80BC-9BE928D8090B/Pythonista.app/pylib/ctypes/__init__.py", line 378, in __getattr__ func = self.__getitem__(name) File "/private/var/mobile/Containers/Bundle/Application/650B1932-78AB-40BF-80BC-9BE928D8090B/Pythonista.app/pylib/ctypes/__init__.py", line 383, in __getitem__ func = self._FuncPtr((name_or_ordinal, self)) AttributeError: dlsym(RTLD_DEFAULT, PyThreadState_SetAsyncExc): symbol not found
I guess it might have to do with me running a debug build... Could you try a different function in
pythonapi, something like
PyObject_Str? There's this note in the docs about
PyThreadState_SetAsyncExc, so I think it might be "special" somehow:
To prevent naive misuse, you must write your own C extension to call this.
No luck. Still the same error with
PyObject_Str...
The most likely explanation seems to be that debug symbols are stripped from release builds (TestFlight betas are built with the "release" configuration), but not from debug builds that I deploy directly from Xcode on my device... (which is probably why it works for me). I'm installing the TestFlight build right now to verify this.
I'm not sure if I want to turn off symbol stripping just to make the Python C API accessible. I honestly don't see a lot of practical use cases for this, and it might have a negative impact on size and performance of the binary.
Okay, I can verify that this doesn't work in the TestFlight build.
I am no expert on iOS app building process. But I don't quite understand why you have to enable the entire debug builds just for exposing the Python C API.
The PC version Python is definitely not released as debug builds, yet it still allow access to the C API. I would guess there is something minor that could be changed in build settings to allow the C API becoming available. Maybe it is just a matter of including a DLL file. On OS X, I am able to load the C Library by calling
print ctypes.PyDLL('libpython2.7.dylib').PyThreadState_SetAsyncExc.
@omz
I wonder whether this could be a **permission ** issue which prevents the library file being loaded by
PyDLL?
In 1.5, we can access the
Pythonista.appfolder alongside with the
Documentsfolder. But it is no longer there in 1.6. Presumably this folder is where the library file (e.g.
libpython.dylib) resides. Is this something worth to look at? Again I am no expert and could be very wrong. Please bear with me.
My intention is to use this API for better thread management in StaSh so that a worker thread can be interupted/stopped by user from the UI thread. | https://forum.omz-software.com/topic/2069/ctypes-pythonapi-not-usable/2 | CC-MAIN-2021-43 | en | refinedweb |
Originally posted on towardsdatascience.
Using YFinance and Plotly libraries for Stock Data Analysis
In this article, I will explain to you how you can use YFinance a python library aimed to solve the problem of downloading stock data by offering a reliable, threaded, and Pythonic way to download historical market data from Yahoo! finance.
In the later part, we will see how we can use this data to plot different visually appealing and highly interactive financial charts using Plotly a python library. Plotly a Python library is an interactive, open-source plotting library that supports over 40 unique chart types covering a wide range of statistical, financial, geographic, scientific, and 3-dimensional use-cases.
Let’s Get Started. Initially, we will start by installing the YFinace library which we will use to download the stock data and see some of its features.
Installing YFinance
Run the command given below in command prompt to install yfinance using pip.
pip install yfinance
Exploring YFinance Library in Jupyter Notebook
Let’s start by importing the library and downloading the stock data. Here the stock ticker I am using is HINDPETRO.NS which is Hindustan Petroleum Corporation, you can choose any stock to analyze, just replace the stock ticker with your stock’s ticker.
#importing Library
import yfinance as yf#setting the ticker
hindpetro = yf.Ticker("HINDPETRO.NS")#Display stock information
hindpetro.info
Now let us explore some functions that YFinance Library provides. This is just a small example there is much more, that you can explore here.
# Dsiplay all the actions taken in the lifetime of the stock i.e # dividends and splits with the dates when they are providedhindpetro.actions
Similarly, you can use the following given below commands to display the look at the stock dividends and stock split separately.
#Display Dividends
hindpetro.dividends#Display Splits
hindpetro.splits
Now let us download the data into a data frame and display it.
df = hindpetro.history(period="max")
df
For performing further operations we need to reset the index of the data frame and covert the respective columns to float datatype. Below given commands will solve our purpose.
#Reseting the index
df = df.reset_index()#Converting the datatype to float
for i in ['Open', 'High', 'Close', 'Low']:
df[i] = df[i].astype('float64')
After this, let us start with the visualization part. For this first, we will need to install Plotly.
Installing Plotly
pip install plotly
Creating a Line chart using Plotly Graph_objects with Range slider and button
A line chart is highly used for time series analysis, for viewing the stock trend over a time period. Here I will explain to you how you can create an interactive line chart using Plotly. The commands below will create a line chart of the stock data stored in the data frame over the maximum time period.
The code also includes the lines for creating buttons that can be selected to display a line chart for particular time periods.
import plotly.graph_objects as go
import pandas as pdfig = go.Figure([go.Scatter(x=df['Date'], y=df['High'])])fig.update_xaxes(
rangeslider_visible=True,
rangeselector=dict(
buttons=list([
dict(count=1, label="1m", step="month",
stepmode="backward"),
dict(count=6, label="6m", step="month",
stepmode="backward"),
dict(count=1, label="YTD", step="year",
stepmode="todate"),
dict(count=1, label="1y", step="year",
stepmode="backward"),
dict(step="all")
])
)
)
fig.show()
Creating OHLC(Open, High, Low, Close) Chart
An OHLC chart is a type of bar chart that shows open, high, low, and closing prices for each period. OHLC charts are useful since they show the four major data points over a period, with the closing price being considered the most important by many traders.
The code given below will create an OHLC chart with range selector.
Creating a candlestick chart with the range slider
Candlestick charts are used by traders to determine possible price movements based on past patterns. Candlesticks are useful when trading as they show four price points (open, close, high, and low) throughout the period of time the trader specifies.
Creating an area chart
An area chart or area graph displays graphically quantitative data. It is based on the line chart. The area between axis and line is commonly emphasized with colors, textures, and hatchings.
The code given below will create an area chart of the stock data.
All these charts are created using Plotly so that you can interact with the charts. All the charts mentioned above are the main charts that are used for financial analysis.
Conclusions
In this article, we started with downloading the stock data and performing different operations/functions using YFinance. After that, we plotted different financial charts using Plotly which are used for financial/Stock Data Analysis.
Source: towardsdatascience | https://learningactors.com/downloading-stock-data-and-representing-it-visually/ | CC-MAIN-2021-43 | en | refinedweb |
Python is considered as one of the languages with the simplest syntax, along with the huge applicability. There are some common mistakes that developers make when working with the language. Here are some of the most common mistakes that Python developers make.
1. Error handling
Errors in Python have a very specific form, called a traceback. When you forget a colon at the end of a line, accidentally add one space too many when indenting under an if statement, or forget a parenthesis, you will encounter a syntax error. This means that Python couldn’t figure out how to read your program.
2.Incorrect Indentation
To indicate a block of code in Python, each line of the block has to be indicated by the same amount.
In Python, unlike other languages, indentation means a lot more than making code look clean. It is required for indicating what block of code a statement belongs to. Many characteristics depend on indentation. Some indentation errors in Python are harder to spot than others. For example, mixing spaces and tabs can be difficult to spot in a code. If that happens, whatever is seen in the editor may not be the thing that is seen by the Python when the tabs are being counted as a number of spaces. Jupyter notebook automatically replaces tabs with spaces, but it gives an error in most cases. To avoid this mistake, for the entire block, all spaces or all tabs must be used.
3.Misusing The __init__ Method
The init is a reserved method in python classes used as constructors. In object-oriented terminology, it is called as a constructor And it gets called when Python allocates memory to a new class object. This method is called when an object is created from a class and it allows the class to initialize the attributes of the class. The objective is to set the values of instance members for the class object. Trying to explicitly return a value from the init method implies that the user wants to deviate from its actual purpose.
4. Class variables use
In Python, class variables are internally handled as dictionaries and follow what is often referred to as Method Resolution Order or MRO. It defines the class search path used by Python to search for the right method to use in classes with multi-inheritance. This causes a Python problem unless it’s handled properly. Consider the following example:
class A:
def rk(self):
print(” In class A”)
class B(A):
def rk(self):
print(” In class B”)
r = B()
r.rk()
the methods that are invoked is from class B but not from class A, and this is due to Method Resolution Order(MRO). The order that follows in the above code is- class B – > class A. This causes a Python problem unless it’s handled properly.
5.Variable Binding
Python has late binding behavior. There is often confusion among Python developers regarding how Python binds its variables. What it means is that it binds its variables in closures or in the surrounding global scope and hence the values of variables used in closures are looked up at the time the inner function is called.
6. Python Standard Library Module Names
Python is rich with library modules which comes out of the box. A common mistake is using the same name for a module and a module in the Python standard library. This will lead to importing another library which will then try to import the module in the Python standard library and since there will be a module with the same name, the other package will import the defined module, in place of the Python standard library module.
7. LEGB Rule
Python scope resolution is based on what is known as the LEGB rule, which is shorthand for Local, Enclosing, Global, Built-in. Python uses a different approach for scoping variables than other programming languages. If a user makes an assignment to a variable in a scope, that variable is automatically considered by Python to be local to that scope and shadows any similarly named variable in an outer scope. This error is particularly common for developers when using lists.
Source: analyticsindiamag | https://learningactors.com/7-common-mistakes-python-developers-should-avoid/ | CC-MAIN-2021-43 | en | refinedweb |
shuCourses Plus Student 390 Points
What is a argument?
so my last task, i was trying to produce "Hi " a multiple of times so it would look like Hi Hi Hi Hi Hi so it was def printer(count): print(count * "Hi ")
so i'm assuming count is the argument. i tried looking up arguments i could use for python. and it was just confusing haha.
i managed to pass my task with a much much better understanding thanks to the help of the community, but i still don't understand what a argument is. does it have its own special property?
i get how i can call them whatever i want, but the vastness of that confuses me.
def product(count, interger): print({} * {}.format(count, interger)) product(5 * 6)
1 Answer
andren28,503 Points
Technically speaking,
count in the previous task is not actually an argument, it is a parameter. Arguments and parameters are closely linked but they are not the same thing.
Let's start with a simple example, say I wanted to make a function that takes two numbers and performs addition on them, and then returns the result. That would look like this:
# Defining function add def add(num1, num2): return num1 + num2 # Calling function add add(5, 10)
Above I first define a function with two parameters, the first is
num1 and the second is
num2. Parameters can be thought of as variables that are given a value when the function is called. So even though you don't give them a value inside your function you can treat them as variables that do have a value, since they will be assigned one when the function is called.
After defining the function I call it. When you call a function you can pass arguments to it inside the parenthesis. 5 is the first argument and 10 is the second argument. Arguments passed to a function is assigned to the parameters of the function based on their position. That is to say that the first argument is assigned to the first parameter and so on. That means that
num1 is assigned 5 and
num2 is assigned 10.
Arguments are separated using a comma, just like you do when you define multiple parameters for a function.
Looking at the example above you should be able to figure out how to solve the task at hand, but if you are still confused about anything then feel free to ask me more questions, I'll answer anything I can.
richard shuCourses Plus Student 390 Points
richard shuCourses Plus Student 390 Points
best explanation ever, thankyou! | https://teamtreehouse.com/community/what-is-a-argument-2 | CC-MAIN-2021-43 | en | refinedweb |
- This topic has 6 replies, 3 voices, and was last updated 1 year, 4 months ago by
diego.
- AuthorPosts
Jay2thaworldParticipant
Hello,
I was following along with writing variables, when I updated the new mac address variable and ran the script to assign, I can see that the print displayed the correct new mac address, but it did not update.
I ran the process manually in terminator and got a message stating “Cannot assign requested address”.
I’ve tried a few different strings with no results. Any idea why this is happening?
After the first initial change it will not let me change it again…
Jay2thaworldParticipant
This is the message that displays:
/root/PycharmProjects/hello/venv/bin/python /root/PycharmProjects/hello/mac_changer.py
interface > eth0
new_mac > 00:11:22:33:44:55
[+] Changing MAC Address for eth0 to 00:11:22:33:44:55
Usage:
ifconfig [-a] [-v] [-s] <interface> [[<AF>] <address>]
[add <address>[/<prefixlen>]]
[del <address>[/<prefixlen>]]
[[-]broadcast [<address>]] [[-]pointopoint [<address>]]
[netmask <address>] [dstaddr <address>] [tunnel <address>]
[outfill <NN>] [keepalive <NN>]
[hw <HW> <address>] [mtu <NN>]
[[-]trailers] [[-]arp] [[-]allmulti]
[multicast] [[-]promisc]
[mem_start <NN>] [io_addr <NN>] [irq <NN>] [media <type>]
[txqueuelen <NN>]
[[-]dynamic]
[up|down] …
<HW>=Hardware Type.
List of possible hardware types:
loop (Local Loopback) slip (Serial Line IP) cslip (VJ Serial Line IP)
slip6 (6-bit Serial Line IP) cslip6 (VJ 6-bit Serial Line IP) adaptive (Adaptive Serial Line IP)
ash (Ash) ether (Ethernet) ax25 (AMPR AX.25)
netrom (AMPR NET/ROM))
irda (IrLAP) ec (Econet) x25 (generic X.25)
eui64 (Generic EUI-64)
<AF>=Address family. Default: inet
List of possible address families:
unix (UNIX Domain) inet (DARPA Internet) inet6 (IPv6)
ax25 (AMPR AX.25) netrom (AMPR NET/ROM) rose (AMPR ROSE)
ipx (Novell IPX) ddp (Appletalk DDP) ec (Econet)
ash (Ash) x25 (CCITT X.25)
`There should be a new tutorial for python 3 because it’s really head scratching trying to figure out things ourselves online. I ran this code exactly as it was taught in the tutoring but it’s giving me error.the error is at the bottom.
#!/usr/bin/env python
import subprocess
import optparse
def get_argument():
parser=optparse.OptionParser()
parser.add_option(“-i”, “–interface”, dest=”interface”, help=”interface to change its Mac address”)
parser.add_option(“-m”, “–mac”, dest=”new mac”, help=”New Mac address” )
return parser.parse_args()
def change_mac():
print(“[+] Changing Mac address for” + interface + “to” + new_mac)
subprocess.call([“ifconfig”, interface, “down”])
subprocess.call([“ifconfig”, interface, “hw” “ether”, new_mac])
subprocess.call([“ifconfig”, interface, “up”])
(options, arguments) = get_argument()
change_mac(options.interface, options.new_mac)
Traceback (most recent call last):
File “main.py”, line 20, in <module>
change_mac(options.interface, options.new_mac)
AttributeError: Values instance has no attribute ‘new_mac’
Hi!
You are missing a comma between “hw” and “ether”.
Change it and let me know how it goes!
Diego
- AuthorPosts
- You must be logged in to reply to this topic. | https://zsecurity.org/forums/topic/issues-assigning-new-mac-address/ | CC-MAIN-2021-43 | en | refinedweb |
You've answered 0 of 138 questions correctly. (Clear)
Question #105 Difficulty:
According to the C++17 standard, what is the output of this program?
#include <iostream> using namespace std; class A { public: A() { cout << "a"; } ~A() { cout << "A"; } }; int i = 1; int main() { label: A a; if (i--) goto label; }. | https://cppquiz.org/quiz/question/105 | CC-MAIN-2021-43 | en | refinedweb |
The easiest way to create web applications with Go
web.go is the simplest way to write web applications in the Go programming language. It's ideal for writing simple, performant backend web services.
web.go should be familiar to people who've developed websites with higher-level web frameworks like sinatra or web.py. It is designed to be a lightweight web framework that doesn't impose any scaffolding on the user. Some features include:
Make sure you have the a working Go environment. See the install instructions. web.go targets the Go
releasebranch.
To install web.go, simply run:
go get github.com/hoisie/web
To compile it from source:
git clone git://github.com/hoisie/web.git cd web && go build
package main
import ( "github.com/hoisie/web" )
func hello(val string) string { return "hello " + val }
func main() { web.Get("/(.*)", hello) web.Run("0.0.0.0:9999") }
To run the application, put the code in a file called hello.go and run:
go run hello.go
You can point your browser to .
Route handlers may contain a pointer to web.Context as their first parameter. This variable serves many purposes -- it contains information about the request, and it provides methods to control the http connection. For instance, to iterate over the web parameters, either from the URL of a GET request, or the form data of a POST request, you can access
ctx.Params, which is a
map[string]string:
package main
import ( "github.com/hoisie/web" )
func hello(ctx *web.Context, val string) { for k,v := range ctx.Params { println(k, v) } }
func main() { web.Get("/(.*)", hello) web.Run("0.0.0.0:9999") }
In this example, if you visit, you'll see the following printed out in the terminal:
a 1 b 2
API docs are hosted at
If you use web.go, I'd greatly appreciate a quick message about what you're building with it. This will help me get a sense of usage patterns, and helps me focus development efforts on features that people will actually use.
web.go was written by Michael Hoisie | https://xscode.com/hoisie/web | CC-MAIN-2021-43 | en | refinedweb |
Yellow box warnings are useful while developing react native mobile apps. Even though warnings are not critical as red box errors, the yellow boxes catch the attention of the developer to attend the issue. Proper fixing of the warnings can make your app more optimised.
But, there are some scenarios where the yellow box warnings are pure annoyance to the developer. For example, if you have a warning due to the usage of any third party libraries/dependencies such as react navigation then you cannot not fix the issue directly. In such situations, you may need to disable yellow box warnings.
The method console.warn() is used to create warnings. You can disable yellow box warnings completely using the snippet below:
console.disableYellowBox = true;
You can also disable specific selected warnings instead of disabling all warnings by setting an array of prefixes that should be disabled.
For your knowledge, Red box errors and yellow box warnings are disabled automatically in your production builds.
There are two warnings in the componentDidMount life cycle of the react native example given below. Using the YellowBox.ignoreWarnings method, the First Warning has been disabled. Hence, you can see only the second warning in the output.
import React, {Component} from 'react'; import {View, Text, YellowBox, StyleSheet} from 'react-native'; export default class App extends Component { constructor(props) { super(props); this.state = {}; } componentDidMount() { YellowBox.ignoreWarnings(['First Warning!']); console.warn('First Warning!'); console.warn('Second Warning!'); } render() { return ( <View style={styles.container}> <Text> App </Text> </View> ); } } const styles = StyleSheet.create({ container: { flex: 1, backgroundColor: 'white', }, });
I hope this blog post will help you to disappear those annoying yellow box warnings from your react native project. Keep visiting my blog! | https://reactnativeforyou.com/how-to-disable-yellow-box-warnings-in-react-native/ | CC-MAIN-2021-43 | en | refinedweb |
escarcega8,667 Points
not sure what I'm doing here
a couple of examples would be helpful
thank you guys in advance,
def product(): print('{}, * {}') return product
1 Answer
Bapi Roy14,237 Points
def product(a, b): return a * b
You are taking two arguments a and b. Then you are multiplying it (a *b) and return the value from function product. | https://teamtreehouse.com/community/not-sure-what-im-doing-here | CC-MAIN-2021-43 | en | refinedweb |
1、 Foreword
The plug-in system in my mind should be like NOP (more awesome, such as orchard and OSGi. Net). Each plug-in module is not just a pile of DLLs that implement a business interface, and then called by reflection or IOC technology, but a complete MVC applet. I can control the installation and disabling of plug-ins in the background. The directory structure is as follows:
After generation, it is placed in the plugins folder under the root directory of the site, and each plug-in has a subfolder
Plugins/Sms.AliYun/
Plugins/Sms.ManDao/
I am a lazy person with obsessive-compulsive disorder. I don’t want to copy the generated DLL file to the bin directory.
2、 Problems to be solved
1. By default, the asp.net engine will only load the DLLs in the “bin” folder, and the plug-in files we want are scattered in various subdirectories under the plugins directory.
2. How to handle when a model is used in a view? By default, razorviewengine uses buildmanager to compile views into dynamic assemblies, and then uses activator.createinstance to instantiate newly compiled objects. When using plug-in DLL, the current AppDomain does not know how to resolve this view that references the model because it does not exist in “bin” or GAC. To make matters worse, you won’t receive any error messages telling you why it doesn’t work or where the problem is. Instead, he will tell you that the file cannot be found in the view directory.
3. A plug-in is running under the site. Directly overwriting the DLL of the plug-in will tell you that the DLL is currently in use and cannot be overwritten.
4. How to load the view file if it is not placed in the view directory of the site.
3、 Net 4.0 makes all this possible
A new feature of net4.0 is the ability to execute code before application initialization (preapplicationstartmethodattribute). This feature enables applications to_ Some work can be done before star. For example, we can tell our MVC plug-in where to put the DLL of the system and do preloading before the application starts. About several new features of. Net, I have written a blog to introduce them. Click me., Some bloggers have written about preapplicationstartmethodattribute. Please click me. ABP’s startup module should also be implemented using the feature principle of preapplicationstartmethodattribute. I haven’t seen whether this is the case.
4、 Solution
1. Modify the web.config directory of the primary site so that the runtime can load files from other directories in addition to the bin directory
<runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <probing privatePath="Plugins/temp/" /> </assemblyBinding> </runtime>
2. Develop a simple plug-in management class, which is used in application_ Before start, copy the DLLs in each subdirectory of plugins to the folder specified in step 1. In order to make the demo as simple as possible, there is no detection of duplicate DLLs (for example, the EF assembly is referenced in the plug-in, and the master site also refers to it. If the EF DLL already exists in the site bin directory, it is not necessary to copy the DLL in the plug-in to the dynamic assembly directory set above)
using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.Reflection; using System.Text; using System.Threading.Tasks; using System.Web; using System.Web.Compilation; using System.Web.Hosting; [assembly: PreApplicationStartMethod(typeof(Plugins.Core.PreApplicationInit), "Initialize")] namespace Plugins.Core { public class PreApplicationInit { static PreApplicationInit() { PluginFolder = new DirectoryInfo(HostingEnvironment.MapPath("~/plugins")); ShadowCopyFolder = new DirectoryInfo(HostingEnvironment.MapPath("~/plugins/temp")); } /// <summary> ///Directory information of plug-in /// </summary> private static readonly DirectoryInfo PluginFolder; /// <summary> ///The DLL directory specified when the program should run /// </summary> private static readonly DirectoryInfo ShadowCopyFolder; public static void Initialize() { Directory.CreateDirectory(ShadowCopyFolder.FullName); //Empty the files in the running directory of the plug-in DLL foreach (var f in ShadowCopyFolder.GetFiles("*.dll", SearchOption.AllDirectories)) { f.Delete(); } foreach (var plug in PluginFolder.GetFiles("*.dll", SearchOption.AllDirectories).Where(i=>i.Directory.Parent.Name== "plugins")) { File.Copy(plug.FullName, Path.Combine(ShadowCopyFolder.FullName, plug.Name), true); } foreach (var a in ShadowCopyFolder .GetFiles("*.dll", SearchOption.AllDirectories) .Select(x => AssemblyName.GetAssemblyName(x.FullName)) .Select(x => Assembly.Load(x.FullName))) { BuildManager.AddReferencedAssembly(a); } } } }
3. How can the view engine find our view? The answer is to rewrite the method of razorviewengine. I adopt the method that the Convention is greater than the configuration (assuming that our plug-in project namespace is plugins.apps.sms, the default controller namespace is plugins.apps.sms.controllers, and the folder after plug-in generation must be / plugins / plugins. Apps. SMS /), By analyzing the current controller, you can know the view directory location of the current plug-in
using System; using System.Collections.Generic; using System.Linq; using System.Threading.Tasks; using System.Web; using System.Web.Mvc; using System.Web.WebPages.Razor; namespace Plugins.Web { public class CustomerViewEngine : RazorViewEngine { /// <summary> ///Define the address where the view page is located. /// </summary> private string[] _viewLocationFormats = new[] { "~/Views/Parts/{0}.cshtml", "~/Plugins/{pluginFolder}/Views/{1}/{0}.cshtml", "~/Plugins/{pluginFolder}/Views/Shared/{0}.cshtml", "~/Views/{1}/{0}.cshtml", "~/Views/Shared/{0}.cshtml", }; public override ViewEngineResult FindView(ControllerContext controllerContext, string viewName, string masterName, bool useCache) { string ns = controllerContext.Controller.GetType().Namespace; string controller = controllerContext.Controller.GetType().Name.Replace("Controller", ""); //Description is the controller in the plug-in, and the view directory needs to be handled separately if (ns.ToLower().Contains("plugins")) { var pluginsFolder = ns.ToLower().Replace(".controllers", ""); ViewLocationFormats = ReplacePlaceholder(pluginsFolder); } return base.FindView(controllerContext, viewName, masterName, useCache); } /// <summary> ///Replace pluginfolder placeholder /// </summary> /// <param name="folderName"></param> private string[] ReplacePlaceholder(string folderName) { string[] tempArray = new string[_viewLocationFormats.Length]; if (_viewLocationFormats != null) { for (int i = 0; i < _viewLocationFormats.Length; i++) { tempArray[i] = _viewLocationFormats[i].Replace("{pluginFolder}", folderName); } } return tempArray; } } }
Then specify the razor engine in the global.asax of the primary site as the one we rewritten
4. Start making a plug-in directory, which is not much different from the MVC project we usually establish, but some settings need to be made during release.
. the generation path should be written according to the agreement in Article 3, otherwise the view file will not be found
The web.config and. Cshtml files in the. View directory should be copied to the generation directory (right click in the file)
3. Set the generation attribute in the reference project. If it already exists under the main program, set “copy to output directory” to none. Otherwise, an error will occur when copying to the dynamic bin directory. You can modify the type in step 2 and add the file comparison function. Only if there is nothing in the bin directory can it be copied to the dynamic bin directory.
4. The generated directory structure is as follows:
5. After running, everything is normal, the controller in the plug-in works normally, and there is no problem that the model is referenced in the view
At this point, even if the core part of a plug-in system is completed, you can continue to expand and add the discovery, installation and uninstall functions of plug-ins. These are Pediatrics compared with the core functions. In the future, I will publish an article on plug-in system based on ABP framework. If you are interested, prepare the small bench and buy melon seeds and peanuts:)
5、 Source code
Download plugins link: Password: 85v1
The above is the whole content of this article. I hope it will be helpful to your study, and I hope you can support developpaer. | https://developpaper.com/using-asp-net-mvc-engine-to-develop-plug-in-system/ | CC-MAIN-2021-43 | en | refinedweb |
We‘re very wary of changes that increase the complexity of Ninja (in particular, new build file syntax or command-line flags) or increase the maintenance burden of Ninja. Ninja is already successfully used by hundreds of developers for large projects and it already achieves (most of) the goals we set out for it to do. It’s probably best to discuss new feature ideas on the mailing list or in an issue before creating a PR.
Generally it's the Google C++ Style Guide with a few additions:
#if __cplusplus >= 201103L.
using namespace std;a lot in the past. For new contributions, please try to avoid relying on it and instead whenever possible use
std::. However, please do not change existing code simply to add
std::unless your contribution already needs to change that line of code anyway.
///for Doxygen (use
\ato refer to arguments).
CanonicalizePath(string* path, string* err), the arguments are hopefully obvious).
If you're unsure about code formatting, please use clang-format. However, please do not format code that is not otherwise part of your contribution. | https://fuchsia.googlesource.com/third_party/ninja/+/d52a43d105040b92442e7c6657b50a2188b80ebd/CONTRIBUTING.md | CC-MAIN-2021-43 | en | refinedweb |
You can easily display gif images in ios but things are not the same in Android. In a normal straightforward case, the following snippet will work in ios but not on Android devices.
Image source={{uri: ''}} />
Now, assume that I want to show following gif giphy website. It’s a gif image in webp format.
In order to get gif support on Android devices, you need to use the dependencies of Facebook’s Fresco library. Open YourProject>Android>App>build.gradle file and add the following lines to dependencies {} tag.
implementation 'com.facebook.fresco:fresco:2.0.0' // For animated GIF support implementation 'com.facebook.fresco:animated-gif:2.0.0' // For WebP support, including animated WebP implementation 'com.facebook.fresco:animated-webp:2.0.0' implementation 'com.facebook.fresco:webpsupport:2.0.0'
As given in the comments, the dependencies are for gif support as well as animated webp. If you want to know information about dependencies go here.
After adding the dependencies, the following code will work on Android devices. Don’t forget to run the project after adding dependencies using react-native run-android command.
import React, { Component } from 'react'; import { View, Image, StyleSheet } from 'react-native'; class Home extends Component { constructor(props){ super(props); } render() { return ( <View style={styles.container}> <Image style={styles.image} source={{uri:'' }} /> </View> ); } } const styles = StyleSheet.create({ container: { flex: 1, justifyContent: 'center', alignItems: 'center', backgroundColor: 'white', }, image: { height: 250, width: 250 } }); export default Home;
The output of react native example for gif support in Android os will be as given below:
That’s how you add gif support to react native android apps!
4 thoughts on “How to Display Gif Images in React Native Android App”
Making GIF is very difficult. But After reading your article i can solve my problem.This code is very helpful to us. I think I got all the information I was looking for. Thanks for this wonderful piece of article.
It was very hard for me to create various kind of GIF’s. After reading your article I found all the tips very much useful. Thanks for posting blogs like this.
great tutorial also if your want to use some thrid-party gif like Giphy I recommend check out my post
Thank
get gif support on Android devices, you need to use the dependencies of Facebook’s Fresco library. Keep posting more articles on GIF. | https://reactnativeforyou.com/how-to-display-gif-images-in-react-native-android-app/ | CC-MAIN-2021-43 | en | refinedweb |
Hello to all, welcome to therichpost.com. In this post, I will tell you, How to call the helper function in laravel Controller or Views? Laravel is one of the top php mvc framework. Helper file in laravel helps to create global functions and these all are use in all laravel project.
In my last post, I told How to create helper file in laravel?
and today I will tell you how to call helper functions in laravel controller or views.
First, I will tell you controller example:
<?php namespace App\Http\Controllers; use Illuminate\Http\Request; class HomeController extends Controller { public function index() { pr($data); return view('home', compact('result')); } } ?>
Second, I will tell you view example:
@section('content') <div class="container"> <div class="row"> <div class="col-md-8"> @php pr($data); @endphp </div> </div> </div> @endsection
There are so many code tricks in laravel and i will let you know all. Please do comment if you any query related to this post. Thank you. Therichpost.com
Recent Comments | https://therichpost.com/call-helper-function-laravel-controller-views/ | CC-MAIN-2021-43 | en | refinedweb |
Lesson 2. Open, Plot and Explore Lidar Data in Raster Format with Python Spatial data open source python Workshop
Learning Objectives
After completing this tutorial, you will be able to:
- Open a lidar raster dataset in
Pythonusing
rasterioand a context manager to handle file connections.
- Be able to identify the resolution of a raster in
Python.
- Be able to plot a lidar raster dataset in
Pythonusing
matplotlib. open a plot a lidar raster dataset in
Python._0<<
Raster Facts
A few notes about rasters:
- Each cell is called a pixel.
- And each pixel represents an area on the ground.
- The resolution of the raster represents the area that each pixel represents the area it represents on the ground. So, a 1 meter resolution raster, means that each pixel represents a 1 m_1<<
Open Raster Data in Python
You can use the
rasterio library combined with
numpy and
matplotlib to open, manipulate and plot raster data in
Python. To begin you will load a suite of python libraries required to complete this lesson. These libraries are all a part of the
earth-analytics-python environment.
Be sure to set your working directory
os.chdir("path-to-you-dir-here/earth-analytics/data")
import os import numpy as np import matplotlib.pyplot as plt import rasterio as rio from rasterio.plot import show from rasterio.plot import show_hist from shapely.geometry import Polygon, mapping from rasterio.mask import mask # a package created for this class that will be discussed later in this lesson import earthpy as et import earthpy.spatial as es import earthpy.plot as ep # set home directory and download data et.data.get_data("spatial-vector-lidar") os.chdir(os.path.join(et.io.HOME, 'earth-analytics'))
Downloading from Extracted output to /root/earth-analytics/data/spatial-vector-lidar/.
Next, download the data. This line of code should only be run if you don’t have the data on your computer already!
Note that you import the
rasterio library using the alias (or shortname)
rio. You use the
rio.open("path-to-raster-here") function to open a raster dataset using
rio in
Python.
# define path to digital terrain model sjer_dtm_path = "data/spatial-vector-lidar/california/neon-soap-site/2013/lidar/SOAP_lidarDTM.tif" # open raster data lidar_dem = rio.open(sjer_dtm_path) # optional - view spatial extent lidar_dem.bounds
BoundingBox(left=296906.0, bottom=4100038.0, right=300198.0, top=4101554.0)
You can quickly plot the raster using the
rasterio function,
show().
# plot the dem using raster.io fig, ax = plt.subplots(figsize = (10,8)) show(lidar_dem, title="Lidar Digital Elevation Model (DEM) \n Boulder Flood 2013", ax=ax) ax.set_axis_off()
Opening and Closing File Connections
The
rasterio library is efficient as it establishes a connection with the raster file rather than directly reading it into memory. Because it creates a connection, it is important that you close the connection after it is opened AND after you’ve finished working with the data!
# close the file connection lidar_dem.close()
# this returns an error as you have closed the connection to the file. show(lidar_dem) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-7-dad244dfd7d3> in <module>() 1 # this returns an error as you have closed the connection to the file. ----> 2 show(lidar_dem) ~/anaconda3/envs/earth-analytics-python/lib/python3.6/site-packages/rasterio/plot.py in show(source, with_bounds, contour, contour_label_kws, ax, title, **kwargs) 80 elif isinstance(source, RasterReader): 81 if source.count == 1: ---> 82 arr = source.read(1, masked=True) 83 else: 84 try: rasterio/_io.pyx in rasterio._io.RasterReader.read (rasterio/_io.c:10647)() rasterio/_io.pyx in rasterio._io.RasterReader._read (rasterio/_io.c:15124)() ValueError: can't read closed raster file
Once the connection is closed, you can no longer work with the data. You’ll need to re-open the connection. Like this:
# open raster data connection - again lidar_dem = rio.open(sjer_dtm_path) fig, ax = plt.subplots(figsize = (10,10)) show(lidar_dem, title="Once the connection is re-opened \nyou can work with the raster data", ax=ax) ax.set_axis_off()
lidar_dem.close()
Context Manager to Open/Close Raster Data
A better way to work with raster data in
rasterio is to use the context manager. This will handle opening and closing the raster file for you.
with rio.open('name of file') as scr: src.rasteriofunctionname()
# view spatial extent of raster object with rio.open(sjer_dtm_path) as src: print(src.bounds)
BoundingBox(left=296906.0, bottom=4100038.0, right=300198.0, top=4101554.0)
Once you are outside of the
with statement, you can no long access the
src object which contains the spatial raster information.
Raster Plots with Matplotlib
Let’s try this again. Open the same DEM using a context manager. Then let’s plot again but this time using
earthpy
plot_bands. Using
matplotlib allows you to fully customize your plots. Do the following
- use .read() to read in your raster data as a numpy array
- set masked = True to ensure that no data values get translated to
nan.
- only read in the first band of your single band image. If you don’t specify
1when you read in a raster you will get a 3 dimensional array.
# read in all of the data without specifying a band with rio.open(sjer_dtm_path) as src: # convert / read the data into a numpy array: lidar_dem_im = src.read(masked= True) # view array shape -- notice that you have 3 dimensions below print(lidar_dem_im.shape)
(1, 1516, 3292)
# specify a band so you get a 2 dimensional image array with rio.open(sjer_dtm_path) as src: # convert / read the data into a numpy array: lidar_dem_im = src.read(1, masked= True) sjer_ext = rio.plot.plotting_extent(src) # view array shape -- here you have a 2 dimensional array as you would expect to have print(lidar_dem_im.shape)
(1516, 3292)
sjer_ext
(296906.0, 300198.0, 4100038.0, 4101554.0)
Plot A Raster Using EarthPy
You are now ready to plot your data using
plot_bands and EarthPy.
ep.plot_bands(lidar_dem_im, cmap='Greys', extent=sjer_ext, cbar=False)
<matplotlib.axes._subplots.AxesSubplot at 0x7f6f2f558a90>
If you provide the spatial extent of the raster to the plot, it will be plotted in the correct spatial location. This is important if you plan to overlay another spatial data layer on top of your raster plot.
ep.plot_bands(lidar_dem_im, cmap='Greys', extent=sjer_ext, cbar=False, # Add a title arguement title="Digital Elevation Model - Pre 2013 Flood\n Plotted Using the Correct Spatial Extent") plt.show()
Adding a
; at the end of the last line of your plot will turn off the message that you might otherwise get from matplotline:
Text(0.5,1,'Digital Elevation Model - Pre 2013 Flood')
ep.plot_bands(lidar_dem_im, cmap='Greys', extent=sjer_ext, title="Digital Elevation Model - Pre 2013 Flood\n Plotted Using the Correct Spatial Extent", cbar=False) plt.show()
Let’s plot again but this time you will:
- add a colorbar by allowing
ep.plot_bands()to add one. You have been setting the
cbararguement to
Falsein the previous plots. The default for
plot_bands()is for
cbarto be set to
True
- fix the colorbar scaling. By default,
plot_bands()will scale a colorbar to a 0-255 scale. However, since you are looking at elevation data, you would like the original values of the raster. You can prevent this scaling by setting the
scalearguement of
plot_bands()to
False
- turn off the annoying matplotlib message by adding a semicolon
;to the end of the last line
ep.plot_bands(lidar_dem_im, cmap='Greys', extent=sjer_ext, title="Lidar Digital Elevation Model \n Pre 2013 Boulder Flood | Lee Hill Road", scale=False) plt.show()
Below you tweak the height of your colorbar to ensure it lines up with the top and bottom edges of your plot. To do this you use the
make_axes_locatable package from the
mpl_toolkits.axes_grid1 library.
Color Ramps
To plot you can select pre-determined color ramps from
matplotlib, you can reverse a color ramp by adding
_r at the end of the color ramps name, for example
cmap = 'viridis_r'.
ep.plot_bands(lidar_dem_im, cmap='viridis_r', extent=sjer_ext, title="Digital Elevation Model - Pre 2013 Flood", scale=False) plt.show()
Explore Raster Data Values with Histograms
Next, you will explore a histogram of your data. A histogram is useful to help you better understand the distribution of values within your data. In this case given you are looking at elevation data, if there are all small elevation values and the histogram looks uniform (not too much variation in values) you can assume that your study area is relative “flat” - not too hilly. If there is a different distribution of elevation values you can begin to understand the range of elevation values in your study area and the degree of difference between low and high regions (ie is it flat or hilly?). Is it high elevation vs low elevation?
To plot a histogram use the
earthpy.plot -
hist() function.
# create histogram of data ep.hist(lidar_dem_im) plt.show()
# create histogram of data ep.hist(lidar_dem_im, bins=100, title="Lee Hill Road - Digital Elevation (terrain) Model - \nDistribution of Elevation Values") plt.show()
On Your Own - Challenge
The file that you opened above was an elevation model representing the elvation of a field site in California. Next, open up and plot the file:
"data/spatial-vector-lidar/california/neon-sjer-site/2013/lidar/SJER_lidarDSM.tif". If you want, produce a histogram of the data to better understand the range of elevation values in your data.
Share onTwitter Facebook Google+ LinkedIn
| https://www.earthdatascience.org/workshops/gis-open-source-python/open-lidar-raster-python/ | CC-MAIN-2021-43 | en | refinedweb |
Journey to Java: Episode 1 “Pilot”
To learn a new skill or language with no incentive is hard. A majority of people won’t spend their scarce free time to learn something that they aren’t being paid for. Others will not spend their money on books or resources to invest into themselves to achieve their goals. If you are reading this blog series as a employer I hope to display an amount of grit and commitment to becoming an experience software engineer and improving my skills to benefit a company and it’s users. If a developer stumbles across this blog I hope they become motivated enough to learn whatever they feel like learning. The biggest step forward is the first one.
As I continue my job search after graduating my Software Engineering course, I feel the more things I can learn the more employable I will be. I value learning things that are valuable rather than sexy, After playing sports my entire life, I value refining fundamentals of those skills. This will be a mini blogging series that will be different than my other blogs that were supposed to be a reference to myself and others learning a new concept. This series will be my journey learning a new language from complete scratch. A mixture of technical concepts I am learning plus feelings and my headspace may be helpful to show new programmers learning isn’t a beautiful linear process as well as employers that I am coachable, positive, and hungry to learn.
I knew I wanted to learn something new on the server side. I was stuck between Python (the internet seems to thing this is the best thing to learn) and Java (job applications have shown me employers want this). As I am applying and building my network on LinkedIn, most jobs I am seeing posted keep mentioning Java. Not only is Java desired and one of the most popular languages, learning is supposed to be beginner friendly, and opens up possibility for Android mobile development which is an added bonus. All of this has lead me to invest into myself again and purchase a Java course. This is part 1 to what I am learning and how I am feeling about it. This series will also act as notes for my own study and others to pick up syntax and vocabulary.
First I should mention besides the fact the course recommends IntelliJ as the IDE but I am using Eclipse based on a recommendation from a friend. Switching IDEs from Visual Studio Code to Eclipse has actually been a challenge in itself for me. Everything from opening a project to setting up the environment and pushing to GitHub has been awkward. I am sure it will become second nature after I look more into Eclipse and practice while I work in Java.
New Vocabulary and Concepts
JDK — Java Development Kit — Software to allow making of java projects.
Access modifier- allows us to define the scope of what we can do or others can do to this code.
Java Keywords
public- Full access access modifier (see above).
class- Used to create a java class. The following word will be the class name.
main- A special method java looks for when running. The entry point.
static- No definition for this yet in the course but it is needed for me to write my first Java program.
void- The method won’t return anything
int- Integer
Syntax
Print- Things put in the parenthesis will be printed to the console.
System.out.println()
Class- Things inside the nested curly braces are in the main method and will be included in the program.
public class ClassName { public static void main(String[] args) { }}
variables- You have to declare the data type of the variable in Java which is new for me coming from ruby.
int variableName = 5;
Takeaways
At the end of my first day setting up environments and studying Java I was able to make my first Java Program! This is obviously not a big deal in the grand scheme of things but it is a giant milestone to take on a new challenge where everything is new and uncomfortable.
My first Java Program!
My first java program is a simple but a giant step for me just to commit to the learning experience. I am going to simply create a class with all of the keywords I have learned. This is very different syntax than what I am used to in Ruby. Inside the class is the main method which is the starting point of my code. From there I simply set a variable and print it to the screen when I run the code. As an experienced Ruby developer this is simple but necessary and I am just happy to expand knowledge.
public class HelloWorld { public static void main(String[] args) { String stringVariable = "Hello Medium"; System.out.println(stringVariable); }} | https://adamadolfo8.medium.com/journey-to-java-episode-1-b043e58ed6f7 | CC-MAIN-2021-43 | en | refinedweb |
.
For :
flatc -n SaveSchema.txt --gen-onefile @pause :
// example save file namespace CompanyNamespaceWhatever; enum Color : byte { Red = 1, Green, Blue } union WeaponClassesOrWhatever { Sword, Gun } struct Vec3 { x:float; y:float; z:float; } table GameDataWhatever { pos:Vec3; mana:short = 150; hp:short = 100; name:string; inventory:[ubyte]; color:Color = Blue; weapon:WeaponClassesOrWhatever; } table Sword { damage:int = 10; distance:short = 5; } table Gun { damage:int = 500; reloadspeed:short = 2; } root_type GameDataWhatever; file_identifier "WHAT"; :
// Create flatbuffer class FlatBufferBuilder fbb = new FlatBufferBuilder(1); // Create our sword for GameDataWhatever //------------------------------------------------------ WeaponClassesOrWhatever weaponType = WeaponClassesOrWhatever.Sword; Sword.StartSword(fbb); Sword.AddDamage(fbb, 123); Sword.AddDistance(fbb, 999); Offset<Sword> offsetWeapon = Sword.EndSword(fbb); /* // For gun uncomment this one and remove the sword one WeaponClassesOrWhatever weaponType = WeaponClassesOrWhatever.Gun; Gun.StartGun(fbb); Gun.AddDamage(fbb, 123); Gun.AddReloadspeed(fbb, 999); Offset<Gun> offsetWeapon = Gun.EndGun(fbb); */ //------------------------------------------------------ // Create strings for GameDataWhatever //------------------------------------------------------ StringOffset cname = fbb.CreateString("Test String ! time : " + DateTime.Now); //------------------------------------------------------ // Create GameDataWhatever object we will store string and weapon in //------------------------------------------------------ GameDataWhatever.StartGameDataWhatever(fbb); GameDataWhatever.AddName(fbb, cname); GameDataWhatever.AddPos(fbb, Vec3.CreateVec3(fbb, 1, 2, 1)); // structs can be inserted directly, no need to be defined earlier GameDataWhatever.AddColor(fbb, CompanyNamespaceWhatever.Color.Red); //Store weapon GameDataWhatever.AddWeaponType(fbb, weaponType); GameDataWhatever.AddWeapon(fbb, offsetWeapon.Value); var offset = GameDataWhatever.EndGameDataWhatever(fbb); //------------------------------------------------------ GameDataWhatever.FinishGameDataWhateverBuffer(fbb, offset); // Save the data into "SAVE_FILENAME.whatever" file, name doesn't matter obviously using (var ms = new MemoryStream(fbb.DataBuffer.Data, fbb.DataBuffer.Position, fbb.Offset)) { File.WriteAllBytes("SAVE_FILENAME.whatever", ms.ToArray()); Debug.Log("SAVED !"); } !
ByteBuffer bb = new ByteBuffer(File.ReadAllBytes("SAVE_FILENAME.whatever")); if (!GameDataWhatever.GameDataWhateverBufferHasIdentifier(bb)) { throw new Exception("Identifier test failed, you sure the identifier is identical to the generated schema's one?"); } GameDataWhatever data = GameDataWhatever.GetRootAsGameDataWhatever(bb); Debug.Log("LOADED DATA : "); Debug.Log("NAME : " + data.Name); Debug.Log("POS : " + data.Pos.X + ", " + data.Pos.Y + ", " + data.Pos.Z); Debug.Log("COLOR : " + data.Color); Debug.Log("WEAPON TYPE : " + data.WeaponType); switch (data.WeaponType) { case WeaponClassesOrWhatever.Sword: Sword sword = new Sword(); data.GetWeapon<Sword>(sword); Debug.Log("SWORD DAMAGE : " + sword.Damage); break; case WeaponClassesOrWhatever.Gun: Gun gun = new Gun(); data.GetWeapon<Gun>(gun); Debug.Log("GUN RELOAD SPEED : " + gun.Reloadspeed); break; default: break; } | https://exiin.com/blog/flatbuffers-for-unity-sample-code/ | CC-MAIN-2021-43 | en | refinedweb |
Inko 0.2.4 released
Inko 0.2.4 has been released.
This release contains quite a few drastic changes compared to previous releases. Most notable, panics have been overhauled, cleaning up resources (such as closing file handles) has been made easier, and various bugs have been resolved.
For Inko 0.3.0 we plan to start working on features such as:
FFI support might be delayed to a future release, as this is likely going to take a lot of work to implement.
For more information, see the issues scheduled for the 0.3.0 milestone.
Noteworthy changes in 0.2.4
- More consistent syntax when passing blocks as the last argument
- Sending unknown messages to Nil works again
- Cleaning up resources using deferred blocks
- Responding to panics using panic handlers
- Obtaining environment data using std::env
- std::io::Close.close can no longer throw
- std::test now uses panics, instead of throwing values
- Memory usage has been reduced
- Prefetching is now supported on Rust stable
- The implicit "self" argument has been removed
The full list of changes can be found in the CHANGELOG.
More consistent syntax when passing blocks as the last argument
When passing arguments using parentheses, Inko now allows you to place a block outside of these parentheses, causing it to be treated as the last argument:
import std::stdio::stdout [10, 20, 30].each() do (number) { stdout.print(number) }
This would be parsed the same way as the following code:
import std::stdio::stdout [10, 20, 30].each(do (number) { stdout.print(number) })
Previously, when passing a block as the last argument the recommended style was to leave out the parentheses, meaning you'd write the following:
import std::stdio::stdout [10, 20, 30].each do (number) { stdout.print(number) }
However, this is inconsistent, and can at times make the code harder to read. This new syntax allows for a more consistent syntax, without having to place the block inside parentheses, which can look unappealing. Inko's own unit tests benefited quite a bit from these changes, allowing us to turn this:
test.group 'std::fs::dir.list', do (g) { g.test 'Listing the contents of an empty directory', { with_temp_dir [], do (path) { let contents = try! dir.list(path) assert.equal(contents, []) } } }
Into this:
test.group('std::fs::dir.list') do (g) { g.test('Listing the contents of an empty directory') { with_temp_dir([]) do (path) { let contents = try! dir.list(path) assert.equal(contents, []) } } }
Sending unknown messages to Nil works again
Inko allows you to send any message to
Nil and another
Nil will be returned.
Unfortunately, recent refactoring of the compiler broke support for this. Inko
0.2.4 resolves these problems, allowing for code such as
Nil.does_not_exist to
compile again.
Cleaning up resources using deferred blocks
Every language offers a way to clean up resources, such as closing file handles, or removing temporary files. Many dynamic languages, such as Ruby, use finalizers for this. Finalizers are difficult to implement right, and are difficult to use. There's often no guarantee when they run, or if they run at all. If finalizers are executed concurrently with a program race conditions can occur, but if they don't they may slow down the program. In short, we felt it was best to avoid them at all costs.
Unfortunately, Inko didn't really provide a viable alternative. Manually closing resources would work, except in the event of a panic such operations may not be executed.
Inko 0.2.4 introduces the concept of "deferred blocks". The idea is taken from Go, and is quite simple. A deferred block is simply a block of code that is executed when we return from the scope that defined it. Such blocks are always executed, even when throwing, an error or when triggering a panic. This allows you to clean up resources, even in the event of a panic.
Using deferred blocks can be done using
std::process.defer:
import std::fs::file import std::process let file = try! ::file.write_only('test.txt') process.defer { file.close } try! file.write_string('hello')
Here
file.close will always be executed, ensuring the file handle is closed.
Using
std::process.defer directly can lead to rather verbose code, so we are
considering introducing more high-level abstractions on top in the future. There
is no exact implementation yet, but the idea is to offer something similar to
Python's "with" statement:
import std::fs::file try! { file.write_only('test.txt') }.with do (file) { try! file.write_string('hello') }
Here the idea is that once the block passed to
with returns (or throws, or
panics), the
file object is closed before we continue.
Note that at this point this is just an idea, and the final implementation could differ significantly.
Responding to panics using panic handlers
Prior to Inko 0.2.4, a panic would terminate the entire program. Starting with 0.2.4, this is no longer always the case. Processes can now register a panic handler, which is a block that will be executed in the event of a panic. Once the handler finishes, the process is terminated. There is no way to recover from a panic, as panics are usually the result of a serious bug, and usually the only sane response is to restart the process. Since a process panicked, it may not be able to restart itself (or even know how to do so), and so we terminate it.
If a process does not define its own panic handler, the default global panic handler will be executed. This handler prints a stack trace, then terminates the entire program.
This particular setup means that by default a panic is very obvious, because our program crashes. At the same time, we're able to scope this to individual processes by telling them how to react to a panic.
Registering a process specific panic handler is done using
std::process.panicking:
import std::process import std::stdio::stderr process.panicking { stderr.print('oops, we ran into a panic!') }
The global handler can be overwritten using
std::vm.panicking:
import std::vm import std::stdio::stderr vm.panicking { stderr.print('oops, we ran into a panic!') }
Note that you can not restore the global panic handler after you have redefined
it. Also keep in mind that if you overwrite the global panic handler, Inko will
not terminate the program for you, as this is done by the default global
handler. This means that if you still want to terminate the program, you have to
do so manually using
std::vm.exit:
import std::vm import std::stdio::stderr vm.panicking { stderr.print('oops, we ran into a panic!') vm.exit(1) }
Obtaining environment data using std::env
Environment data, such as environment variables and command-line arguments, can
now be accessed using the module
std::env. For example, we can read
environment variables like so:
import std::env env['HOME'] # => '/home/alice'
We can also obtain directory information, such as the home directory and the temporary directory:
import std::env env.home_directory # => '/home/alice' env.temporary_directory # => '/tmp'
std::io::Close.close can no longer throw
A while back the
std::io module was changed quite a bit, and
std::io::Close.close was changed to allow it to throw. This release reverts
this. Whether or not closing a resource fails or not doesn't really matter, as a
program can just continue running. Requiring the use of
try or
try! when
using
Close.close thus led to unnecessarily verbose code.
std::test now uses panics, instead of throwing values
With the changes to panics, and the introduction of panic handlers,
std::test
has been changed to panic whenever an assertion is not met, instead of throwing
an error. This means you can now write
assert.equal(a, b) instead of
try assert.equal(a, b), simplifying the process of writing unit tests.
You can now also test for panics using
std::assert.panic and
std::assert.no_panic.
Memory usage has been reduced
The memory necessary to start a process has been reduced from at least 944 bytes to at least 832 bytes, a reduction of 112 bytes. Note that we say at least, because the moment a process allocates memory it will request a 32KB block of memory.
The exact amount of memory necessary to just spawn a process is probably a bit
higher, as the above number of bytes is the type size of the
Process structure
in the virtual machine.
Prefetching is now supported on Rust stable
In Inko 0.2.0 we introduced support for building the virtual machine using stable Rust. However, support for prefetching was only available when using a nightly build of Rust.
Starting with Inko 0.2.4, prefetching support is now available on stable Rust. This means you no longer need a nightly build of Rust to get the best performance.
The implicit "self" argument has been removed
Prior to version 0.2.4, the receiver of a method or block was passed to the implicit first argument, called "self". This made it hard for the VM to store and later execute blocks, as it wouldn't know what object to pass to this argument.
As of 0.2.4, the use of this implicit argument has been removed entirely.
Instead, blocks now explicitly store their receiver, and using the
self
keyword results in that receiver being retrieved.
These changes simplify the compiler, allow the VM to schedule blocks more
easily, and ensure that
self doesn't show up in the list of arguments of a
block when using
std::mirror::BlockMirror.argument_names. | https://inko-lang.org/news/inko-0-2-4-released/ | CC-MAIN-2018-51 | en | refinedweb |
Realm React Native 1.0: Realm Mobile Platform Support
Today, we’ve got two announcements for the React Native community. First, after nearly a year of open source collaboration, Realm React Native has reached 1.0, marking a key milestone as a powerful object database, and a foundation for great reactive apps. We’re also very excited to announce that the Realm Mobile Platform now supports React Native. With just a few lines of code, React Native developers can now build exciting new experiences on top of our realtime platform.
When we launched the beta of Realm React Native early last year, we found lots of uptake from React Native’s passionate developer community, and from companies like TaskRabbit. Because Realm is a cross-platform database, you can build apps that reuse more code, making your data layer more maintainable and less fragile. And Realm’s live objects and collection notifications mean that your app really is reactive — it responds in realtime as the Realm-backed data updates. To see how easy it is to get started, just check out our Realm React Native documentation.
But Realm is more than just a client-side database. With the release of Realm React Native 1.0, React Native developers can now build powerful new apps on top of the Realm Mobile Platform. By connecting to the Realm Object Server and then creating synced Realms in your app, your Realm-backed objects will automatically sync whenever their properties change, while still being stored locally for offline use. The Realm Mobile Platform also gives you tools for handling conflict resolution, user authentication, and customizable permissions, so that you have all you need to build compelling, collaborative experiences.
The Professional and Enterprise Editions also include Event Handling, which lets you write reactive server-side JavaScript code. Now, you can bring reactive principles across your whole codebase, ensuring that your app responds predictably whenever and wherever user data changes.
It’s easy to integrate your own React Native app with the Realm Mobile Platform. Authenticate a user, connect to the Realm Object Server, then reactively update your app’s UI with this code sample:
import React, { Component } from 'react'; import { Text } from 'react-native'; import Realm from 'realm'; import { ListView } from 'realm/react-native'; export default class DogsView extends Component { constructor(props) { super(props); // Initialize the component with an empty data source const ds = new ListView.DataSource({rowHasChanged: (r1, r2) => r1 !== r2}); this.state = { dataSource: ds }; } componentWillMount() { // `this.props` contains the username and password passed to the DogsView instance Realm.Sync.User.login('', this.props.username, this.props.password, (error, user) => { let realm = new Realm({ schema: [ { name: 'Dog', properties: { name: 'string' } } ], sync: { user, url: 'realms://my-realm-server.com/~/dogs'} }); // Once the user is logged in and we have a synced realm, // reset the DataSource with all the Dog objects in the realm. // Also subscribe for changes on the Dogs collection and refresh the UI when they occur const dogs = realm.objects('Dog'); this.setState({ realm, dataSource: this.state.dataSource.cloneWithRows(dogs) }); dogs.addListener(() => this.forceUpdate()); }); } render() { return (<ListView dataSource={this.state.dataSource} renderRow={(item) => <Text>{item.name}</Text>} />); } }
To get started with the Realm Mobile Platform, check out our documentation. Start making something your users will really love — across all their devices, whether they’re online or offline. | https://realm.io/blog/realm-react-native-1-0/ | CC-MAIN-2018-51 | en | refinedweb |
How can i locate a file or a folder using python like some when we right click in downloaded files using firefox and open containing folder,and it opens the explorer and pre selects the file downloaded.
how can this functionaly be implemented using python standard library ?
however this code does opens the explorer for me, but doesn select the file specified..
import subprocess subprocess.Popen('explorer "E://temp//"') | https://www.daniweb.com/programming/software-development/threads/440415/locate-file-or-folder-in-windows-explorer | CC-MAIN-2018-51 | en | refinedweb |
Kodu Action Grab (derived from Kodu Action).
More...
#include <KoduActionGrab.h>
Kodu Action Grab (derived from Kodu Action).
Definition at line 13 of file KoduActionGrab.h.
List of all members.
Constructor.
Definition at line 16 of file KoduActionGrab.h.
Copy constructor.
Definition at line 22 of file KoduActionGrab.h.
Destructor.
Definition at line 28 of file KoduActionGrab.h.
Returns the target object (the object to grab).
Definition at line 7 of file KoduActionGrab.cc.
[static]
Tests if the primitive argument is the same as the clling class.
Reimplemented from Kodu::KoduPrimitive.
Definition at line 14 of file KoduActionGrab.cc.
Assignment operator.
Definition at line 33 of file KoduActionGrab.h.
[virtual]
Prints the attributes of a particular instance.
Reimplemented from Kodu::KoduAction.
Definition at line 22 of file KoduActionGrab.cc.
Used to reinitialize certain variables (e.g. when switching to another page).
Definition at line 18 of file KoduActionGrab.cc.
[private]
States whether or not the "it" modifier was specified (by the user).
Definition at line 54 of file KoduActionGrab.h.
Referenced by operator=(), and printAttrs(). | http://tekkotsu.org/dox/classKodu_1_1KoduActionGrab.html | CC-MAIN-2018-51 | en | refinedweb |
Pass maps are an established visualisation in football analysis, used to show the area of the pitch where a player made their passes. You’ll find examples across the Football Manager series, TV coverage, and pretty much all formats of football journalism. Similar plots are used to show shots or other events in a game, and multiple other sports make use of similar maps of what goes on during a game. This article runs through one way to create these in Python, making use of the Matplotlib library. Let’s fire up our modules, open our dataset and take a look at what we are working with:
import pandas as pd import numpy as np import matplotlib.pyplot as plt from matplotlib.patches import Arc %matplotlib inline data = pd.read_csv("EventData/passes.csv") data.head()
*** Plotting Lines
Our dataset contains Zeedayne’s passes from her match. We have when they happened, in additon to the starting and ending X and Y locations. With this information, matplotlib makes it easy to draw lines. We can use the ‘.plot()’ function to draw lines if we give it two lists:
- List one must contain the start and end X locations
- List two gives the start and end Y locations
For example, plt.plot([0,1],[2,3] will plot a line from location (0,2) to (1,3).
We could write this line to plot each of Zeedayne’s passes, but we hate repeating ourselves and are a little bit lazy, so let’s use a for loop to do this. Take a look at our code below to see it in action:
fig, ax = plt.subplots() fig.set_size_inches(7, 5) for i in range(len(data)): plt.plot([int(data["Xstart"][i]),int(data["Xend"][i])], [int(data["Ystart"][i]),int(data["Yend"][i])], color="blue") plt.show()
Great job on plotting all of the passes! Unfortunately, we do not know where they happened on the pitch, or the direction, or much else, but we will get there!
Let’s start with adding a circle at the starting point of each pass to understand the direction. This is as easy as before, we just plot the start data, like below:
fig, ax = plt.subplots() fig.set_size_inches(7, 5) for i in range(len(data)): plt.plot([int(data["Xstart"][i]),int(data["Xend"][i])], [int(data["Ystart"][i]),int(data["Yend"][i])], color="blue") plt.plot(int(data["Xstart"][i]),int(data["Ystart"][i]),"o", color="green") plt.show()
Another massive and easy improvement would be to add a pitch map – as our article here explains. Let’s steal the code and add the pitch here – obviously feel free to steal the pitch too!
') for i in range(len(data)): plt.plot([int(data["Xstart"][i]),int(data["Xend"][i])],[int(data["Ystart"][i]),int(data["Yend"][i])], color="blue") plt.plot(int(data["Xstart"][i]),int(data["Ystart"][i]),"o", color="green") #Display Pitch plt.show()
Awesome, now we can see Zeedayne’s pass locations – seems to cover just about everywhere!
Summary
Plotting simple pass maps is pretty easy – we just need to use matplotlib’s ‘.plot’ functionality to draw our lines, and a for loop to run through X/Y origin and destiniation data to plot each line.
On their own, they do not offer much information, but once we add start location and a pitch map, we start to see where a player played their passes, where they ended up and the range that they employed in the match.
To develop on this, we can look to colour code our lines for success, or another variable. We could even look to plot a heatmap to show where a player was active. Watch out for a further article on these! | https://fcpython.com/visualisation/drawing-pass-map-python | CC-MAIN-2018-51 | en | refinedweb |
Iterators
Table of contents
Introduction
Iterators are used for iterating over the values of a collection, such as an
Array or
HashMap. Typically a programming language will use one of two
iterator types:
- Internal iterators: iterators where the iteration is controlled by a method, usually by executing some sort of callback (e.g. a block).
- External iterators: stateful data structures from which you "pull" the next value, until you run out of values.
Both have their benefits and drawbacks. Internal iterators are easy to implement and usually offer good performance. Internal iterators can not be composed together (easily), they are eager (the method only returns once all values have been iterated over), making it harder (if not impossible) to pause and resume iteration later on.
External iterators do not suffer from these problems, as control of iteration is given to the user of the iterator. This does come at the cost of having to allocate and mutate an iterator, which can sometimes lead to worse performance when compared with internal iterators.
Iterators in Inko
Inko primarily uses external iterators, but various types will allow you to use
internal iterators for simple use cases, such as just traversing the values in a
collection. For example, we can iterate over the values of an
Array by sending
each to the
Array:
import std::stdio::stdout [10, 20, 30].each do (number) { stdout.print(number) }
We can also do this using external iterators:
import std::stdio::stdout [10, 20, 30].iter.each do (number) { stdout.print(number) }
Using external iterators gives us more control. For example, we can simply take the first value (skipping all the others) like so:
let array = [10, 20, 30] array.iter.next # => 10
Because external iterators are lazy, this would never iterate over the values
20 and
30.
Implementing iterators
Implementing your own iterators is done in two steps:
- Create a separate object for your iterator, and implement the
std::iterator::Iteratortrait for it.
- Define a method called
iteron your object, and return the iterator created in the previous step. If an object provides multiple iterators, use a more meaningful name instead (e.g.
keysor
values).
To illustrate this, let's say we have a very simple
LinkedList type that (for
the sake of simplicity) only supports
Integer values. First we define an
object to store a single value, called a
Node:
object Node { def init(value: Integer) { let @value = value # The next node can either be a Node, or Nil, hence we use `?Node` as the # type. We specify the type explicitly, otherwise the compiler will infer # the type of `@next` as `Nil`. let mut @next: ?Node = Nil } def next -> ?Node { @next } def next=(node: Node) { @next = node } def value -> Integer { @value } }
Next, let's define our
LinkedList object that stores these
Node objects:
object LinkedList { def init { let mut @head: ?Node = Nil let mut @tail: ?Node = Nil } def head -> ?Node { @head } def push(value: Integer) { let node = Node.new(value) @tail.if true: { @tail.next = node @tail = node }, false: { @head = node @tail = node } } }
With our linked list implemented, let's add the import necessary to implement our iterator:
import std::iterator::Iterator
Now we can create our iterator object, implement the
Iterator trait for it,
and define an
iter message for our
LinkedList object:
# Iterator is a generic type, and in this case takes a single type argument: the # type of the values returned by the iterator. In this case our type of the # values is `Integer`. object LinkedListIterator impl Iterator!(Integer) { def init(list: LinkedList) { let mut @node: ?Node = list.head } # This will return the next value from the iterator, if any. def next -> ?Node { let node = @node @node.if_true { @node = @node.next } node } # This will return True if a value is available, False otherwise. def next? -> Boolean { @node.if true: { True }, false: { False } } } # Now that our iterator object is in place, let's reopen LinkedList and add the # `iter` method to it. impl LinkedList { def iter -> LinkedListIterator { LinkedListIterator.new(self) } }
With all this in place, we can use our iterator like so:
let list = LinkedList.new list.push(10) list.push(20) let iter = list.iter stdout.print(iter.next.value) # => 10 stdout.print(iter.next.value) # => 20
If we want to (manually) cycle through all values, we can do so as well:
let list = LinkedList.new list.push(10) list.push(20) let iter = list.iter { iter.next? }.while_true { stdout.print(iter.next.value) # => 10, 20 }
Since the above pattern is so common, iterators respond to
each to make this
easier:
let list = LinkedList.new list.push(10) list.push(20) let iter = list.iter # Because of a bug in the compiler () # we need to manually annotate the block's argument for the time being. iter.each do (node: Node) { stdout.print(node.value) # => 10, 20 } | https://inko-lang.org/manual/getting-started/iterators/ | CC-MAIN-2018-51 | en | refinedweb |
Elements RTL
Elements RTL (also referred to as RTL2) is a new cross-platform base library and abstraction layer that allows you to write code (in any of the four Elements languages) that can easily be shared across all four platforms (.NET, Cocoa, Java/Android, and Island).
Elements RTL provides abstractions for common low-level types such as Strings ad Collections, and tasks such as value conversions, text encoding and mathematical operations. It also includes fully native, cross-platform XML and Json document implementations, HTTP access, and more.
Elements RTL supersedes and replaces Sugar. Applications using Sugar should port over to Elements RTL easily, since many APIs are the same or similar.
Using Elements RTL
A pre-compiled version of Elements RTL is included with Elements, and can be added to your projects via the Add References dialog in Fire or Visual Studio, where the
Elements library should show automatically. The New Project dialog in Fire will also give you the option to start new projects based on Elements RTL.
Simply adding the reference and adding the
RemObjects.Elements.RTL namespace to your uses/import statements will make the Elements RTL classes available for you. For ease of use, we recommend adding
RemObjects.Elements.RTL to the global "Default Uses" Project Setting.
Classes and Types
Currently, the following major types are supported/implemented. Unless otherwise noted, all parts of Elements RTL are available across all platforms, including Island.
Classes
- Binary
- BinaryStream
- BroadcastManager
- Consts
- Convert
- CurlHelper
- DateFormatter
- DateTime
- Dictionary<T,U>
- Encoding
- Environment
- Environment.macOS
- Event
- File
- FileHandle
- FileStream
- Folder
- Guid
- Http
- HttpBinaryRequestContent
- HttpRequest
- HttpRequestContent
- HttpResponse
- HttpResponseContent<T>
- ImmutableBinary
- ImmutableDictionary<T,U>
- ImmutableList<T>
- ImmutableListProxy<T>
- ImmutableQueue<T>
- ImmutableStack<T>
- JsonArray
- JsonBooleanValue
- JsonDocument
- JsonFloatValue
- JsonIntegerValue
- JsonNode
- JsonNullValue
- JsonObject
- JsonStringValue
- JsonValue<T>
- KeyValuePair<T,U>
- List<T>
- Locale
- LocaleUtils
- Math
- MemoryStream
- Method
- Monitor
- Notification
- NullHelper
- Parameter
- Path
- Process
- Property
- Queue<T>
- Random
- RangeHelper
- Registry
- SimpleCommandLineParser
- Stack<T>
- Stream
- String
- StringBuilder
- StringFormatter
- Thread
- Timer
- TimeZone
- Type
- Uri
- Url
- Urn
- WrappedPlatformStream
- XmlCData
- XmlCData
- XmlComment
- XmlComment
- XmlDocCurrentPosition
- XmlDocument
- XmlDocument
- XmlDocumentType
- XmlElement
- XmlElement
- XmlError
- XmlErrorInfo
- XmlFormattingOptions
- XmlNamespace
- XmlNamespace
- XmlNode
- XmlNode
- XmlParser
- XmlProcessingInstruction
- XmlRange
- XmlText
- XmlTokenizer
Enums
- CURLCode
- CURLINFO
- CURLOption
- EventMode
- FileOpenMode
- GuidFormat
- HttpRequestMode
- OperatingSystem
- SeekOrigin
- ThreadPriority
- ThreadState
- XmlNewLineSymbol
- XmlNodeType
- XmlPositionKind
- XmlTagStyle
- XmlTokenKind
- XmlWhitespaceStyle
Exceptions
- ArgumentException
- ArgumentNullException
- ArgumentOutOfRangeException
- ConversionException
- FileNotFoundException
- FormatException
- HttpException
- InvalidCastException
- InvalidOperationException
- IOException
- JsonException
- JsonInvalidTokenException
- JsonInvalidValueException
- JsonNodeTypeException
- JsonParserException
- JsonUnexpectedTokenException
- KeyNotFoundException
- NotImplementedException
- NotSupportedException
- NSErrorException
- QueueEmptyException
- RTLException
- StackEmptyException
- UrlException
- UrlParserException
- XmlException
Attributes
Aliases
How Elements RTL Works
A large portion of Elements RTL is implemented using an Elements compiler feature called Mapped Types that allows platform-native classes to be accessed using a different, shared API defined by Elements RTL.
For example,
RemObjects.Elements.RTL.Dictionary is the class Elements RTL provides for working with a dictionary of keys and values. The class, and its methods, can be used on .NET, Cocoa, Java and Island in exactly the same way, so code that makes use of a
RemObjects.Elements.RTL.Dictionary can be compiled for all platforms. But it is not a "real" class. Instead, it is a mapping to
System.Collections.Generic.Dictionary<T,U>on .NET
java.util.HashMap<T,U>on Java
NSMutableDictionaryon Cocoa
RemObjects.Elements.System.Dictionaryfrom Island RTL on Island
That means that when your code is using
RemObjects.Elements.RTL.Dictionary on, say, Cocoa, the compiler will actually translate it to code that directly works with an
NSMutableDictionary. The same code, compiled for Java, will seamlessly use a
HashMap.
This has several benefits:
- Rather than "reinventing the wheel", Elements RTL makes use of existing (and well-tested) classes and APIs provided by the underlying frameworks.
- Elements RTL never boxes you in; you can always access features of the underlying framework classes, simply by casting (although of course that part of the code then becomes platform dependent).
- Casting between Elements RTL and framework classes is toll free, so your platform-specific UI code can be written to use the regular framework classes, but can seamlessly pass those into your shared business code, which expects and works with Elements RTL classes.
Contributing
Elements RTL is open source and available on GitHub. It also ships pre-compiled and pre-installed with Elements 9.1 and later. Contributions are welcome.
The bulk part of Elements RTL is written in the Oxygene language, but you can open the Elements RTL projects in Fire and Visual Studio 2017 without requiring an Oxygene license, so even C#, Swift or Iodine developers can contribute. We accept contributions in any of the four Elements languages. | https://docs.elementscompiler.com/API/ElementsRTL/ | CC-MAIN-2018-51 | en | refinedweb |
#include <wx/richtext/richtexthtml.h>
Handles HTML output (only) for wxRichTextCtrl content.
The most flexible way to use this class is to create a temporary object and call its functions directly, rather than use wxRichTextBuffer::SaveFile or wxRichTextCtrl::SaveFile.
Image handling requires a little extra work from the application, to choose an appropriate image format for the target HTML viewer and to clean up the temporary images later. If you are planning to load the HTML into a standard web browser, you can specify the handler flag wxRICHTEXT_HANDLER_SAVE_IMAGES_TO_BASE64 (the default) and no extra work is required: the images will be written with the HTML.
However, if you want wxHTML compatibility, you will need to use
wxRICHTEXT_HANDLER_SAVE_IMAGES_TO_MEMORY or
wxRICHTEXT_HANDLER_SAVE_IMAGES_TO_FILES.
In this case, you must either call wxRichTextHTMLHandler::DeleteTemporaryImages before the next load operation, or you must store the image locations and delete them yourself when appropriate.
You can call wxRichTextHTMLHandler::GetTemporaryImageLocations to get the array of temporary image names.
The following flags can be used with this handler, via the handler's SetFlags() function or the buffer or control's SetHandlerFlags() function:
Constructor.
Clears the image locations generated by the last operation.
Deletes the in-memory or temporary files generated by the last operation.
Delete the in-memory or temporary files generated by the last operation.
This is a static function that can be used to delete the saved locations from an earlier operation, for example after the user has viewed the HTML file.
Saves the buffer content to the HTML stream.
Implements wxRichTextFileHandler.
Returns the mapping for converting point sizes to HTML font sizes.
Returns the directory used to store temporary image files.
Returns the image locations for the last operation.
Reset the file counter, in case, for example, the same names are required each time.
Sets the mapping for converting point sizes to HTML font sizes.
There should be 7 elements, one for each HTML font size, each element specifying the maximum point size for that HTML font size. For example:
Sets the directory for storing temporary files.
If empty, the system temporary directory will be used.
Sets the list of image locations generated by the last operation. | https://docs.wxwidgets.org/trunk/classwx_rich_text_h_t_m_l_handler.html | CC-MAIN-2018-51 | en | refinedweb |
Shared Image Gallery overview
Shared Image Gallery is a service that helps you build structure and organization around your custom VM images. Shared Image Gallery provides three main value propositions:
- Simple management
- Scale your custom images
- Share your images - share your images to different users, service principals, or AD groups within your organization as well as different regions using the multi-region replication
A managed image is a copy. The managed image remains in storage and can be used over and over again to create new VMs.
If you have a large number of managed images that you need to maintain and would like to make them available throughout your company, you can use a Shared Image Gallery as a repository that makes it easy to update and share your images. The charges for using a Shared Image Gallery are just the costs for the storage used by the images, plus any network egress costs for replicating images from the source region to the published regions.
The Shared Image Gallery feature has multiple resource types:
Regional Support
Regional support for shared image galleries is in limited preview, but will expand over time. For the limited preview, here is the list of regions where you can create galleries and the list of regions where you can replicate any gallery image: VMSS using that image version in the region.
Access
As the Shared Image Gallery, Shared Image and Shared Image version are all resources, they can be shared using the built-in native Azure RBAC controls. Using RBAC you can share these resources to other users, service principals, and groups in your organization. The scope of sharing these resources is within the same Azure AD tenant. Once a user has access to the Shared Image version, they can deploy a VM or a Virtual Machine Scale Set in any of the subscriptions they have access to within the same Azure AD tenant as the Shared Image version. Here is the sharing matrix that helps understand what the user gets access to:
Billing
There is no extra charge for using the Shared Image Gallery service. You will be charged for the following resources:
- Storage costs of storing the Shared Image versions. This depends on the number of replicas of the version and the number of regions the version is replicated to.
- Network egress charges for replication from the source region of the version to the replicated regions.
Frequently asked questions
Q. How do I sign up for the Shared Image Gallery Public Preview?
A. In order to sign up for the Shared Image Gallery public preview, you need to register for the feature by running the following commands from each of the subscriptions in which you intend to create a Shared Image Gallery, Image definition, or Image version resources, and also where you intend to deploy Virtual Machines using the image versions.
CLI:
az feature register --namespace Microsoft.Compute --name GalleryPreview az provider register --name Microsoft.Compute
PowerShell:
Register-AzureRmProviderFeature -FeatureName GalleryPreview -ProviderNamespace Microsoft.Compute Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Compute
Q. How can I list all the Shared Image Gallery resources across subscriptions?
A. In order to list all the Shared Image Gallery resources across subscriptions that you have access to on the Azure portal, follow the steps below:
- Open the Azure portal.
- Go to All Resources.
- Select all the subscriptions under which you’d like to list all the resources.
Look for resources of type Private gallery.
To see the image definitions and image versions, you should also select Show hidden types.
To list all the Shared Image Gallery resources across subscriptions that you have permissions to, use the following command in the Azure CLI:
az account list -otsv --query "[].id" | xargs -n 1 az sig list --subscription
Q. How do I share my images across subscriptions?
A. You can share images across subscriptions using Role Based Access Control (RBAC). Any user that has read permissions to an image version, even across subscriptions, will be able to deploy a Virtual Machine using the image version.
Q. Can I move my existing image to the shared image gallery?
A. Yes. There are 3 scenarios based on the types of images you may have.
Scenario 1: If you have a managed image, then you can create an image definition and image version from it.
Scenario 2: If you have an unmanaged generalized image, you can create a managed image from it, and then create an image definition and image version from it.
Scenario 3: If you have a VHD in your local file system, then you need to upload the VHD, create a managed image, then you can create and image definition and image version from it.
- If the VHD is of a Windows VM, see Upload a generalized VHD.
- If the VHD is for a Linux VM, see Upload a VHD
Q. Can I create an image version from a specialized disk?
A. No, we do not currently support specialized disks as images. If you have a specialized disk, you need to create a VM from the VHD by attaching the specialized disk to a new VM. Once you have a running VM, you need to follow the instructions to create a managed image from the Windows VM or Linux VM. Once you have a generalized managed image, you can start the process to create a shared image description and image version.
Q. Can I create a shared image gallery, image definition, and image version through the Azure portal?
A. No, currently we do not support the creation of any of the Shared Image Gallery resources through Azure portal. However, we do support the creation of the Shared Image Gallery resources through CLI, Templates, and SDKs. PowerShell will also be released soon.
Q. Once created, can I update the image definition or the image version? What kind of details can I modify?
A. The details that can be updated on each of the resources are mentioned below:
Shared image gallery:
- Description
Image definition:
- Recommended vCPUs
- Memory
- Description
- End of life date
Image version:
- Regional replica count
- Target regions
- Exclusion from latest
- End of life date
Q. Once created, can I move the Shared Image Gallery resource to a different subscription?
A. No, you cannot move the shared image gallery resource to a different subscription. However, you will be able to replicate the image versions in the gallery to other regions as required.
Q. Can I replicate my image versions across clouds – Azure China 21Vianet, Azure Germany and Azure Government Cloud?
A. No, you cannot replicate image versions across clouds.
Q. Can I replicate my image versions across subscriptions?
A. No, you may replicate the image versions across regions in a subscription and use it in other subscriptions through RBAC.
Q. Can I share image versions across Azure AD tenants?
A. No, currently shared image gallery does not support the sharing of image versions across Azure AD tenants. However, you may use the Private Offers feature on Azure Marketplace to achieve this.
Q. How long does it take to replicate image versions across the target regions?
A..
Q. How many shared image galleries can I create in a subscription?
A. The default quota is:
- 10 shared image galleries, per subscription, per region
- 200 image definitions, per subscription, per region
- 2000 image versions, per subscription, per region
Q. What is the difference between source region and target region?
A..
Q. How do I specify the source region while creating the image version?
A..
Q. How do I specify the number of image version replicas to be created in each region?
A. like this: .
Q. Can I create the shared image gallery in a different location than the one where I want to create the image definition and image version?
A. Yes, this is possible. But as a best practice, we encourage you to keep the resource group, shared image gallery, image definition and image version in the same location.
Q. What are the charges for using the Shared Image Gallery?
A. There are no charges for using the Shared Image Gallery service, except the storage charges for storing the image versions and network egress charges for replicating the image versions from source region to target regions.
Q. What API version should I use to create Shared Image Gallery, Image Definition, Image Version, and VM/VMSS out of the Image Version?
A. For VM and Virtual Machine Scale Set deployments using an image version, we recommend you use API version 2018-04-01 or higher. To work with shared image galleries, image definitions, and image versions, we recommend you use API version 2018-06-01.
Next steps
Learn how to deploy shared images using Azure CLI. | https://docs.microsoft.com/en-us/azure/virtual-machines/windows/shared-image-galleries | CC-MAIN-2018-51 | en | refinedweb |
Note: Apologies for the table formatting in this article. They’ll be fixed soon, but for now, hopefully the code and visualisations will explain what we are learning here!
Looking for things that cause other things is one of the most common investigations into data. While correlation (a relationship between variables) does not equal cause, it will often point you in the right direction and help to aid your understanding of the relationships in your data set.
You can calculate the correlation for every variable against every other variable, but this is a lengthy and inefficient process with large amounts of data. In these cases, seaborn gives us a function to visualise correlations. We can then focus our investigations onto what is interesting from this.
Let’s get our modules imported, a dataset of player attributes ready to go and we can take a look at what the correlations.
import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline data = pd.read_csv("../../Data/FIFAPlayers.csv") data.head(2)
2 rows × 44 columns
Our data has lots of columns that are not attribute ratings, so let’s .drop() these from our dataset.
data = data.drop(["player_api_id","preferred_foot","attacking_work_rate","defensive_work_rate","player_name","birthday", "p_id","height","weight"],axis=1) data.head(2)
2 rows × 35 columns
Now we have 35 columns, and a row for each player.
As mentioned, we want to see the correlation between the variables. Knowing these correlations might help us to uncover relationships that help us to better understand our data in the real world.
DataFrames can calculate the correlations really easy using the ‘.corr()’ method. Let’s see what that gives us:
data.corr()
35 rows × 35 columns
We get 35 rows and 35 columns – one of each for each variable. The values show the correlation score between the row and column at each point. Values will range from 1 (very strong positve correlation, as one goes up, the other tends to, too) to -1 (very strong negative correlation, one goes up will tend to push the other down, or vice-versa), via 0 (no relationship).
So looking at our table, the correlation score (proper name: r-squared) between curve and crossing is 0.8, suggesting a strong relationship. We would expect this, if you can curve the ball, you tend to be able to cross.
Additionally, heading accuracy has no real relationship (0.17) with potential ability. So, if like me, you are awful in the air, you can still make it!
Looking through lots of numbers is pretty draining – so let’s visualise this table. with a ‘.heatmap’:
fig, ax = plt.subplots() fig.set_size_inches(14, 10) ax=sns.heatmap(data.corr())
There is a lot happening here, and we wouldn’t try to present insights with this, but we can still learn something from it.
Clearly, goalkeepers are not rated for their outfield ability! There is negative correlation between the GK skills and outfield skills – as shown by the streaks of black and purple.
Simiarly, we can see negative correlation between strength and acceleration and agility. Got a strong player? They are unlikely to be quick or agile. If you can find one that is, they should command a decent fee due to their unique abilities!
Summary
In a page, we have been able to take a big dataset and try to ascertain relationships within it. By using ‘.corr()’ and ‘.heatmap()’ we create numerical and graphical charts that easily illustrate the data.
With our example, we spotted how stronger players usually have a lack of pace and agility. Also looking at the chart above, reactions seems to be the best indicator of overall rating. Maybe being a talented player isn’t about just being quick, or scoring from 35 yards, maybe reading the game is the key!
Next up, take a different look at plotting relationships between variables with scatter plots, or read up on correlation as a whole. | https://fcpython.com/visualisation/looking-for-correlations-with-heatmaps-in-seaborn | CC-MAIN-2018-51 | en | refinedweb |
I agree that we should leverage open mbeans for this as much as we
can. Also, I was wondering if we should consider creating namespace
handler that could directly expose individual beans within the
blueprint container as mbeans. But maybe exposing a bean as a service
(and therefore as a mbean) is good enough.
Jarek
On Fri, Oct 30, 2009 at 1:54 AM, Rex Wang <rwonly@gmail.com> wrote:
> This topic might be a little bit independent from what we are doing in
> current osgi integration work. However, I would like to raise such
> discussion because I believe blueprint will act as an important role in our
> future framework. So, if we wanna leverage blueprint as a common way to
> construct geronimo plugins and hope use JMX for remote management, we
> definitely need a set of mbeans to track the blueprint bundles. Currently, I
> am working on this work item.
>
> OSGi Alliance is planing to release an enterprise spec which contains rfc
> 139(mbeans for core framework and 3 compendium services), but there is no
> mbeans for blueprint. So I think our jobs go ahead of the standard. We did a
> quick look on karaf and spring dm, and did not found them using mbeans to
> track and manage the state of blueprint. I hope the works we are doing are
> helpful as a complement of rfc 139.
>
> OK, although RFC139 says it is not to provide a generic mechanism that be
> used to expose management of arbitrary OSGi services through JMX, we still
> deside to keep our design consistent with it. That is to leverage the
> openmbean's open type in the data structure of mbeans' return value, such as
> compositeType, tabularType.. And there is not too much APIs exposed by
> blueprint, so I think only one Mbean is enough right now.
>
> A problem is that how we track the status. In the rfc124 spec, blueprint
> bundle's status can be identified by listening the events that pre-defined.
> The blueprint extender sends those events to the Event Admin service, but in
> RFC 139 there is no mbeans designed to manage the event admin. So looks like
> we need a mbean provides the APIs to track bluepirnt application and its
> implementation must also implement the BlueprintListener interface. That is
> what we are thinking currently.
>
> Is anybody insteresting on this topic or do you know anythings behind the
> scenes from Karaf/Springsource that say why they seems not plan to design
> such mbeans?
> Any comments is appreciated.
>
> Regards
> -Rex
> | http://mail-archives.apache.org/mod_mbox/geronimo-dev/200910.mbox/%3C5eb405c70910300750k6e550369o734427015cafd8d7@mail.gmail.com%3E | CC-MAIN-2016-44 | en | refinedweb |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.